Tidy up README.md

The readme was woefully old.  Make it more modern to match the current
state of the repository and development.

Change-Id: I9be10ca8565835c784afea63ee70978d10313e4a
This commit is contained in:
Steven Dake 2014-10-21 06:50:58 -07:00
parent 7d0f2d41eb
commit 4724eb11e2

README.md

Getting Started
===============
Kubernetes deployment on bare metal is a complex topic which is beyond the
scope of this project at this time. The developers require a development test
environment. As a result, one of the developers has created a Heat based
deployment tool that can be
found [here](https://github.com/larsks/heat-kubernetes).
Build Docker Images
-------------------
Images are built by the Kolla project maintainers. It is possible to build
unique images with specific changes, but these would end up in a personal
namespace. Read the image workflow documentation in the docs directory for
more details.

The Kolla developers build images in the kollaglue namespace for the
following services:

* Glance
* Heat
* Keystone
* MariaDB
* Nova
* RabbitMQ
```
$ sudo docker search kollaglue
```
A list of the upstream built docker images will be shown.

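As a sketch, the search output can be post-processed to list just the image
names. The listing below is a hypothetical sample (only fedora-rdo-mariadb
appears elsewhere in this repository; the keystone image name is assumed),
embedded so the filtering can be demonstrated without a Docker daemon:

```shell
# Hypothetical sample of `sudo docker search kollaglue` output, embedded so
# the filtering works without a running Docker daemon.
sample='NAME                            DESCRIPTION   STARS   OFFICIAL   AUTOMATED
kollaglue/fedora-rdo-mariadb                  0
kollaglue/fedora-rdo-keystone                 0'

# Skip the header row and print only the image names.
printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }'
```

Against a live daemon the same filter would be piped directly, e.g.
`sudo docker search kollaglue | awk 'NR > 1 { print $1 }'`.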
Use Kubernetes to Deploy OpenStack
----------------------------------
At this point, we believe the key features for a minimum viable feature set
are implemented. This includes the capability to launch virtual machines in
Nova. One key caveat is that networking may not work entirely properly until
Neutron is finished, so the virtual machines may not behave as expected for
an end user deployment.
Two options exist for those without an existing Kubernetes environment:

The upstream Kubernetes community provides instructions for running Kubernetes
using Vagrant, available from:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md

The Kolla developers develop Kolla in OpenStack, using Heat to provision the
necessary servers and other resources. If you are familiar with Heat and
have a correctly configured environment available, this allows deployment
of a working Kubernetes cluster automatically. The Heat templates are
available from https://github.com/larsks/heat-kubernetes/. The templates
require at least Heat 2014.1.3 (earlier versions have a bug that will prevent
the templates from working).
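The 2014.1.3 floor can be checked mechanically. This is a sketch that compares
version strings with `sort -V`; the `heat_version` value is hard-coded for
illustration, where in practice it would come from your deployment (e.g. the
packaged heat-engine version):

```shell
# Compare a Heat version against the 2014.1.3 minimum using version sort.
# heat_version is hard-coded here purely for illustration.
heat_version="2014.1.3"
minimum="2014.1.3"
lowest=$(printf '%s\n%s\n' "$minimum" "$heat_version" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
    echo "Heat $heat_version meets the $minimum minimum"
else
    echo "Heat $heat_version is older than $minimum; the templates will not work"
fi
```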
Here are some simple steps to get things rolling using the Heat templates:
1. git clone https://github.com/larsks/heat-kubernetes/; cd heat-kubernetes

2. Create an appropriate image by running the get_image.sh script in this
   repository. This will generate an image called "fedora-20-k8s.qcow2".
   Upload this image to Glance. You can also obtain an appropriate image from
   https://fedorapeople.org/groups/heat/kolla/fedora-20-k8s.qcow2

3. Create a file "local.yaml" with settings appropriate to your OpenStack
   environment. It should look something like:

        parameters:
          server_image: fedora-20-k8s
          ssh_key_name: sdake
          dns_nameserver: 8.8.8.8
          external_network_id: 6e7e7701-46a0-49c0-9f06-ac5abc79d6ae
          number_of_minions: 1
          server_flavor: m1.large

   You *must* provide settings for external_network_id and ssh_key_name; these
   are local to your environment. You will probably also need to provide a
   value for server_image, which should be the name (or UUID) of a Fedora 20
   cloud image or derivative.

4. heat stack-create -f kubecluster.yaml -e local.yaml my-kube-cluster

5. Determine the ip addresses of your cluster hosts by running:

        heat output-show my-kube-cluster kube_minions_external

6. ssh fedora@${minion-ip}

7. minion$ git clone http://github.com/stackforge/kolla

8. minion$ cd kolla

9. minion$ ./tools/start

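Step 5 prints the minion addresses as a JSON list. Here is a sketch of pulling
the first address out of that output; the sample value and its exact shape are
assumptions for illustration, embedded so the extraction can be run offline:

```shell
# Assumed sample of `heat output-show my-kube-cluster kube_minions_external`,
# embedded so the extraction can be demonstrated without a Heat deployment.
sample='[
  "10.0.0.4"
]'

# Strip the JSON punctuation and keep the first non-empty line.
minion_ip=$(printf '%s\n' "$sample" | tr -d ' "[]' | grep -v '^$' | head -n1)
echo "ssh fedora@$minion_ip"
```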
Debugging
==========
A few commands for debugging the system.
```
$ sudo docker images
```
Lists all images that have been pulled from the upstream kollaglue repository
thus far. This can be run on the minion during the ./start operation to check
on the download progress.
```
$ sudo docker ps -a
```
This will show all containers that docker has started. Removing the -a will
show only running containers. This can be run on the minion during the ./start
operation to check that the containers are orchestrated.
```
$ sudo docker logs <containerid>
```
This shows the logging output of each service in a container. The containerid
can be obtained via the docker ps operation. This can be run on the minion
during the ./start operation to debug the container.
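A container id from `docker ps` feeds directly into `docker logs`. The sketch
below uses a hypothetical `docker ps` listing (the container id and column
layout are made up for illustration) so it runs without a daemon:

```shell
# Hypothetical `sudo docker ps` output; a live system would get this from
# the actual daemon instead.
sample='CONTAINER ID  IMAGE                          COMMAND      STATUS
3f4e9d2a1b7c  kollaglue/fedora-rdo-mariadb   /start.sh    Up 2 minutes'

# Take the container id from the first data row and build the logs command.
cid=$(printf '%s\n' "$sample" | awk 'NR == 2 { print $1 }')
echo "sudo docker logs $cid"
```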
```
$ kubecfg list pods
ID Image(s) Host Labels Status
---------- ---------- ---------- ---------- ----------
mariadb kollaglue/fedora-rdo-mariadb 10.0.0.3/ name=mariadb Running
```
This lists all pods of which Kubernetes is aware. This can be run on the
master or minion.
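The Status column is usually what a debugging loop cares about. A sketch of
extracting it from a captured `kubecfg list pods` table (the sample mirrors
the mariadb row shown above, embedded so it runs without a cluster):

```shell
# Sample `kubecfg list pods` table, captured as text so the extraction can
# be demonstrated without a Kubernetes cluster.
sample='ID          Image(s)                       Host        Labels         Status
----------  ----------                     ----------  ----------     ----------
mariadb     kollaglue/fedora-rdo-mariadb   10.0.0.3/   name=mariadb   Running'

# Pull out the Status column for the mariadb pod.
status=$(printf '%s\n' "$sample" | awk '$1 == "mariadb" { print $NF }')
echo "mariadb pod status: $status"
```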
```
$ sudo systemctl restart kube-apiserver
$ sudo systemctl restart kube-scheduler
```
These commands are needed on the master after heat finishes creating the
Kubernetes system (i.e. my-kube-cluster reaches the CREATE_COMPLETE state).
This is just a workaround for a bug in Kubernetes that should be fixed soon.
```
$ journalctl -f -l -xn -u kube-apiserver -u etcd -u kube-scheduler
```
This shows log output on the server for the various daemons and can be filed
in bug reports in the upstream launchpad tracker.
```
$ journalctl -f -l -xn -u kubelet.service -u kube-proxy -u docker
```
This shows log output on the minion for the various daemons and can be filed
in bug reports in the upstream launchpad tracker.
```
$ telnet minion_ip 3306
```
This verifies that the MariaDB service is running on the minions. Output
should appear as follows:
```
$ telnet 10.0.0.4 3306
Trying 10.0.0.4...
Connected to 10.0.0.4.
Escape character is '^]'.
5.5.39-MariaDB-wsrep
```
If the connection closes before mysql responds then the proxy is not properly
connecting to the database. This can be seen by using journalctl and watching
for a connection error on the node that you can't connect to mysql through.
```
$ journalctl -f -l -xn -u kube-proxy
```
If you can connect through one proxy but not the other, there is probably a
problem with the overlay network. Double check that you're running kernel
3.16+, because vxlan support is required. If your kernel version is good, try
restarting openvswitch on both nodes. This has usually fixed the connection
issues.

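The kernel floor can be checked the same way on each node. A sketch using
`uname -r` and version sort:

```shell
# Verify the running kernel is at least 3.16, which the overlay network
# needs for vxlan support.
kernel=$(uname -r | cut -d- -f1)
minimum="3.16"
lowest=$(printf '%s\n%s\n' "$minimum" "$kernel" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
    echo "kernel $kernel has vxlan support"
else
    echo "kernel $kernel is too old; 3.16+ is required"
fi
```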
Directories
===========
* docker - contains artifacts for use with docker build to build appropriate images
* k8s - contains service and pod configuration information for Kubernetes
* tools - contains different tools for interacting with Kolla