Run pandoc to convert the documentation

This converts all MD formatted docs that were renamed to RST (to preserve git history) into actual RST documentation. Some minor edits were made, but the purpose of this patch is *only* to convert the documentation, not to rework it. I plan to rework the documentation in further patch sets. All links were tested and a test rendering is available: http://github.com/sdake/kolla

Change-Id: I3df430b14df1ede15407c7f4ba7afcbdc6f9d757

This commit is contained in:
parent bbcf22cc12
commit 6e3127d043

README.rst (149 lines changed)

@@ -1,107 +1,110 @@

Kolla Overview
==============

The Kolla project is a member of the OpenStack `Big Tent
Governance <http://governance.openstack.org/reference/projects/index.html>`__.
Kolla's mission statement is:

::

    Kolla provides production-ready containers and deployment tools for
    operating OpenStack clouds.

Kolla provides `Docker <http://docker.com/>`__ containers and
`Ansible <http://ansible.com/>`__ playbooks to meet Kolla's mission.
Kolla is highly opinionated out of the box, but allows for complete
customization. This permits operators with little experience to deploy
OpenStack quickly and, as experience grows, modify the OpenStack
configuration to suit the operator's exact requirements.

Getting Started
===============

Please get started by reading the `Developer
Quickstart <https://github.com/stackforge/kolla/blob/master/docs/dev-quickstart.md>`__
followed by the `Ansible Deployment
Guide <https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md>`__.

Docker Images
-------------

The `Docker images <https://docs.docker.com/userguide/dockerimages/>`__
are built by the Kolla project maintainers. A detailed process for
contributing to the images can be found in the `image building
guide <https://github.com/stackforge/kolla/blob/master/docs/image-building.md>`__.
Images reside in the Docker Hub `Kollaglue
repo <https://registry.hub.docker.com/repos/kollaglue/>`__.

The Kolla developers build images in the kollaglue namespace for the
following services for every tagged release and implement Ansible
deployment for them:

- Ceilometer
- Cinder
- Glance
- Haproxy
- Heat
- Horizon
- Keepalived
- Keystone
- Mariadb + galera
- Mongodb
- Neutron (linuxbridge or openvswitch)
- Nova
- Openvswitch
- Rabbitmq

::

    $ sudo docker search kollaglue

A list of the upstream built docker images will be shown.
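
For instance, to fetch one of these images ahead of a deployment (the
keystone image name here is illustrative; confirm the exact name in the
search output above), you could run:

::

    $ sudo docker pull kollaglue/centos-rdo-keystone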

Directories
===========

- ansible - Contains Ansible playbooks to deploy Kolla in Docker
  containers.
- compose - Contains the docker-compose files serving as a compose
  reference. Note compose support is removed from Kolla. These are for
  community members who want to use Kolla container content without
  Ansible.
- demos - Contains a few demos to use with Kolla.
- devenv - Contains an OpenStack-Heat based development environment.
- docker - Contains a normal Dockerfile based set of artifacts for
  building docker. This is planned for removal when docker\_templates
  is completed.
- docs - Contains documentation.
- etc - Contains a reference etc directory structure which requires
  configuration of a small number of configuration variables to achieve
  a working All-in-One (AIO) deployment.
- docker\_templates - Contains jinja2 templates for the docker build
  system.
- tools - Contains tools for interacting with Kolla.
- specs - Contains the Kolla community's key arguments about
  architectural shifts in the code base.
- tests - Contains functional testing tools.
- vagrant - Contains a vagrant VirtualBox-based development
  environment.

Getting Involved
================

Need a feature? Find a bug? Let us know! Contributions are much
appreciated and should follow the standard `Gerrit
workflow <https://wiki.openstack.org/wiki/Gerrit_Workflow>`__.

- We communicate using the #kolla irc channel.
- File bugs, blueprints, track releases, etc on
  `Launchpad <https://launchpad.net/kolla>`__.
- Attend weekly
  `meetings <https://wiki.openstack.org/wiki/Meetings/Kolla>`__.
- Contribute `code <https://github.com/stackforge/kolla>`__.

Contributors
============

Check out who's `contributing
code <http://stackalytics.com/?module=kolla-group&metric=commits>`__ and
`contributing
reviews <http://stackalytics.com/?module=kolla-group&metric=marks>`__.

@@ -1,29 +1,8 @@

Docker compose
==============

All compose support in Kolla has been completely removed as of
liberty-3.

The files in this directory are only for reference by the TripleO
project. As they stand today, they likely don't work. There is a
blueprint to port them to support the CONFIG_EXTERNAL config strategy.

@@ -1,13 +1,15 @@

A Kolla Demo using Heat
=======================

By default, the launch script will spawn 3 Nova instances on a Neutron
network created from the tools/init-runonce script. Edit the VM\_COUNT
parameter in the launch script if you would like to spawn a different
number of Nova instances. Edit IMAGE\_FLAVOR if you would like to
launch images using a flavor other than m1.tiny.
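
As a rough sketch (assuming the parameters are plain shell variables at
the top of the launch script; check your copy for the exact syntax),
the edits might look like:

::

    # Hypothetical values inside the launch script
    VM_COUNT=5             # spawn 5 instances instead of the default 3
    IMAGE_FLAVOR=m1.small  # use m1.small instead of m1.tiny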

Then run the script:

::

    $ ./launch

@@ -1,115 +1,197 @@

Reliable, Scalable Redis on Kubernetes
--------------------------------------

The following document describes the deployment of a reliable,
multi-node Redis on Kubernetes. It deploys a master with replicated
slaves, as well as replicated redis sentinels which are used for health
checking and failover.

Prerequisites
~~~~~~~~~~~~~

This example assumes that you have a Kubernetes cluster installed and
running, and that you have installed the ``kubectl`` command line tool
somewhere in your path. Please see the `getting
started <https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides>`__
guides for installation instructions for your platform.

A note for the impatient
~~~~~~~~~~~~~~~~~~~~~~~~

This is a somewhat long tutorial. If you want to jump straight to the
"do it now" commands, please see the `tl; dr <#tl-dr>`__ at the end.

Turning up an initial master/sentinel pod
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The first unit we create is a
`*Pod* <https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md>`__.
A Pod is one or more containers that *must* be scheduled onto the same
host. All containers in a pod share a network namespace, and may
optionally share mounted volumes.

We will use the shared network namespace to bootstrap our Redis
cluster. In particular, the very first sentinel needs to know how to
find the master (subsequent sentinels just ask the first sentinel).
Because all containers in a Pod share a network namespace, the sentinel
can simply look at ``$(hostname -i):6379``.
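
As an illustrative sketch (not part of the example's manifests, and
assuming ``redis-cli`` is available inside the sentinel container), this
is the kind of lookup that the shared namespace makes possible:

.. code:: sh

    # Query the co-located Redis master through the pod's shared
    # network namespace and confirm it reports the "master" role.
    redis-cli -h $(hostname -i) -p 6379 info replication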

Here is the config for the initial master and sentinel pod:
`redis-master.yaml <redis-master.yaml>`__

Create this master as follows:

.. code:: sh

    kubectl create -f examples/redis/v1beta3/redis-master.yaml

Turning up a sentinel service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In Kubernetes a *Service* describes a set of Pods that perform the same
task. For example, the set of nodes in a Cassandra cluster, or even the
single node we created above. An important use for a Service is to
create a load balancer which distributes traffic across members of the
set. But a *Service* can also be used as a standing query which makes a
dynamically changing set of Pods (or the single Pod we've already
created) available via the Kubernetes API.

In Redis, we will use a Kubernetes Service to provide a discoverable
endpoint for the Redis sentinels in the cluster. From the sentinels,
Redis clients can find the master, and then the slaves and other
relevant info for the cluster. This enables new members to join the
cluster when failures occur.

Here is the definition of the sentinel service:
`redis-sentinel-service.yaml <redis-sentinel-service.yaml>`__

Create this service:

.. code:: sh

    kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml
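
You can verify the service exists and has been assigned a cluster IP (a
generic check, not specific to this example):

.. code:: sh

    kubectl get services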

Turning up replicated redis servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

So far, what we have done is pretty manual, and not very fault-tolerant.
If the ``redis-master`` pod that we previously created is destroyed for
some reason (e.g. a machine dying) our Redis service goes away with it.

In Kubernetes a *Replication Controller* is responsible for replicating
sets of identical pods. Like a *Service* it has a selector query which
identifies the members of its set. Unlike a *Service* it also has a
desired number of replicas, and it will create or delete *Pods* to
ensure that the number of *Pods* matches up with its desired state.

Replication Controllers will "adopt" existing pods that match their
selector query, so let's create a Replication Controller with a single
replica to adopt our existing Redis server:
`redis-controller.yaml <redis-controller.yaml>`__

The bulk of this controller config is actually identical to the
redis-master pod definition above. It forms the template or "cookie
cutter" that defines what it means to be a member of this set.

Create this controller:

.. code:: sh

    kubectl create -f examples/redis/v1beta3/redis-controller.yaml

We'll do the same thing for the sentinel. Here is the controller config:
`redis-sentinel-controller.yaml <redis-sentinel-controller.yaml>`__

We create it as follows:

.. code:: sh

    kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml

Resize our replicated pods
~~~~~~~~~~~~~~~~~~~~~~~~~~

Initially creating those pods didn't actually do anything: since we only
asked for one sentinel and one redis server, and they already existed,
nothing changed. Now we will add more replicas:

.. code:: sh

    kubectl resize rc redis --replicas=3

.. code:: sh

    kubectl resize rc redis-sentinel --replicas=3

This will create two additional replicas of the redis server and two
additional replicas of the redis sentinel.

Unlike our original redis-master pod, these pods exist independently,
and they use the ``redis-sentinel-service`` that we defined above to
discover and join the cluster.

Delete our manual pod
~~~~~~~~~~~~~~~~~~~~~

The final step in the cluster turn-up is to delete the original
redis-master pod that we created manually. While it was useful for
bootstrapping discovery in the cluster, we really don't want the
lifespan of our sentinel to be tied to the lifespan of one of our redis
servers, and now that we have a successful, replicated redis sentinel
service up and running, the binding is unnecessary.

Delete the master as follows:

.. code:: sh

    kubectl delete pods redis-master

Now let's take a close look at what happens after this pod is deleted.
There are three things that happen:

1. The redis replication controller notices that its desired state is 3
   replicas, but there are currently only 2 replicas, and so it creates
   a new redis server to bring the replica count back up to 3.
2. The redis-sentinel replication controller likewise notices the
   missing sentinel, and also creates a new sentinel.
3. The redis sentinels themselves realize that the master has
   disappeared from the cluster, and begin the election procedure for
   selecting a new master. They perform this election and selection, and
   choose one of the existing redis server replicas to be the new master.
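
One way to observe this recovery (a hedged suggestion; the ``-w`` flag
watches for changes) is to keep a pod listing open while the controllers
converge back to three replicas each:

.. code:: sh

    kubectl get pods -w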

Conclusion
~~~~~~~~~~

At this point we now have a reliable, scalable Redis installation. By
resizing the replication controller for redis servers, we can increase
or decrease the number of read-slaves in our cluster. Likewise, if
failures occur, the redis-sentinels will perform master election and
select a new master.

tl; dr
~~~~~~

For those of you who are impatient, here is the summary of commands we
ran in this tutorial:

.. code:: sh

    # Create a bootstrap master
    kubectl create -f examples/redis/v1beta3/redis-master.yaml

    # Create a service to track the sentinels
    kubectl create -f examples/redis/v1beta3/redis-sentinel-service.yaml

    # Create a replication controller for redis servers
    kubectl create -f examples/redis/v1beta3/redis-controller.yaml

    # Create a replication controller for redis sentinels
    kubectl create -f examples/redis/v1beta3/redis-sentinel-controller.yaml

    # Resize both replication controllers
    kubectl resize rc redis --replicas=3
    kubectl resize rc redis-sentinel --replicas=3

    # Delete the original master pod
    kubectl delete pods redis-master

@@ -1,50 +1,55 @@

A Kolla Cluster with Heat
=========================

These `Heat <https://wiki.openstack.org/wiki/Heat>`__ templates will
deploy an *N*-node `Kolla <https://launchpad.net/kolla>`__ cluster,
where *N* is the value of the ``number_of_nodes`` parameter you specify
when creating the stack.

Kolla has recently undergone a considerable design change. The details
of the design change are addressed in this
`spec <https://review.openstack.org/#/c/153798/>`__. As part of the
design change, containers share pid and networking namespaces with the
Docker host. Therefore, containers no longer connect to a docker0 bridge
and have separate networking from the host. As a result, Kolla
networking has a configuration similar to:

.. figure:: https://raw.githubusercontent.com/stackforge/kolla/master/devenv/kollanet.png
   :alt: Kolla networking

Sharing pid and networking namespaces is detailed in the `super
privileged
containers <http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/>`__
concept.

The Kolla cluster is based on Fedora 21 and requires a minimum Docker
`binary <https://docs.docker.com/installation/binaries/>`__ version of
1.7.0.

These templates are designed to work with the Icehouse or Juno versions
of Heat. If using Icehouse Heat, this
`patch <https://review.openstack.org/#/c/121139/>`__ is required to
correct a bug with template validation when using the "Fn::Join"
function.

Create the Glance Image
=======================

After cloning the project, run the get-image.sh script from the
project's devenv directory:

::

    $ ./get-image.sh

The script will create a Fedora 21 image with the required
modifications.

Add the image to your Glance image store:

::

    $ glance image-create --name "fedora-21-x86_64" \
        --file /var/lib/libvirt/images/fedora-21-x86_64 \
        --disk-format qcow2 --container-format bare \

@@ -57,6 +62,8 @@ Copy local.yaml.example to local.yaml and edit the contents to match

your deployment environment. Here is an example of a customized
local.yaml:

::

    parameters:
      ssh_key_name: admin-key
      external_network_id: 028d70dd-67b8-4901-8bdd-0c62b06cce2d

@@ -64,106 +71,125 @@ local.yaml:

      container_external_subnet_id: 575770dd-6828-1101-34dd-0c62b06fjf8s
      dns_nameserver: 192.168.200.1

The external\_network\_id is used by Heat to automatically assign
floating IPs to your Kolla nodes. You can then access your Kolla nodes
directly using the floating IP. The network ID is derived from the
``neutron net-list`` command.
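
For example (the output below is an abridged, illustrative sketch; real
output also includes a subnets column), the ID to copy into local.yaml
appears in the first column:

::

    $ neutron net-list
    +--------------------------------------+----------+
    | id                                   | name     |
    +--------------------------------------+----------+
    | 028d70dd-67b8-4901-8bdd-0c62b06cce2d | external |
    +--------------------------------------+----------+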

The container\_external\_network\_id is used by the nova-network
container within the Kolla node as the FLAT\_INTERFACE. The
FLAT\_INTERFACE tells Nova what device to use (i.e. eth1) to pass
network traffic between Nova instances across Kolla nodes. This network
should be separate from the external\_network\_id above and is also
derived from the ``neutron net-list`` command.

The container\_external\_subnet\_id is the subnet equivalent to
container\_external\_network\_id.

Review the parameters section of kollacluster.yaml for a full list of
configuration options. **Note:** You must provide values for:

- ``ssh_key_name``
- ``external_network_id``
- ``container_external_network_id``
- ``container_external_subnet_id``

And then create the stack, referencing that environment file:

::

    $ heat stack-create -f kollacluster.yaml -e local.yaml kolla-cluster

Access the Kolla Nodes
======================

You can get the ip address of the Kolla nodes using the
``heat output-show`` command:

::

    $ heat output-show kolla-cluster kolla_node_external_ip
    "192.168.200.86"

You can ssh into that server as the ``fedora`` user:

::

    $ ssh fedora@192.168.200.86

Once logged into your Kolla node, set up your environment. The basic
starting environment will be created using ``docker-compose``. This
environment will start up the openstack services listed in the compose
directory.

To start, set up your environment variables:

::

    $ cd kolla
    $ ./tools/genenv

The ``genenv`` script will create a compose/openstack.env file and an
openrc file in your current directory. The openstack.env file contains
all of your initialized environment variables, which you can edit for a
different setup.

Next, run the start script:

::

    $ ./tools/kolla-compose start

The ``start`` script is responsible for starting the containers using
``docker-compose -f <osp-service-container> up -d``.

If you want to start a container set by hand, use this template:

::

    $ docker-compose -f glance-api-registry.yml up -d

Debugging
=========

All Docker commands should be run from the directory of the Docker
binary; by default this is ``/``.

A few commands for debugging the system:

::

    $ sudo ./docker images

Lists all images that have been pulled from the upstream kollaglue
repository thus far. This can be run on the node during the ``./start``
operation to check on the download progress.

::

    $ sudo ./docker ps -a

This will show all processes that docker has started. Removing the
``-a`` will show only active processes. This can be run on the node
during the ``./start`` operation to check that the containers are
orchestrated.

::

    $ sudo ./docker logs <containerid>

Shows the log output for the given container.

::

    $ curl http://<NODE_IP>:3306

You can use curl to test connectivity to a container. This example
demonstrates the Mariadb service is running on the node. Output should
appear as follows:

::

    $ curl http://10.0.0.4:3306
    Trying 10.0.0.4...
    Connected to 10.0.0.4.
    Escape character is '^]'.

@@ -1,95 +1,111 @@

Kolla with Ansible!
===================

Kolla supports deploying Openstack using
`Ansible <https://docs.ansible.com>`__.

Getting Started
---------------

To run the Ansible playbooks, an inventory file which tracks all of the
available nodes in the environment must be specified. With this
inventory file Ansible will log into each node via ssh (configurable)
and run tasks. Ansible does not require password-less logins via ssh;
however, it is highly recommended to set up ssh keys.

Two sample inventory files are provided, *all-in-one* and *multinode*.
The "all-in-one" inventory defaults to use the Ansible "local"
connection type, which removes the need to set up ssh keys in order to
get started quickly.
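
As a rough sketch of the inventory format (the host and group names
here are hypothetical, not the shipped file contents), a multinode-style
inventory groups hosts by role:

::

    # Hypothetical multinode-style inventory
    [control]
    control01 ansible_ssh_user=root

    [compute]
    compute01 ansible_ssh_user=root
    compute02 ansible_ssh_user=root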

More information on the Ansible inventory file can be found in the
Ansible `inventory
introduction <https://docs.ansible.com/intro_inventory.html>`__.

Prerequisites
-------------

On the deployment host you must have Ansible>=1.8.4 installed. That is
the only requirement for deploying. To build the images locally you must
also have the Python library docker-py>=1.2.0 installed.

On the target nodes you must have docker>=1.6.0 and docker-py>=1.2.0
installed.
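
One way to satisfy the deployment host requirements (assuming pip is
available; your distribution may package these instead) is:

::

    sudo pip install "ansible>=1.8.4" "docker-py>=1.2.0"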

Deploying
---------

Add the etc/kolla directory to /etc/kolla on the deployment host. Inside
of this directory are two files and a minimum number of parameters which
are listed below.

All variables for the environment can be specified in the files:
"/etc/kolla/globals.yml" and "/etc/kolla/passwords.yml".

The kolla\_\*\_address variables can both be the same. Please specify
an unused IP address in your network to act as a VIP for
kolla\_internal\_address. The VIP will be used with keepalived and
added to your "api\_interface" as specified in the globals.yml:

::

    kolla_external_address: "openstack.example.com"
    kolla_internal_address: "10.10.10.254"

The "network\_interface" variable is the interface that we bind all our
services to. For example, when starting up Mariadb it will bind to the
IP on the interface list in the "network\_interface" variable:

::

    network_interface: "eth0"

The "neutron\_external\_interface" variable is the interface that will
be used for your external bridge in Neutron. Without this bridge your
instance traffic will be unable to access the rest of the Internet. In
the case of a single interface on a machine, you may use a veth pair
where one end of the veth pair is listed here and the other end is in a
bridge on your system:

::

    neutron_external_interface: "eth1"

The docker\_pull\_policy specifies whether Docker should always pull
images from the repository it is configured for, or only in the case
where the image isn't present locally. If you are building your own
images locally without pushing them to the Docker Registry, or a local
registry, you must set this value to "missing"; otherwise, when you run
the playbooks, docker will attempt to fetch the latest image upstream:

::

    docker_pull_policy: "always"

For All-In-One deploys, the following commands can be run. These will
set up all of the containers on the localhost. These commands will be
wrapped in the kolla-script in the future.

::

    cd ./kolla/ansible
    ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml site.yml

To run the playbooks for only a particular service, Ansible tags can be
used. Multiple tags may be specified, and order is still determined by
the playbooks.

::

    cd ./kolla/ansible
    ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml site.yml --tags rabbitmq
    ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml site.yml --tags rabbitmq,mariadb

Finally, you can view ./kolla/tools/openrc-example for an example of an
openrc you can use with your environment. If you wish you may also run
the following command to initiate your environment with a glance image
and neutron networks:

::

    cd ./kolla/tools
    ./init-runonce

@@ -97,6 +113,5 @@

Further Reading
---------------

Ansible playbook documentation can be found in the Ansible
`playbook documentation <http://docs.ansible.com/playbooks.html>`__.

@@ -1,76 +1,92 @@

Developer Environment
=====================

If you are developing Kolla on an existing OpenStack cloud that supports
Heat, then follow the Heat template
`README <https://github.com/stackforge/kolla/blob/master/devenv/README.md>`__.
Another option available on systems with VirtualBox is the use of
`Vagrant <https://github.com/stackforge/kolla/blob/master/docs/vagrant.md>`__.

The best experience is available with bare metal deployment by following
the instructions below to manually create your Kolla deployment.

Installing Dependencies
-----------------------

NB: Kolla will not run on Fedora 22 or later. Fedora 22 compresses
kernel modules with the .xz compressed format. The guestfs system cannot
read these images because a dependent package, supermin, in CentOS needs
to be updated to add .xz compressed format support.

To install Kolla dependencies use:

::

    git clone http://github.com/stackforge/kolla
    cd kolla
    sudo pip install -r requirements.txt

In order to run Kolla, it is mandatory to run a version of ``docker``
that is 1.7.0 or later.

For most systems you can install the latest stable version of Docker
with the following command:

::

    curl -sSL https://get.docker.io | bash

For Ubuntu based systems, do not use AUFS when starting the Docker
daemon unless you are running the Utopic (3.19) kernel. AUFS requires
CONFIG\_AUFS\_XATTR=y set when building the kernel. On Ubuntu, versions
prior to 3.19 did not set that flag. If you are unable to upgrade your
kernel, you should use a different storage backend such as btrfs.
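
As a sketch for Docker 1.7-era daemons (the exact invocation depends on
your init system and its configuration file), the storage backend can be
selected when the daemon starts:

::

    # Start the Docker daemon with btrfs instead of AUFS
    docker -d --storage-driver=btrfs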
|
||||||
|
|
||||||
Next, install the OpenStack python clients if they are not installed:
|
Next, install the OpenStack python clients if they are not installed:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
sudo pip install -U python-openstackclient
|
sudo pip install -U python-openstackclient
|
||||||
|
|
||||||
Finally stop libvirt on the host machine. Only one copy of libvirt may be
|
Finally stop libvirt on the host machine. Only one copy of libvirt may
|
||||||
running at a time.
|
be running at a time.
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
service libvirtd stop
|
service libvirtd stop
|
||||||
|
|
||||||
The basic starting environment will be created using `ansible`.
|
The basic starting environment will be created using ``ansible``. This
|
||||||
This environment will start up the OpenStack services listed in the
|
environment will start up the OpenStack services listed in the inventory
|
||||||
inventory file.
|
file.
|
||||||
|
|
||||||
## Starting Kolla
|
Starting Kolla
|
||||||
|
--------------
|
||||||
|
|
||||||
Configure Ansible by reading the Kolla Ansible configuration documentation
|
Configure Ansible by reading the Kolla
|
||||||
[DEPLOY][].
|
`Ansible configuration <https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md>`__ documentation.
|
||||||
|
|
||||||
[DEPLOY]: https://github.com/stackforge/kolla/blob/master/docs/ansible-deployment.md
|
|
||||||
|
|
||||||
Next, run the start command:
|
Next, run the start command:
|
||||||
|
|
||||||
|
::
|
||||||
|
|
||||||
$ sudo ./tools/kolla-ansible deploy
|
$ sudo ./tools/kolla-ansible deploy
|
||||||
|
|
||||||

A bare metal system takes three minutes to deploy AIO; a virtual
machine takes five minutes. These are estimates; your hardware may be
faster or slower but should come close to these results.

Debugging Kolla
---------------

You can determine a container's status by executing:

::

    $ sudo docker ps -a

If any of the containers exited, you can check the logs by executing:

::

    $ sudo docker logs <container-name>
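
To narrow the listing down to containers that have exited, ``docker
ps`` accepts a status filter; a small sketch:

::

    $ sudo docker ps -a --filter "status=exited"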

Image building
==============

The ``tools/build-docker-image`` script in this repository is
responsible for building docker images. It is symlinked as ``./build``
inside each Docker image directory.

When creating new image directories, you can run the
``tools/update-build-links`` script to install the ``build`` symlink
(this script will install the symlink anywhere it finds a file named
``Dockerfile``).
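
A typical sequence when adding a new image might look like the
following; the directory name is purely illustrative:

::

    $ mkdir docker/example-service
    $ $EDITOR docker/example-service/Dockerfile
    $ tools/update-build-links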

Workflow
--------

In general, you will build images like this:

::

    $ cd docker/keystone
    $ ./build

By default, the above command would build
``kollaglue/centos-rdo-keystone:CID``, where ``CID`` is the current
short commit ID. That is, given:

::

    $ git rev-parse HEAD
    76a16029006a2f5d3b79f1198d81acb6653110e9

The above command would generate
``kollaglue/centos-rdo-keystone:76a1602``. This tagging is meant to
prevent developers from stepping on each other or on release images
during the development process.

To push the image after building, add ``--push``:

::

    $ ./build --push

To use these images, you must specify the tag in your ``docker run``
commands:

::

    $ docker run kollaglue/centos-rdo-keystone:76a1602

Building releases
-----------------

To build into the ``latest`` tag, add ``--release``:

::

    $ ./build --release

Or to build and push:

::

    $ ./build --push --release

Build all images at once
------------------------

The ``build-all-docker-images`` script in the tools directory is a
wrapper around ``build-docker-image`` that builds all images, as the
name suggests, in the correct order. It responds to the same options as
``build-docker-image``, with additional ``--from`` and ``--to`` options
that allow building only the images that have changed between the
specified git revisions.

For example, to build all images contained in the docker directory and
push a new release:

::

    $ tools/build-all-docker-images --release --push

To build only the images modified in test-branch, along with their
children:

::

    $ tools/build-all-docker-images --from master --to test-branch

Configuration
-------------

The ``build-docker-image`` script will look for a file named
``.buildconf`` in the image directory and in the top level of the
repository. You can use this to set defaults, such as:

::

    NAMESPACE=larsks
    PREFIX=fedora-rdo-

These settings would cause images to be tagged into the ``larsks/``
namespace and to use Fedora as the base image instead of the default
CentOS.
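
For instance, with the values above, a release build of the keystone
image would presumably end up tagged like this (name shown for
illustration only):

::

    larsks/fedora-rdo-keystone:latest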

Vagrant up!
===========

This guide describes how to use `Vagrant <http://vagrantup.com>`__ to
assist in developing for Kolla.

Vagrant is a tool that assists in the scripted creation of virtual
machines; it will take care of setting up a CentOS-based cluster of
virtual machines, each with appropriate hardware such as the amount of
memory and the number of network interfaces.

Getting Started
---------------

The Vagrant setup will build a cluster with the following nodes:

- 3 control nodes
- 1 compute node
- 1 operator node

Kolla runs from the operator node to deploy OpenStack on the other
nodes.

All nodes are connected with each other on the secondary NIC; the
primary NIC is behind a NAT interface for connecting to the internet. A
third NIC is connected without IP configuration to a public bridge
interface. This may be used by Neutron/Nova to connect to instances.

Start by downloading and installing the Vagrant package for your distro
of choice. Various downloads can be found at the `Vagrant
downloads <https://www.vagrantup.com/downloads.html>`__ page.
Afterwards, install the hostmanager plugin so all hosts are recorded in
``/etc/hosts`` (inside each VM):

::

    vagrant plugin install vagrant-hostmanager

Vagrant supports a wide range of virtualization technologies, of which
we will use VirtualBox for now.

Find some place in your home directory and check out the Kolla repo:

::

    git clone https://github.com/stackforge/kolla.git ~/dev/kolla

You can now tweak the Vagrantfile or start a CentOS7-based cluster
right away:

::

    cd ~/dev/kolla/vagrant && vagrant up

The command ``vagrant up`` will build your cluster, and ``vagrant
status`` will give you a quick overview once it is done.
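
The status output lists one line per node, along these lines (the exact
node names depend on the Vagrantfile):

::

    $ vagrant status
    Current machine states:

    operator                  running (virtualbox)
    control01                 running (virtualbox)
    control02                 running (virtualbox)
    control03                 running (virtualbox)
    compute01                 running (virtualbox)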

Vagrant Up
----------

Once Vagrant has completed deploying all nodes, we can focus on
launching Kolla. First, connect to the *operator* node:

::

    vagrant ssh operator

Once connected, you can run a simple Ansible-style ping to verify that
the cluster is operable:

::

    ansible -i kolla/ansible/inventory/multinode all -m ping -e ansible_ssh_user=root
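
Each node should answer with a pong; the output will look roughly like
this (the format varies between Ansible releases):

::

    operator | success >> {
        "changed": false,
        "ping": "pong"
    }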

Congratulations, your cluster is usable and you can start deploying
OpenStack using Ansible!

To speed things up, there is a local registry running on the operator
node. All nodes are configured so they can pull from this insecure
registry and will use it as a mirror. Ansible may use this registry to
pull images from.

All nodes have a local folder shared between the group and the
hypervisor, and a folder shared between *all* nodes and the hypervisor.
This mapping is lost after reboots, so make sure you use the command
``vagrant reload <node>`` when reboots are required. This shared folder
gives you a way to supply a different docker binary to the cluster. The
shared folder is also used to store the docker-registry files, so they
are safe from destructive operations like ``vagrant destroy``.
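
For example, to reboot the compute node without losing the shared
folder mapping (the node name is whatever your Vagrantfile defines):

::

    vagrant reload compute01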

Further Reading
---------------

All Vagrant documentation can be found at
`docs.vagrantup.com <http://docs.vagrantup.com>`__.