Upgrade the rst convention of the User Guide

We upgrade the rst convention of the User Guide by following the
Documentation Contributor Guide [1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: Ieceb3942073512fb10670a48d258c4055909496e
Partially-Implements: blueprint optimize-the-documentation-format
This commit is contained in:
chenxing 2018-01-26 19:27:16 +08:00
parent 5c66b522a4
commit 3a77dba899
7 changed files with 470 additions and 290 deletions

View File

@@ -1,8 +1,9 @@
===========
User Guides
===========

.. toctree::
   :maxdepth: 2

   quickstart
   multinode

View File

@@ -9,17 +9,17 @@ with Kolla. A basic multiple regions deployment consists of separate
OpenStack installation in two or more regions (RegionOne, RegionTwo, ...)
with a shared Keystone and Horizon. The rest of this documentation assumes
Keystone and Horizon are deployed in RegionOne, and other regions have
access to the admin endpoint (for example, ``kolla_internal_fqdn``) of
RegionOne.

It also assumes that the operator knows the name of all OpenStack regions
in advance, and considers as many Kolla deployments as there are regions.

There is a specification of multiple regions deployment at
`Multi Region Support for Heat
<https://wiki.openstack.org/wiki/Heat/Blueprints/Multi_Region_Support_for_Heat>`__.

Deployment of the first region with Keystone and Horizon
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deployment of the first region results in a typical Kolla deployment,
whether it is an *all-in-one* or *multinode* deployment (see
@@ -27,27 +27,33 @@ whenever, it is an *all-in-one* or *multinode* deployment (see
``/etc/kolla/globals.yml`` configuration file. First of all, ensure that
Keystone and Horizon are enabled:

.. code-block:: none

   enable_keystone: "yes"
   enable_horizon: "yes"

.. end

Then, change the value of ``multiple_regions_names`` to add names of other
regions. In this example, we consider two regions. The current one,
formerly known as RegionOne, which is hidden behind the
``openstack_region_name`` variable, and RegionTwo:

.. code-block:: none

   openstack_region_name: "RegionOne"
   multiple_regions_names:
     - "{{ openstack_region_name }}"
     - "RegionTwo"

.. end

.. note::

   Kolla uses these variables to create the necessary endpoints in
   Keystone so that services of other regions can access it. Kolla
   also updates the Horizon ``local_settings`` to support multiple
   regions.

Finally, note the value of ``kolla_internal_fqdn`` and run
``kolla-ansible``. The ``kolla_internal_fqdn`` value will be used by other
@@ -55,7 +61,7 @@ regions to contact Keystone. For the sake of this example, we assume the
value of ``kolla_internal_fqdn`` is ``10.10.10.254``.
Deployment of other regions
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deployment of other regions follows a usual Kolla deployment, except that
OpenStack services connect to RegionOne's Keystone. This implies to
@@ -63,7 +69,7 @@ update the ``/etc/kolla/globals.yml`` configuration file to tell Kolla how
to reach Keystone. In the following, ``kolla_internal_fqdn_r1`` refers to
the value of ``kolla_internal_fqdn`` in RegionOne:

.. code-block:: none

   kolla_internal_fqdn_r1: 10.10.10.254
@@ -77,32 +83,39 @@ the value of ``kolla_internal_fqdn`` in RegionOne:
   project_name: "admin"
   domain_name: "default"

.. end

Configuration files of cinder, nova, neutron, glance and so on have to be
updated to contact RegionOne's Keystone. Fortunately, Kolla offers to
override all configuration files at the same time thanks to the
``node_custom_config`` variable (see :ref:`service-config`). This
implies creating a ``global.conf`` file with the following content:

.. code-block:: ini

   [keystone_authtoken]
   auth_uri = {{ keystone_internal_url }}
   auth_url = {{ keystone_admin_url }}

.. end
The Placement API section inside the nova configuration file also has
to be updated to contact RegionOne's Keystone. So create, in the same
directory, a ``nova.conf`` file with the following content:

.. code-block:: ini

   [placement]
   auth_url = {{ keystone_admin_url }}

.. end
The Heat section inside the configuration file also
has to be updated to contact RegionOne's Keystone. So create, in the same
directory, a ``heat.conf`` file with the following content:

.. code-block:: ini

   [trustee]
   auth_uri = {{ keystone_internal_url }}
   auth_url = {{ keystone_internal_url }}
@@ -113,33 +126,44 @@ directory, a ``heat.conf`` file with below content:

   [clients_keystone]
   auth_uri = {{ keystone_internal_url }}

.. end
The Ceilometer section inside the configuration file also
has to be updated to contact RegionOne's Keystone. So create, in the same
directory, a ``ceilometer.conf`` file with the following content:

.. code-block:: ini

   [service_credentials]
   auth_url = {{ keystone_internal_url }}

.. end
And link the directory that contains these files in
``/etc/kolla/globals.yml``:

.. code-block:: none

   node_custom_config: path/to/the/directory/of/global&nova_conf/

.. end
Also, change the name of the current region. For instance, RegionTwo:

.. code-block:: none

   openstack_region_name: "RegionTwo"

.. end
Finally, disable the deployment of Keystone and Horizon, which are
unnecessary in this region, and run ``kolla-ansible``:

.. code-block:: none

   enable_keystone: "no"
   enable_horizon: "no"

.. end

The configuration is the same for any other region.
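Putting the pieces together, a RegionTwo ``/etc/kolla/globals.yml`` could look like the sketch below. This is only an assembly of the settings shown above; the FQDN and the custom-config path are the example values from this guide, not defaults.

```yaml
# Sketch of a RegionTwo globals.yml assembled from the settings above.
# kolla_internal_fqdn_r1 and node_custom_config are example values.
kolla_internal_fqdn_r1: 10.10.10.254
openstack_region_name: "RegionTwo"
node_custom_config: path/to/the/directory/of/global&nova_conf/
enable_keystone: "no"
enable_horizon: "no"
```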

View File

@@ -24,17 +24,21 @@ Edit the ``/etc/kolla/globals.yml`` and add the following where 192.168.1.100
is the IP address of the machine and 5000 is the port where the registry is
currently running:

.. code-block:: none

   docker_registry: "192.168.1.100:5000"

.. end
The Kolla community recommends using registry 2.3 or later. To deploy registry
with version 2.3 or later, do the following:

.. code-block:: console

   cd kolla
   tools/start-registry

.. end
The Docker registry can be configured as a pull through cache to proxy the
official Kolla images hosted in Docker Hub. In order to configure the local
@@ -42,75 +46,96 @@ registry as a pull through cache, in the host machine set the environment
variable ``REGISTRY_PROXY_REMOTEURL`` to the URL for the repository on
Docker Hub.

.. code-block:: console

   export REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io

.. end
.. note::

   Pushing to a registry configured as a pull-through cache is unsupported.
   For more information, refer to the `Docker Documentation
   <https://docs.docker.com/registry/configuration/>`__.
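As an alternative to the environment variable, the same proxy setting can live in the registry's own ``config.yml``. The fragment below is a sketch based on the Docker registry configuration reference; only the ``proxy`` stanza is shown, everything else keeps its defaults.

```yaml
# Fragment of the registry's config.yml enabling pull-through caching.
# Only the proxy section is shown; other settings keep their defaults.
proxy:
  remoteurl: https://registry-1.docker.io
```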
.. _configure_docker_all_nodes:

Configure Docker on all nodes
=============================
.. note::

   As the subtitle for this section implies, these steps should be
   applied to all nodes, not just the deployment node.

After starting the registry, it is necessary to instruct Docker that
it will be communicating with an insecure registry.
For example, to enable insecure registry communication on CentOS,
modify the ``/etc/sysconfig/docker`` file to contain the following where
``192.168.1.100`` is the IP address of the machine where the registry
is currently running:

.. path /etc/sysconfig/docker
.. code-block:: ini

   INSECURE_REGISTRY="--insecure-registry 192.168.1.100:5000"

.. end
For Ubuntu, check whether it is using upstart or systemd:

.. code-block:: console

   # stat /proc/1/exe
   File: '/proc/1/exe' -> '/lib/systemd/systemd'
Edit ``/etc/default/docker`` and add the following configuration:

.. path /etc/default/docker
.. code-block:: ini

   DOCKER_OPTS="--insecure-registry 192.168.1.100:5000"

.. end
If Ubuntu is using systemd, additional settings need to be configured.
Copy Docker's systemd unit file to the ``/etc/systemd/system/`` directory:

.. code-block:: console

   cp /lib/systemd/system/docker.service /etc/systemd/system/docker.service

.. end

Next, modify ``/etc/systemd/system/docker.service``: add an ``EnvironmentFile``
variable and add ``$DOCKER_OPTS`` to the end of ExecStart in the ``[Service]``
section.

For CentOS:

.. path /etc/systemd/system/docker.service
.. code-block:: ini

   [Service]
   MountFlags=shared
   EnvironmentFile=/etc/sysconfig/docker
   ExecStart=
   ExecStart=/usr/bin/docker daemon $INSECURE_REGISTRY

.. end

For Ubuntu:

.. path /etc/systemd/system/docker.service
.. code-block:: ini

   [Service]
   MountFlags=shared
   EnvironmentFile=-/etc/default/docker
   ExecStart=
   ExecStart=/usr/bin/docker daemon -H fd:// $DOCKER_OPTS

.. end
.. note::
@@ -120,14 +145,22 @@ section:
Restart Docker by executing the following commands:

For CentOS or Ubuntu with systemd:

.. code-block:: console

   systemctl daemon-reload
   systemctl restart docker

.. end

For Ubuntu with upstart or sysvinit:

.. code-block:: console

   service docker restart

.. end
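The init-system check and the matching restart commands above can be folded into one hedged step. This is a sketch, assuming ``/proc/1/comm`` is readable (true on typical Linux hosts); the restart commands themselves are the ones the guide already gives.

```shell
# Pick the Docker restart procedure based on the running init system.
# The command strings come from the guide above; only the selection is new.
init_name=$(cat /proc/1/comm 2>/dev/null || echo unknown)
if [ "$init_name" = "systemd" ]; then
    restart_cmd="systemctl daemon-reload && systemctl restart docker"
else
    restart_cmd="service docker restart"
fi
echo "$restart_cmd"
```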
.. _edit-inventory:
@@ -152,7 +185,7 @@ controls how ansible interacts with remote hosts.
information about SSH authentication please refer to the
`Ansible documentation <http://docs.ansible.com/ansible/intro_inventory.html>`__.

.. code-block:: none

   # These initial groups are the only groups required to be modified. The
   # additional groups are for more control of the environment.
@@ -161,6 +194,8 @@ controls how ansible interacts with remote hosts.
   control01 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>
   192.168.122.24 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>

.. end
.. note::

   Additional inventory parameters might be required according to your
@@ -173,7 +208,7 @@ For more advanced roles, the operator can edit which services will be
associated with each group. Keep in mind that some services have to be
grouped together and changing these around can break your deployment:

.. code-block:: none

   [kibana:children]
   control
@@ -184,6 +219,8 @@ grouped together and changing these around can break your deployment:

   [haproxy:children]
   network

.. end
Deploying Kolla
===============
@@ -203,9 +240,11 @@ Deploying Kolla
First, check that the deployment targets are in a state where Kolla may deploy
to them:

.. code-block:: console

   kolla-ansible prechecks -i <path/to/multinode/inventory/file>

.. end

.. note::
@@ -215,8 +254,8 @@ to them:
Run the deployment:

.. code-block:: console

   kolla-ansible deploy -i <path/to/multinode/inventory/file>

.. end

View File

@@ -5,7 +5,8 @@ Operating Kolla
===============

Upgrading
~~~~~~~~~

Kolla's strategy for upgrades is to never make a mess and to follow consistent
patterns during deployment such that upgrades from one environment to the next
are simple to automate.
@@ -28,48 +29,68 @@ choosing.
If the alpha identifier is not used, Kolla will deploy or upgrade using the
version number information contained in the release. To customize the
version number uncomment ``openstack_release`` in ``globals.yml`` and specify
the version number desired.

For example, to deploy a custom built ``Liberty`` version built with the
:command:`kolla-build --tag 1.0.0.0` operation, configure the ``globals.yml``
file:

.. code-block:: none

   openstack_release: 1.0.0.0

.. end

Then run the following command to deploy:

.. code-block:: console

   kolla-ansible deploy

.. end

If using Liberty and a custom alpha number of 0, and upgrading to 1,
configure the ``globals.yml`` file:

.. code-block:: none

   openstack_release: 1.0.0.1

.. end

Then run the following command to upgrade:

.. code-block:: console

   kolla-ansible upgrade

.. end

.. note::

   Varying degrees of success have been reported with upgrading
   the libvirt container with a running virtual machine in it. The libvirt
   upgrade still needs a bit more validation, but the Kolla community feels
   confident this mechanism can be used with the correct Docker graph driver.

.. note::

   The Kolla community recommends the btrfs or aufs graph drivers for
   storing data, as sometimes the LVM graph driver loses track of its reference
   counting and results in an unremovable container.

.. note::

   Because of system technical limitations, upgrade of a libvirt
   container when using software emulation (``virt_type = qemu`` in the
   ``nova.conf`` file) does not work at all. This is acceptable because
   KVM is the recommended virtualization driver to use with Nova.
Tips and Tricks
~~~~~~~~~~~~~~~

Kolla ships with several utilities intended to facilitate ease of operation.

``tools/cleanup-containers`` is used to remove deployed containers from the
@@ -113,21 +134,26 @@ Environment.
tests.

.. note::

   In order to do smoke tests, ``kolla_enable_sanity_checks=yes`` is required.

``kolla-mergepwd --old OLD_PASSWDS --new NEW_PASSWDS --final FINAL_PASSWDS``
is used to merge passwords from an old installation with newly generated
passwords during an upgrade of a Kolla release. The workflow is:

#. Save old passwords from ``/etc/kolla/passwords.yml`` into
   ``passwords.yml.old``.
#. Generate new passwords via ``kolla-genpwd`` as ``passwords.yml.new``.
#. Merge ``passwords.yml.old`` and ``passwords.yml.new`` into
   ``/etc/kolla/passwords.yml``.

For example:

.. code-block:: console

   mv /etc/kolla/passwords.yml passwords.yml.old
   cp kolla-ansible/etc/kolla/passwords.yml passwords.yml.new
   kolla-genpwd -p passwords.yml.new
   kolla-mergepwd --old passwords.yml.old --new passwords.yml.new --final /etc/kolla/passwords.yml

.. end

View File

@@ -24,7 +24,7 @@ The host machine must satisfy the following minimum requirements:
.. note::

   Root access to the deployment host machine is required.

Install dependencies
~~~~~~~~~~~~~~~~~~~~
@@ -34,34 +34,42 @@ before proceeding.
For CentOS, run:

.. code-block:: console

   yum install epel-release
   yum install python-pip
   pip install -U pip

.. end

For Ubuntu, run:

.. code-block:: console

   apt-get update
   apt-get install python-pip
   pip install -U pip

.. end

To build the code with the ``pip`` package manager, install the following
dependencies:

For CentOS, run:

.. code-block:: console

   yum install python-devel libffi-devel gcc openssl-devel libselinux-python

.. end

For Ubuntu, run:

.. code-block:: console

   apt-get install python-dev libffi-dev gcc libssl-dev python-selinux

.. end
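The per-distro dependency commands above can be sketched as one helper, assuming (as this guide does) that the presence of ``yum`` versus ``apt-get`` distinguishes CentOS from Ubuntu. The package lists are the ones from the guide; the detection logic is the only new part.

```shell
# Compose the dependency-install command for the detected package manager.
# Package lists are taken verbatim from the guide; detection is the sketch.
if command -v yum >/dev/null 2>&1; then
    pkg_cmd="yum install python-devel libffi-devel gcc openssl-devel libselinux-python"
elif command -v apt-get >/dev/null 2>&1; then
    pkg_cmd="apt-get install python-dev libffi-dev gcc libssl-dev python-selinux"
else
    pkg_cmd=""  # unsupported distro; install the dependencies manually
fi
echo "$pkg_cmd"
```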
Kolla deploys OpenStack using `Ansible <http://www.ansible.com>`__. Install
Ansible from distribution packaging if the distro packaging has recommended
@@ -76,17 +84,21 @@ repository to install via yum -- to do so, take a look at Fedora's EPEL `docs
On CentOS or RHEL systems, this can be done using:

.. code-block:: console

   yum install ansible

.. end

Many DEB based systems do not meet Kolla's Ansible version requirements. It is
recommended to use pip to install Ansible >2.0. Ansible >2.0 may be
installed using:

.. code-block:: console

   pip install -U ansible

.. end
.. note::
@@ -95,19 +107,24 @@ installed using:
If DEB based systems include a version of Ansible that meets Kolla's version
requirements, it can be installed by:

.. code-block:: console

   apt-get install ansible

.. end

It is beneficial to add the following options to the Ansible
configuration file ``/etc/ansible/ansible.cfg``:

.. path /etc/ansible/ansible.cfg
.. code-block:: ini

   [defaults]
   host_key_checking=False
   pipelining=True
   forks=100

.. end
Install Kolla-ansible
~~~~~~~~~~~~~~~~~~~~~
@@ -117,65 +134,80 @@ Install Kolla-ansible for deployment or evaluation
Install kolla-ansible and its dependencies using ``pip``.

.. code-block:: console

   pip install kolla-ansible

.. end

Copy ``globals.yml`` and ``passwords.yml`` to the ``/etc/kolla`` directory.

For CentOS, run:

.. code-block:: console

   cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/

.. end

For Ubuntu, run:

.. code-block:: console

   cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla/

.. end

Copy the ``all-in-one`` and ``multinode`` inventory files to
the current directory.

For CentOS, run:

.. code-block:: console

   cp /usr/share/kolla-ansible/ansible/inventory/* .

.. end

For Ubuntu, run:

.. code-block:: console

   cp /usr/local/share/kolla-ansible/ansible/inventory/* .

.. end

Install Kolla for development
-----------------------------

Clone the Kolla and Kolla-Ansible repositories from git.

.. code-block:: console

   git clone https://github.com/openstack/kolla
   git clone https://github.com/openstack/kolla-ansible

.. end

Kolla-ansible holds the configuration files (``globals.yml`` and
``passwords.yml``) in ``etc/kolla``. Copy the configuration
files to the ``/etc/kolla`` directory.

.. code-block:: console

   cp -r kolla-ansible/etc/kolla /etc/kolla/

.. end

Kolla-ansible holds the inventory files (``all-in-one`` and ``multinode``)
in ``ansible/inventory``. Copy the inventory files to the current
directory.

.. code-block:: console

   cp kolla-ansible/ansible/inventory/* .

.. end
Prepare initial configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -194,46 +226,50 @@ than one node, edit ``multinode`` inventory:
Edit the first section of ``multinode`` with connection details of your
environment, for example:

.. code-block:: none

   [control]
   10.0.0.[10:12] ansible_user=ubuntu ansible_password=foobar ansible_become=true
   # Ansible supports syntax like [10:12] - that means 10, 11 and 12.
   # The become clause means "use sudo".

   [network:children]
   control
   # When you specify group_name:children, it will use the contents of the group specified.

   [compute]
   10.0.0.[13:14] ansible_user=ubuntu ansible_password=foobar ansible_become=true

   [monitoring]
   10.0.0.10
   # This group is for the monitoring node.
   # Fill it with one of the controllers' IP addresses or some other node.

   [storage:children]
   compute

   [deployment]
   localhost ansible_connection=local become=true
   # use localhost and sudo

.. end

To learn more about inventory files, check the
`Ansible documentation <http://docs.ansible.com/ansible/latest/intro_inventory.html>`_.

To confirm that our inventory is correct, run:

.. code-block:: console

   ansible -m ping all

.. end

.. note::

   Ubuntu might not come with python pre-installed. That will cause
   errors in the ping module. To quickly install python with ansible you
   can run ``ansible all -m raw -a "apt-get -y install python-dev"``.
Kolla passwords Kolla passwords
--------------- ---------------
@ -244,16 +280,20 @@ manually or by running random password generator:
For deployment or evaluation, run: For deployment or evaluation, run:
:: .. code-block:: console
kolla-genpwd kolla-genpwd
.. end
For development, run:
.. code-block:: console

   cd kolla-ansible/tools
   ./generate_passwords.py
.. end
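Conceptually, both tools walk ``passwords.yml`` and replace every blank value with a random secret, leaving any value you filled in manually untouched. A minimal sketch of that idea (hypothetical, not the actual kolla-genpwd code):

```python
import secrets
import string

def fill_blank_passwords(passwords, length=40):
    """Replace empty values with random strings; keep user-set ones."""
    alphabet = string.ascii_letters + string.digits
    return {
        key: value if value else "".join(secrets.choice(alphabet)
                                         for _ in range(length))
        for key, value in passwords.items()
    }

creds = fill_blank_passwords({"database_password": None,
                              "keystone_admin_password": "chosen-by-hand"})
print(creds["keystone_admin_password"])  # chosen-by-hand
```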
Kolla globals.yml
-----------------

There are a few options that are required to deploy Kolla-Ansible:
For newcomers, we recommend using CentOS 7 or Ubuntu 16.04.
.. code-block:: yaml

   kolla_base_distro: "centos"
.. end
Next, the "type" of installation needs to be configured.
Choices are:
Source builds are proven to be slightly more reliable than binary.
.. code-block:: yaml

   kolla_install_type: "source"
.. end
To use DockerHub images, the default image tag has to be overridden. Images
are tagged with release names. For example, to use stable Pike images set:
.. code-block:: yaml

   openstack_release: "pike"
.. end
It's important to use the same version of images as kolla-ansible. That
means if pip was used to install kolla-ansible, it's the latest stable
master branch, DockerHub also provides daily builds of the master branch
(which is tagged as ``master``):
.. code-block:: yaml

   openstack_release: "master"
.. end
* Networking
The first interface to set is ``network_interface``. This is the default
interface for multiple management-type networks.
.. code-block:: yaml

   network_interface: "eth0"
.. end
The second required interface is dedicated to Neutron external (or public)
networks; it can be vlan or flat, depending on how the networks are created.
This interface should be active without an IP address. If not, instances
won't be able to access the external networks.
.. code-block:: yaml

   neutron_external_interface: "eth1"
.. end
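One way to verify that the external interface is up but carries no IPv4 address is to inspect the output of ``ip -o -4 addr``. A small sketch of such a check (the output format is an assumption, illustrative only):

```python
def interface_has_ipv4(ip_o4_addr_output, interface):
    """Given `ip -o -4 addr` output, report whether an interface
    already has an IPv4 address assigned (column 2 is the name)."""
    for line in ip_o4_addr_output.splitlines():
        fields = line.split()
        if len(fields) > 1 and fields[1] == interface:
            return True
    return False

sample = "2: eth0    inet 10.1.0.5/24 brd 10.1.0.255 scope global eth0"
print(interface_has_ipv4(sample, "eth1"))  # False: eth1 is free for Neutron
```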
To learn more about network configuration, refer to the `Network overview
<https://docs.openstack.org/kolla-ansible/latest/admin/production-architecture-guide.html#network-configuration>`_.
*not used* address in the management network that is connected to our
``network_interface``.
.. code-block:: yaml

   kolla_internal_vip_address: "10.1.0.250"
.. end
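A quick sanity check that the chosen VIP actually belongs to the management network can be done with Python's stdlib ``ipaddress`` module (the CIDR below is an assumed example, not from this guide):

```python
import ipaddress

def vip_in_management_network(vip, cidr):
    """True when the VIP falls inside the management network's CIDR."""
    return ipaddress.ip_address(vip) in ipaddress.ip_network(cidr)

print(vip_in_management_network("10.1.0.250", "10.1.0.0/24"))  # True
```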
* Enable additional services
support for a vast selection of additional services. To enable them, set
``enable_*`` to "yes". For example, to enable the Block Storage service:
.. code-block:: yaml

   enable_cinder: "yes"
.. end
Kolla now supports many OpenStack services; there is
`a list of available services
#. Bootstrap servers with kolla deploy dependencies:
.. code-block:: console

   kolla-ansible -i ./multinode bootstrap-servers
.. end
#. Do pre-deployment checks for hosts:
.. code-block:: console

   kolla-ansible -i ./multinode prechecks
.. end
#. Finally proceed to actual OpenStack deployment:
.. code-block:: console

   kolla-ansible -i ./multinode deploy
.. end
* For development, run:
#. Bootstrap servers with kolla deploy dependencies:
.. code-block:: console

   cd kolla-ansible/tools
   ./kolla-ansible -i ./multinode bootstrap-servers
.. end
#. Do pre-deployment checks for hosts:
.. code-block:: console

   ./kolla-ansible -i ./multinode prechecks
.. end
#. Finally proceed to actual OpenStack deployment:
.. code-block:: console

   ./kolla-ansible -i ./multinode deploy
.. end
When this playbook finishes, OpenStack should be up, running and functional!
If an error occurs during execution, refer to
Using OpenStack
---------------
OpenStack requires an openrc file where credentials for the admin user
are set. To generate this file, run:
.. code-block:: console

   kolla-ansible post-deploy
   . /etc/kolla/admin-openrc.sh
.. end
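The generated ``admin-openrc.sh`` is just a series of ``export`` statements setting ``OS_*`` variables for the OpenStack clients. A sketch of what sourcing it provides (the variable values below are typical examples, not the exact file contents):

```python
def parse_openrc(text):
    """Collect KEY=VALUE pairs from `export KEY=VALUE` shell lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("export "):
            key, _, value = line[len("export "):].partition("=")
            env[key] = value.strip("'\"")
    return env

sample = ("export OS_USERNAME=admin\n"
          "export OS_AUTH_URL='http://10.1.0.250:35357/v3'")
print(parse_openrc(sample)["OS_USERNAME"])  # admin
```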
Install basic OpenStack CLI clients:
.. code-block:: console

   pip install python-openstackclient python-glanceclient python-neutronclient
.. end
Depending on how you installed Kolla-Ansible, there is a script that will
create example networks, images, and so on.
For pip install and CentOS host:
.. code-block:: console

   . /usr/share/kolla-ansible/init-runonce
.. end
For pip install and Ubuntu host:
.. code-block:: console

   . /usr/local/share/kolla-ansible/init-runonce
.. end
For git pulled source:
.. code-block:: console

   . kolla-ansible/tools/init-runonce
.. end


Kolla Security
==============
Non Root containers
~~~~~~~~~~~~~~~~~~~
The OpenStack services, with a few exceptions, run as non root inside
of Kolla's containers. Kolla uses the Docker provided ``USER`` flag to
set the appropriate user for each service.
SELinux
~~~~~~~
The state of SELinux in Kolla is a work in progress. The short answer
is you must disable it until selinux policies are written for the
Docker containers.

To understand why Kolla needs to set certain selinux policies for
services that you wouldn't expect to need them (rabbitmq, mariadb, glance
and so on) we must take a step back and talk about Docker.
Docker has not had the concept of persistent containerized data until
recently. This means when a container is run the data it creates is
destroyed when the container goes away, which is obviously no good
in the case of upgrades.

It was suggested data containers could solve this issue by only holding
data if they were never recreated, leading to a scary state where you
could lose access to your data if the wrong command was executed. The
real answer to this problem came in Docker 1.9 with the introduction of
named volumes. You could now address volumes directly by name, removing
the need for so called **data containers** altogether.
Another solution to the persistent data issue is to use a host bind
mount, which involves making, for sake of example, the host directory
``/var/lib/mysql`` available inside the container at ``/var/lib/mysql``.
This absolutely solves the problem of persistent data, but it introduces
another security issue: permissions. With this host bind mount solution
the data in ``/var/lib/mysql`` will be owned by the mysql user in the
container. Unfortunately, that mysql user in the container could have
any UID/GID and that's who will own the data outside the container,
introducing a potential security risk. Additionally, this method
dirties the host and requires host permissions to the directories
to bind mount.
The solution Kolla chose is named volumes.
Why does this matter in the case of selinux? Kolla does not run the
process it is launching as root in most cases. So glance-api is run
as the glance user, and mariadb is run as the mysql user, and so on.
When mounting a named volume in the location where the persistent data
will be stored, it will be owned by the root user and group. The mysql
user has no permissions to write to this folder now. What Kolla does
is allow a select few commands to be run with sudo as the mysql user.
This allows the mysql user to chown a specific, explicit directory
and store its data in a named volume without the security risk and
other downsides of host bind mounts. The downside to this is selinux
blocks those sudo commands and it will do so until we make explicit
policies to allow those operations.


Troubleshooting Guide
=====================
Failures
~~~~~~~~
If Kolla fails, often it is caused by a CTRL-C during the deployment
process or a problem in the ``globals.yml`` configuration.

To correct the problem where Operators have a misconfigured environment,
the Kolla community has added a precheck feature which ensures the
deployment targets are in a state where Kolla may deploy to them. To
run the prechecks:
.. code-block:: console

   kolla-ansible prechecks
.. end
If a failure during deployment occurs, it nearly always occurs during
evaluation of the software. Once the Operator learns the few configuration
options
In this scenario, Kolla's behavior is undefined.
The fastest way to recover from a deployment failure is to
remove the failed deployment:
.. code-block:: console

   kolla-ansible destroy -i <<inventory-file>>
.. end
Any time the tags of a release change, it is possible that the container
implementation from older versions won't match the Ansible playbooks in a new
version. If running multinode from a registry, each node's Docker image cache
must be refreshed with the latest images before a new deployment can occur. To
refresh the docker cache from the local Docker registry:
.. code-block:: console

   kolla-ansible pull
.. end
Debugging Kolla
~~~~~~~~~~~~~~~
The status of containers after deployment can be determined on the deployment
targets by executing:
.. code-block:: console

   docker ps -a
.. end
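To spot exited containers quickly, the output of ``docker ps -a`` with a name/status format string can be scanned. A small illustrative helper (the ``--format`` invocation is an assumption, not from this guide):

```python
def exited_containers(ps_output):
    """Return container names whose status starts with 'Exited', given
    `docker ps -a --format '{{.Names}}\t{{.Status}}'` output."""
    failed = []
    for line in ps_output.splitlines():
        if not line.strip():
            continue
        name, _, status = line.partition("\t")
        if status.startswith("Exited"):
            failed.append(name)
    return failed

sample = "mariadb\tUp 2 hours\nglance_api\tExited (1) 5 minutes ago"
print(exited_containers(sample))  # ['glance_api']
```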
If any of the containers exited, this indicates a bug in the container. Please
seek help by filing a `launchpad bug <https://bugs.launchpad.net/kolla-ansible/+filebug>`__
or contacting the developers via IRC.
The logs can be examined by executing:
.. code-block:: console

   docker exec -it fluentd bash
.. end
The logs from all services in all containers may be read from
``/var/log/kolla/SERVICE_NAME``.
If the stdout logs are needed, please run:
.. code-block:: console

   docker logs <container-name>
.. end
Note that most of the containers don't log to stdout so the above command will
provide no information.
To learn more about Docker command line operation please refer to `Docker
documentation <https://docs.docker.com/reference/>`__.
When ``enable_central_logging`` is enabled, to view the logs in a web browser
using Kibana, go to
``http://<kolla_internal_vip_address>:<kibana_server_port>`` or
``http://<kolla_external_vip_address>:<kibana_server_port>``. Authenticate
using ``<kibana_user>`` and ``<kibana_password>``.
The values ``<kolla_internal_vip_address>``, ``<kolla_external_vip_address>``,
``<kibana_server_port>`` and ``<kibana_user>`` can be found in
``<kolla_install_path>/kolla/ansible/group_vars/all.yml`` or, if the default
values are overridden, in ``/etc/kolla/globals.yml``. The value of
``<kibana_password>`` can be found in ``/etc/kolla/passwords.yml``.
.. _launchpad bug: https://bugs.launchpad.net/kolla-ansible/+filebug