Remove '.. end' comments

Follow-up to https://review.openstack.org/#/c/605097/
These were used by now-dead tooling. We can remove them.

Change-Id: I0953751044f038a3fdd1acd49b3d2b053ac4bec8
This commit is contained in:
chenxing 2018-09-28 10:14:29 +08:00
parent f1c8136556
commit eaa9815ad2
34 changed files with 0 additions and 789 deletions
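The sweep above could be reproduced with a short script along these lines (a sketch, not the tooling actually used for this change; the ``doc/`` path and the blank-line handling are assumptions based on the 8 -> 6 line counts in the hunks below):

```python
import re
from pathlib import Path

def strip_end_comments(text: str) -> str:
    """Remove ``.. end`` comment lines, collapsing one preceding blank line
    so each removal drops two lines, matching the hunk counts below."""
    # A blank line directly followed by '.. end' collapses to nothing...
    text = re.sub(r"\n\n\.\. end(?=\n)", "", text)
    # ...and any remaining bare '.. end' line is simply dropped.
    text = re.sub(r"^\.\. end\n", "", text, flags=re.MULTILINE)
    return text

# Apply to every reST file under doc/ (hypothetical location):
if Path("doc").is_dir():
    for rst in Path("doc").rglob("*.rst"):
        rst.write_text(strip_end_comments(rst.read_text()))
```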

View File

@@ -31,8 +31,6 @@ API requests, internal and external, will flow over the same network.
kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
.. end
For the separate option, set these four variables. In this configuration
the internal and external REST API requests can flow over separate
networks.
@@ -44,8 +42,6 @@ networks.
kolla_external_vip_address: "10.10.20.254"
kolla_external_vip_interface: "eth1"
.. end
Fully Qualified Domain Name Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -62,8 +58,6 @@ in your kolla deployment use the variables:
kolla_internal_fqdn: inside.mykolla.example.net
kolla_external_fqdn: mykolla.example.net
.. end
Provisions must be taken outside of kolla for these names to map to the
configured IP addresses. Using a DNS server or the ``/etc/hosts`` file
are two ways to create this mapping.
@@ -100,8 +94,6 @@ The default for TLS is disabled, to enable TLS networking:
kolla_enable_tls_external: "yes"
kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/mycert.pem"
.. end
.. note::
TLS authentication is based on certificates that have been
@@ -138,8 +130,6 @@ have settings similar to this:
export OS_CACERT=/etc/pki/mykolla-cacert.crt
export OS_IDENTITY_API_VERSION=3
.. end
Self-Signed Certificates
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -191,8 +181,6 @@ needs to create ``/etc/kolla/config/nova/nova-scheduler.conf`` with content:
[DEFAULT]
scheduler_max_attempts = 100
.. end
If the operator wants to configure compute node cpu and ram allocation ratio
on host myhost, the operator needs to create file
``/etc/kolla/config/nova/myhost/nova.conf`` with content:
@@ -204,8 +192,6 @@ on host myhost, the operator needs to create file
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 5.0
.. end
Kolla allows the operator to override configuration globally for all services.
It will look for a file called ``/etc/kolla/config/global.conf``.
@@ -218,8 +204,6 @@ operator needs to create ``/etc/kolla/config/global.conf`` with content:
[database]
max_pool_size = 100
.. end
In case the operators want to customize ``policy.json`` file, they should
create a full policy file for specific project in the same directory like above
and Kolla will overwrite default policy file with it. Be aware, with some
@@ -242,8 +226,6 @@ using following command:
kolla-ansible reconfigure
.. end
IP Address Constrained Environments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -255,8 +237,6 @@ adding:
enable_haproxy: "no"
.. end
Note this method is not recommended and generally not tested by the
Kolla community, but included since sometimes a free IP is not available
in a testing environment.
@@ -271,8 +251,6 @@ first disable the deployment of the central logging.
enable_central_logging: "no"
.. end
Now you can use the parameter ``elasticsearch_address`` to configure the
address of the external Elasticsearch environment.
@@ -287,8 +265,6 @@ for service(s) in Kolla. It is possible with setting
database_port: 3307
.. end
As ``<service>_port`` value is saved in different services' configuration so
it's advised to make above change before deploying.
@@ -304,8 +280,6 @@ You can set syslog parameters in ``globals.yml`` file. For example:
syslog_server: "172.29.9.145"
syslog_udp_port: "514"
.. end
You can also set syslog facility names for Swift and HAProxy logs.
By default, Swift and HAProxy use ``local0`` and ``local1``, respectively.
@@ -314,4 +288,3 @@ By default, Swift and HAProxy use ``local0`` and ``local1``, respectively.
syslog_swift_facility: "local0"
syslog_haproxy_facility: "local1"
.. end

View File

@@ -66,8 +66,6 @@ result, simply :command:`mkdir -p /etc/kolla/config` and modify the file
virt_type=qemu
cpu_mode = none
.. end
After this change Kolla will use an emulated hypervisor with lower performance.
Kolla could have templated this commonly modified configuration option. If
Kolla starts down this path, the Kolla project could end with hundreds of

View File

@@ -93,8 +93,6 @@ that Kolla uses throughout that should be followed.
{
}
.. end
- For OpenStack services there should be an entry in the ``services`` list
in the ``cron.json.j2`` template file in ``ansible/roles/common/templates``.

View File

@@ -30,8 +30,6 @@ To enable dev mode for all supported services, set in
kolla_dev_mode: true
.. end
To enable it just for heat, set:
.. path /etc/kolla/globals.yml
@@ -39,8 +37,6 @@ To enable it just for heat, set:
heat_dev_mode: true
.. end
Usage
-----
@@ -54,8 +50,6 @@ After making code changes, simply restart the container to pick them up:
docker restart heat_api
.. end
Debugging
---------
@@ -66,8 +60,6 @@ make sure it is installed in the container in question:
docker exec -it -u root heat_api pip install remote_pdb
.. end
Then, set your breakpoint as follows:
.. code-block:: python
@@ -75,8 +67,6 @@ Then, set your breakpoint as follows:
from remote_pdb import RemotePdb
RemotePdb('127.0.0.1', 4444).set_trace()
.. end
Once you run the code (restart the container), pdb can be accessed using
``socat``:
@@ -84,7 +74,5 @@ Once you run the code (restart the container), pdb can be accessed using
socat readline tcp:127.0.0.1:4444
.. end
Learn more information about `remote_pdb
<https://pypi.org/project/remote-pdb/>`_.

View File

@@ -25,8 +25,6 @@ so the only package you install is ``tox`` itself:
pip install tox
.. end
For more information, see `the unit testing section of the Testing wiki page
<https://wiki.openstack.org/wiki/Testing#Unit_Tests>`_. For example:
@@ -36,24 +34,18 @@ To run the Python 2.7 tests:
tox -e py27
.. end
To run the style tests:
.. code-block:: console
tox -e pep8
.. end
To run multiple tests, separate items by commas:
.. code-block:: console
tox -e py27,py35,pep8
.. end
Running a subset of tests
-------------------------
@@ -68,8 +60,6 @@ directory use:
tox -e py27 kolla-ansible.tests
.. end
To run the tests of a specific file
``kolla-ansible/tests/test_kolla_docker.py``:
@@ -77,8 +67,6 @@ To run the tests of a specific file
tox -e py27 test_kolla_docker
.. end
To run the tests in the ``ModuleArgsTest`` class in
the ``kolla-ansible/tests/test_kolla_docker.py`` file:
@@ -86,8 +74,6 @@ the ``kolla-ansible/tests/test_kolla_docker.py`` file:
tox -e py27 test_kolla_docker.ModuleArgsTest
.. end
To run the ``ModuleArgsTest.test_module_args`` test method in
the ``kolla-ansible/tests/test_kolla_docker.py`` file:
@@ -95,8 +81,6 @@ the ``kolla-ansible/tests/test_kolla_docker.py`` file:
tox -e py27 test_kolla_docker.ModuleArgsTest.test_module_args
.. end
Debugging unit tests
--------------------
@@ -107,8 +91,6 @@ a breakpoint to the code:
import pdb; pdb.set_trace()
.. end
Then run ``tox`` with the debug environment as one of the following:
.. code-block:: console
@@ -116,8 +98,6 @@ Then run ``tox`` with the debug environment as one of the following:
tox -e debug
tox -e debug test_file_name.TestClass.test_name
.. end
For more information, see the `oslotest documentation
<https://docs.openstack.org/oslotest/latest/user/features.html#debugging-with-oslo-debug-helper>`_.

View File

@@ -50,8 +50,6 @@ For CentOS 7 or later:
qemu-kvm qemu-img libvirt libvirt-python libvirt-client virt-install \
bridge-utils git
.. end
For Ubuntu 16.04 or later:
.. code-block:: console
@@ -60,8 +58,6 @@ For Ubuntu 16.04 or later:
qemu-utils qemu-kvm libvirt-dev nfs-kernel-server zlib1g-dev libpng12-dev \
gcc git
.. end
.. note::
Many distros ship outdated versions of Vagrant by default. When in
@@ -74,16 +70,12 @@ Next install the hostmanager plugin so all hosts are recorded in ``/etc/hosts``
vagrant plugin install vagrant-hostmanager
.. end
If you are going to use VirtualBox, then install vagrant-vbguest:
.. code-block:: console
vagrant plugin install vagrant-vbguest
.. end
Vagrant supports a wide range of virtualization technologies. If VirtualBox is
used, the vbguest plugin will be required to install the VirtualBox Guest
Additions in the virtual machine:
@@ -92,8 +84,6 @@ Additions in the virtual machine:
vagrant plugin install vagrant-vbguest
.. end
This documentation focuses on libvirt specifics. To install vagrant-libvirt
plugin:
@@ -101,8 +91,6 @@ plugin:
vagrant plugin install --plugin-version ">= 0.0.31" vagrant-libvirt
.. end
Some Linux distributions offer vagrant-libvirt packages, but the version they
provide tends to be too old to run Kolla. A version of >= 0.0.31 is required.
@@ -114,8 +102,6 @@ a password, add the user to the libvirt group:
sudo gpasswd -a ${USER} libvirt
newgrp libvirt
.. end
.. note::
In Ubuntu 16.04 and later, the libvirtd group is used.
@@ -131,8 +117,6 @@ than VirtualBox shared folders. For CentOS:
sudo firewall-cmd --zone=internal --add-interface=virbr0
sudo firewall-cmd --zone=internal --add-interface=virbr1
.. end
#. Enable nfs, rpc-bind and mountd services for firewalld:
.. code-block:: console
@@ -146,8 +130,6 @@ than VirtualBox shared folders. For CentOS:
sudo firewall-cmd --permanent --add-port=111/tcp
sudo firewall-cmd --reload
.. end
.. note::
You may not have to do this because Ubuntu uses Uncomplicated Firewall (ufw)
@@ -161,8 +143,6 @@ than VirtualBox shared folders. For CentOS:
sudo systemctl start nfs-server
sudo systemctl start rpcbind.service
.. end
Ensure your system has libvirt and associated software installed and set up
correctly. For CentOS:
@@ -171,8 +151,6 @@ correctly. For CentOS:
sudo systemctl start libvirtd
sudo systemctl enable libvirtd
.. end
Find a location in the system's home directory and check out the Kolla repos:
.. code-block:: console
@@ -181,8 +159,6 @@ Find a location in the system's home directory and check out the Kolla repos:
git clone https://git.openstack.org/openstack/kolla-ansible
git clone https://git.openstack.org/openstack/kolla
.. end
All repos must share the same parent directory so the bootstrap code can
locate them.
@@ -193,8 +169,6 @@ CentOS 7-based environment:
cd kolla-ansible/contrib/dev/vagrant && vagrant up
.. end
The command ``vagrant status`` provides a quick overview of the VMs composing
the environment.
@@ -208,8 +182,6 @@ Kolla. First, connect with the **operator** node:
vagrant ssh operator
.. end
To speed things up, there is a local registry running on the operator. All
nodes are configured so they can use this insecure repo to pull from, and use
it as a mirror. Ansible may use this registry to pull images from.
@@ -231,8 +203,6 @@ Once logged on the **operator** VM call the ``kolla-build`` utility:
kolla-build
.. end
``kolla-build`` accepts arguments as documented in `Building Container Images
<https://docs.openstack.org/kolla/latest/admin/image-building.html>`_.
It builds Docker images and pushes them to the local registry if the **push**
@@ -247,8 +217,6 @@ To deploy **all-in-one**:
sudo kolla-ansible deploy
.. end
To deploy multinode:
For CentOS 7:
@@ -257,16 +225,12 @@ For CentOS 7:
sudo kolla-ansible deploy -i /usr/share/kolla-ansible/ansible/inventory/multinode
.. end
For Ubuntu 16.04 or later:
.. code-block:: console
sudo kolla-ansible deploy -i /usr/local/share/kolla-ansible/ansible/inventory/multinode
.. end
Validate OpenStack is operational:
.. code-block:: console
@@ -275,8 +239,6 @@ Validate OpenStack is operational:
. /etc/kolla/admin-openrc.sh
openstack user list
.. end
Or navigate to ``http://172.28.128.254/`` with a web browser.
Further Reading

View File

@@ -87,8 +87,6 @@ resolving the deployment host's hostname to ``127.0.0.1``, for example:
cat /etc/hosts
127.0.0.1 bifrost localhost
.. end
The following lines are desirable for IPv6 capable hosts:
.. code-block:: console
@@ -101,8 +99,6 @@ The following lines are desirable for IPv6 capable hosts:
ff02::3 ip6-allhosts
192.168.100.15 bifrost
.. end
Build a Bifrost Container Image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -123,8 +119,6 @@ bifrost image.
cd kolla
tox -e genconfig
.. end
* Modify ``kolla-build.conf``, setting ``install_type`` to ``source``:
.. path etc/kolla/kolla-build.conf
@@ -132,8 +126,6 @@ bifrost image.
install_type = source
.. end
Alternatively, instead of using ``kolla-build.conf``, a ``source`` build can
be enabled by appending ``--type source`` to the :command:`kolla-build` or
``tools/build.py`` command.
@@ -145,16 +137,12 @@ be enabled by appending ``--type source`` to the :command:`kolla-build` or
cd kolla
tools/build.py bifrost-deploy
.. end
For Production:
.. code-block:: console
kolla-build bifrost-deploy
.. end
.. note::
By default :command:`kolla-build` will build all containers using CentOS as
@@ -165,8 +153,6 @@ be enabled by appending ``--type source`` to the :command:`kolla-build` or
--base [ubuntu|centos|oraclelinux]
.. end
Configure and Deploy a Bifrost Container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -204,8 +190,6 @@ different than ``network_interface``. For example to use ``eth1``:
bifrost_network_interface: eth1
.. end
Note that this interface should typically have L2 network connectivity with the
bare metal cloud hosts in order to provide DHCP leases with PXE boot options.
@@ -216,8 +200,6 @@ reflected in ``globals.yml``
kolla_install_type: source
.. end
Prepare Bifrost Configuration
-----------------------------
@@ -266,8 +248,6 @@ properties and a logical name.
cpus: "16"
name: "cloud1"
.. end
The required inventory will be specific to the hardware and environment in use.
Create Bifrost Configuration
@@ -288,8 +268,6 @@ For details on bifrost's variables see the bifrost documentation. For example:
# dhcp_lease_time: 12h
# dhcp_static_mask: 255.255.255.0
.. end
Create Disk Image Builder Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -305,8 +283,6 @@ For example, to use the ``debian`` Disk Image Builder OS element:
dib_os_element: debian
.. end
See the `diskimage-builder documentation
<https://docs.openstack.org/diskimage-builder/latest/>`__ for more details.
@@ -325,16 +301,12 @@ For development:
cd kolla-ansible
tools/kolla-ansible deploy-bifrost
.. end
For Production:
.. code-block:: console
kolla-ansible deploy-bifrost
.. end
Deploy Bifrost manually
-----------------------
@@ -346,8 +318,6 @@ Deploy Bifrost manually
--privileged --name bifrost_deploy \
kolla/ubuntu-source-bifrost-deploy:3.0.1
.. end
#. Copy Configuration Files
.. code-block:: console
@@ -357,24 +327,18 @@ Deploy Bifrost manually
docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
.. end
#. Bootstrap Bifrost
.. code-block:: console
docker exec -it bifrost_deploy bash
.. end
#. Generate an SSH Key
.. code-block:: console
ssh-keygen
.. end
#. Bootstrap and Start Services
.. code-block:: console
@@ -392,8 +356,6 @@ Deploy Bifrost manually
-e @/etc/bifrost/dib.yml \
-e skip_package_install=true
.. end
Validate the Deployed Container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -403,8 +365,6 @@ Validate the Deployed Container
cd /bifrost
. env-vars
.. end
Running "ironic node-list" should return with no nodes, for example
.. code-block:: console
@@ -415,8 +375,6 @@ Running "ironic node-list" should return with no nodes, for example
+------+------+---------------+-------------+--------------------+-------------+
+------+------+---------------+-------------+--------------------+-------------+
.. end
Enroll and Deploy Physical Nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -434,16 +392,12 @@ For Development:
tools/kolla-ansible deploy-servers
.. end
For Production:
.. code-block:: console
kolla-ansible deploy-servers
.. end
Manually
--------
@@ -469,8 +423,6 @@ Manually
-e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
-e @/etc/bifrost/bifrost.yml
.. end
At this point Ironic should clean down the nodes and install the default
OS image.
@@ -503,8 +455,6 @@ done remotely with :command:`ipmitool` and Serial Over LAN. For example
ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate
.. end
References
~~~~~~~~~~

View File

@@ -18,8 +18,6 @@ the following:
enable_central_logging: "yes"
.. end
Elasticsearch
~~~~~~~~~~~~~
@@ -89,8 +87,6 @@ First, re-run the server creation with ``--debug``:
--key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
demo1
.. end
In this output, look for the key ``X-Compute-Request-Id``. This is a unique
identifier that can be used to track the request through the system. An
example ID looks like this:
@@ -99,8 +95,6 @@ example ID looks like this:
X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5
.. end
Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
search bar, minus the leading ``req-``. Assuming some basic filters have been
added as shown in the previous section, Kibana should now show the path this

View File

@ -45,8 +45,6 @@ operations:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
.. end
The following shows an example of using parted to configure ``/dev/sdb`` for The following shows an example of using parted to configure ``/dev/sdb`` for
usage with Kolla. usage with Kolla.
@ -61,8 +59,6 @@ usage with Kolla.
Number Start End Size File system Name Flags Number Start End Size File system Name Flags
1 1049kB 10.7GB 10.7GB KOLLA_CEPH_OSD_BOOTSTRAP 1 1049kB 10.7GB 10.7GB KOLLA_CEPH_OSD_BOOTSTRAP
.. end
Bluestore Bluestore
~~~~~~~~~ ~~~~~~~~~
@ -72,8 +68,6 @@ To prepare a bluestore OSD partition, execute the following operations:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
.. end
If only one device is offered, Kolla Ceph will create the bluestore OSD on the If only one device is offered, Kolla Ceph will create the bluestore OSD on the
device. Kolla Ceph will create two partitions for OSD and block separately. device. Kolla Ceph will create two partitions for OSD and block separately.
@ -87,8 +81,6 @@ To prepare a bluestore OSD block partition, execute the following operations:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_B 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_B 1 -1
.. end
To prepare a bluestore OSD block.wal partition, execute the following To prepare a bluestore OSD block.wal partition, execute the following
operations: operations:
@ -96,8 +88,6 @@ operations:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_W 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_W 1 -1
.. end
To prepare a bluestore OSD block.db partition, execute the following To prepare a bluestore OSD block.db partition, execute the following
operations: operations:
@ -105,8 +95,6 @@ operations:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_D 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO_D 1 -1
.. end
Kolla Ceph will handle the bluestore OSD according to the above up to four Kolla Ceph will handle the bluestore OSD according to the above up to four
partition labels. In Ceph bluestore OSD, the block.wal and block.db partitions partition labels. In Ceph bluestore OSD, the block.wal and block.db partitions
are not mandatory. are not mandatory.
@ -127,8 +115,6 @@ Using an external journal drive
The section is only meaningful for Ceph filestore OSD. The section is only meaningful for Ceph filestore OSD.
.. end
The steps documented above created a journal partition of 5 GByte The steps documented above created a journal partition of 5 GByte
and a data partition with the remaining storage capacity on the same tagged and a data partition with the remaining storage capacity on the same tagged
drive. drive.
@ -146,16 +132,12 @@ Prepare the storage drive in the same way as documented above:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
.. end
To prepare the journal external drive execute the following command: To prepare the journal external drive execute the following command:
.. code-block:: console .. code-block:: console
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1 parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1
.. end
.. note::
Use different suffixes (``_42``, ``_FOO``, ``_FOO42``, ...) to use different external
@@ -182,24 +164,18 @@ of the hosts that have the block devices you have prepped as shown above.
controller
compute1
.. end
Enable Ceph in ``/etc/kolla/globals.yml``:
.. code-block:: yaml
enable_ceph: "yes"
.. end
RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
.. code-block:: yaml
enable_ceph_rgw: "yes"
.. end
.. note::
By default RadosGW supports both Swift and S3 API, and it is not
@@ -208,8 +184,6 @@ RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
compatible with Swift API completely. After changing the value, run the
"reconfigure" command to enable it.
.. end
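A sketch of what the note refers to: in recent kolla-ansible releases the flag is, as an assumption to verify against your release, named ``ceph_rgw_compatibility`` and is set in ``/etc/kolla/globals.yml`` before running ``kolla-ansible reconfigure``:

```yaml
# Assumed flag name -- confirm it exists in your kolla-ansible version.
ceph_rgw_compatibility: "yes"
```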
Configure the Ceph store type in ``ansible/group_vars/all.yml``; the default
value is ``bluestore`` in Rocky:
@@ -217,8 +191,6 @@ value is ``bluestore`` in Rocky:
ceph_osd_store_type: "bluestore"
.. end
.. note::
Regarding number of placement groups (PGs)
@@ -229,8 +201,6 @@ value is ``bluestore`` in Rocky:
*highly* recommended to consult the official Ceph documentation regarding
these values before running Ceph in any kind of production scenario.
.. end
RGW requires a healthy cluster in order to be successfully deployed. On initial
start up, RGW will create several pools. The first pool should be in an
operational state to proceed with the second one, and so on. So, in the case of
@@ -245,8 +215,6 @@ copies for the pools before deployment. Modify the file
osd pool default size = 1
osd pool default min size = 1
.. end
To build a high performance and secure Ceph Storage Cluster, the Ceph community
recommends the use of two separate networks: public network and cluster network.
Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:
@@ -256,8 +224,6 @@ Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:
cluster_interface: "eth2"
.. end
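The public network follows kolla's regular ``network_interface``, so a two-network sketch looks like this (interface names are placeholders):

```yaml
network_interface: "eth0"   # Ceph public network: client and monitor traffic
cluster_interface: "eth2"   # Ceph cluster network: OSD replication and recovery
```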
For more details, see `NETWORK CONFIGURATION REFERENCE
<http://docs.ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks>`_
of Ceph Documentation.
@@ -271,8 +237,6 @@ Finally deploy the Ceph-enabled OpenStack:
kolla-ansible deploy -i path/to/inventory
.. end
Using Cache Tiering
-------------------
@@ -287,16 +251,12 @@ operations:
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1
.. end
.. note::
To prepare a bluestore OSD as a cache device, change the partition name in
the above command to "KOLLA_CEPH_OSD_CACHE_BOOTSTRAP_BS". The deployment of
bluestore cache OSD is the same as bluestore OSD.
.. end
Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
.. code-block:: yaml
@@ -306,16 +266,12 @@ Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
# Valid options are [ forward, none, writeback ]
ceph_cache_mode: "writeback"
.. end
After this, run the playbooks as you normally would, for example:
.. code-block:: console
kolla-ansible deploy -i path/to/inventory
.. end
Setting up an Erasure Coded Pool
--------------------------------
@@ -338,8 +294,6 @@ To enable erasure coded pools add the following options to your
# Optionally, you can change the profile
#ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"
.. end
Managing Ceph
-------------
@@ -374,8 +328,6 @@ the number of copies for the pool to 1:
docker exec ceph_mon ceph osd pool set rbd size 1
.. end
All the pools must be modified if Glance, Nova, and Cinder have been deployed.
An example of modifying the pools to have 2 copies:
@@ -383,24 +335,18 @@ An example of modifying the pools to have 2 copies:
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done
.. end
If using a cache tier, these changes must be made as well:
.. code-block:: console
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done
.. end
The default pool Ceph creates is named **rbd**. It is safe to remove this pool:
.. code-block:: console
docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
.. end
Troubleshooting
---------------
@@ -465,8 +411,6 @@ environment before adding the 3rd node for Ceph:
kolla1.ducourrier.com
kolla2.ducourrier.com
.. end
Configuration
~~~~~~~~~~~~~
@@ -477,8 +421,6 @@ to add a partition label to it as shown below:
parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
.. end
Make sure to run this command on each of the 3 nodes or the deployment will
fail.
@@ -510,8 +452,6 @@ existing inventory file:
kolla2.ducourrier.com
kolla3.ducourrier.com
.. end
It is now time to enable Ceph in the environment by editing the
``/etc/kolla/globals.yml`` file:
@@ -523,8 +463,6 @@ It is now time to enable Ceph in the environment by editing the
glance_backend_file: "no"
glance_backend_ceph: "yes"
.. end
Deployment
~~~~~~~~~~
@@ -534,4 +472,3 @@ Finally deploy the Ceph-enabled configuration:
kolla-ansible deploy -i path/to/inventory-file
.. end
@@ -86,16 +86,12 @@ contents:
hnas_iscsi_svc0_hdp = FS-Baremetal1
hnas_iscsi_svc0_iscsi_ip = <svc0_ip>
.. end
Then set the password for the backend in ``/etc/kolla/passwords.yml``:
.. code-block:: yaml
hnas_iscsi_password: supervisor
.. end
NFS backend
-----------
@@ -105,8 +101,6 @@ Enable cinder hnas backend nfs in ``/etc/kolla/globals.yml``
enable_cinder_backend_hnas_nfs: "yes"
.. end
Create or modify the file ``/etc/kolla/config/cinder.conf`` and
add the contents:
@@ -126,16 +120,12 @@ add the contents:
hnas_nfs_svc0_volume_type = nfs_gold
hnas_nfs_svc0_hdp = <svc0_ip>/<export_name>
.. end
Then set the password for the backend in ``/etc/kolla/passwords.yml``:
.. code-block:: yaml
hnas_nfs_password: supervisor
.. end
Configuration on Kolla deployment
---------------------------------
@@ -146,8 +136,6 @@ Enable Shared File Systems service and HNAS driver in
enable_cinder: "yes"
.. end
Configuration on HNAS
---------------------
@@ -159,8 +147,6 @@ List the available tenants:
openstack project list
.. end
Create a network for the given tenant (service), providing the tenant ID,
a name for the network, the name of the physical network over which the
virtual network is implemented, and the type of the physical mechanism by
@@ -171,8 +157,6 @@ which the virtual network is implemented:
neutron net-create --tenant-id <SERVICE_ID> hnas_network \
--provider:physical_network=physnet2 --provider:network_type=flat
.. end
Create a subnet for the same tenant (service), providing the gateway IP of
this subnet, a name for the subnet, the network ID created before, and the
CIDR of the subnet:
@@ -182,8 +166,6 @@ subnet:
neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
.. end
Add the subnet interface to a router, providing the router ID and subnet
ID created before:
@@ -191,8 +173,6 @@ ID created before:
neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
.. end
Create volume
~~~~~~~~~~~~~
@@ -202,8 +182,6 @@ Create a non-bootable volume.
openstack volume create --size 1 my-volume
.. end
Verify Operation.
.. code-block:: console
@@ -258,8 +236,6 @@ Verify Operation.
| 4f5b8ae8-9781-411e-8ced-de616ae64cfd | my-volume | in-use | 1 | Attached to private-instance on /dev/vdb |
+--------------------------------------+---------------+----------------+------+-------------------------------------------+
.. end
For more information about how to manage volumes, see the
`Manage volumes
<https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html>`__.
@@ -32,8 +32,6 @@ group. For example with the devices ``/dev/sdb`` and ``/dev/sdc``:
pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc
.. end
During development, it may be desirable to use file backed block storage. It
is possible to use a file and mount it as a block device via the loopback
system.
@@ -46,16 +44,12 @@ system.
pvcreate $free_device
vgcreate cinder-volumes $free_device
.. end
Enable the ``lvm`` backend in ``/etc/kolla/globals.yml``:
.. code-block:: yaml
enable_cinder_backend_lvm: "yes"
.. end
.. note::
There are currently issues using the LVM backend in a multi-controller setup,
@@ -71,8 +65,6 @@ where the volumes are to be stored:
/kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)
.. end
In this example, ``/kolla_nfs`` is the directory on the storage node which will
be ``nfs`` mounted, ``192.168.5.0/24`` is the storage network, and
``rw,sync,no_root_squash`` means make the share read-write, synchronous, and
@@ -84,8 +76,6 @@ Then start ``nfsd``:
systemctl start nfs
.. end
On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
each storage node:
@@ -94,16 +84,12 @@ each storage node:
storage01:/kolla_nfs
storage02:/kolla_nfs
.. end
Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``:
.. code-block:: yaml
enable_cinder_backend_nfs: "yes"
.. end
Validation
~~~~~~~~~~
@@ -114,8 +100,6 @@ Create a volume as follows:
openstack volume create --size 1 steak_volume
<bunch of stuff printed>
.. end
Verify it is available. If it says "error", then something went wrong during
LVM creation of the volume.
@@ -129,24 +113,18 @@ LVM creation of the volume.
| 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
.. end
Attach the volume to a server using:
.. code-block:: console
openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e
.. end
Check the console log to verify the disk addition:
.. code-block:: console
openstack console log show steak_server
.. end
A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.
@@ -168,8 +146,6 @@ exist on the server and following parameter must be specified in
enable_cinder_backend_lvm: "yes"
.. end
For Ubuntu and LVM2/iSCSI
-------------------------
@@ -195,8 +171,6 @@ targeted for nova compute role.
mount -t configfs /etc/rc.local /sys/kernel/config
.. end
Cinder backend with external iSCSI storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -207,6 +181,4 @@ the following parameter must be specified in ``globals.yml``:
enable_cinder_backend_iscsi: "yes"
.. end
Also ``enable_cinder_backend_lvm`` should be set to ``no`` in this case.
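Putting the two settings from this section together, an external iSCSI backend would be enabled in ``globals.yml`` as:

```yaml
enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_lvm: "no"
```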
@@ -25,8 +25,6 @@ Enable Designate service in ``/etc/kolla/globals.yml``
enable_designate: "yes"
.. end
Configure Designate options in ``/etc/kolla/globals.yml``
.. important::
@@ -40,8 +38,6 @@ Configure Designate options in ``/etc/kolla/globals.yml``
designate_backend: "bind9"
designate_ns_record: "sample.openstack.org"
.. end
Neutron and Nova Integration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -51,16 +47,12 @@ Create default Designate Zone for Neutron:
openstack zone create --email admin@sample.openstack.org sample.openstack.org.
.. end
Create designate-sink custom configuration folder:
.. code-block:: console
mkdir -p /etc/kolla/config/designate/
.. end
Append Designate Zone ID in ``/etc/kolla/config/designate/designate-sink.conf``
.. code-block:: console
@@ -70,16 +62,12 @@ Append Designate Zone ID in ``/etc/kolla/config/designate/designate-sink.conf``
[handler:neutron_floatingip]
zone_id = <ZONE_ID>
.. end
Reconfigure Designate:
.. code-block:: console
kolla-ansible reconfigure -i <INVENTORY_FILE> --tags designate
.. end
Verify operation
~~~~~~~~~~~~~~~~
@@ -89,16 +77,12 @@ List available networks:
openstack network list
.. end
Associate a domain to a network:
.. code-block:: console
neutron net-update <NETWORK_ID> --dns_domain sample.openstack.org.
.. end
Start an instance:
.. code-block:: console
@@ -110,8 +94,6 @@ Start an instance:
--nic net-id=${NETWORK_ID} \
my-vm
.. end
Check DNS records in Designate:
.. code-block:: console
@@ -130,8 +112,6 @@ Check DNS records in Designate:
| e5623d73-4f9f-4b54-9045-b148e0c3342d | my-vm.sample.openstack.org. | A | 192.168.190.232 | ACTIVE | NONE |
+--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
.. end
Query instance DNS information to Designate ``dns_interface`` IP address:
.. code-block:: console
@@ -139,8 +119,6 @@ Query instance DNS information to Designate ``dns_interface`` IP address:
dig +short -p 5354 @<DNS_INTERFACE_IP> my-vm.sample.openstack.org. A
192.168.190.232
.. end
For more information about how Designate works, see
`Designate, a DNSaaS component for OpenStack
<https://docs.openstack.org/designate/latest/>`__.
@@ -26,8 +26,6 @@ disable Ceph deployment in ``/etc/kolla/globals.yml``
enable_ceph: "no"
.. end
There are flags indicating individual services to use ceph or not, which default
to the value of ``enable_ceph``. Those flags now need to be enabled in order
to activate external Ceph integration. This can be done individually per
@@ -41,8 +39,6 @@ service in ``/etc/kolla/globals.yml``:
gnocchi_backend_storage: "ceph"
enable_manila_backend_cephfs_native: "yes"
.. end
The combination of ``enable_ceph: "no"`` and ``<service>_backend_ceph: "yes"``
triggers the activation of external ceph mechanism in Kolla.
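A minimal ``/etc/kolla/globals.yml`` sketch of that combination, taking Cinder as the example service:

```yaml
enable_ceph: "no"            # do not let kolla deploy its own Ceph
cinder_backend_ceph: "yes"   # point cinder at the existing external cluster
```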
@@ -59,8 +55,6 @@ nodes where ``cinder-volume`` and ``cinder-backup`` will run:
[storage]
compute01
.. end
Configuring External Ceph
~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -85,8 +79,6 @@ Step 1 is done by using Kolla's INI merge mechanism: Create a file in
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
.. end
Now put ceph.conf and the keyring file (name depends on the username created in
Ceph) into the same directory, for example:
@@ -101,8 +93,6 @@ Ceph) into the same directory, for example:
auth_service_required = cephx
auth_client_required = cephx
.. end
.. code-block:: console
$ cat /etc/kolla/config/glance/ceph.client.glance.keyring
@@ -110,8 +100,6 @@ Ceph) into the same directory, for example:
[client.glance]
key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
.. end
Kolla will pick up all files named ``ceph.*`` in this directory and copy them
to the ``/etc/ceph/`` directory of the container.
@@ -138,8 +126,6 @@ the following configuration:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
.. end
.. note::
``cinder_rbd_secret_uuid`` can be found in ``/etc/kolla/passwords.yml`` file.
@@ -159,8 +145,6 @@ the following configuration:
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
.. end
Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:
.. code-block:: ini
@@ -173,8 +157,6 @@ Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:
auth_service_required = cephx
auth_client_required = cephx
.. end
Separate configuration options can be configured for
cinder-volume and cinder-backup by adding ceph.conf files to
``/etc/kolla/config/cinder/cinder-volume`` and
@@ -197,8 +179,6 @@ to these directories, for example:
[client.cinder]
key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
.. end
.. code-block:: console
$ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
@@ -206,8 +186,6 @@ to these directories, for example:
[client.cinder-backup]
key = AQC9wNBYrD8MOBAAwUlCdPKxWZlhkrWIDE1J/w==
.. end
.. code-block:: console
$ cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
@@ -215,8 +193,6 @@ to these directories, for example:
[client.cinder]
key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
.. end
It is important that the files are named ``ceph.client*``.
Nova
@@ -230,8 +206,6 @@ Put ceph.conf, nova client keyring file and cinder client keyring file into
$ ls /etc/kolla/config/nova
ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
.. end
Configure nova-compute to use Ceph as the ephemeral back end by creating
``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
configurations:
@@ -244,8 +218,6 @@ configurations:
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
.. end
.. note::
``rbd_user`` might vary depending on your environment.
@@ -264,8 +236,6 @@ the following configuration:
ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
ceph_conffile = /etc/ceph/ceph.conf
.. end
Put ceph.conf and gnocchi client keyring file in
``/etc/kolla/config/gnocchi``:
@@ -274,8 +244,6 @@ Put ceph.conf and gnocchi client keyring file in
$ ls /etc/kolla/config/gnocchi
ceph.client.gnocchi.keyring ceph.conf gnocchi.conf
.. end
Manila
------
@@ -301,8 +269,6 @@ in Ceph) into the same directory, for example:
auth_service_required = cephx
auth_client_required = cephx
.. end
.. code-block:: console
$ cat /etc/kolla/config/manila/ceph.client.manila.keyring
@@ -310,8 +276,6 @@ in Ceph) into the same directory, for example:
[client.manila]
key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
.. end
For more details on the rest of the Manila setup, such as creating the share
type ``default_share_type``, please see `Manila in Kolla
<https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html>`__.
@@ -33,8 +33,6 @@ by ensuring the following line exists within ``/etc/kolla/globals.yml``:
enable_mariadb: "no"
.. end
There are two ways in which you can use external MariaDB:
* Using an already load-balanced MariaDB address
* Using an external MariaDB cluster
@@ -53,8 +51,6 @@ need to do the following:
[mariadb:children]
myexternalmariadbloadbalancer.com
.. end
#. Define ``database_address`` in ``/etc/kolla/globals.yml`` file:
@@ -62,8 +58,6 @@ need to do the following:
database_address: myexternalloadbalancer.com
.. end
.. note::
If ``enable_external_mariadb_load_balancer`` is set to ``no``
@@ -82,8 +76,6 @@ Using this way, you need to adjust the inventory file:
myexternaldbserver2.com
myexternaldbserver3.com
.. end
If you choose to use haproxy for load balancing between the
members of the cluster, every node within this group
needs to be resolvable and reachable from all
@@ -97,8 +89,6 @@ according to the following configuration:
enable_external_mariadb_load_balancer: yes
.. end
Using External MariaDB with a privileged user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -111,8 +101,6 @@ and set the ``database_password`` in ``/etc/kolla/passwords.yml`` file:
database_password: mySuperSecurePassword
.. end
If the MariaDB ``username`` is not ``root``, set ``database_username`` in
``/etc/kolla/globals.yml`` file:
@@ -120,8 +108,6 @@ If the MariaDB ``username`` is not ``root``, set ``database_username`` in
database_username: "privilegeduser"
.. end
Using preconfigured databases / users:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -132,8 +118,6 @@ The first step you need to take is to set ``use_preconfigured_databases`` to
use_preconfigured_databases: "yes"
.. end
.. note::
When the ``use_preconfigured_databases`` flag is set to ``"yes"``, you need
@@ -153,8 +137,6 @@ In order to achieve this, you will need to define the user names in the
keystone_database_user: preconfigureduser1
nova_database_user: preconfigureduser2
.. end
Also, you will need to set the passwords for all databases in the Also, you will need to set the passwords for all databases in the
``/etc/kolla/passwords.yml`` file ``/etc/kolla/passwords.yml`` file
@ -172,8 +154,6 @@ all you need to do is the following steps:
use_common_mariadb_user: "yes" use_common_mariadb_user: "yes"
.. end
#. Set the database_user within ``/etc/kolla/globals.yml`` to #. Set the database_user within ``/etc/kolla/globals.yml`` to
the one provided to you: the one provided to you:
@ -181,8 +161,6 @@ all you need to do is the following steps:
database_user: mycommondatabaseuser database_user: mycommondatabaseuser
.. end
#. Set the common password for all components within #. Set the common password for all components within
``/etc/kolla/passwords.yml``. In order to achieve that you ``/etc/kolla/passwords.yml``. In order to achieve that you
could use the following command: could use the following command:
@ -191,4 +169,3 @@ all you need to do is the following steps:
sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /etc/kolla/passwords.yml sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /etc/kolla/passwords.yml
.. end
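The ``sed`` substitution above can be sanity-checked on a throwaway copy before touching the real ``/etc/kolla/passwords.yml``; the key names below are invented for illustration:

```shell
# Build a throwaway passwords file (key names are hypothetical examples)
printf 'database_password: old1\nnova_database_password: old2\nkeystone_admin_password: keep\n' > /tmp/passwords_demo.yml

# Same substitution as in the step above, pointed at the demo file:
# every key ending in "database_password" receives the common password
sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /tmp/passwords_demo.yml

cat /tmp/passwords_demo.yml
```

Only the ``*database_password`` keys are rewritten; unrelated entries such as the hypothetical ``keystone_admin_password`` are left untouched.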


@@ -27,4 +27,3 @@ a file named custom_local_settings should be created under the directory
('material', 'Material', 'themes/material'),
]
.. end


@@ -58,8 +58,6 @@ Virtual Interface the following PowerShell may be used:
PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
.. end
.. note::
It is very important to make sure that when you are using a Hyper-V node
@@ -76,8 +74,6 @@ running and started automatically.
PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
PS C:\> Start-Service MSiSCSI
.. end
Preparation for Kolla deployer node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -87,8 +83,6 @@ Hyper-V role is required, enable it in ``/etc/kolla/globals.yml``:
enable_hyperv: "yes"
.. end
Hyper-V options are also required in ``/etc/kolla/globals.yml``:
.. code-block:: yaml
@@ -98,8 +92,6 @@ Hyper-V options are also required in ``/etc/kolla/globals.yml``:
vswitch_name: <HyperV virtual switch name>
nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
.. end
If tenant networks are to be built using VLAN add corresponding type in
``/etc/kolla/globals.yml``:
@@ -107,8 +99,6 @@ If tenant networks are to be built using VLAN add corresponding type in
neutron_tenant_network_types: 'flat,vlan'
.. end
The virtual switch is the same one created on the HyperV setup part.
For nova_msi_url, different Nova MSI (Mitaka/Newton/Ocata) versions can
be found on `Cloudbase website
@@ -128,8 +118,6 @@ Add the Hyper-V node in ``ansible/inventory`` file:
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
.. end
``pywinrm`` package needs to be installed in order for Ansible to work
on the HyperV node:
@@ -137,8 +125,6 @@ on the HyperV node:
pip install "pywinrm>=0.2.2"
.. end
.. note::
In case of a test deployment with controller and compute nodes as
@@ -149,8 +135,6 @@ on the HyperV node:
Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList <VLAN ID> -NativeVlanId 0 <VM name>
.. end
networking-hyperv mechanism driver is needed for neutron-server to
communicate with HyperV nova-compute. This can be built with source
images by default. Manually it can be installed in neutron-server
@@ -160,8 +144,6 @@ container with pip:
pip install "networking-hyperv>=4.0.0"
.. end
For neutron_extension_drivers, ``port_security`` and ``qos`` are
currently supported by the networking-hyperv mechanism driver.
By default only ``port_security`` is set.
@@ -177,15 +159,11 @@ OpenStack HyperV services can be inspected and managed from PowerShell:
PS C:\> Get-Service nova-compute
PS C:\> Get-Service neutron-hyperv-agent
.. end
.. code-block:: console
PS C:\> Restart-Service nova-compute
PS C:\> Restart-Service neutron-hyperv-agent
.. end
For more information on OpenStack HyperV, see
`Hyper-V virtualization platform
<https://docs.openstack.org/ocata/config-reference/compute/hypervisor-hyper-v.html>`__.


@@ -17,8 +17,6 @@ Enable Ironic in ``/etc/kolla/globals.yml``:
enable_ironic: "yes"
.. end
In the same file, define a range of IP addresses that will be available for use
by Ironic inspector, as well as a network to be used for the Ironic cleaning
network:
@@ -28,8 +26,6 @@ network:
ironic_dnsmasq_dhcp_range: "192.168.5.100,192.168.5.110"
ironic_cleaning_network: "public1"
.. end
In the same file, optionally a default gateway to be used for the Ironic
Inspector inspection network:
@@ -37,8 +33,6 @@ Inspector inspection network:
ironic_dnsmasq_default_gateway: 192.168.5.1
.. end
In the same file, specify the PXE bootloader file for Ironic Inspector. The
file is relative to the ``/tftpboot`` directory. The default is ``pxelinux.0``,
and should be correct for x86 systems. Other platforms may require a different
@@ -49,8 +43,6 @@ value, for example aarch64 on Debian requires
ironic_dnsmasq_boot_file: pxelinux.0
.. end
Ironic inspector also requires a deploy kernel and ramdisk to be placed in
``/etc/kolla/config/ironic/``. The following example uses coreos which is
commonly used in Ironic deployments, though any compatible kernel/ramdisk may
@@ -64,16 +56,12 @@ be used:
$ curl https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem.cpio.gz \
-o /etc/kolla/config/ironic/ironic-agent.initramfs
.. end
You may optionally pass extra kernel parameters to the inspection kernel using:
.. code-block:: yaml
ironic_inspector_kernel_cmdline_extras: ['ipa-lldp-timeout=90.0', 'ipa-collect-lldp=1']
.. end
in ``/etc/kolla/globals.yml``.
Enable iPXE booting (optional)
@@ -86,8 +74,6 @@ true in ``/etc/kolla/globals.yml``:
enable_ironic_ipxe: "yes"
.. end
This will enable deployment of a docker container, called ironic_ipxe, running
the web server which iPXE uses to obtain its boot images.
@@ -98,8 +84,6 @@ The port used for the iPXE webserver is controlled via ``ironic_ipxe_port`` in
ironic_ipxe_port: "8089"
.. end
The following changes will occur if iPXE booting is enabled:
- Ironic will be configured with the ``ipxe_enabled`` configuration option set
@@ -117,8 +101,6 @@ Run the deploy as usual:
$ kolla-ansible deploy
.. end
Post-deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -129,8 +111,6 @@ initialise the cloud with some defaults (only to be used for demo purposes):
tools/init-runonce
.. end
Add the deploy kernel and ramdisk to Glance. Here we're reusing the same images
that were fetched for the Inspector:
@@ -142,8 +122,6 @@ that were fetched for the Inspector:
openstack image create --disk-format ari --container-format ari --public \
--file /etc/kolla/config/ironic/ironic-agent.initramfs deploy-initrd
.. end
Create a baremetal flavor:
.. code-block:: console
@@ -152,8 +130,6 @@ Create a baremetal flavor:
openstack flavor set my-baremetal-flavor --property \
resources:CUSTOM_BAREMETAL_RESOURCE_CLASS=1
.. end
Create the baremetal node and associate a port. (Ensure to substitute correct
values for the kernel, ramdisk, and MAC address for your baremetal node)
@@ -171,8 +147,6 @@ values for the kernel, ramdisk, and MAC address for your baremetal node)
openstack baremetal port create 52:54:00:ff:15:55 --node 57aa574a-5fea-4468-afcf-e2551d464412
.. end
Make the baremetal node available to nova:
.. code-block:: console
@@ -180,8 +154,6 @@ Make the baremetal node available to nova:
openstack baremetal node manage 57aa574a-5fea-4468-afcf-e2551d464412
openstack baremetal node provide 57aa574a-5fea-4468-afcf-e2551d464412
.. end
It may take some time for the node to become available for scheduling in nova.
Use the following commands to wait for the resources to become available:
@@ -190,8 +162,6 @@ Use the following commands to wait for the resources to become available:
openstack hypervisor stats show
openstack hypervisor show 57aa574a-5fea-4468-afcf-e2551d464412
.. end
Booting the baremetal
~~~~~~~~~~~~~~~~~~~~~
You can now use the following sample command to boot the baremetal instance:
@@ -201,8 +171,6 @@ You can now use the following sample command to boot the baremetal instance:
openstack server create --image cirros --flavor my-baremetal-flavor \
--key-name mykey --network public1 demo1
.. end
Notes
~~~~~
@@ -215,8 +183,6 @@ requests may not be hitting various pieces of the process:
tcpdump -i <interface> port 67 or port 68 or port 69 -e -n
.. end
Configuring the Web Console
---------------------------
Configuration based off upstream `Node web console
@@ -231,8 +197,6 @@ Set ironic_console_serial_speed in ``/etc/kolla/globals.yml``:
ironic_console_serial_speed: 9600n8
.. end
Deploying using virtual baremetal (vbmc + libvirt)
--------------------------------------------------
See https://brk3.github.io/post/kolla-ironic-libvirt/


@@ -22,8 +22,6 @@ To allow Docker daemon connect to the etcd, add the following in the
ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
.. end
The IP address is the host running the etcd service. ```2375``` is the port that
allows Docker daemon to be accessed remotely. ```2379``` is the etcd listening
port.
@@ -37,16 +35,12 @@ following variables
enable_etcd: "yes"
enable_kuryr: "yes"
.. end
Deploy the OpenStack cloud and kuryr network plugin
.. code-block:: console
kolla-ansible deploy
.. end
Create a Virtual Network
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -54,24 +48,18 @@ Create a Virtual Network
docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1
.. end
To list the created network:
.. code-block:: console
docker network ls
.. end
The created network is also available from OpenStack CLI:
.. code-block:: console
openstack network list
.. end
For more information about how kuryr works, see
`kuryr (OpenStack Containers Networking)
<https://docs.openstack.org/kuryr/latest/>`__.


@@ -42,8 +42,6 @@ Cinder and Ceph are required, enable it in ``/etc/kolla/globals.yml``:
enable_cinder: "yes"
enable_ceph: "yes"
.. end
Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
.. code-block:: console
@@ -51,8 +49,6 @@ Enable Manila and generic back end in ``/etc/kolla/globals.yml``:
enable_manila: "yes"
enable_manila_backend_generic: "yes"
.. end
By default Manila uses instance flavor id 100 for its file systems. For Manila
to work, either create a new nova flavor with id 100 (use *nova flavor-create*)
or change *service_instance_flavor_id* to use one of the default nova flavor
@@ -67,8 +63,6 @@ contents:
[generic]
service_instance_flavor_id = 2
.. end
Verify Operation
~~~~~~~~~~~~~~~~
@@ -86,8 +80,6 @@ to verify successful launch of each process:
| manila-share | share1@generic | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
.. end
Launch an Instance
~~~~~~~~~~~~~~~~~~
@@ -112,8 +104,6 @@ Create a default share type before running manila-share service:
| 8a35da28-0f74-490d-afff-23664ecd4f01 | default_share_type | public | - | driver_handles_share_servers : True | snapshot_support : True |
+--------------------------------------+--------------------+------------+------------+-------------------------------------+-------------------------+
.. end
Create a manila share server image to the Image service:
.. code-block:: console
@@ -146,8 +136,6 @@ Create a manila share server image to the Image service:
| visibility | public |
+------------------+--------------------------------------+
.. end
List available networks to get id and subnets of the private network:
.. code-block:: console
@@ -159,8 +147,6 @@ List available networks to get id and subnets of the private network:
| 7c6f9b37-76b4-463e-98d8-27e5686ed083 | private | 3482f524-8bff-4871-80d4-5774c2730728 172.16.1.0/24 |
+--------------------------------------+---------+----------------------------------------------------+
.. end
Create a shared network
.. code-block:: console
@@ -187,8 +173,6 @@ Create a shared network
| description | None |
+-------------------+--------------------------------------+
.. end
Create a flavor (**Required** if you not defined *manila_instance_flavor_id* in
``/etc/kolla/config/manila-share.conf`` file)
@@ -196,8 +180,6 @@ Create a flavor (**Required** if you not defined *manila_instance_flavor_id* in
# nova flavor-create manila-service-flavor 100 128 0 1
.. end
Create a share
~~~~~~~~~~~~~~
@@ -234,8 +216,6 @@ Create a NFS share using the share network:
| metadata | {} |
+-----------------------------+--------------------------------------+
.. end
After some time, the share status should change from ``creating``
to ``available``:
@@ -249,8 +229,6 @@ to ``available``:
| e1e06b14-ba17-48d4-9e0b-ca4d59823166 | demo-share1 | 1 | NFS | available | False | default_share_type | share1@generic#GENERIC | nova |
+--------------------------------------+-------------+------+-------------+-----------+-----------+--------------------------------------+-----------------------------+-------------------+
.. end
Configure user access to the new share before attempting to mount it via the
network:
@@ -258,8 +236,6 @@ network:
# manila access-allow demo-share1 ip INSTANCE_PRIVATE_NETWORK_IP
.. end
Mount the share from an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -304,24 +280,18 @@ Get export location from share
| metadata | {} |
+-----------------------------+----------------------------------------------------------------------+
.. end
Create a folder where the mount will be placed:
.. code-block:: console
# mkdir ~/test_folder
.. end
Mount the NFS share in the instance using the export location of the share:
.. code-block:: console
# mount -v 10.254.0.3:/shares/share-422dc546-8f37-472b-ac3c-d23fe410d1b6 ~/test_folder
.. end
Share Migration
~~~~~~~~~~~~~~~
@@ -340,8 +310,6 @@ Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
[DEFAULT]
data_node_access_ip = 10.10.10.199
.. end
.. note::
Share migration requires have more than one back end configured.
@@ -356,8 +324,6 @@ Use the manila migration command, as shown in the following example:
--new_share_type share_type --new_share_network share_network \
shareID destinationHost
.. end
- ``--force-host-copy``: Forces the generic host-based migration mechanism and
bypasses any driver optimizations.
- ``destinationHost``: Is in this format ``host#pool`` which includes
@@ -391,8 +357,6 @@ check progress.
| total_progress | 100 |
+----------------+-------------------------+
.. end
Use the :command:`manila migration-complete shareID` command to complete share
migration process.
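As a quick illustration of the ``host#pool`` format that ``destinationHost`` uses, shell parameter expansion can split it apart; the value here reuses the ``share1@generic#GENERIC`` example from the service listing above:

```shell
dest='share1@generic#GENERIC'   # example destinationHost in host#pool form
host="${dest%%#*}"              # everything before the first '#'
pool="${dest##*#}"              # everything after the last '#'
echo "host=${host} pool=${pool}"
```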


@@ -80,8 +80,6 @@ Enable Shared File Systems service and HNAS driver in
enable_manila: "yes"
enable_manila_backend_hnas: "yes"
.. end
Configure the OpenStack networking so it can reach HNAS Management
interface and HNAS EVS Data interface.
@@ -95,8 +93,6 @@ In ``/etc/kolla/globals.yml`` set:
neutron_bridge_name: "br-ex,br-ex2"
neutron_external_interface: "eth1,eth2"
.. end
.. note::
``eth1`` is used to Neutron external interface and ``eth2`` is
@@ -127,8 +123,6 @@ List the available tenants:
$ openstack project list
.. end
Create a network to the given tenant (service), providing the tenant ID,
a name for the network, the name of the physical network over which the
virtual network is implemented, and the type of the physical mechanism by
@@ -139,16 +133,12 @@ which the virtual network is implemented:
$ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
--provider:physical_network=physnet2 --provider:network_type=flat
.. end
*Optional* - List available networks:
.. code-block:: console
$ neutron net-list
.. end
Create a subnet to the same tenant (service), the gateway IP of this subnet,
a name for the subnet, the network ID created before, and the CIDR of
subnet:
@@ -158,16 +148,12 @@ subnet:
$ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
.. end
*Optional* - List available subnets:
.. code-block:: console
$ neutron subnet-list
.. end
Add the subnet interface to a router, providing the router ID and subnet
ID created before:
@@ -175,8 +161,6 @@ ID created before:
$ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
.. end
Create a file system on HNAS. See the `Hitachi HNAS reference <http://www.hds.com/assets/pdf/hus-file-module-file-services-administration-guide.pdf>`_.
.. important ::
@@ -193,8 +177,6 @@ Create a route in HNAS to the tenant network:
$ console-context --evs <EVS_ID_IN_USE> route-net-add --gateway <FLAT_NETWORK_GATEWAY> \
<TENANT_PRIVATE_NETWORK>
.. end
.. important ::
Make sure multi-tenancy is enabled and routes are configured per EVS.
@@ -204,8 +186,6 @@ Create a route in HNAS to the tenant network:
$ console-context --evs 3 route-net-add --gateway 192.168.1.1 \
10.0.0.0/24
.. end
Create a share
~~~~~~~~~~~~~~
@@ -221,8 +201,6 @@ Create a default share type before running manila-share service:
| 3e54c8a2-1e50-455e-89a0-96bb52876c35 | default_share_hitachi | public | - | driver_handles_share_servers : False | snapshot_support : True |
+--------------------------------------+-----------------------+------------+------------+--------------------------------------+-------------------------+
.. end
Create a NFS share using the HNAS back end:
.. code-block:: console
@@ -232,8 +210,6 @@ Create a NFS share using the HNAS back end:
--description "My Manila share" \
--share-type default_share_hitachi
.. end
Verify Operation:
.. code-block:: console
@@ -246,8 +222,6 @@ Verify Operation:
| 721c0a6d-eea6-41af-8c10-72cd98985203 | mysharehnas | 1 | NFS | available | False | default_share_hitachi | control@hnas1#HNAS1 | nova |
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------------+-------------------------+-------------------+
.. end
.. code-block:: console
$ manila show mysharehnas
@@ -288,8 +262,6 @@ Verify Operation:
| metadata | {} |
+-----------------------------+-----------------------------------------------------------------+
.. end
.. _hnas_configure_multiple_back_ends:
Configure multiple back ends
@@ -314,8 +286,6 @@ Modify the file ``/etc/kolla/config/manila.conf`` and add the contents:
[DEFAULT]
enabled_share_backends = generic,hnas1,hnas2
.. end
Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
.. path /etc/kolla/config/manila-share.conf
@@ -352,8 +322,6 @@ Modify the file ``/etc/kolla/config/manila-share.conf`` and add the contents:
hitachi_hnas_evs_ip = <evs_ip>
hitachi_hnas_file_system_name = FS-Manila2
.. end
For more information about how to manage shares, see the
`Manage shares
<https://docs.openstack.org/manila/latest/user/create-and-manage-shares.html>`__.
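The removal itself is mechanical: every marker sits alone on its line. A hedged sketch of how such lines could be stripped in bulk (assumes GNU sed; the directory layout here is a throwaway stand-in, and the actual change was of course generated and reviewed through Gerrit):

```shell
# Illustrative sketch: strip ".. end" marker lines from .rst files.
# A temporary tree stands in for the real doc/ directory (an assumption).
docs=$(mktemp -d)
printf 'First paragraph.\n\n.. end\nNext paragraph.\n' > "$docs/networking.rst"

# Delete any line consisting solely of the marker (GNU sed in-place edit).
find "$docs" -name '*.rst' -print0 | xargs -0 sed -i '/^\.\. end$/d'

cat "$docs/networking.rst"
```

Anchoring the pattern with `^` and `$` avoids touching prose that merely mentions the marker.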
@@ -27,8 +27,6 @@ as the following example shows:
enable_neutron_provider_networks: "yes"
.. end
Enabling Neutron Extensions
===========================
@@ -44,8 +42,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
enable_neutron_sfc: "yes"
.. end
Verification
------------
@@ -65,8 +61,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
enable_neutron_vpnaas: "yes"
.. end
Verification
------------
@@ -83,8 +77,6 @@ and versioning may differ depending on deploy configuration):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
97d25657d55e operator:5000/kolla/oraclelinux-source-neutron-vpnaas-agent:4.0.0 "kolla_start" 44 minutes ago Up 44 minutes neutron_vpnaas_agent
.. end
Kolla-Ansible includes a small script that can be used in tandem with
``tools/init-runonce`` to verify the VPN using two routers and two Nova VMs:
@@ -93,8 +85,6 @@ Kolla-Ansible includes a small script that can be used in tandem with
tools/init-runonce
tools/init-vpn
.. end
Verify both VPN services are active:
.. code-block:: console
@@ -108,8 +98,6 @@ Verify both VPN services are active:
| edce15db-696f-46d8-9bad-03d087f1f682 | vpn_east | 058842e0-1d01-4230-af8d-0ba6d0da8b1f | ACTIVE |
+--------------------------------------+----------+--------------------------------------+--------+
.. end
Two VMs can now be booted, one on vpn_east, the other on vpn_west, and
encrypted ping packets observed being sent from one to the other.
@@ -129,8 +117,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
enable_opendaylight: "yes"
.. end
Networking-ODL is an additional Neutron plugin that allows the OpenDaylight
SDN Controller to utilize its networking virtualization features.
For OpenDaylight to work, the Networking-ODL plugin has to be installed in
@@ -152,8 +138,6 @@ OpenDaylight ``globals.yml`` configurable options with their defaults include:
opendaylight_features: "odl-mdsal-apidocs,odl-netvirt-openstack"
opendaylight_allowed_network_types: '"flat", "vlan", "vxlan"'
.. end
Clustered OpenDaylight Deploy
-----------------------------
@@ -221,8 +205,6 @@ config and regenerating your grub file.
default_hugepagesz=2M hugepagesz=2M hugepages=25000
.. end
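As a back-of-the-envelope check on the reservation above (my arithmetic, not kolla tooling): 25000 hugepages of 2 MiB each pin about 49 GiB of RAM at boot, so the host needs comfortably more memory than that.

```shell
# Rough arithmetic for the grub line above: pages x page size.
hugepages=25000
page_mib=2
total_mib=$((hugepages * page_mib))
echo "${total_mib} MiB (~$((total_mib / 1024)) GiB) reserved for DPDK"
```

Scale the `hugepages` count to the amount of memory you can afford to dedicate to OVS-DPDK and VMs backed by hugepages.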
As DPDK is a userspace networking library, it requires userspace-compatible
drivers to be able to control the physical interfaces on the platform.
DPDK technically supports three kernel drivers, ``igb_uio``, ``uio_pci_generic``, and
@@ -252,8 +234,6 @@ To enable ovs-dpdk, add the following configuration to
tunnel_interface: "dpdk_bridge"
neutron_bridge_name: "dpdk_bridge"
.. end
Unlike standard Open vSwitch deployments, the interface specified by
``neutron_external_interface`` should have an IP address assigned.
The IP address assigned to ``neutron_external_interface`` will be moved to
@@ -306,8 +286,6 @@ Modify the ``/etc/kolla/globals.yml`` file as the following example shows:
enable_neutron_sriov: "yes"
.. end
Modify the ``/etc/kolla/config/neutron/ml2_conf.ini`` file and add
``sriovnicswitch`` to the ``mechanism_drivers``. Also, the provider
networks used by SRIOV should be configured. Both flat and VLAN are configured
@@ -325,8 +303,6 @@ with the same physical network name in this example:
[ml2_type_flat]
flat_networks = sriovtenant1
.. end
Add ``PciPassthroughFilter`` to scheduler_default_filters
The ``PciPassthroughFilter``, which is required by the Nova Scheduler service
@@ -343,8 +319,6 @@ required by The Nova Scheduler service on the controller node.
scheduler_default_filters = <existing filters>, PciPassthroughFilter
scheduler_available_filters = nova.scheduler.filters.all_filters
.. end
Edit the ``/etc/kolla/config/nova.conf`` file and add PCI device whitelisting,
which is needed by the OpenStack Compute service(s) on the compute node.
@@ -354,8 +328,6 @@ this is needed by OpenStack Compute service(s) on the Compute.
[pci]
passthrough_whitelist = [{"devname": "ens785f0", "physical_network": "sriovtenant1"}]
.. end
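The ``passthrough_whitelist`` value is a JSON list, and a malformed entry is an easy way to leave ``nova-compute`` unable to start. A quick, purely illustrative syntax check before writing the line into ``nova.conf`` (assumes ``python3`` is available on the deployment host):

```shell
# Validate the whitelist entry as JSON before committing it to nova.conf.
whitelist='[{"devname": "ens785f0", "physical_network": "sriovtenant1"}]'
echo "$whitelist" | python3 -m json.tool > /dev/null && echo "whitelist parses"
```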
Modify the ``/etc/kolla/config/neutron/sriov_agent.ini`` file. Add physical
network to interface mapping. Specific VFs can also be excluded here. Leaving
blank means to enable all VFs for the interface:
@@ -367,8 +339,6 @@ blank means to enable all VFs for the interface:
physical_device_mappings = sriovtenant1:ens785f0
exclude_devices =
.. end
Run deployment.
Verification
@@ -392,8 +362,6 @@ output of both ``lspci`` and ``ip link show``. For example:
vf 2 MAC fa:16:3e:92:cf:12, spoof checking on, link-state auto, trust off
vf 3 MAC fa:16:3e:00:a3:01, vlan 1000, spoof checking on, link-state auto, trust off
.. end
Verify the SRIOV Agent container is running on the compute node(s):
.. code-block:: console
@@ -402,8 +370,6 @@ Verify the SRIOV Agent container is running on the compute node(s):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b03a8f4c0b80 10.10.10.10:4000/registry/centos-source-neutron-sriov-agent:17.04.0 "kolla_start" 18 minutes ago Up 18 minutes neutron_sriov_agent
.. end
Verify the SRIOV Agent service is present and UP:
.. code-block:: console
@@ -416,8 +382,6 @@ Verify the SRIOV Agent service is present and UP:
| 7c06bda9-7b87-487e-a645-cc6c289d9082 | NIC Switch agent | av09-18-wcp | None | :-) | UP | neutron-sriov-nic-agent |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
.. end
Create a new provider network. Set ``provider-physical-network`` to the
physical network name that was configured in ``/etc/kolla/config/nova.conf``.
Set ``provider-network-type`` to the desired type. If using VLAN, ensure
@@ -442,16 +406,12 @@ Create a subnet with a DHCP range for the provider network:
--allocation-pool start=11.0.0.5,end=11.0.0.100 \
sriovnet1_sub1
.. end
Create a port on the provider network with ``vnic_type`` set to ``direct``:
.. code-block:: console
# openstack port create --network sriovnet1 --vnic-type=direct sriovnet1-port1
.. end
Start a new instance with the SRIOV port assigned:
.. code-block:: console
@@ -471,8 +431,6 @@ dmesg on the compute node where the instance was placed.
[ 2896.850028] ixgbe 0000:05:00.0: Setting VLAN 1000, QOS 0x0 on VF 3
[ 2897.403367] vfio-pci 0000:05:10.4: enabling device (0000 -> 0002)
.. end
For more information see `OpenStack SRIOV documentation <https://docs.openstack.org/neutron/pike/admin/config-sriov.html>`_.
Nova SRIOV
@@ -508,8 +466,6 @@ Compute service on the compute node also require the ``alias`` option under the
passthrough_whitelist = [{"vendor_id": "8086", "product_id": "10fb"}]
alias = [{"vendor_id":"8086", "product_id":"10ed", "device_type":"type-VF", "name":"vf1"}]
.. end
Run deployment.
Verification
@@ -522,16 +478,12 @@ device from the PCI alias:
# openstack flavor set sriov-flavor --property "pci_passthrough:alias"="vf1:1"
.. end
Start a new instance using the flavor:
.. code-block:: console
# openstack server create --flavor sriov-flavor --image fc-26 vm2
.. end
Verify VF devices were created and the instance starts successfully as in
the Neutron SRIOV case.
@@ -32,8 +32,6 @@ the command line options.
enable_nova_fake: "yes"
num_nova_fake_per_node: 5
.. end
Each compute node will run 5 ``nova-compute`` containers and 5
``neutron-plugin-agent`` containers. When booting an instance, no real
instances are created, but :command:`nova list` shows the fake instances.
@@ -25,8 +25,6 @@ Enable ``OSprofiler`` in ``/etc/kolla/globals.yml`` file:
enable_osprofiler: "yes"
enable_elasticsearch: "yes"
.. end
Verify operation
----------------
@@ -43,8 +41,6 @@ UUID for :command:`openstack server create` command.
--image cirros --flavor m1.tiny --key-name mykey \
--nic net-id=${NETWORK_ID} demo
.. end
The previous command will output the command to retrieve the OSprofiler trace.
.. code-block:: console
@@ -52,8 +48,6 @@ The previous command will output the command to retrieve OSprofiler trace.
$ osprofiler trace show --html <TRACE_ID> --connection-string \
elasticsearch://<api_interface_address>:9200
.. end
For more information about how OSprofiler works, see
`OSProfiler Cross-project profiling library
<https://docs.openstack.org/osprofiler/latest/>`__.
@@ -28,8 +28,6 @@ The resources currently supported by Kolla Ansible are:
kernel_memory
blkio_weight
.. end
Pre-deployment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -89,8 +87,6 @@ add the following to the dimensions options section in
default_container_dimensions:
cpuset_cpus: "1"
.. end
For example, to constrain the number of CPUs that may be used by
the ``nova_libvirt`` container, add the following to the dimensions
options section in ``/etc/kolla/globals.yml``:
@@ -100,8 +96,6 @@ options section in ``/etc/kolla/globals.yml``:
nova_libvirt_dimensions:
cpuset_cpus: "2"
.. end
Deployment
~~~~~~~~~~
@@ -111,4 +105,3 @@ To deploy resource constrained containers, run the deployment as usual:
$ kolla-ansible deploy -i /path/to/inventory
.. end
@@ -23,8 +23,6 @@ Enable Skydive in ``/etc/kolla/globals.yml`` file:
enable_skydive: "yes"
enable_elasticsearch: "yes"
.. end
Verify operation
----------------
@@ -33,8 +33,6 @@ for three disks:
(( index++ ))
done
.. end
For evaluation, loopback devices can be used in lieu of real disks:
.. code-block:: console
@@ -49,8 +47,6 @@ For evaluation, loopback devices can be used in lieu of real disks:
(( index++ ))
done
.. end
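To get a feel for the evaluation loop without multi-gigabyte files, here is a scaled-down sketch using small sparse backing files (file names and the 10 MiB size are illustrative only; the procedure above is still what a real evaluation setup would use):

```shell
# Create three sparse backing files; sparse files consume almost no
# real disk space until data is actually written to them.
workdir=$(mktemp -d)
index=0
while [ $index -lt 3 ]; do
    truncate -s 10M "${workdir}/swift_backing_${index}.img"
    index=$((index + 1))
done
ls "${workdir}"
```

A real setup would then attach each file to a loop device (which requires root) and label the resulting filesystems as the full script above does.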
Disks without a partition table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -67,8 +63,6 @@ Given hard disks with labels swd1, swd2, swd3, use the following settings in
swift_devices_match_mode: "prefix"
swift_devices_name: "swd"
.. end
Rings
~~~~~
@@ -93,8 +87,6 @@ the environment variable and create ``/etc/kolla/config/swift`` directory:
KOLLA_SWIFT_BASE_IMAGE="kolla/oraclelinux-source-swift-base:4.0.0"
mkdir -p /etc/kolla/config/swift
.. end
Generate Object Ring
--------------------
@@ -120,8 +112,6 @@ To generate Swift object ring, run the following commands:
done
done
.. end
Generate Account Ring
---------------------
@@ -147,8 +137,6 @@ To generate Swift account ring, run the following commands:
done
done
.. end
Generate Container Ring
-----------------------
@@ -183,8 +171,6 @@ To generate Swift container ring, run the following commands:
/etc/kolla/config/swift/${ring}.builder rebalance;
done
.. end
For more information, see
https://docs.openstack.org/project-install-guide/object-storage/ocata/initial-rings.html
@@ -197,8 +183,6 @@ Enable Swift in ``/etc/kolla/globals.yml``:
enable_swift : "yes"
.. end
Once the rings are in place, deploying Swift is the same as any other Kolla
Ansible service:
@@ -206,8 +190,6 @@ Ansible service:
# kolla-ansible deploy -i <path/to/inventory-file>
.. end
Verification
~~~~~~~~~~~~
@@ -59,8 +59,6 @@ In order to enable them, you need to edit the file
enable_mistral: "yes"
enable_redis: "yes"
.. end
.. warning::
Barbican is required in multinode deployments to share VIM fernet_keys.
@@ -74,8 +72,6 @@ Deploy tacker and related services.
$ kolla-ansible deploy
.. end
Verification
~~~~~~~~~~~~
@@ -85,24 +81,18 @@ Generate the credentials file.
$ kolla-ansible post-deploy
.. end
Source the credentials file.
.. code-block:: console
$ . /etc/kolla/admin-openrc.sh
.. end
Create base neutron networks and glance images.
.. code-block:: console
$ ./tools/init-runonce
.. end
.. note::
The ``init-runonce`` file is located in ``$PYTHON_PATH/kolla-ansible``
@@ -123,16 +113,12 @@ Install python-tackerclient.
$ pip install python-tackerclient
.. end
Execute the ``deploy-tacker-demo`` script to initialize the VNF creation.
.. code-block:: console
$ ./deploy-tacker-demo
.. end
The tacker demo script will create a sample VNF Descriptor (VNFD) file,
then register a default VIM, create a tacker VNFD, and finally
deploy a VNF from the previously created VNFD.
@@ -153,8 +139,6 @@ Verify tacker VNF status is ACTIVE.
| c52fcf99-101d-427b-8a2d-c9ef54af8b1d | kolla-sample-vnf | {"VDU1": "10.0.0.10"} | ACTIVE | eb3aa497-192c-4557-a9d7-1dff6874a8e6 | 27e8ea98-f1ff-4a40-a45c-e829e53b3c41 |
+--------------------------------------+------------------+-----------------------+--------+--------------------------------------+--------------------------------------+
.. end
Verify the nova instance status is ACTIVE.
.. code-block:: console
@@ -167,8 +151,6 @@ Verify nova instance status is ACTIVE.
| d2d59eeb-8526-4826-8f1b-c50b571395e2 | ta-cf99-101d-427b-8a2d-c9ef54af8b1d-VDU1-fchiv6saay7p | ACTIVE | demo-net=10.0.0.10 | cirros | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-c52fcf99-101d-427b-8a2d-c9ef54af8b1d-VDU1_flavor-yl4bzskwxdkn |
+--------------------------------------+-------------------------------------------------------+--------+--------------------+--------+-----------------------------------------------------------------------------------------------------------------------+
.. end
Verify the Heat stack status is CREATE_COMPLETE.
.. code-block:: console
@@ -181,8 +163,6 @@ Verify Heat stack status is CREATE_COMPLETE.
| 289a6686-70f6-4db7-aa10-ed169fe547a6 | tacker.vnfm.infra_drivers.openstack.openstack_OpenStack-c52fcf99-101d-427b-8a2d-c9ef54af8b1d | 1243948e59054aab83dbf2803e109b3f | CREATE_COMPLETE | 2017-08-23T09:49:50Z | None |
+--------------------------------------+----------------------------------------------------------------------------------------------+----------------------------------+-----------------+----------------------+--------------+
.. end
After the correct functionality of tacker is verified, the tacker demo
can be cleaned up by executing the ``cleanup-tacker`` script.
@@ -190,4 +170,3 @@ can be cleaned up executing ``cleanup-tacker`` script.
$ ./cleanup-tacker
.. end
@@ -92,24 +92,18 @@ For more information, please see `VMware NSX-V documentation <https://docs.vmwar
</service>
</ConfigRoot>
.. end
Then refresh the firewall config by:
.. code-block:: console
# esxcli network firewall refresh
.. end
Verify that the firewall config is applied:
.. code-block:: console
# esxcli network firewall ruleset list
.. end
Deployment
----------
@@ -121,8 +115,6 @@ Enable VMware nova-compute plugin and NSX-V neutron-server plugin in
nova_compute_virt_type: "vmware"
neutron_plugin_agent: "vmware_nsxv"
.. end
.. note::
VMware NSX-V also supports the Neutron FWaaS, LBaaS and VPNaaS services; you can enable
@@ -141,8 +133,6 @@ If you want to set VMware datastore as cinder backend, enable it in
cinder_backend_vmwarevc_vmdk: "yes"
vmware_datastore_name: "TestDatastore"
.. end
If you want to set VMware datastore as the glance backend, enable it in
``/etc/kolla/globals.yml``:
@@ -152,8 +142,6 @@ If you want to set VMware datastore as glance backend, enable it in
vmware_vcenter_name: "TestDatacenter"
vmware_datastore_name: "TestDatastore"
.. end
VMware options are required in ``/etc/kolla/globals.yml``; these options should
be configured correctly according to your NSX-V environment.
@@ -167,8 +155,6 @@ Options for ``nova-compute`` and ``ceilometer``:
vmware_vcenter_insecure: "True"
vmware_vcenter_datastore_regex: ".*"
.. end
.. note::
The VMware vCenter password has to be set in ``/etc/kolla/passwords.yml``.
@@ -177,8 +163,6 @@ Options for ``nova-compute`` and ``ceilometer``:
vmware_vcenter_host_password: "admin"
.. end
Options for Neutron NSX-V support:
.. code-block:: yaml
@@ -214,8 +198,6 @@ Options for Neutron NSX-V support:
vmware_nsxv_password: "nsx_manager_password"
.. end
Then you should start the :command:`kolla-ansible` deployment normally, as for a
KVM/QEMU deployment.
@@ -243,8 +225,6 @@ Enable VMware nova-compute plugin and NSX-V neutron-server plugin in
nova_compute_virt_type: "vmware"
neutron_plugin_agent: "vmware_dvs"
.. end
If you want to set VMware datastore as the Cinder backend, enable it in
``/etc/kolla/globals.yml``:
@@ -254,8 +234,6 @@ If you want to set VMware datastore as Cinder backend, enable it in
cinder_backend_vmwarevc_vmdk: "yes"
vmware_datastore_name: "TestDatastore"
.. end
If you want to set VMware datastore as the Glance backend, enable it in
``/etc/kolla/globals.yml``:
@@ -265,8 +243,6 @@ If you want to set VMware datastore as Glance backend, enable it in
vmware_vcenter_name: "TestDatacenter"
vmware_datastore_name: "TestDatastore"
.. end
VMware options are required in ``/etc/kolla/globals.yml``; these options should
be configured correctly according to the vSphere environment you installed
before. All options for nova, cinder, and glance are the same as for VMware-NSX, except
@@ -282,8 +258,6 @@ Options for Neutron NSX-DVS support:
vmware_dvs_dvs_name: "VDS-1"
vmware_dvs_dhcp_override_mac: ""
.. end
.. note::
The VMware NSX-DVS password has to be set in ``/etc/kolla/passwords.yml``.
@@ -292,8 +266,6 @@ Options for Neutron NSX-DVS support:
vmware_dvs_host_password: "password"
.. end
Then you should start the :command:`kolla-ansible` deployment normally, as for a
KVM/QEMU deployment.
@ -21,8 +21,6 @@ To allow Zun Compute connect to the Docker Daemon, add the following in the
ExecStart= -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375 ExecStart= -H tcp://<DOCKER_SERVICE_IP>:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://<DOCKER_SERVICE_IP>:2379 --cluster-advertise=<DOCKER_SERVICE_IP>:2375
.. end
.. note:: .. note::
``DOCKER_SERVICE_IP`` is zun-compute host IP address. ``2375`` is port that ``DOCKER_SERVICE_IP`` is zun-compute host IP address. ``2375`` is port that
@ -38,16 +36,12 @@ following variables:
enable_kuryr: "yes" enable_kuryr: "yes"
enable_etcd: "yes" enable_etcd: "yes"
.. end
Deploy the OpenStack cloud and zun. Deploy the OpenStack cloud and zun.
.. code-block:: console .. code-block:: console
$ kolla-ansible deploy $ kolla-ansible deploy
.. end
Verification Verification
------------ ------------
@ -57,16 +51,12 @@ Verification
$ kolla-ansible post-deploy $ kolla-ansible post-deploy
.. end
#. Source credentials file: #. Source credentials file:
.. code-block:: console .. code-block:: console
$ . /etc/kolla/admin-openrc.sh $ . /etc/kolla/admin-openrc.sh
.. end
#. Download and create a glance container image: #. Download and create a glance container image:
.. code-block:: console .. code-block:: console
@ -75,16 +65,12 @@ Verification
$ docker save cirros | openstack image create cirros --public \ $ docker save cirros | openstack image create cirros --public \
--container-format docker --disk-format raw --container-format docker --disk-format raw
.. end
#. Create zun container: #. Create zun container:
.. code-block:: console .. code-block:: console
$ zun create --name test --net network=demo-net cirros ping -c4 8.8.8.8 $ zun create --name test --net network=demo-net cirros ping -c4 8.8.8.8
.. end
.. note:: .. note::
Kuryr does not support networks with DHCP enabled, disable DHCP in the Kuryr does not support networks with DHCP enabled, disable DHCP in the
@ -94,8 +80,6 @@ Verification
$ openstack subnet set --no-dhcp <subnet> $ openstack subnet set --no-dhcp <subnet>
.. end
#. Verify container is created:

   .. code-block:: console

      +--------------------------------------+------+---------------+---------+------------+------------+-------+
      | uuid                                 | name | image         | status  | task_state | addresses  | ports |
      +--------------------------------------+------+---------------+---------+------------+------------+-------+
      | 3719a73e-5f86-47e1-bc5f-f4074fc749f2 | test | cirros        | Created | None       | 172.17.0.3 | []    |
      +--------------------------------------+------+---------------+---------+------------+------------+-------+

#. Start container:

   .. code-block:: console

      $ zun start test
      Request to start container test has been accepted.

#. Verify container:

   .. code-block:: console

      4 packets transmitted, 4 packets received, 0% packet loss
      round-trip min/avg/max = 95.884/96.376/96.721 ms
For more information about how zun works, see
`zun, OpenStack Container service <https://docs.openstack.org/zun/latest/>`__.
Ensure that Keystone and Horizon are enabled:

.. code-block:: yaml

   enable_keystone: "yes"
   enable_horizon: "yes"

Then, change the value of ``multiple_regions_names`` to add the names of the
other regions. In this example, we consider two regions. The current one,
formerly known as RegionOne, is hidden behind the ``openstack_region_name``
variable; the second is RegionTwo:

.. code-block:: yaml

   multiple_regions_names:
       - "{{ openstack_region_name }}"
       - "RegionTwo"
.. note::

   Kolla uses these variables to create the necessary endpoints in Keystone.

The authentication configuration should point to the value of
``kolla_internal_fqdn`` in RegionOne:

.. code-block:: yaml

   project_name: "admin"
   domain_name: "default"
Configuration files of cinder, nova, neutron, glance and so on have to be
updated to contact RegionOne's Keystone. Fortunately, Kolla allows you to
override all configuration files at the same time thanks to the
``node_custom_config`` variable. This implies creating a ``global.conf`` file
with the following content:

.. code-block:: ini

   [keystone_authtoken]
   www_authenticate_uri = {{ keystone_internal_url }}
   auth_url = {{ keystone_admin_url }}
The Placement API section inside the nova configuration file also has to be
updated to contact RegionOne's Keystone. So create, in the same directory, a
``nova.conf`` file with the below content:

.. code-block:: ini

   [placement]
   auth_url = {{ keystone_admin_url }}
The Heat section inside the configuration file also has to be updated to
contact RegionOne's Keystone. So create, in the same directory, a
``heat.conf`` file with the below content:

.. code-block:: ini

   [clients_keystone]
   www_authenticate_uri = {{ keystone_internal_url }}
The Ceilometer section inside the configuration file also has to be updated
to contact RegionOne's Keystone. So create, in the same directory, a
``ceilometer.conf`` file with the below content:

.. code-block:: ini

   [service_credentials]
   auth_url = {{ keystone_internal_url }}
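Gathered in one place, the override files above make up the custom
configuration directory that ``node_custom_config`` will point at; a layout
shown purely for illustration:

```console
path/to/the/directory/of/global&nova_conf/
├── global.conf
├── nova.conf
├── heat.conf
└── ceilometer.conf
```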
Then reference the directory that contains these files from
``/etc/kolla/globals.yml``:

.. code-block:: yaml

   node_custom_config: path/to/the/directory/of/global&nova_conf/

Also, change the name of the current region. For instance, RegionTwo:

.. code-block:: yaml

   openstack_region_name: "RegionTwo"
Finally, disable the deployment of Keystone and Horizon, which are
unnecessary in this region, and run ``kolla-ansible``:

.. code-block:: yaml

   enable_keystone: "no"
   enable_horizon: "no"

The configuration is the same for any other region.
In ``/etc/kolla/globals.yml``, set ``docker_registry`` to the address and
port on which the registry is currently running:

.. code-block:: yaml

   docker_registry: 192.168.1.100:5000

The Kolla community recommends using registry 2.3 or later. To deploy a
registry with version 2.3 or later, do the following:

.. code-block:: console

   cd kolla
   tools/start-registry
The Docker registry can be configured as a pull through cache to proxy the
official Kolla images hosted on Docker Hub. In order to configure the local
registry as a pull through cache, on the host machine set the environment
variable ``REGISTRY_PROXY_REMOTEURL`` to the URL of the registry on
Docker Hub:

.. code-block:: console

   export REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io

.. note::

   Pushing to a registry configured as a pull-through cache is unsupported.

For an insecure registry, add its address to the ``insecure-registries``
list in the Docker daemon configuration on each node where Docker
is currently running:

.. code-block:: json

   {
       "insecure-registries" : ["192.168.1.100:5000"]
   }
Restart Docker by executing the following commands.

For CentOS or Ubuntu with systemd:

.. code-block:: console

   systemctl restart docker

For Ubuntu with upstart or sysvinit:

.. code-block:: console

   service docker restart
.. _edit-inventory:

Edit the Inventory File
=======================

The inventory file controls how Ansible interacts with the remote hosts:

.. code-block:: ini

   control01 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>
   192.168.122.24 ansible_ssh_user=<ssh-username> ansible_become=True ansible_private_key_file=<path/to/private-key-file>

.. note::

   Additional inventory parameters might be required according to your
   environment setup.

Services are grouped together, and changing these groups around can break
your deployment:

.. code-block:: ini

   [haproxy:children]
   network
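As an illustration, a trimmed-down ``multinode`` inventory might distribute
the top-level groups like this; the hostnames are examples, and the shipped
inventory defines many more sub-groups:

```ini
[control]
control01
control02
control03

[network]
network01

[compute]
compute01
compute02

[monitoring]
control01

[storage]
storage01
```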
Deploying Kolla
===============
First, check that the deployment targets are in a state where Kolla may
deploy to them:

.. code-block:: console

   kolla-ansible prechecks -i <path/to/multinode/inventory/file>

.. note::

   RabbitMQ doesn't work with IP addresses, hence the IP address of
   ``api_interface`` should be resolvable by hostname on all hosts.

Run the deployment:

.. code-block:: console

   kolla-ansible deploy -i <path/to/multinode/inventory/file>
If upgrading from ``5.0.0`` to ``6.0.0``, upgrade the kolla-ansible package:

.. code-block:: console

   pip install --upgrade kolla-ansible==6.0.0

If this is a minor upgrade, and you do not wish to upgrade kolla-ansible
itself, you may skip this step.

For the kolla Docker images, ``openstack_release`` is updated to ``6.0.0``:

.. code-block:: yaml

   openstack_release: 6.0.0
Once the kolla release, the inventory file, and the relevant configuration
files have been updated in this way, the operator may first want to 'pull'
down the images to stage the ``6.0.0`` versions. This can be done safely
ahead of the upgrade.

Run the command to pull the ``6.0.0`` images for staging:

.. code-block:: console

   kolla-ansible pull

At a convenient time, the upgrade can now be run (it will complete more
quickly if the images have been staged ahead of time).

To perform the upgrade:

.. code-block:: console

   kolla-ansible upgrade
After this command is complete, the containers will have been recreated from
the new images.

For example:

.. code-block:: console

   kolla-genpwd -p passwords.yml.new
   kolla-mergepwd --old passwords.yml.old --new passwords.yml.new --final /etc/kolla/passwords.yml
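To see what the merge achieves, here is a rough, illustrative sketch of the
idea in plain shell using throwaway files; the real password files are YAML
managed by ``kolla-genpwd`` and ``kolla-mergepwd``, not by these commands:

```shell
# Create two toy password files: the old one with a real value,
# the new one with a fresh key introduced by the release.
printf 'database_password: s3cret\n' > passwords.yml.old
printf 'database_password: CHANGEME\nplacement_password: CHANGEME\n' > passwords.yml.new

# Old values win; keys that exist only in the new file are appended.
cut -d: -f1 passwords.yml.old > old_keys.txt
cp passwords.yml.old merged.yml
grep -vF -f old_keys.txt passwords.yml.new >> merged.yml

cat merged.yml
```

The merged file keeps the existing ``database_password`` and picks up the
new ``placement_password`` placeholder for regeneration.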
Install dependencies
~~~~~~~~~~~~~~~~~~~~

#. Install and upgrade ``pip``.

   For CentOS, run:

   .. code-block:: console

      yum install python-pip
      pip install -U pip

   For Ubuntu, run:

   .. code-block:: console

      apt-get install python-pip
      pip install -U pip
#. Install the following dependencies:

   For CentOS, run:

   .. code-block:: console

      yum install python-devel libffi-devel gcc openssl-devel libselinux-python

   For Ubuntu, run:

   .. code-block:: console

      apt-get install python-dev libffi-dev gcc libssl-dev python-selinux python-setuptools
#. Install `Ansible <http://www.ansible.com>`__ from distribution packaging.

   For CentOS, it can be installed by:

   .. code-block:: console

      yum install ansible

   For Ubuntu, it can be installed by:

   .. code-block:: console

      apt-get install ansible

#. Use ``pip`` to install or upgrade Ansible to the latest version:

   .. code-block:: console

      pip install -U ansible
   .. note::

      It is recommended to use virtualenv to install non-system packages.

#. Add the following options to the Ansible configuration file:

   .. code-block:: ini

      pipelining=True
      forks=100
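These options belong under the ``[defaults]`` section of the Ansible
configuration file; a minimal sketch is below. The ``host_key_checking``
line is an extra convenience setting common in lab deployments, shown here
as an assumption rather than something this guide requires:

```ini
[defaults]
# Reuse SSH connections and batch module transfers for speed.
pipelining=True
# Run tasks against up to 100 hosts in parallel.
forks=100
# Assumed convenience setting for disposable lab hosts only.
host_key_checking=False
```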
Install Kolla-ansible
~~~~~~~~~~~~~~~~~~~~~

Install Kolla-ansible for deployment or evaluation
--------------------------------------------------

#. Install kolla-ansible and its dependencies using ``pip``:

   .. code-block:: console

      pip install kolla-ansible
#. Copy ``globals.yml`` and ``passwords.yml`` to the ``/etc/kolla``
   directory.

   For CentOS, run:

   .. code-block:: console

      cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/

   For Ubuntu, run:

   .. code-block:: console

      cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/

#. Copy the ``all-in-one`` and ``multinode`` inventory files to
   the current directory.

   For CentOS, run:

   .. code-block:: console

      cp /usr/share/kolla-ansible/ansible/inventory/* .

   For Ubuntu, run:

   .. code-block:: console

      cp /usr/local/share/kolla-ansible/ansible/inventory/* .
Install Kolla for development
-----------------------------

#. Clone the ``kolla`` and ``kolla-ansible`` repositories from git:

   .. code-block:: console

      git clone https://github.com/openstack/kolla
      git clone https://github.com/openstack/kolla-ansible

#. Install requirements of ``kolla`` and ``kolla-ansible``:

   .. code-block:: console

      pip install -r kolla/requirements.txt
      pip install -r kolla-ansible/requirements.txt
#. Copy the configuration files to the ``/etc/kolla`` directory.
   ``kolla-ansible`` holds the configuration files (``globals.yml`` and
   ``passwords.yml``) in ``etc/kolla``:

   .. code-block:: console

      mkdir -p /etc/kolla
      cp -r kolla-ansible/etc/kolla/* /etc/kolla

#. Copy the inventory files to the current directory. ``kolla-ansible``
   holds the inventory files (``all-in-one`` and ``multinode``) in the
   ``ansible/inventory`` directory:

   .. code-block:: console

      cp kolla-ansible/ansible/inventory/* .
Prepare initial configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To deploy on more than one node, edit the ``multinode`` inventory:

.. code-block:: ini

   localhost ansible_connection=local become=true
   # use localhost and sudo

To learn more about inventory files, check
`Ansible documentation <http://docs.ansible.com/ansible/latest/intro_inventory.html>`_.

To confirm that the inventory is configured correctly, run:

.. code-block:: console

   ansible -i multinode all -m ping
.. note::

   Ubuntu might not come with python pre-installed. That will cause
   errors in the ``ping`` module.

For deployment or evaluation, run:

.. code-block:: console

   kolla-genpwd

For development, run:

.. code-block:: console

   cd kolla-ansible/tools
   ./generate_passwords.py
Kolla globals.yml
-----------------

There are a few options that are required to deploy Kolla-Ansible.

First, the base distribution of the container images has to be selected:

.. code-block:: yaml

   kolla_base_distro: "centos"

Next, the "type" of installation needs to be configured. The choices are
``binary`` and ``source``:

.. code-block:: yaml

   kolla_install_type: "source"

To use DockerHub images, the default image tag has to be overridden. Images
are tagged with release names. For example, to use stable Pike images set:

.. code-block:: yaml

   openstack_release: "pike"

It's important to use the same version of images as kolla-ansible: if pip
was used to install kolla-ansible, it is the latest stable version, so
``openstack_release`` should be set to queens. If git was used with the
master branch, set:

.. code-block:: yaml

   openstack_release: "master"
* Networking

  Kolla-Ansible requires a few networking options to be set.

  The first interface to set is ``network_interface``, the default
  interface for multiple management-type networks:

  .. code-block:: yaml

     network_interface: "eth0"

  The second interface required is dedicated to Neutron external (or
  public) networks; it can be vlan or flat, depending on how the networks
  are created. This interface should be active without an IP address. If
  not, instances won't be able to access the external networks:

  .. code-block:: yaml

     neutron_external_interface: "eth1"

  To learn more about network configuration, refer to `Network overview
  <https://docs.openstack.org/kolla-ansible/latest/admin/production-architecture-guide.html#network-configuration>`_.

  Next, set the internal VIP address, an unused IP address on the
  management network:

  .. code-block:: yaml

     kolla_internal_vip_address: "10.1.0.250"
* Enable additional services

  By default Kolla-Ansible provides a bare compute kit; however, it does
  provide a way to enable many more services. For example, to enable
  Cinder:

  .. code-block:: yaml

     enable_cinder: "yes"

  Kolla now supports many OpenStack services; there is
  `a list of available services
  <https://github.com/openstack/kolla-ansible/blob/master/README.rst#openstack-services>`_.
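Putting the options above together, a minimal ``/etc/kolla/globals.yml`` for
an evaluation deployment might look like this; the interface names, VIP and
release are the example values used in this guide, not recommendations:

```yaml
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "pike"
network_interface: "eth0"
neutron_external_interface: "eth1"
kolla_internal_vip_address: "10.1.0.250"
enable_cinder: "yes"
```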
* For deployment or evaluation, run:

  #. Bootstrap servers with kolla deploy dependencies:

     .. code-block:: console

        kolla-ansible -i ./multinode bootstrap-servers

  #. Do pre-deployment checks for hosts:

     .. code-block:: console

        kolla-ansible -i ./multinode prechecks

  #. Finally proceed to actual OpenStack deployment:

     .. code-block:: console

        kolla-ansible -i ./multinode deploy
* For development, run:

  #. Bootstrap servers with kolla deploy dependencies:

     .. code-block:: console

        cd kolla-ansible/tools
        ./kolla-ansible -i ../ansible/inventory/multinode bootstrap-servers

  #. Do pre-deployment checks for hosts:

     .. code-block:: console

        ./kolla-ansible -i ../ansible/inventory/multinode prechecks

  #. Finally proceed to actual OpenStack deployment:

     .. code-block:: console

        ./kolla-ansible -i ../ansible/inventory/multinode deploy

When this playbook finishes, OpenStack should be up, running and functional!
If an error occurs during execution, refer to the
`troubleshooting guide <https://docs.openstack.org/kolla-ansible/latest/user/troubleshooting.html>`_.
Using OpenStack
~~~~~~~~~~~~~~~

#. Install the basic OpenStack CLI clients:

   .. code-block:: console

      pip install python-openstackclient python-glanceclient python-neutronclient

#. OpenStack requires an openrc file where credentials for the admin user
   are set. To generate this file:

   * For deployment or evaluation, run:

     .. code-block:: console

        kolla-ansible post-deploy
        . /etc/kolla/admin-openrc.sh

   * For development, run:

     .. code-block:: console

        ./kolla-ansible post-deploy
        . /etc/kolla/admin-openrc.sh
#. Depending on how you installed Kolla-Ansible, there is a script that
   will create example networks, images, and so on.

   * For deployment or evaluation, run the ``init-runonce`` script.

     On CentOS:

     .. code-block:: console

        . /usr/share/kolla-ansible/init-runonce

     On Ubuntu:

     .. code-block:: console

        . /usr/local/share/kolla-ansible/init-runonce

   * For development, run:

     .. code-block:: console

        . kolla-ansible/tools/init-runonce
To correct the problem where Operators have a misconfigured environment,
the Kolla community has added a precheck feature which ensures the
deployment targets are in a state where Kolla may deploy to them. To
run the prechecks:

.. code-block:: console

   kolla-ansible prechecks
If a failure during deployment occurs, it nearly always occurs during
evaluation of the software. Once the Operator learns the few configuration
options required, it is highly unlikely they will experience a failure in
deployment.

To remove the failed deployment:

.. code-block:: console

   kolla-ansible destroy -i <<inventory-file>>

Any time the tags of a release change, it is possible that the container
implementation from older versions won't match the Ansible playbooks in a
new version. If running multinode from a registry, each node's Docker image
cache must be refreshed. To refresh the docker cache from the local Docker
registry:

.. code-block:: console

   kolla-ansible pull
Debugging Kolla
~~~~~~~~~~~~~~~

The status of the containers can be determined on the deployment
targets by executing:

.. code-block:: console

   docker ps -a

If any of the containers exited, this indicates a bug in the container.
Please seek help by filing a
`launchpad bug <https://bugs.launchpad.net/kolla-ansible/+filebug>`__
or contacting the developers via IRC.

The logs can be examined by executing:

.. code-block:: console

   docker exec -it fluentd bash

The logs from all services in all containers may be read from
``/var/log/kolla/SERVICE_NAME``.

If the stdout logs are needed, please run:

.. code-block:: console

   docker logs <container-name>

Note that most of the containers don't log to stdout, so the above command
will provide no information.