Merge "Upgrade the rst convention of the Reference Guide [1]"
commit a909943bf3
@ -2,8 +2,7 @@
|
||||
Bifrost Guide
|
||||
=============
|
||||
|
||||
From the bifrost developer documentation:
|
||||
|
||||
From the ``Bifrost`` developer documentation:
|
||||
Bifrost (pronounced bye-frost) is a set of Ansible playbooks that automates
|
||||
the task of deploying a base image onto a set of known hardware using
|
||||
Ironic. It provides modular utility for one-off operating system
|
||||
@ -16,7 +15,7 @@ container, as well as building a base OS image and provisioning it onto the
|
||||
baremetal nodes.
|
||||
|
||||
Hosts in the System
|
||||
===================
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In a system deployed by bifrost we define a number of classes of hosts.
|
||||
|
||||
@ -47,7 +46,7 @@ Bare metal compute hosts:
|
||||
OS images is currently out of scope.
|
||||
|
||||
Cloud Deployment Procedure
|
||||
==========================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Cloud deployment using kolla and bifrost consists of the following high level
steps:
|
||||
@ -59,7 +58,7 @@ steps:
|
||||
#. Deploy OpenStack services on the cloud hosts provisioned by bifrost.
|
||||
|
||||
Preparation
|
||||
===========
|
||||
~~~~~~~~~~~
|
||||
|
||||
Prepare the Control Host
|
||||
------------------------
|
||||
@ -78,16 +77,22 @@ has been configured to use, which with bifrost will be ``127.0.0.1``. Bifrost
|
||||
will attempt to modify ``/etc/hosts`` on the deployment host to ensure that
|
||||
this is the case. Docker bind mounts ``/etc/hosts`` into the container from a
|
||||
volume. This prevents atomic renames which will prevent Ansible from fixing
|
||||
the
|
||||
``/etc/hosts`` file automatically.
|
||||
the ``/etc/hosts`` file automatically.
|
||||
|
||||
To enable bifrost to be bootstrapped correctly add an entry to ``/etc/hosts``
|
||||
resolving the deployment host's hostname to ``127.0.0.1``, for example::
|
||||
To enable bifrost to be bootstrapped correctly, add an entry to ``/etc/hosts``
|
||||
resolving the deployment host's hostname to ``127.0.0.1``, for example:
|
||||
|
||||
ubuntu@bifrost:/repo/kolla$ cat /etc/hosts
|
||||
.. code-block:: console
|
||||
|
||||
cat /etc/hosts
|
||||
127.0.0.1 bifrost localhost
|
||||
|
||||
# The following lines are desirable for IPv6 capable hosts
|
||||
.. end
|
||||
|
||||
The following lines are desirable for IPv6 capable hosts:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
::1 ip6-localhost ip6-loopback
|
||||
fe00::0 ip6-localnet
|
||||
ff00::0 ip6-mcastprefix
|
||||
@ -96,64 +101,72 @@ resolving the deployment host's hostname to ``127.0.0.1``, for example::
|
||||
ff02::3 ip6-allhosts
|
||||
192.168.100.15 bifrost
|
||||
|
||||
.. end
|
||||
|
||||
Build a Bifrost Container Image
|
||||
===============================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section provides instructions on how to build a container image for
|
||||
bifrost using kolla.
|
||||
|
||||
Enable Source Build Type
|
||||
------------------------
|
||||
Currently kolla only supports the ``source`` install type for the bifrost image.
|
||||
|
||||
Currently kolla only supports the source install type for the bifrost image.
|
||||
#. To generate the ``kolla-build.conf`` configuration file
|
||||
|
||||
Configuration File
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If required, generate a default configuration file for ``kolla-build``::
|
||||
* If required, generate a default configuration file for :command:`kolla-build`:
|
||||
|
||||
cd kolla
|
||||
tox -e genconfig
|
||||
.. code-block:: console
|
||||
|
||||
Modify ``kolla-build.conf``, setting ``install_type`` to ``source``::
|
||||
cd kolla
|
||||
tox -e genconfig
|
||||
|
||||
install_type = source
|
||||
.. end
|
||||
|
||||
Command line
|
||||
~~~~~~~~~~~~
|
||||
* Modify ``kolla-build.conf``, setting ``install_type`` to ``source``:
|
||||
|
||||
Alternatively, instead of using ``kolla-build.conf``, a source build can be
|
||||
enabled by appending ``--type source`` to the ``kolla-build`` or
|
||||
.. path etc/kolla/kolla-build.conf
|
||||
.. code-block:: ini
|
||||
|
||||
install_type = source
|
||||
|
||||
.. end
|
||||
|
||||
Alternatively, instead of using ``kolla-build.conf``, a ``source`` build can
|
||||
be enabled by appending ``--type source`` to the :command:`kolla-build` or
|
||||
``tools/build.py`` command.
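
For example (a sketch only, reusing the ``bifrost-deploy`` image name from
elsewhere in this guide):

.. code-block:: console

   kolla-build --type source bifrost-deploy

.. end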
|
||||
|
||||
Build Container
|
||||
---------------
|
||||
#. To build images, for Development:
|
||||
|
||||
Development
|
||||
~~~~~~~~~~~
|
||||
.. code-block:: console
|
||||
|
||||
::
|
||||
cd kolla
|
||||
tools/build.py bifrost-deploy
|
||||
|
||||
cd kolla
|
||||
tools/build.py bifrost-deploy
|
||||
.. end
|
||||
|
||||
Production
|
||||
~~~~~~~~~~
|
||||
For Production:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
kolla-build bifrost-deploy
|
||||
kolla-build bifrost-deploy
|
||||
|
||||
.. note::
|
||||
.. end
|
||||
|
||||
By default kolla-build will build all containers using CentOS as the base
|
||||
image. To change this behavior, use the following parameter with
|
||||
``kolla-build`` or ``tools/build.py`` command::
|
||||
.. note::
|
||||
|
||||
--base [ubuntu|centos|oraclelinux]
|
||||
By default :command:`kolla-build` will build all containers using CentOS as
|
||||
the base image. To change this behavior, use the following parameter with
|
||||
:command:`kolla-build` or ``tools/build.py`` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
--base [ubuntu|centos|oraclelinux]
|
||||
|
||||
.. end
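
As an illustration (not taken from the original guide), the base distribution
and install type options can be combined on one command line:

.. code-block:: console

   kolla-build --base ubuntu --type source bifrost-deploy

.. end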
|
||||
|
||||
Configure and Deploy a Bifrost Container
|
||||
========================================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This section provides instructions for how to configure and deploy a container
|
||||
running bifrost services.
|
||||
@ -166,8 +179,8 @@ group. In the ``all-in-one`` and ``multinode`` inventory files, a ``bifrost``
|
||||
group is defined which contains all hosts in the ``deployment`` group. This
|
||||
top level ``deployment`` group is intended to represent the host running the
|
||||
``bifrost_deploy`` container. By default, this group contains ``localhost``.
|
||||
See :doc:`/user/multinode`
|
||||
for details on how to modify the Ansible inventory in a multinode deployment.
|
||||
See :doc:`/user/multinode` for details on how to modify the Ansible inventory
|
||||
in a multinode deployment.
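
As a rough sketch (the exact contents ship with kolla-ansible and may differ),
the relevant groups in the ``all-in-one`` inventory look like this:

.. code-block:: none

   [deployment]
   localhost ansible_connection=local

   [bifrost:children]
   deployment

.. end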
|
||||
|
||||
Bifrost does not currently support running on multiple hosts, so the ``bifrost``
group should contain only a single host; however, this is not enforced by
|
||||
@ -189,6 +202,8 @@ different than ``network_interface``. For example to use ``eth1``:
|
||||
|
||||
bifrost_network_interface: eth1
|
||||
|
||||
.. end
|
||||
|
||||
Note that this interface should typically have L2 network connectivity with the
|
||||
bare metal cloud hosts in order to provide DHCP leases with PXE boot options.
|
||||
|
||||
@ -199,6 +214,8 @@ reflected in ``globals.yml``
|
||||
|
||||
kolla_install_type: source
|
||||
|
||||
.. end
|
||||
|
||||
Prepare Bifrost Configuration
|
||||
-----------------------------
|
||||
|
||||
@ -225,27 +242,29 @@ properties and a logical name.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
---
|
||||
cloud1:
|
||||
uuid: "31303735-3934-4247-3830-333132535336"
|
||||
driver_info:
|
||||
power:
|
||||
ipmi_username: "admin"
|
||||
ipmi_address: "192.168.1.30"
|
||||
ipmi_password: "root"
|
||||
nics:
|
||||
-
|
||||
mac: "1c:c1:de:1c:aa:53"
|
||||
-
|
||||
mac: "1c:c1:de:1c:aa:52"
|
||||
driver: "agent_ipmitool"
|
||||
ipv4_address: "192.168.1.10"
|
||||
properties:
|
||||
cpu_arch: "x86_64"
|
||||
ram: "24576"
|
||||
disk_size: "120"
|
||||
cpus: "16"
|
||||
name: "cloud1"
|
||||
|
||||
.. end
|
||||
|
||||
The required inventory will be specific to the hardware and environment in use.
|
||||
|
||||
@ -254,9 +273,7 @@ Create Bifrost Configuration
|
||||
|
||||
The file ``bifrost.yml`` provides global configuration for the bifrost
|
||||
playbooks. By default kolla mostly uses bifrost's default variable values.
|
||||
For details on bifrost's variables see the bifrost documentation.
|
||||
|
||||
For example:
|
||||
For details on bifrost's variables see the bifrost documentation. For example:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
@ -269,6 +286,8 @@ For example:
|
||||
# dhcp_lease_time: 12h
|
||||
# dhcp_static_mask: 255.255.255.0
|
||||
|
||||
.. end
|
||||
|
||||
Create Disk Image Builder Configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
@ -278,165 +297,183 @@ building the baremetal OS and deployment images, and will build an
|
||||
**Ubuntu-based** image for deployment to nodes. For details on bifrost's
|
||||
variables see the bifrost documentation.
|
||||
|
||||
For example to use the ``debian`` Disk Image Builder OS element:
|
||||
For example, to use the ``debian`` Disk Image Builder OS element:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
dib_os_element: debian
|
||||
|
||||
.. end
|
||||
|
||||
See the `diskimage-builder documentation
|
||||
<https://docs.openstack.org/diskimage-builder/latest/>`__ for more details.
|
||||
|
||||
Deploy Bifrost
|
||||
--------------
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The bifrost container can be deployed either using kolla-ansible or manually.
|
||||
|
||||
Kolla-Ansible
|
||||
~~~~~~~~~~~~~
|
||||
Deploy Bifrost using Kolla-Ansible
|
||||
----------------------------------
|
||||
|
||||
Development
|
||||
___________
|
||||
|
||||
::
|
||||
|
||||
cd kolla-ansible
|
||||
tools/kolla-ansible deploy-bifrost
|
||||
|
||||
Production
|
||||
__________
|
||||
|
||||
::
|
||||
|
||||
kolla-ansible deploy-bifrost
|
||||
|
||||
Manual
|
||||
~~~~~~
|
||||
|
||||
Start Bifrost Container
|
||||
_______________________
|
||||
|
||||
::
|
||||
|
||||
docker run -it --net=host -v /dev:/dev -d \
|
||||
--privileged --name bifrost_deploy \
|
||||
kolla/ubuntu-source-bifrost-deploy:3.0.1
|
||||
|
||||
Copy Configuration Files
|
||||
________________________
|
||||
For development:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker exec -it bifrost_deploy mkdir /etc/bifrost
|
||||
docker cp /etc/kolla/config/bifrost/servers.yml bifrost_deploy:/etc/bifrost/servers.yml
|
||||
docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
|
||||
docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
|
||||
cd kolla-ansible
|
||||
tools/kolla-ansible deploy-bifrost
|
||||
|
||||
Bootstrap Bifrost
|
||||
_________________
|
||||
.. end
|
||||
|
||||
::
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
|
||||
Generate an SSH Key
|
||||
___________________
|
||||
|
||||
::
|
||||
|
||||
ssh-keygen
|
||||
|
||||
Bootstrap and Start Services
|
||||
____________________________
|
||||
For Production:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cd /bifrost
|
||||
./scripts/env-setup.sh
|
||||
. env-vars
|
||||
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
|
||||
HOME=/var/lib/rabbitmq
|
||||
EOF
|
||||
ansible-playbook -vvvv \
|
||||
-i /bifrost/playbooks/inventory/target \
|
||||
/bifrost/playbooks/install.yaml \
|
||||
-e @/etc/bifrost/bifrost.yml \
|
||||
-e @/etc/bifrost/dib.yml \
|
||||
-e skip_package_install=true
|
||||
kolla-ansible deploy-bifrost
|
||||
|
||||
.. end
|
||||
|
||||
Deploy Bifrost manually
|
||||
-----------------------
|
||||
|
||||
#. Start Bifrost Container
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker run -it --net=host -v /dev:/dev -d \
|
||||
--privileged --name bifrost_deploy \
|
||||
kolla/ubuntu-source-bifrost-deploy:3.0.1
|
||||
|
||||
.. end
|
||||
|
||||
#. Copy Configuration Files
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker exec -it bifrost_deploy mkdir /etc/bifrost
|
||||
docker cp /etc/kolla/config/bifrost/servers.yml bifrost_deploy:/etc/bifrost/servers.yml
|
||||
docker cp /etc/kolla/config/bifrost/bifrost.yml bifrost_deploy:/etc/bifrost/bifrost.yml
|
||||
docker cp /etc/kolla/config/bifrost/dib.yml bifrost_deploy:/etc/bifrost/dib.yml
|
||||
|
||||
.. end
|
||||
|
||||
#. Bootstrap Bifrost
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
|
||||
.. end
|
||||
|
||||
#. Generate an SSH Key
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
ssh-keygen
|
||||
|
||||
.. end
|
||||
|
||||
#. Bootstrap and Start Services
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
cd /bifrost
|
||||
./scripts/env-setup.sh
|
||||
. env-vars
|
||||
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
|
||||
HOME=/var/lib/rabbitmq
|
||||
EOF
|
||||
ansible-playbook -vvvv \
|
||||
-i /bifrost/playbooks/inventory/target \
|
||||
/bifrost/playbooks/install.yaml \
|
||||
-e @/etc/bifrost/bifrost.yml \
|
||||
-e @/etc/bifrost/dib.yml \
|
||||
-e skip_package_install=true
|
||||
|
||||
.. end
|
||||
|
||||
Validate the Deployed Container
|
||||
===============================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
cd /bifrost
|
||||
. env-vars
|
||||
|
||||
.. end
|
||||
|
||||
Running "ironic node-list" should return with no nodes, for example
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
(bifrost-deploy)[root@bifrost bifrost]# ironic node-list
|
||||
+------+------+---------------+-------------+--------------------+-------------+
|
||||
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
|
||||
+------+------+---------------+-------------+--------------------+-------------+
|
||||
+------+------+---------------+-------------+--------------------+-------------+
|
||||
|
||||
.. end
|
||||
|
||||
Enroll and Deploy Physical Nodes
|
||||
================================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Once we have deployed a bifrost container we can use it to provision the bare
|
||||
metal cloud hosts specified in the inventory file. Again, this can be done
|
||||
either using kolla-ansible or manually.
|
||||
|
||||
Kolla-Ansible
|
||||
-------------
|
||||
By Kolla-Ansible
|
||||
----------------
|
||||
|
||||
Development
|
||||
~~~~~~~~~~~
|
||||
For Development:
|
||||
|
||||
::
|
||||
|
||||
tools/kolla-ansible deploy-servers
|
||||
|
||||
Production
|
||||
~~~~~~~~~~
|
||||
|
||||
::
|
||||
|
||||
kolla-ansible deploy-servers
|
||||
|
||||
Manual
|
||||
------
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
cd /bifrost
|
||||
. env-vars
|
||||
export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
|
||||
ansible-playbook -vvvv \
|
||||
-i /bifrost/playbooks/inventory/bifrost_inventory.py \
|
||||
/bifrost/playbooks/enroll-dynamic.yaml \
|
||||
-e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
|
||||
-e @/etc/bifrost/bifrost.yml
|
||||
tools/kolla-ansible deploy-servers
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
cd /bifrost
|
||||
. env-vars
|
||||
export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
|
||||
ansible-playbook -vvvv \
|
||||
-i /bifrost/playbooks/inventory/bifrost_inventory.py \
|
||||
/bifrost/playbooks/deploy-dynamic.yaml \
|
||||
-e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
|
||||
-e @/etc/bifrost/bifrost.yml
|
||||
.. end
|
||||
|
||||
For Production:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
kolla-ansible deploy-servers
|
||||
|
||||
.. end
|
||||
|
||||
Manually
|
||||
--------
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
cd /bifrost
|
||||
. env-vars
|
||||
export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
|
||||
ansible-playbook -vvvv \
|
||||
-i /bifrost/playbooks/inventory/bifrost_inventory.py \
|
||||
/bifrost/playbooks/enroll-dynamic.yaml \
|
||||
-e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
|
||||
-e @/etc/bifrost/bifrost.yml
|
||||
|
||||
docker exec -it bifrost_deploy bash
|
||||
cd /bifrost
|
||||
. env-vars
|
||||
export BIFROST_INVENTORY_SOURCE=/etc/bifrost/servers.yml
|
||||
ansible-playbook -vvvv \
|
||||
-i /bifrost/playbooks/inventory/bifrost_inventory.py \
|
||||
/bifrost/playbooks/deploy-dynamic.yaml \
|
||||
-e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
|
||||
-e @/etc/bifrost/bifrost.yml
|
||||
|
||||
.. end
|
||||
|
||||
At this point Ironic should clean down the nodes and install the default
|
||||
OS image.
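
As a rough check (assuming the same ``bifrost_deploy`` container as above),
progress can be watched from inside the container; the nodes should eventually
reach the ``active`` provisioning state:

.. code-block:: console

   docker exec -it bifrost_deploy bash
   cd /bifrost
   . env-vars
   ironic node-list

.. end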
|
||||
|
||||
Advanced Configuration
|
||||
======================
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Bring Your Own Image
|
||||
--------------------
|
||||
@ -450,7 +487,7 @@ To use your own SSH key after you have generated the ``passwords.yml`` file
|
||||
update the private and public keys under ``bifrost_ssh_key``.
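
A minimal sketch of that section of ``passwords.yml`` (the ``private_key`` and
``public_key`` field names are assumptions, check the generated file):

.. code-block:: yaml

   bifrost_ssh_key:
     private_key: |
       -----BEGIN RSA PRIVATE KEY-----
       ...
       -----END RSA PRIVATE KEY-----
     public_key: ssh-rsa AAAA... user@host

.. end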
|
||||
|
||||
Known issues
|
||||
============
|
||||
~~~~~~~~~~~~
|
||||
|
||||
SSH daemon not running
|
||||
----------------------
|
||||
@ -458,18 +495,20 @@ SSH daemon not running
|
||||
By default ``sshd`` is installed in the image but may not be enabled. If you
|
||||
encounter this issue you will have to access the server physically in recovery
|
||||
mode to enable the ``sshd`` service. If your hardware supports it, this can be
|
||||
done remotely with ``ipmitool`` and Serial Over LAN. For example
|
||||
done remotely with :command:`ipmitool` and Serial Over LAN. For example
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
ipmitool -I lanplus -H 192.168.1.30 -U admin -P root sol activate
|
||||
|
||||
.. end
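
Once a console is available, on a systemd based image the service can usually
be enabled with something like the following (a sketch, not from the original
guide; the service may be named ``ssh`` on Debian or Ubuntu):

.. code-block:: console

   systemctl enable sshd
   systemctl start sshd

.. end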
|
||||
|
||||
References
|
||||
==========
|
||||
~~~~~~~~~~
|
||||
|
||||
Bifrost documentation: https://docs.openstack.org/bifrost/latest/
|
||||
* `Bifrost documentation <https://docs.openstack.org/bifrost/latest/>`__
|
||||
|
||||
Bifrost troubleshooting guide: https://docs.openstack.org/bifrost/latest/user/troubleshooting.html
|
||||
* `Bifrost troubleshooting guide <https://docs.openstack.org/bifrost/latest/user/troubleshooting.html>`__
|
||||
|
||||
Bifrost code repository: https://github.com/openstack/bifrost
|
||||
* `Bifrost code repository <https://github.com/openstack/bifrost>`__
|
||||
|
||||
|
@ -9,17 +9,19 @@ successfully monitor this and use it to diagnose problems, the standard "ssh
|
||||
and grep" solution quickly becomes unmanageable.
|
||||
|
||||
Preparation and deployment
|
||||
==========================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Modify the configuration file ``/etc/kolla/globals.yml`` and change
|
||||
the following:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_central_logging: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
Elasticsearch
|
||||
=============
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Kolla deploys Elasticsearch as part of the E*K stack to store, organize
|
||||
and make logs easily accessible.
|
||||
@ -28,12 +30,11 @@ By default Elasticsearch is deployed on port ``9200``.
|
||||
|
||||
.. note::
|
||||
|
||||
Elasticsearch stores a lot of logs, so if you are running centralized logging,
remember to give ``/var/lib/docker`` adequate space.
|
||||
|
||||
Kibana
|
||||
======
|
||||
~~~~~~
|
||||
|
||||
Kolla deploys Kibana as part of the E*K stack in order to allow operators to
|
||||
search and visualise logs in a centralised manner.
|
||||
@ -82,19 +83,23 @@ host was found'.
|
||||
|
||||
First, re-run the server creation with ``--debug``:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
openstack --debug server create --image cirros --flavor m1.tiny \
|
||||
--key-name mykey --nic net-id=00af016f-dffe-4e3c-a9b8-ec52ccd8ea65 \
|
||||
demo1
|
||||
|
||||
.. end
|
||||
|
||||
In this output, look for the key ``X-Compute-Request-Id``. This is a unique
|
||||
identifier that can be used to track the request through the system. An
|
||||
example ID looks like this:
|
||||
|
||||
::
|
||||
.. code-block:: none
|
||||
|
||||
X-Compute-Request-Id: req-c076b50a-6a22-48bf-8810-b9f41176a6d5
|
||||
|
||||
.. end
|
||||
|
||||
Taking the value of ``X-Compute-Request-Id``, enter the value into the Kibana
|
||||
search bar, minus the leading ``req-``. Assuming some basic filters have been
|
||||
@ -124,7 +129,9 @@ generated and previewed. In the menu on the left, metrics for a chart can
|
||||
be chosen. The chart can be generated by pressing a green arrow on the top
|
||||
of the left-side menu.
|
||||
|
||||
.. note:: After creating a visualization, it can be saved by choosing *save
|
||||
.. note::
|
||||
|
||||
After creating a visualization, it can be saved by choosing *save
|
||||
visualization* option in the menu on the right. If it is not saved, it
|
||||
will be lost after leaving a page or creating another visualization.
|
||||
|
||||
@ -138,7 +145,9 @@ from all saved ones. The order and size of elements can be changed directly
|
||||
in this place by moving them or resizing. The color of charts can also be
|
||||
changed by checking the colorful dots on the legend near each visualization.
|
||||
|
||||
.. note:: After creating a dashboard, it can be saved by choosing *save dashboard*
|
||||
.. note::
|
||||
|
||||
After creating a dashboard, it can be saved by choosing *save dashboard*
|
||||
option in the menu on the right. If it is not saved, it will be lost after
|
||||
leaving a page or creating another dashboard.
|
||||
|
||||
@ -168,7 +177,7 @@ configuration files in ``/etc/kolla/config/fluentd/filter/*.conf`` on the
|
||||
control host.
|
||||
|
||||
Custom log forwarding
|
||||
=====================
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In some scenarios it may be useful to forward logs to a logging service other
|
||||
than elasticsearch. This can be done by configuring custom fluentd outputs.
|
||||
|
@ -10,13 +10,13 @@ tweaks to the Ceph cluster you can deploy a **healthy** cluster with a single
|
||||
host and a single block device.
|
||||
|
||||
Requirements
|
||||
============
|
||||
~~~~~~~~~~~~
|
||||
|
||||
* A minimum of 3 hosts for a vanilla deploy
|
||||
* A minimum of 1 block device per host
|
||||
|
||||
Preparation
|
||||
===========
|
||||
~~~~~~~~~~~
|
||||
|
||||
To prepare a disk for use as a
|
||||
`Ceph OSD <http://docs.ceph.com/docs/master/man/8/ceph-osd/>`_ you must add a
|
||||
@ -26,26 +26,31 @@ will be reformatted so use caution.
|
||||
|
||||
To prepare an OSD as a storage drive, execute the following operations:
|
||||
|
||||
::
|
||||
.. warning::
|
||||
|
||||
# <WARNING ALL DATA ON $DISK will be LOST!>
|
||||
# where $DISK is /dev/sdb or something similar
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
|
||||
ALL DATA ON $DISK will be LOST! Where $DISK is /dev/sdb or something similar.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
|
||||
|
||||
.. end
|
||||
|
||||
The following shows an example of using parted to configure ``/dev/sdb`` for
|
||||
usage with Kolla.
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
|
||||
parted /dev/sdb print
|
||||
Model: VMware, VMware Virtual S (scsi)
|
||||
Disk /dev/sdb: 10.7GB
|
||||
Sector size (logical/physical): 512B/512B
|
||||
Partition Table: gpt
|
||||
Number Start End Size File system Name Flags
|
||||
1 1049kB 10.7GB 10.7GB KOLLA_CEPH_OSD_BOOTSTRAP
|
||||
|
||||
.. end
|
||||
|
||||
Using an external journal drive
|
||||
-------------------------------
|
||||
@ -59,19 +64,23 @@ journal drive. This section documents how to use an external journal drive.
|
||||
|
||||
Prepare the storage drive in the same way as documented above:
|
||||
|
||||
::
|
||||
.. warning::
|
||||
|
||||
# <WARNING ALL DATA ON $DISK will be LOST!>
|
||||
# where $DISK is /dev/sdb or something similar
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
|
||||
ALL DATA ON $DISK will be LOST! Where $DISK is /dev/sdb or something similar.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO 1 -1
|
||||
|
||||
.. end
|
||||
|
||||
To prepare the journal external drive execute the following command:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
# <WARNING ALL DATA ON $DISK will be LOST!>
|
||||
# where $DISK is /dev/sdc or something similar
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J 1 -1
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
@ -88,47 +97,57 @@ To prepare the journal external drive execute the following command:
|
||||
|
||||
|
||||
Configuration
|
||||
=============
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Edit the [storage] group in the inventory which contains the hostname of the
|
||||
Edit the ``[storage]`` group in the inventory which contains the hostname of the
|
||||
hosts that have the block devices you have prepped as shown above.
|
||||
|
||||
::
|
||||
.. code-block:: none
|
||||
|
||||
[storage]
|
||||
controller
|
||||
compute1
|
||||
|
||||
.. end
|
||||
|
||||
Enable Ceph in ``/etc/kolla/globals.yml``:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_ceph: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_ceph_rgw: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
RGW requires a healthy cluster in order to be successfully deployed. On initial
|
||||
start up, RGW will create several pools. The first pool should be in an
|
||||
operational state to proceed with the second one, and so on. So, in the case of
|
||||
an **all-in-one** deployment, it is necessary to change the default number of
|
||||
copies for the pools before deployment. Modify the file
|
||||
``/etc/kolla/config/ceph.conf`` and add the contents::
|
||||
``/etc/kolla/config/ceph.conf`` and add the contents:
|
||||
|
||||
.. path /etc/kolla/config/ceph.conf
|
||||
.. code-block:: ini
|
||||
|
||||
[global]
|
||||
osd pool default size = 1
|
||||
osd pool default min size = 1
|
||||
|
||||
.. end
|
||||
|
||||
To build a high performance and secure Ceph Storage Cluster, the Ceph community
recommends the use of two separate networks: public network and cluster network.
|
||||
Edit the ``/etc/kolla/globals.yml`` and configure the ``cluster_interface``:
|
||||
|
||||
.. code-block:: ini
|
||||
.. path /etc/kolla/globals.yml
|
||||
.. code-block:: yaml
|
||||
|
||||
cluster_interface: "eth2"
|
||||
|
||||
@ -139,46 +158,52 @@ For more details, see `NETWORK CONFIGURATION REFERENCE
|
||||
of Ceph Documentation.
|
||||
|
||||
Deployment
|
||||
==========
|
||||
~~~~~~~~~~
|
||||
|
||||
Finally deploy the Ceph-enabled OpenStack:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
kolla-ansible deploy -i path/to/inventory
|
||||
|
||||
Using a Cache Tier
|
||||
==================
|
||||
.. end
|
||||
|
||||
An optional `cache tier <http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/>`_
|
||||
Using a Cache Tiering
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
An optional `cache tiering <http://docs.ceph.com/docs/jewel/rados/operations/cache-tiering/>`_
|
||||
can be deployed by formatting at least one cache device and enabling cache
tiering in the ``globals.yml`` configuration file.
|
||||
|
||||
To prepare an OSD as a cache device, execute the following operations:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
# <WARNING ALL DATA ON $DISK will be LOST!>
|
||||
# where $DISK is /dev/sdb or something similar
|
||||
parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1
|
||||
|
||||
.. end
|
||||
|
||||
Enable the Ceph cache tier in ``/etc/kolla/globals.yml``:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_ceph: "yes"
|
||||
ceph_enable_cache: "yes"
|
||||
# Valid options are [ forward, none, writeback ]
|
||||
ceph_cache_mode: "writeback"
|
||||
|
||||
After this run the playbooks as you normally would. For example:
|
||||
.. end
|
||||
|
||||
::
|
||||
After this run the playbooks as you normally would, for example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
kolla-ansible deploy -i path/to/inventory
|
||||
|
||||
.. end
|
||||
|
||||
Setting up an Erasure Coded Pool
|
||||
================================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
`Erasure code <http://docs.ceph.com/docs/jewel/rados/operations/erasure-code/>`_
|
||||
is the new big thing from Ceph. Kolla has the ability to setup your Ceph pools
|
||||
@ -191,62 +216,73 @@ completely removing the pool and recreating it.
|
||||
To enable erasure coded pools add the following options to your
|
||||
``/etc/kolla/globals.yml`` configuration file:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
# A requirement for using the erasure-coded pools is you must setup a cache tier
|
||||
# Valid options are [ erasure, replicated ]
|
||||
ceph_pool_type: "erasure"
|
||||
# Optionally, you can change the profile
|
||||
#ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"
|
||||
|
||||
.. end
|
||||
|
||||
Managing Ceph
|
||||
=============
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Check the Ceph status for more diagnostic information. The sample output below
|
||||
indicates a healthy cluster:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
docker exec ceph_mon ceph -s
|
||||
|
||||
cluster 5fba2fbc-551d-11e5-a8ce-01ef4c5cf93c
|
||||
health HEALTH_OK
|
||||
monmap e1: 1 mons at {controller=10.0.0.128:6789/0}
|
||||
election epoch 2, quorum 0 controller
|
||||
osdmap e18: 2 osds: 2 up, 2 in
|
||||
pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
|
||||
68676 kB used, 20390 MB / 20457 MB avail
|
||||
64 active+clean
|
||||
|
||||
If Ceph is run in an **all-in-one** deployment or with less than three storage
|
||||
nodes, further configuration is required. It is necessary to change the default
|
||||
number of copies for the pool. The following example demonstrates how to change
|
||||
the number of copies for the pool to 1:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
docker exec ceph_mon ceph osd pool set rbd size 1
|
||||
|
||||
.. end
|
||||
|
||||
All the pools must be modified if Glance, Nova, and Cinder have been deployed.
|
||||
An example of modifying the pools to have 2 copies:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done
|
||||
|
||||
.. end
|
||||
|
||||
If using a cache tier, these changes must be made as well:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done
|
||||
|
||||
.. end
|
||||
|
||||
The default pool Ceph creates is named **rbd**. It is safe to remove this pool:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
|
||||
|
||||
.. end
|
||||
|
||||
Troubleshooting
|
||||
===============
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Deploy fails with 'Fetching Ceph keyrings ... No JSON object could be decoded'
|
||||
------------------------------------------------------------------------------
|
||||
@ -258,16 +294,14 @@ successful deploy.
|
||||
In order to do this the operator should remove the `ceph_mon_config` volume
|
||||
from each Ceph monitor node:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
ansible \
|
||||
-i ansible/inventory/multinode \
|
||||
-a 'docker volume rm ceph_mon_config' \
|
||||
ceph-mon
|
||||
ansible -i ansible/inventory/multinode \
|
||||
-a 'docker volume rm ceph_mon_config' \
|
||||
ceph-mon
|
||||
|
||||
=====================
|
||||
Simple 3 Node Example
|
||||
=====================
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This example will show how to deploy Ceph in a very simple setup using 3
|
||||
storage nodes. 2 of those nodes (kolla1 and kolla2) will also provide other
|
||||
@ -288,86 +322,96 @@ implement caching.
|
||||
Here is the top part of the multinode inventory file used in the example
|
||||
environment before adding the 3rd node for Ceph:
|
||||
|
||||
::
|
||||
.. code-block:: none
|
||||
|
||||
[control]
|
||||
# These hostname must be resolvable from your deployment host
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[control]
|
||||
# These hostname must be resolvable from your deployment host
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[network]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[network]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[compute]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[compute]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[monitoring]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[storage]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[monitoring]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[storage]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
.. end
|
||||
|
||||
Configuration
|
||||
=============
|
||||
-------------
|
||||
|
||||
To prepare the 2nd disk (/dev/sdb) of each node for use by Ceph you will need
|
||||
to add a partition label to it as shown below:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
# <WARNING ALL DATA ON /dev/sdb will be LOST!>
|
||||
parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
|
||||
|
||||
.. end
|
||||
|
||||
Make sure to run this command on each of the 3 nodes or the deployment will
|
||||
fail.
|
||||
|
||||
Next, edit the multinode inventory file and make sure the 3 nodes are listed
|
||||
under [storage]. In this example I will add kolla3.ducourrier.com to the
|
||||
under ``[storage]``. In this example I will add kolla3.ducourrier.com to the
|
||||
existing inventory file:
|
||||
|
||||
::
|
||||
.. code-block:: none
|
||||
|
||||
[control]
|
||||
# These hostname must be resolvable from your deployment host
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[control]
|
||||
# These hostname must be resolvable from your deployment host
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[network]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[network]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[compute]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[compute]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[monitoring]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
[monitoring]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
|
||||
[storage]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
kolla3.ducourrier.com
|
||||
[storage]
|
||||
kolla1.ducourrier.com
|
||||
kolla2.ducourrier.com
|
||||
kolla3.ducourrier.com
|
||||
|
||||
.. end
|
||||
|
||||
It is now time to enable Ceph in the environment by editing the
|
||||
``/etc/kolla/globals.yml`` file:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_ceph: "yes"
|
||||
enable_ceph_rgw: "yes"
|
||||
enable_cinder: "yes"
|
||||
glance_backend_file: "no"
|
||||
glance_backend_ceph: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
Deployment
|
||||
----------
|
||||
|
||||
Finally deploy the Ceph-enabled configuration:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
kolla-ansible deploy -i path/to/inventory-file
|
||||
|
||||
.. end
|
||||
|
@ -5,7 +5,8 @@ Hitachi NAS Platform iSCSI and NFS drives for OpenStack
|
||||
========================================================
|
||||
|
||||
Overview
|
||||
========
|
||||
~~~~~~~~
|
||||
|
||||
The Block Storage service provides persistent block storage resources that
|
||||
Compute instances can consume. This includes secondary attached storage similar
|
||||
to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write
|
||||
@ -14,6 +15,7 @@ instance.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
- Hitachi NAS Platform Models 3080, 3090, 4040, 4060, 4080, and 4100.
|
||||
|
||||
- HNAS/SMU software version is 12.2 or higher.
|
||||
@ -53,74 +55,86 @@ The NFS and iSCSI drivers support these operations:
|
||||
- Manage and unmanage snapshots (HNAS NFS only).
|
||||
|
||||
Configuration example for Hitachi NAS Platform iSCSI and NFS
|
||||
============================================================
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
iSCSI backend
|
||||
-------------
|
||||
|
||||
Enable cinder hnas backend iscsi in ``/etc/kolla/globals.yml``
|
||||
|
||||
.. code-block:: console
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_cinder_backend_hnas_iscsi: "yes"
|
||||
|
||||
Create or modify the file ``/etc/kolla/config/cinder.conf`` and add the
|
||||
contents:
|
||||
|
||||
.. code-block:: console
|
||||
.. path /etc/kolla/config/cinder.conf
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = hnas-iscsi
|
||||
|
||||
[hnas-iscsi]
|
||||
volume_driver = cinder.volume.drivers.hitachi.hnas_iscsi.HNASISCSIDriver
|
||||
volume_iscsi_backend = hnas_iscsi_backend
|
||||
hnas_iscsi_username = supervisor
|
||||
hnas_iscsi_mgmt_ip0 = <hnas_ip>
|
||||
hnas_chap_enabled = True
|
||||
|
||||
hnas_iscsi_svc0_volume_type = iscsi_gold
|
||||
hnas_iscsi_svc0_hdp = FS-Baremetal1
|
||||
hnas_iscsi_svc0_iscsi_ip = <svc0_ip>
|
||||
|
||||
.. end
|
||||
|
||||
Then set password for the backend in ``/etc/kolla/passwords.yml``:
|
||||
|
||||
.. code-block:: console
|
||||
.. code-block:: yaml
|
||||
|
||||
hnas_iscsi_password: supervisor
|
||||
|
||||
.. end
|
||||
|
||||
NFS backend
|
||||
-----------
|
||||
|
||||
Enable cinder hnas backend nfs in ``/etc/kolla/globals.yml``
|
||||
|
||||
.. code-block:: console
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_cinder_backend_hnas_nfs: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
Create or modify the file ``/etc/kolla/config/cinder.conf`` and
|
||||
add the contents:
|
||||
|
||||
.. code-block:: console
|
||||
.. path /etc/kolla/config/cinder.conf
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enabled_backends = hnas-nfs
|
||||
|
||||
[hnas-nfs]
|
||||
volume_driver = cinder.volume.drivers.hitachi.hnas_nfs.HNASNFSDriver
|
||||
volume_nfs_backend = hnas_nfs_backend
|
||||
hnas_nfs_username = supervisor
|
||||
hnas_nfs_mgmt_ip0 = <hnas_ip>
|
||||
hnas_chap_enabled = True
|
||||
|
||||
hnas_nfs_svc0_volume_type = nfs_gold
|
||||
hnas_nfs_svc0_hdp = <svc0_ip>/<export_name>
|
||||
|
||||
.. end
|
||||
|
||||
Then set password for the backend in ``/etc/kolla/passwords.yml``:
|
||||
|
||||
.. code-block:: console
|
||||
.. code-block:: yaml
|
||||
|
||||
hnas_nfs_password: supervisor
|
||||
|
||||
.. end
|
||||
|
||||
Configuration on Kolla deployment
|
||||
---------------------------------
|
||||
@ -128,9 +142,11 @@ Configuration on Kolla deployment
|
||||
Enable Shared File Systems service and HNAS driver in
|
||||
``/etc/kolla/globals.yml``
|
||||
|
||||
.. code-block:: console
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_cinder: "yes"
|
||||
|
||||
.. end
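
After changing these options, the configuration is applied by re-running the
deployment, for example:

.. code-block:: console

   kolla-ansible deploy -i path/to/inventory

.. end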
|
||||
|
||||
Configuration on HNAS
|
||||
---------------------
|
||||
@ -141,7 +157,9 @@ List the available tenants:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack project list
|
||||
openstack project list
|
||||
|
||||
.. end
|
||||
|
||||
Create a network to the given tenant (service), providing the tenant ID,
|
||||
a name for the network, the name of the physical network over which the
|
||||
@ -150,8 +168,10 @@ which the virtual network is implemented:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron net-create --tenant-id <SERVICE_ID> hnas_network \
|
||||
--provider:physical_network=physnet2 --provider:network_type=flat
|
||||
neutron net-create --tenant-id <SERVICE_ID> hnas_network \
|
||||
--provider:physical_network=physnet2 --provider:network_type=flat
|
||||
|
||||
.. end
|
||||
|
||||
Create a subnet to the same tenant (service), the gateway IP of this subnet,
|
||||
a name for the subnet, the network ID created before, and the CIDR of
|
||||
@ -159,78 +179,86 @@ subnet:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
|
||||
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
|
||||
neutron subnet-create --tenant-id <SERVICE_ID> --gateway <GATEWAY> \
|
||||
--name hnas_subnet <NETWORK_ID> <SUBNET_CIDR>
|
||||
|
||||
.. end
|
||||
|
||||
Add the subnet interface to a router, providing the router ID and subnet
|
||||
ID created before:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
|
||||
neutron router-interface-add <ROUTER_ID> <SUBNET_ID>
|
||||
|
||||
.. end
|
||||
|
||||
Create volume
|
||||
=============
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Create a non-bootable volume.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --size 1 my-volume
|
||||
openstack volume create --size 1 my-volume
|
||||
|
||||
.. end
|
||||
|
||||
Verify Operation.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ cinder show my-volume
|
||||
cinder show my-volume
|
||||
|
||||
+--------------------------------+--------------------------------------+
|
||||
| Property | Value |
|
||||
+--------------------------------+--------------------------------------+
|
||||
| attachments | [] |
|
||||
| availability_zone | nova |
|
||||
| bootable | false |
|
||||
| consistencygroup_id | None |
|
||||
| created_at | 2017-01-17T19:02:45.000000 |
|
||||
| description | None |
|
||||
| encrypted | False |
|
||||
| id | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
|
||||
| metadata | {} |
|
||||
| migration_status | None |
|
||||
| multiattach | False |
|
||||
| name | my-volume |
|
||||
| os-vol-host-attr:host | compute@hnas-iscsi#iscsi_gold |
|
||||
| os-vol-mig-status-attr:migstat | None |
|
||||
| os-vol-mig-status-attr:name_id | None |
|
||||
| os-vol-tenant-attr:tenant_id | 16def9176bc64bd283d419ac2651e299 |
|
||||
| replication_status | disabled |
|
||||
| size | 1 |
|
||||
| snapshot_id | None |
|
||||
| source_volid | None |
|
||||
| status | available |
|
||||
| updated_at | 2017-01-17T19:02:46.000000 |
|
||||
| user_id | fb318b96929c41c6949360c4ccdbf8c0 |
|
||||
| volume_type | None |
|
||||
+--------------------------------+--------------------------------------+
|
||||
|
||||
$ nova volume-attach INSTANCE_ID VOLUME_ID auto
|
||||
nova volume-attach INSTANCE_ID VOLUME_ID auto
|
||||
|
||||
+----------+--------------------------------------+
|
||||
| Property | Value |
|
||||
+----------+--------------------------------------+
|
||||
| device | /dev/vdc |
|
||||
| id | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
|
||||
| serverId | 3bf5e176-be05-4634-8cbd-e5fe491f5f9c |
|
||||
| volumeId | 4f5b8ae8-9781-411e-8ced-de616ae64cfd |
|
||||
+----------+--------------------------------------+
|
||||
|
||||
$ openstack volume list
|
||||
openstack volume list
|
||||
|
||||
+--------------------------------------+---------------+----------------+------+-------------------------------------------+
|
||||
| ID | Display Name | Status | Size | Attached to |
|
||||
+--------------------------------------+---------------+----------------+------+-------------------------------------------+
|
||||
| 4f5b8ae8-9781-411e-8ced-de616ae64cfd | my-volume | in-use | 1 | Attached to private-instance on /dev/vdb |
|
||||
+--------------------------------------+---------------+----------------+------+-------------------------------------------+
|
||||
|
||||
.. end
|
||||
|
||||
For more information about how to manage volumes, see the
|
||||
`Manage volumes
|
||||
|
@ -5,7 +5,7 @@ Cinder in Kolla
|
||||
===============
|
||||
|
||||
Overview
|
||||
========
|
||||
~~~~~~~~
|
||||
|
||||
Cinder can be deployed using Kolla and supports the following storage
backends:
|
||||
@ -18,106 +18,141 @@ backends:
|
||||
* nfs
|
||||
|
||||
LVM
|
||||
===
|
||||
~~~
|
||||
|
||||
When using the ``lvm`` backend, a volume group will need to be created on each
|
||||
storage node. This can either be a real physical volume or a loopback mounted
|
||||
file for development. Use ``pvcreate`` and ``vgcreate`` to create the volume
|
||||
group. For example with the devices ``/dev/sdb`` and ``/dev/sdc``:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
<WARNING ALL DATA ON /dev/sdb and /dev/sdc will be LOST!>
|
||||
|
||||
pvcreate /dev/sdb /dev/sdc
|
||||
vgcreate cinder-volumes /dev/sdb /dev/sdc
|
||||
|
||||
.. end
|
||||
|
||||
During development, it may be desirable to use file backed block storage. It
|
||||
is possible to use a file and mount it as a block device via the loopback
|
||||
system. ::
|
||||
system.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
free_device=$(losetup -f)
|
||||
fallocate -l 20G /var/lib/cinder_data.img
|
||||
losetup $free_device /var/lib/cinder_data.img
|
||||
pvcreate $free_device
|
||||
vgcreate cinder-volumes $free_device
|
||||
|
||||
.. end
|
||||
|
||||
Enable the ``lvm`` backend in ``/etc/kolla/globals.yml``:
|
||||
|
||||
::
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_cinder_backend_lvm: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
.. note::
|
||||
|
||||
There are currently issues using the LVM backend in a multi-controller setup,
|
||||
see `_bug 1571211 <https://launchpad.net/bugs/1571211>`__ for more info.
|
||||
|
||||
NFS
|
||||
===
|
||||
~~~
|
||||
|
||||
To use the ``nfs`` backend, configure ``/etc/exports`` to contain the mount
|
||||
where the volumes are to be stored::
|
||||
where the volumes are to be stored:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
/kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)
|
||||
|
||||
.. end
|
||||
|
||||
In this example, ``/kolla_nfs`` is the directory on the storage node which will
|
||||
be ``nfs`` mounted, ``192.168.5.0/24`` is the storage network, and
|
||||
``rw,sync,no_root_squash`` means make the share read-write, synchronous, and
|
||||
prevent remote root users from having access to all files.
|
||||
|
||||
Then start ``nfsd``::
|
||||
Then start ``nfsd``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
systemctl start nfs
|
||||
|
||||
.. end
|
||||
|
||||
On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
|
||||
each storage node::
|
||||
each storage node:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
storage01:/kolla_nfs
|
||||
storage02:/kolla_nfs
|
||||
|
||||
.. end
|
||||
|
||||
Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
enable_cinder_backend_nfs: "yes"
|
||||
|
||||
.. end
|
||||
|
||||
Validation
|
||||
==========
|
||||
~~~~~~~~~~
|
||||
|
||||
Create a volume as follows:
|
||||
|
||||
::
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack volume create --size 1 steak_volume
|
||||
<bunch of stuff printed>
|
||||
openstack volume create --size 1 steak_volume
|
||||
<bunch of stuff printed>
|
||||
|
||||
.. end
|
||||
|
||||
Verify it is available. If it says "error" here something went wrong during
|
||||
LVM creation of the volume. ::
|
||||
LVM creation of the volume.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
openstack volume list
|
||||
|
||||
+--------------------------------------+--------------+-----------+------+-------------+
|
||||
| ID | Display Name | Status | Size | Attached to |
|
||||
+--------------------------------------+--------------+-----------+------+-------------+
|
||||
| 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available | 1 | |
|
||||
+--------------------------------------+--------------+-----------+------+-------------+
|
||||
|
||||
.. end
|
||||
|
||||
Attach the volume to a server using:

.. code-block:: console

   openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e

.. end

Check the console log to confirm that the disk was added:

.. code-block:: console

   openstack console log show steak_server

.. end

A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the volume stays in the ``available`` state, something went wrong during the
iSCSI mounting of the volume to the guest VM.

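If the attach fails or the volume never reaches the ``in-use`` state, the logs
of the containers involved in the iSCSI path are a good starting point. A
minimal sketch, assuming the container names kolla uses for this back end:

.. code-block:: console

   docker logs tgtd
   docker logs iscsid
   docker logs cinder_volume

.. end
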
Cinder LVM2 back end with iSCSI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As of the Newton-1 milestone, Kolla supports LVM2 as a cinder back end. This is
accomplished by introducing two new containers, ``tgtd`` and ``iscsid``.

The ``iscsid`` container acts as a bridge between the nova-compute process and
the server hosting the LVG.

In order to use Cinder's LVM back end, a LVG named ``cinder-volumes`` should
exist on the server and the following parameter must be specified in
``globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_lvm: "yes"

.. end

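This guide assumes the ``cinder-volumes`` volume group already exists. If it
does not, it can be created with the standard LVM2 tools; the sketch below
assumes a spare block device ``/dev/sdb`` is dedicated to it:

.. code-block:: console

   pvcreate /dev/sdb
   vgcreate cinder-volumes /dev/sdb

.. end
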
For Ubuntu and LVM2/iSCSI
-------------------------

The ``iscsid`` process uses configfs, which is normally mounted at
``/sys/kernel/config``, to store discovered targets information. On
centos/rhel this file system is mounted automatically, which is not the case
on debian/ubuntu. Since the ``iscsid`` container runs on every nova compute
node, the following steps must be completed on every Ubuntu server targeted
for the nova compute role.

- Add the configfs module to ``/etc/modules``
- Rebuild the initramfs using the ``update-initramfs -u`` command
- Stop the ``open-iscsi`` system service due to its conflicts
  with the iscsid container.

  Ubuntu 16.04 (systemd):
  ``systemctl stop open-iscsi; systemctl stop iscsid``

- Make sure configfs gets mounted during the server boot up process. There are
  multiple ways to accomplish this; one example is to add the following mount
  command to ``/etc/rc.local``:

  .. code-block:: console

     mount -t configfs none /sys/kernel/config

  .. end

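Whichever method is chosen, it is worth confirming after a reboot that configfs
is actually mounted before scheduling instances with volumes on that host, for
example:

.. code-block:: console

   mount | grep configfs

.. end
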
Cinder back end with external iSCSI storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to use an external storage system (such as one from EMC or NetApp),
the following parameter must be specified in ``globals.yml``:

.. code-block:: yaml

   enable_cinder_backend_iscsi: "yes"

.. end

Also, ``enable_cinder_backend_lvm`` should be set to ``no`` in this case, as
shown in the sketch below.

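Taken together, a minimal ``globals.yml`` sketch for an external iSCSI back end
could look like the following; the driver-specific cinder configuration for the
EMC or NetApp array still has to be supplied separately:

.. code-block:: yaml

   enable_cinder_backend_iscsi: "yes"
   enable_cinder_backend_lvm: "no"

.. end
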
Designate in Kolla
==================

Overview
~~~~~~~~

Designate provides DNSaaS services for OpenStack:

- REST API for domain/record management
- Multi-tenant
- Integrated with Keystone for authentication
- Framework in place to integrate with Nova and Neutron
  notifications (for auto-generated records)
- Support for PowerDNS and Bind9 out of the box

Configuration on Kolla deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable the Designate service in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_designate: "yes"

.. end

Configure the Designate options in ``/etc/kolla/globals.yml``:

.. important::

   The Designate MDNS node requires the ``dns_interface`` to be reachable from
   the public network.

.. code-block:: yaml

   dns_interface: "eth1"
   designate_backend: "bind9"
   designate_ns_record: "sample.openstack.org"

.. end

Neutron and Nova Integration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create the default Designate zone for Neutron:

.. code-block:: console

   openstack zone create --email admin@sample.openstack.org sample.openstack.org.

.. end

Create the designate-sink custom configuration folder:

.. code-block:: console

   mkdir -p /etc/kolla/config/designate/

.. end

Append the Designate zone ID to ``/etc/kolla/config/designate/designate-sink.conf``:

.. code-block:: ini

   [handler:nova_fixed]
   zone_id = <ZONE_ID>
   [handler:neutron_floatingip]
   zone_id = <ZONE_ID>

.. end

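The ``<ZONE_ID>`` placeholder refers to the zone created earlier. Assuming the
Designate plugin for the ``openstack`` CLI is installed on the deploy node, the
ID can be looked up with something like:

.. code-block:: console

   openstack zone list
   openstack zone show sample.openstack.org. -f value -c id

.. end
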
Reconfigure Designate:

.. code-block:: console

   kolla-ansible reconfigure -i <INVENTORY_FILE> --tags designate

.. end

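After the reconfigure completes, a quick way to confirm that the Designate
containers are running on the target hosts (container names can vary slightly
between releases) is:

.. code-block:: console

   docker ps --filter name=designate

.. end
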
Verify operation
~~~~~~~~~~~~~~~~

List available networks:

.. code-block:: console

   openstack network list

.. end

Associate a domain to a network:

.. code-block:: console

   neutron net-update <NETWORK_ID> --dns_domain sample.openstack.org.

.. end

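The ``neutron`` CLI is deprecated in more recent releases. If only the unified
``openstack`` client is available, the same association can usually be made as
follows (assuming the Neutron DNS integration extension is enabled):

.. code-block:: console

   openstack network set --dns-domain sample.openstack.org. <NETWORK_ID>

.. end
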
Start an instance:

.. code-block:: console

   openstack server create \
       --image cirros \
       --flavor m1.tiny \
       --key-name mykey \
       --nic net-id=${NETWORK_ID} \
       my-vm

.. end

Check DNS records in Designate:

.. code-block:: console

   openstack recordset list sample.openstack.org.

   +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
   | id                                   | name                                  | type | records                                     | status | action |
   +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+
   | 5aec6f5b-2121-4a2e-90d7-9e4509f79506 | sample.openstack.org.                 | SOA  | sample.openstack.org.                       | ACTIVE | NONE   |
   |                                      |                                       |      | admin.sample.openstack.org. 1485266928 3514 |        |        |
   |                                      |                                       |      | 600 86400 3600                              |        |        |
   | 578dc94a-df74-4086-a352-a3b2db9233ae | sample.openstack.org.                 | NS   | sample.openstack.org.                       | ACTIVE | NONE   |
   | de9ff01e-e9ef-4a0f-88ed-6ec5ecabd315 | 192-168-190-232.sample.openstack.org. | A    | 192.168.190.232                             | ACTIVE | NONE   |
   | f67645ee-829c-4154-a988-75341050a8d6 | my-vm.None.sample.openstack.org.      | A    | 192.168.190.232                             | ACTIVE | NONE   |
   | e5623d73-4f9f-4b54-9045-b148e0c3342d | my-vm.sample.openstack.org.           | A    | 192.168.190.232                             | ACTIVE | NONE   |
   +--------------------------------------+---------------------------------------+------+---------------------------------------------+--------+--------+

.. end

Query the instance DNS information against the Designate ``dns_interface`` IP
address:

.. code-block:: console

   dig +short -p 5354 @<DNS_INTERFACE_IP> my-vm.sample.openstack.org. A
   192.168.190.232

.. end

For more information about how Designate works, see
`Designate, a DNSaaS component for OpenStack