Upgrade the rst convention of the Reference Guide [2]
We upgrade the rst convention by following the Documentation Contributor
Guide [1].

[1] https://docs.openstack.org/doc-contrib-guide

Change-Id: I825deadefcf996732a03e61c8fd19cfd6a498e77
Partially-Implements: blueprint optimize-the-documentation-format

parent 378b8dd932
commit c2d54b9737
cluster instead of deploying it with Kolla. This can be achieved with only a
few configuration steps in Kolla.

Requirements
~~~~~~~~~~~~

* An existing installation of Ceph
* Existing Ceph storage pools
(Glance, Cinder, Nova, Gnocchi)

Enabling External Ceph
~~~~~~~~~~~~~~~~~~~~~~

Using external Ceph with Kolla means not to deploy Ceph via Kolla. Therefore,
disable Ceph deployment in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_ceph: "no"

.. end

There are flags indicating whether individual services use Ceph, which default
to the value of ``enable_ceph``. Those flags now need to be activated in order
to activate external Ceph integration. This can be done individually per
service in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   glance_backend_ceph: "yes"
   cinder_backend_ceph: "yes"
   nova_backend_ceph: "yes"
   gnocchi_backend_storage: "ceph"
   enable_manila_backend_ceph_native: "yes"

.. end

The combination of ``enable_ceph: "no"`` and ``<service>_backend_ceph: "yes"``
triggers the activation of the external Ceph mechanism in Kolla.

Edit the Inventory File
~~~~~~~~~~~~~~~~~~~~~~~

When using external Ceph, there may be no nodes defined in the storage group.
This will cause Cinder and related services relying on this group to fail.
In this case, the operator should add some nodes to the storage group: all the
nodes where ``cinder-volume`` and ``cinder-backup`` will run:

.. code-block:: ini

   [storage]
   compute01

.. end
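As a quick sanity check that the storage group is actually populated, the
group can be inspected with a small shell sketch. This is illustrative only:
it uses a scratch inventory file and an ``awk`` one-liner that are not part of
Kolla, and the host names are placeholders.

```shell
# Hypothetical check: list the hosts in the [storage] group of an inventory
# file. A scratch file stands in for the real Kolla inventory.
INV=$(mktemp)
cat > "$INV" <<'EOF'
[control]
controller01

[storage]
compute01
EOF

# Print every non-empty line between [storage] and the next section header.
awk '/^\[storage\]/{f=1;next} /^\[/{f=0} f&&NF' "$INV"
```

If this prints nothing, ``cinder-volume`` and ``cinder-backup`` have no node
to run on.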
Configuring External Ceph
~~~~~~~~~~~~~~~~~~~~~~~~~

Glance
------

Configuring Glance for Ceph includes three steps:

#. Configure RBD back end in ``glance-api.conf``
#. Create Ceph configuration file in ``/etc/ceph/ceph.conf``
#. Create Ceph keyring file in ``/etc/ceph/ceph.client.<username>.keyring``

Step 1 is done by using Kolla's INI merge mechanism: Create a file in
``/etc/kolla/config/glance/glance-api.conf`` with the following contents:

.. code-block:: ini

   [glance_store]
   stores = rbd
   default_store = rbd
   rbd_store_pool = images
   rbd_store_user = glance
   rbd_store_ceph_conf = /etc/ceph/ceph.conf

.. end

Now put ceph.conf and the keyring file (name depends on the username created in
Ceph) into the same directory, for example:

.. path /etc/kolla/config/glance/ceph.conf
.. code-block:: ini

   [global]
   fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
   mon_initial_members = ceph-0
   mon_host = 192.168.0.56
   auth_cluster_required = cephx
   auth_service_required = cephx
   auth_client_required = cephx

.. end

.. code-block:: none

   $ cat /etc/kolla/config/glance/ceph.client.glance.keyring

   [client.glance]
   key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==

.. end

Kolla will pick up all files named ``ceph.*`` in this directory and copy them
to the ``/etc/ceph/`` directory of the container.
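The staging step above can be sketched in plain shell. This is a sketch only:
a scratch directory stands in for ``/etc/kolla/config/glance`` so it can run
unprivileged, and the fsid and key are the example values from this guide, not
real credentials.

```shell
# Illustrative only: stage the two files Kolla expects for Glance in a
# scratch directory, then list what the ceph.* pattern would pick up.
CFG="$(mktemp -d)/glance"
mkdir -p "$CFG"

cat > "$CFG/ceph.conf" <<'EOF'
[global]
fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
mon_host = 192.168.0.56
EOF

cat > "$CFG/ceph.client.glance.keyring" <<'EOF'
[client.glance]
key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
EOF

# These are the files Kolla copies into /etc/ceph/ inside the container:
ls "$CFG" | grep '^ceph\.'
```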
Cinder
------
Configuring external Ceph for Cinder works very similarly to
Glance.

Modify the ``/etc/kolla/config/cinder/cinder-volume.conf`` file according to
the following configuration:

.. code-block:: ini

   [DEFAULT]
   enabled_backends=rbd-1

   [rbd-1]
   rbd_ceph_conf=/etc/ceph/ceph.conf
   rbd_user=cinder
   backend_host=rbd:volumes
   rbd_pool=volumes
   volume_backend_name=rbd-1
   volume_driver=cinder.volume.drivers.rbd.RBDDriver
   rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}

.. end

.. note::

   ``cinder_rbd_secret_uuid`` can be found in the ``/etc/kolla/passwords.yml`` file.

Modify the ``/etc/kolla/config/cinder/cinder-backup.conf`` file according to
the following configuration:

.. code-block:: ini

   [DEFAULT]
   backup_ceph_conf=/etc/ceph/ceph.conf
   backup_ceph_user=cinder-backup
   backup_ceph_chunk_size = 134217728
   backup_ceph_pool=backups
   backup_driver = cinder.backup.drivers.ceph
   backup_ceph_stripe_unit = 0
   backup_ceph_stripe_count = 0
   restore_discard_excess_bytes = true

.. end

Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:

.. code-block:: ini

   [global]
   fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
   mon_initial_members = ceph-0
   mon_host = 192.168.0.56
   auth_cluster_required = cephx
   auth_service_required = cephx
   auth_client_required = cephx

.. end

Separate configuration options can be configured for
cinder-volume and cinder-backup by adding ceph.conf files to
``/etc/kolla/config/cinder/cinder-volume`` and
``/etc/kolla/config/cinder/cinder-backup`` respectively. They
will be merged with ``/etc/kolla/config/cinder/ceph.conf``.

Ceph keyrings are deployed per service and placed into the
``cinder-volume`` and ``cinder-backup`` directories. Put the keyring files
into these directories, for example:

.. note::
   ``cinder-backup`` requires two keyrings for accessing the volumes
   and backup pool.

.. code-block:: console

   $ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring

   [client.cinder]
   key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==

.. end

.. code-block:: console

   $ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring

   [client.cinder-backup]
   key = AQC9wNBYrD8MOBAAwUlCdPKxWZlhkrWIDE1J/w==

.. end

.. code-block:: console

   $ cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring

   [client.cinder]
   key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==

.. end

It is important that the files are named ``ceph.client*``.
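That naming rule can be checked with a short shell sketch. The directories and
file names below are scratch stand-ins for the real
``/etc/kolla/config/cinder`` tree; the assumption that mis-named keyrings are
not deployed follows from the naming requirement stated above.

```shell
# Illustrative check: find keyring files that violate the ceph.client* naming.
D=$(mktemp -d)
mkdir -p "$D/cinder-volume" "$D/cinder-backup"
touch "$D/cinder-volume/ceph.client.cinder.keyring" \
      "$D/cinder-backup/ceph.client.cinder.keyring" \
      "$D/cinder-backup/ceph.client.cinder-backup.keyring"

# Any hits here would be keyring files that do not match the required pattern:
find "$D" -name '*.keyring' ! -name 'ceph.client*'
```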
Nova
----

Put ceph.conf, the nova client keyring file and the cinder client keyring file
into ``/etc/kolla/config/nova``:

.. code-block:: console

   $ ls /etc/kolla/config/nova
   ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf

.. end

Configure nova-compute to use Ceph as the ephemeral back end by creating
``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
configuration:

.. code-block:: ini

   [libvirt]
   images_rbd_pool=vms
   images_type=rbd
   images_rbd_ceph_conf=/etc/ceph/ceph.conf
   rbd_user=nova

.. end

.. note::

   ``rbd_user`` might vary depending on your environment.
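A presence check for those three files can be sketched as below. Again this is
illustrative: a scratch directory replaces ``/etc/kolla/config/nova`` so the
snippet runs anywhere.

```shell
# Illustrative check that the three files nova-compute needs are staged
# (scratch directory in place of /etc/kolla/config/nova).
N=$(mktemp -d)
touch "$N/ceph.client.cinder.keyring" "$N/ceph.client.nova.keyring" "$N/ceph.conf"

for f in ceph.conf ceph.client.nova.keyring ceph.client.cinder.keyring; do
  [ -f "$N/$f" ] || echo "missing: $f"
done
```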
Gnocchi
-------

Modify the ``/etc/kolla/config/gnocchi/gnocchi.conf`` file according to
the following configuration:

.. code-block:: ini

   [storage]
   driver = ceph
   ceph_username = gnocchi
   ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
   ceph_conffile = /etc/ceph/ceph.conf

.. end

Put ceph.conf and the gnocchi client keyring file in
``/etc/kolla/config/gnocchi``:

.. code-block:: console

   $ ls /etc/kolla/config/gnocchi
   ceph.client.gnocchi.keyring ceph.conf gnocchi.conf

.. end

Manila
------

Configuring Manila for Ceph includes four steps:

#. Configure the CephFS backend, setting ``enable_manila_backend_ceph_native``
#. Create Ceph configuration file in ``/etc/ceph/ceph.conf``
#. Create Ceph keyring file in ``/etc/ceph/ceph.client.<username>.keyring``
#. Set up Manila in the usual way

Step 1 is done by setting ``enable_manila_backend_ceph_native=true``.

Now put ceph.conf and the keyring file (name depends on the username created
in Ceph) into the same directory, for example:

.. path /etc/kolla/config/manila/ceph.conf
.. code-block:: ini

   [global]
   fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
   mon_host = 192.168.0.56
   auth_cluster_required = cephx
   auth_service_required = cephx
   auth_client_required = cephx

.. end

.. code-block:: console

   $ cat /etc/kolla/config/manila/ceph.client.manila.keyring

   [client.manila]
   key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==

.. end

For more details on the rest of the Manila setup, such as creating the share
type ``default_share_type``, please see `Manila in Kolla
<https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html>`__.

For more details on the CephFS Native driver, please see `CephFS driver
<https://docs.openstack.org/manila/latest/admin/cephfs_driver.html>`__.
it might be necessary to use an externally managed database.
This use case can be achieved by simply taking some extra steps:

Requirements
~~~~~~~~~~~~

* An existing MariaDB cluster / server, reachable from all of your
  nodes.
  user accounts for all enabled services.

Enabling External MariaDB support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to enable external MariaDB support,
you will first need to disable MariaDB deployment,
by ensuring the following line exists within ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_mariadb: "no"

.. end

There are two ways in which you can use external MariaDB:

* Using an already load-balanced MariaDB address
* Using an external MariaDB cluster

Using an already load-balanced MariaDB address (recommended)
------------------------------------------------------------

If your external database already has a load balancer, you will
need to do the following:

#. Edit the inventory file, change ``control`` to the hostname of the load
   balancer within the ``mariadb`` group as below:

   .. code-block:: ini

      [mariadb:children]
      myexternalmariadbloadbalancer.com

   .. end

#. Define ``database_address`` in the ``/etc/kolla/globals.yml`` file:

   .. code-block:: yaml

      database_address: myexternalloadbalancer.com

   .. end

.. note::

   If ``enable_external_mariadb_load_balancer`` is set to ``no``
   (default), the external DB load balancer should be accessible
   from all nodes during your deployment.

Using an external MariaDB cluster
---------------------------------

In this case, you need to adjust the inventory file:

.. code-block:: ini

   [mariadb:children]
   myexternaldbserver1.com
   myexternaldbserver2.com
   myexternaldbserver3.com

.. end

If you choose to use haproxy for load balancing between the
members of the cluster, every node within this group
needs to be resolvable and reachable from all
the hosts within the ``[haproxy:children]`` group
of your inventory (defaults to ``[network]``).

In addition, configure the ``/etc/kolla/globals.yml`` file
according to the following configuration:

.. code-block:: yaml

   enable_external_mariadb_load_balancer: yes

.. end

Using External MariaDB with a privileged user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In case your MariaDB user is root, just leave
everything as it is within globals.yml (except the
internal MariaDB deployment, which should be disabled),
and set ``database_password`` in the ``/etc/kolla/passwords.yml`` file:

.. code-block:: yaml

   database_password: mySuperSecurePassword

.. end

If the MariaDB username is not ``root``, set ``database_username`` in the
``/etc/kolla/globals.yml`` file:

.. code-block:: yaml

   database_username: "privilegeduser"

.. end

Using preconfigured databases / users
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The first step you need to take is to set ``use_preconfigured_databases`` to
``yes`` in the ``/etc/kolla/globals.yml`` file:

.. code-block:: yaml

   use_preconfigured_databases: "yes"

.. end

.. note::

   When the ``use_preconfigured_databases`` flag is set to ``"yes"``, you need
   to make sure the mysql variable ``log_bin_trust_function_creators`` is
   set to ``1`` by the database administrator before running the
   :command:`upgrade` command.

Using External MariaDB with separated, preconfigured users and databases
------------------------------------------------------------------------

In order to achieve this, you will need to define the user names in the
``/etc/kolla/globals.yml`` file, as illustrated by the example below:

.. code-block:: yaml

   keystone_database_user: preconfigureduser1
   nova_database_user: preconfigureduser2

.. end

Also, you will need to set the passwords for all databases in the
``/etc/kolla/passwords.yml`` file.

However, using a common user across all databases is also possible.

Using External MariaDB with a common user across databases
----------------------------------------------------------

In order to use a common, preconfigured user across all databases,
all you need to do is follow these steps:

#. Edit the ``/etc/kolla/globals.yml`` file, adding the following:

   .. code-block:: yaml

      use_common_mariadb_user: "yes"

   .. end

#. Set ``database_user`` within ``/etc/kolla/globals.yml`` to
   the one provided to you:

   .. code-block:: yaml

      database_user: mycommondatabaseuser

   .. end

#. Set the common password for all components within ``/etc/kolla/passwords.yml``.
   In order to achieve that, you could use the following command:

   .. code-block:: console

      sed -i -r -e 's/([a-z_]{0,}database_password:+)(.*)$/\1 mycommonpass/gi' /etc/kolla/passwords.yml

   .. end
Nova-HyperV in Kolla
====================

Overview
~~~~~~~~

Currently, Kolla can deploy the following OpenStack services for Hyper-V:

* nova-compute
@ -24,30 +24,28 @@ virtual machines from Horizon web interface.

.. note::

   HyperV services are not currently deployed as containers. This functionality
   is in development. The current implementation installs OpenStack services
   via MSIs.

.. note::

   HyperV services do not currently support out-of-the-box upgrades. Manual
   upgrades are required for this process. MSI release versions can be found
   `here <https://cloudbase.it/openstack-hyperv-driver/>`__.
   To upgrade an existing MSI to a newer version, simply uninstall the current
   MSI and install the newer one. This will not delete the configuration files.
   To preserve the configuration files, check the Skip configuration checkbox
   during installation.

Preparation for Hyper-V node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ansible communicates with the Hyper-V host via the WinRM protocol. An HTTPS
WinRM listener needs to be configured on the Hyper-V host, which can be easily
created with `this PowerShell script
<https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`__.
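Before running Ansible against the node, it can help to confirm that the HTTPS
WinRM listener is actually reachable from the deployer host. A minimal sketch
using only the Python standard library (the host address in the comment is a
placeholder, not part of this guide):

```python
import socket


def winrm_listener_reachable(host: str, port: int = 5986,
                             timeout: float = 3.0) -> bool:
    """Return True when a TCP connection to the WinRM HTTPS port succeeds."""
    try:
        # create_connection performs the full TCP handshake within `timeout`.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example (hypothetical Hyper-V host address):
# winrm_listener_reachable("192.168.1.10")
```

This only checks TCP reachability of port 5986; it does not validate the TLS
certificate or WinRM authentication.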
@ -57,13 +55,16 @@ Virtual Interface the following PowerShell may be used:

.. code-block:: console

   PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
   PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false

.. end

.. note::

   It is very important to make sure that when you are using a Hyper-V node
   with only 1 NIC the ``-AllowManagementOS`` option is set on ``True``,
   otherwise you will lose connectivity to the Hyper-V node.

To prepare the Hyper-V node to be able to attach to volumes provided by
@ -72,72 +73,83 @@ running and started automatically.

.. code-block:: console

   PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
   PS C:\> Start-Service MSiSCSI

.. end

Preparation for Kolla deployer node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Hyper-V role is required; enable it in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_hyperv: "yes"

.. end

Hyper-V options are also required in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   hyperv_username: <HyperV username>
   hyperv_password: <HyperV password>
   vswitch_name: <HyperV virtual switch name>
   nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"

.. end

If tenant networks are to be built using VLAN, add the corresponding type in
``/etc/kolla/globals.yml``:

.. code-block:: yaml

   neutron_tenant_network_types: 'flat,vlan'

.. end

The virtual switch is the same one created in the Hyper-V setup part.
For ``nova_msi_url``, different Nova MSI (Mitaka/Newton/Ocata) versions can
be found on the `Cloudbase website
<https://cloudbase.it/openstack-hyperv-driver/>`__.

Add the Hyper-V node in the ``ansible/inventory`` file:

.. code-block:: none

   [hyperv]
   <HyperV IP>

   [hyperv:vars]
   ansible_user=<HyperV user>
   ansible_password=<HyperV password>
   ansible_port=5986
   ansible_connection=winrm
   ansible_winrm_server_cert_validation=ignore

.. end

The ``pywinrm`` package needs to be installed in order for Ansible to work
on the HyperV node:

.. code-block:: console

   pip install "pywinrm>=0.2.2"

.. end
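Since the inventory stanza is plain text, it can also be templated from the
same deployment variables. The sketch below is illustrative only — the helper
function and its arguments are assumptions for illustration, not part of
Kolla:

```python
def render_hyperv_inventory(host_ip: str, user: str, password: str) -> str:
    """Render an ansible/inventory [hyperv] section like the one shown above."""
    return "\n".join([
        "[hyperv]",
        host_ip,
        "",
        "[hyperv:vars]",
        f"ansible_user={user}",
        f"ansible_password={password}",
        # WinRM over HTTPS, matching the listener configured earlier.
        "ansible_port=5986",
        "ansible_connection=winrm",
        "ansible_winrm_server_cert_validation=ignore",
    ])


# Hypothetical values for demonstration:
print(render_hyperv_inventory("192.168.1.10", "Administrator", "secret"))
```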
.. note::

   In case of a test deployment with controller and compute nodes as
   virtual machines on Hyper-V, if VLAN tenant networking is used,
   trunk mode has to be enabled on the VMs:

   .. code-block:: console

      Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList <VLAN ID> -NativeVlanId 0 <VM name>

   .. end

The networking-hyperv mechanism driver is needed for neutron-server to
communicate with the HyperV nova-compute. This can be built with source
@ -146,7 +158,9 @@ container with pip:

.. code-block:: console

   pip install "networking-hyperv>=4.0.0"

.. end

For ``neutron_extension_drivers``, ``port_security`` and ``qos`` are
currently supported by the networking-hyperv mechanism driver.
@ -154,20 +168,23 @@ By default only ``port_security`` is set.

Verify Operations
~~~~~~~~~~~~~~~~~

OpenStack HyperV services can be inspected and managed from PowerShell:

.. code-block:: console

   PS C:\> Get-Service nova-compute
   PS C:\> Get-Service neutron-hyperv-agent

.. end

To restart them:

.. code-block:: console

   PS C:\> Restart-Service nova-compute
   PS C:\> Restart-Service neutron-hyperv-agent

.. end

For more information on OpenStack HyperV, see
`Hyper-V virtualization platform
@ -1,12 +1,12 @@

Projects Deployment References
==============================

.. toctree::
   :maxdepth: 2

   ceph-guide
   external-ceph-guide
   central-logging-guide
   external-mariadb-guide
   cinder-guide
   cinder-guide-hnas
@ -5,7 +5,7 @@ Ironic in Kolla

===============

Overview
~~~~~~~~

Currently Kolla can deploy the following Ironic services:

- ironic-api
@ -16,61 +16,72 @@ Currently Kolla can deploy the Ironic services:

As well as a required PXE service, deployed as ironic-pxe.

Current status
~~~~~~~~~~~~~~

The Ironic implementation is "tech preview", so currently instances can only be
deployed on baremetal. Further work will be done to allow scheduling for both
virtualized and baremetal deployments.

Pre-deployment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable the Ironic role in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   enable_ironic: "yes"

.. end

Besides that, the additional network types ``vlan`` and ``flat`` have to be
added to the list of tenant network types:

.. code-block:: yaml

   neutron_tenant_network_types: "vxlan,vlan,flat"

.. end

Configuring Web Console
~~~~~~~~~~~~~~~~~~~~~~~

Configuration is based on the upstream `Node web console
<https://docs.openstack.org/ironic/latest/admin/console.html#node-web-console>`__
documentation.

Serial speed must be the same as the serial configuration in the BIOS settings.
Default value: 115200bps, 8bit, non-parity. If you have a different serial
speed, set ``ironic_console_serial_speed`` in ``/etc/kolla/globals.yml``:

.. code-block:: yaml

   ironic_console_serial_speed: 9600n8

.. end

Post-deployment configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configuration is based on the upstream `Ironic installation documentation
<https://docs.openstack.org/ironic/latest/install/index.html>`__.

Again, remember that enabling Ironic reconfigures nova compute (driver and
scheduler) as well as changes neutron network settings. Further neutron setup
is required as outlined below.

Create the flat network to launch the instances:

.. code-block:: console

   neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
     --provider:network_type flat --provider:physical_network physnet1

   neutron subnet-create sharednet1 $NETWORK_CIDR --name $SUBNET_NAME \
     --ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
     start=$START_IP,end=$END_IP --enable-dhcp

.. end

The ID of the network created above is then used to set ``cleaning_network``
in the neutron section of ``ironic.conf``.
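For illustration, the resulting ``ironic.conf`` fragment might look as follows
(the UUID is a placeholder for the network created above; the exact option
shape may vary between Ironic releases):

```ini
[neutron]
# UUID of the sharednet1 network created above (placeholder)
cleaning_network = <network UUID>
```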
@ -1,25 +1,28 @@

==============
Kuryr in Kolla
==============

Kuryr is a Docker network plugin that uses Neutron to provide networking
services to Docker containers. It provides containerized images for the common
Neutron plugins. Kuryr requires at least Keystone and neutron. Kolla makes
kuryr deployment faster and accessible.

Requirements
~~~~~~~~~~~~

* A minimum of 3 hosts for a vanilla deploy

Preparation and Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~

To allow the Docker daemon to connect to etcd, add the following in the
``docker.service`` file.

.. code-block:: none

   ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375

.. end
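On systemd-based hosts, the same override can be applied as a drop-in rather
than by editing ``docker.service`` in place. The drop-in path and the
``dockerd`` binary path below are assumptions to adapt to your distribution:

```ini
# /etc/systemd/system/docker.service.d/kolla-kuryr.conf (hypothetical path)
[Service]
# An empty ExecStart= clears the unit's original command before replacing it.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
```

Run ``systemctl daemon-reload`` and restart Docker for the change to take
effect.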

The IP address is that of the host running the etcd service. ``2375`` is the
port that allows the Docker daemon to be accessed remotely. ``2379`` is the
etcd listening
@ -29,36 +32,46 @@ By default etcd and kuryr are disabled in the ``group_vars/all.yml``.

In order to enable them, you need to edit the file ``globals.yml`` and set the
following variables:

.. code-block:: yaml

   enable_etcd: "yes"
   enable_kuryr: "yes"

.. end

Deploy the OpenStack cloud and the kuryr network plugin:

.. code-block:: console

   kolla-ansible deploy

.. end

Create a Virtual Network
~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1

.. end

To list the created network:

.. code-block:: console

   docker network ls

.. end

The created network is also available from the OpenStack CLI:

.. code-block:: console

   openstack network list

.. end

For more information about how kuryr works, see
`kuryr (OpenStack Containers Networking)
<https://docs.openstack.org/kuryr/latest/>`__.