Merge "Upstreaming WRO"

This commit is contained in:
Zuul 2021-03-31 16:05:29 +00:00 committed by Gerrit Code Review
commit f9d1b4905d
113 changed files with 3907 additions and 435 deletions


@ -0,0 +1,2 @@
.. [#] :ref:`Back up OpenStack <back-up-openstack>`


@ -1,31 +0,0 @@
- To remove labels from the host, do the following.
.. code-block:: none
~(keystone)admin)$ system host-label-remove compute-0 openstack-compute-node sriov
Deleted host label openstack-compute-node for host compute-0
Deleted host label SRIOV for host compute-0
- To assign Kubernetes labels to identify compute-0 as a compute node with
|SRIOV|, use the following command:
.. code-block:: none
~(keystone)admin)$ system host-label-assign compute-0 openstack-compute-node=enabled sriov=enabled
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | 2909d775-cd6c-4bc1-8268-27499fe38d5e |
| host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 |
| label_key | openstack-compute-node |
| label_value | enabled |
+-------------+--------------------------------------+
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | d8e29e62-4173-4445-886c-9a95b0d6fee1 |
| host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 |
| label_key | SRIOV |
| label_value | enabled |
+-------------+--------------------------------------+


@ -12,6 +12,8 @@
.. |prod-os| replace:: StarlingX OpenStack
.. |prod-dc| replace:: Distributed Cloud
.. |prod-p| replace:: StarlingX Platform
.. |os-prod-hor-long| replace:: OpenStack Horizon Web Interface
.. |os-prod-hor| replace:: OpenStack Horizon
.. Guide names; will be formatted in italics by default.
.. |node-doc| replace:: :title:`StarlingX Node Configuration and Management`
@ -28,7 +30,7 @@
.. |usertasks-doc| replace:: :title:`StarlingX User Tasks`
.. |admintasks-doc| replace:: :title:`StarlingX Administrator Tasks`
.. |datanet-doc| replace:: :title:`StarlingX Data Networks`
.. |os-intro-doc| replace:: :title:`OpenStack Introduction`
.. Name of downloads location


@ -1,30 +1,25 @@
.. Backup and Restore file, created by
sphinx-quickstart on Thu Sep 3 15:14:59 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
==================
Backup and Restore
==================
----------
Kubernetes
----------
.. check what put here
.. toctree::
:maxdepth: 2
kubernetes/index
---------
OpenStack
---------
.. check what put here
.. toctree::
:maxdepth: 2
openstack/index


@ -0,0 +1,34 @@
.. Backup and Restore file, created by
sphinx-quickstart on Thu Sep 3 15:14:59 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
----------
Kubernetes
----------
==================
Backup and Restore
==================
-------------
System backup
-------------
.. toctree::
:maxdepth: 1
backing-up-starlingx-system-data
running-ansible-backup-playbook-locally-on-the-controller
running-ansible-backup-playbook-remotely
--------------------------
System and storage restore
--------------------------
.. toctree::
:maxdepth: 1
restoring-starlingx-system-data-and-storage
running-restore-playbook-locally-on-the-controller
system-backup-running-ansible-restore-playbook-remotely


@ -0,0 +1,52 @@
.. mdt1596804427371
.. _back-up-openstack:
=================
Back up OpenStack
=================
|prod-os| is backed up using the |prod| backup facilities.
.. rubric:: |context|
The backup playbook will produce an OpenStack backup tarball in addition to the
platform tarball. This can be used to perform |prod-os| restores independently
of restoring the underlying platform.
.. note::
Data stored in Ceph, such as Glance images, Cinder volumes, volume backups,
and Rados objects, is not backed up automatically.
.. _back-up-openstack-ul-ohv-x3k-qmb:
- To back up Glance images, use the :command:`image-backup.sh` script. For example:
.. code-block:: none
~(keystone_admin)$ image-backup export <uuid>
- To back up other Ceph data, such as Cinder volumes, volume backups, or Rados
objects, use the :command:`rbd export` command on the data in the OpenStack
pools cinder-volumes, cinder-backup, and rados.
For example, to export a Cinder volume with the ID
611157b9-78a4-4a26-af16-f9ff75a85e1b, use the following command:
.. code-block:: none
~(keystone_admin)$ rbd export -p cinder-volumes 611157b9-78a4-4a26-af16-f9ff75a85e1b /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b
To list the Cinder volumes, use the :command:`openstack volume list`
command.
After export, copy the data off-box for safekeeping.
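For example, you could copy an exported volume image to a remote backup server
with :command:`scp` \(the destination host and path below are placeholders\):
.. code-block:: none

   ~(keystone_admin)$ scp /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b <user>@<backup-host>:/backups/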
For details on performing a |prod| backup, see
:ref:`System Backup and Restore <backing-up-starlingx-system-data>`.


@ -0,0 +1,15 @@
---------
OpenStack
---------
==================
Backup and Restore
==================
.. toctree::
:maxdepth: 1
back-up-openstack
restore-openstack-from-a-backup
openstack-backup-considerations


@ -0,0 +1,13 @@
.. tye1591106946243
.. _openstack-backup-considerations:
=============================================
Containerized OpenStack Backup Considerations
=============================================
Backup of the containerized OpenStack application is performed as part of the
|prod-long| backup procedures.
See :ref:`System Backup and Restore <backing-up-starlingx-system-data>`.


@ -0,0 +1,120 @@
.. gmx1612810318507
.. _restore-openstack-from-a-backup:
===============================
Restore OpenStack from a Backup
===============================
You can restore |prod-os| from a backup with or without Ceph.
.. rubric:: |prereq|
.. _restore-openstack-from-a-backup-ul-ylc-brc-s4b:
- You must have a backup of your |prod-os| installation as described in
:ref:`Back up OpenStack <back-up-openstack>`.
- You must have an operational |prod-long| deployment.
.. rubric:: |proc|
#. Delete the old OpenStack application and upload it again.
.. note::
Images and volumes will remain in Ceph.
.. code-block:: none
~(keystone_admin)$ system application-remove wr-openstack
~(keystone_admin)$ system application-delete wr-openstack
~(keystone_admin)$ system application-upload wr-openstack.tgz
#. Restore |prod-os|.
You can choose either of the following options:
- Restore only the |prod-os| system data. This option does not restore the
Ceph data \(that is, it does not run commands like :command:`rbd import`\);
it preserves any Ceph data that exists at restore time.
- Restore the |prod-os| system data, Cinder volumes, and Glance images. Use
this option if the Ceph data was wiped after the backup was taken.
.. table::
:widths: 200, 668
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **Restore only OpenStack application system data:** | #. Run the following command: |
| | |
| | .. code-block:: none |
| | |
| | ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_openstack.yml \ |
| | -e 'initial_backup_dir=<location_of_backup_filename> \ |
| | ansible_become_pass=<admin_password> \ |
| | admin_password=<admin_password> \ |
| | backup_filename=wr-openstack_backup.tgz' |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **Restore OpenStack application system data, cinder volumes and glance images:** | #. Run the following command: |
| | |
| | .. code-block:: none |
| | |
| | ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_openstack.yml \ |
| | -e 'restore_cinder_glance_data=true \ |
| | initial_backup_dir=<location_of_backup_filename> \ |
| | ansible_become_pass=<admin_password> \ |
| | admin_password=<admin_password> \ |
| | backup_filename=wr-openstack_backup.tgz' |
| | |
| | When this step has completed, the Cinder, Glance, and MariaDB services will be up, and the MariaDB data will be restored. |
| | |
| | #. Restore Ceph data. |
| | |
| | |
| | #. Restore Cinder volumes using :command:`rbd import` command. |
| | |
| | For example: |
| | |
| | .. code-block:: none |
| | |
| | ~(keystone_admin)$ rbd import -p cinder-volumes /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b |
| | |
| | where /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b is the file saved earlier during the backup procedure, as described in [#]_. |
| | |
| | #. Restore Glance images using the :command:`image-backup` script. |
| | |
| | For example, to restore an archive named image\_3f30adc2-3e7c-45bf-9d4b-a4c1e191d879.tgz from the /opt/backups directory, use the following command: |
| | |
| | .. code-block:: none |
| | |
| | ~(keystone_admin)$ sudo image-backup.sh import image_3f30adc2-3e7c-45bf-9d4b-a4c1e191d879.tgz |
| | |
| | #. Use the :command:`tidy\_storage\_post\_restore` utility to detect any discrepancies between the Cinder/Glance database and the rbd pools: |
| | |
| | .. code-block:: none |
| | |
| | ~(keystone_admin)$ tidy_storage_post_restore <log_file> |
| | |
| | |
| | After the script finishes, review the command output written to the log file; it will help reconcile discrepancies between the |prod-os| database and the Ceph data. |
| | |
| | #. Run the playbook again with the restore\_openstack\_continue flag set to true to bring up the remaining OpenStack services. |
| | |
| | .. code-block:: none |
| | |
| | ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_openstack.yml \ |
| | -e 'restore_openstack_continue=true \ |
| | initial_backup_dir=<location_of_backup_filename> \ |
| | ansible_become_pass=<admin_password> \ |
| | admin_password=<admin_password> \ |
| | backup_filename=wr-openstack_backup.tgz' |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. include:: ../../_includes/restore-openstack-from-a-backup.rest
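After the restore playbook completes, you can optionally verify that the
application has been re-applied and that the restored resources are visible; a
minimal sketch \(expected status values and output vary by release\):
.. code-block:: none

   ~(keystone_admin)$ system application-list
   ~(keystone_admin)$ openstack volume list
   ~(keystone_admin)$ openstack image list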


@ -12,31 +12,31 @@ administration interface or the CLI.
.. rubric:: |context|
For VXLAN connectivity between VMs, you must add appropriate endpoint IP
addresses to the compute node interfaces. You can add individual static
addresses, or you can assign addresses from a pool associated with the
data interface. For more about using address pools, see :ref:`Using IP
Address Pools for Data Interfaces <using-ip-address-pools-for-data-interfaces>`.
To add a static IP address using the |os-prod-hor|, refer to the
following steps. To use the CLI, see :ref:`Managing Data Interface Static IP
Addresses Using the CLI <managing-data-interface-static-ip-addresses-using-the-cli>`.
.. rubric:: |prereq|
To make interface changes, you must lock the compute host first.
.. rubric:: |proc|
.. _adding-a-static-ip-address-to-a-data-interface-steps-zkx-d1h-hr:
#. Lock the compute host.
#. Set the interface to support an IPv4 or IPv6 address, or both.
#. Select **Admin** \> **Platform** \> **Host Inventory** to open the Host
Inventory page.
#. Select the **Host** tab, and then double-click the compute host to open
the Host Detail page.
#. Select the **Interfaces** tab and click **Edit Interface** for the data
@ -63,7 +63,7 @@ To make interface changes, you must lock the worker host first.
The new address is added to the **Address List**.
#. Unlock the compute node and wait for it to become available.
For more information, see :ref:`Managing Data Interface Static IP Addresses
Using the CLI <managing-data-interface-static-ip-addresses-using-the-cli>`


@ -11,7 +11,7 @@ the CLI.
.. rubric:: |prereq|
The compute node must be locked.
.. rubric:: |proc|
@ -24,7 +24,7 @@ To add routes, use the following command.
where
**node**
is the name or UUID of the compute node
**ifname**
is the name of the interface
@ -53,4 +53,4 @@ To list existing routes, including their UUIDs, use the following command.
.. code-block:: none
~(keystone_admin)]$ system host-route-list compute-0
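For illustration, adding a route and then listing it might look like the
following; the host name, interface, and addresses are examples only, and the
exact argument order can be confirmed with :command:`system help host-route-add`:
.. code-block:: none

   ~(keystone_admin)]$ system host-route-add compute-0 data0 192.168.58.0 24 192.168.57.1
   ~(keystone_admin)]$ system host-route-list compute-0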


@ -32,6 +32,7 @@ You can use the CLI to add segmentation ranges to data networks.
--physical-network data-net-a \
--network-type vlan \
--minimum 623 \
--maximum 623
~(keystone_admin)]$ openstack network segment range create segment-a-project2 \
--private \
@ -71,6 +72,8 @@ You can use the CLI to add segmentation ranges to data networks.
**maximum**
is the maximum value of the segmentation range.
.. rubric:: |result|
You can also obtain information about segmentation ranges using the following command:
.. code-block:: none
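   # For example (an assumption; verify the exact command for your release):
   ~(keystone_admin)]$ openstack network segment range list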


@ -8,12 +8,9 @@ Assign a Data Network to an Interface
In order to associate the L2 Network definition of a Data Network with a
physical network, the Data Network must be mapped to an Ethernet or Aggregated
Ethernet interface on a compute node.
The command for performing the mapping has the format:
:command:`system interface-datanetwork-assign` <compute> <interface\_uuid> <datanetwork\_uuid>
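For example \(a sketch; the host name is illustrative, and the UUIDs would come
from :command:`system host-if-list` and :command:`system datanetwork-list`\):
.. code-block:: none

   ~(keystone_admin)]$ system host-if-list -a compute-0
   ~(keystone_admin)]$ system datanetwork-list
   ~(keystone_admin)]$ system interface-datanetwork-assign compute-0 <interface_uuid> <datanetwork_uuid>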


@ -6,20 +6,18 @@
Change the MTU of a Data Interface Using the CLI
================================================
You can change the |MTU| value for a data interface from the |os-prod-hor-long|
or the |CLI|.
.. rubric:: |context|
You can use |CLI| commands to lock and unlock hosts, and to modify the |MTU| on
the hosts.
.. code-block:: none
~(keystone_admin)]$ system host-lock <nodeName>
~(keystone_admin)]$ system host-if-modify <nodeName> <interfaceName> --imtu <mtuSize>
~(keystone_admin)]$ system host-unlock <nodeName>
where:
@ -31,15 +29,16 @@ where:
is the name of the interface
**<mtu\_size>**
is the new |MTU| value
For example:
.. code-block:: none
~(keystone_admin)]$ system host-if-modify compute-0 enp0s8 --imtu 1496
.. note::
You cannot set the |MTU| on an openstack-compute-labeled compute node
interface to a value smaller than the largest |MTU| used on its data
networks.


@ -6,18 +6,14 @@
Change the MTU of a Data Interface
==================================
You can change the |MTU| value for a data interface within limits determined by
the data network to which the interface is attached.
.. rubric:: |context|
The data interface |MTU| must be equal to or greater than the |MTU| of the data
network.
.. rubric:: |proc|
.. _changing-the-mtu-of-a-data-interface-steps-hfm-5nb-p5:
@ -29,7 +25,7 @@ You must lock the host for the interface on which you want to change the MTU.
#. From the **Edit** menu for the standby controller, select **Lock Host**.
#. On all the hosts, edit the interface to change the |MTU| value.
#. Click the name of the host, and then select the **Interfaces** tab and
click **Edit** for the interface you want to change.
@ -41,4 +37,4 @@ You must lock the host for the interface on which you want to change the MTU.
From the **Edit** menu for the host, select **Unlock Host**.
The network |MTU| is updated with the new value.


@ -7,10 +7,8 @@ Configure Data Interfaces for VXLANs
====================================
For data interfaces attached to VXLAN-based data networks, endpoint IP
addresses \(static or dynamic from an IP Address pool\) and possibly IP Routes
are additionally required on the host data interfaces.
See :ref:`VXLAN Data Network Setup Completion
<vxlan-data-network-setup-completion>` for details on this configuration.


@ -24,8 +24,7 @@ underlying network for OpenStack Neutron Tenant/Project Networks.
.. xreflink - :ref:`Configuring VLAN Interfaces <configuring-vlan-interfaces-using-the-cli>`
For each of the above procedures, configure the node interface specifying the
'ifclass' as 'data' and assign one or more data networks to the node interface.
.. xreflink As an example for an Ethernet interface, repeat the procedure in
|node-doc|: :ref:`Configuring Ethernet Interfaces
@ -38,24 +37,35 @@ interface.
#. List the attached interfaces.
To list all interfaces, use the :command:`system host-if-list` command and
include the ``-a`` flag.
.. code-block:: none
~(keystone_admin)]$ system host-if-list -a controller-0
+-------------+-----------+-----------+----------+------+----------------+-------------+----------------------------+---------------------------+
| uuid        | name      | class     | type     | vlan | ports          | uses i/f    | used by i/f                | attributes                |
|             |           |           |          | id   |                |             |                            |                           |
+-------------+-----------+-----------+----------+------+----------------+-------------+----------------------------+---------------------------+
| 0aa20d82-...| sriovvf2  | pci-sriov | vf       | None | []             | [u'sriov0'] | []                         | MTU=1500,max_tx_rate=100  |
| 0e5f162d-...| mgmt0     | platform  | vlan     | 163  | []             | [u'sriov0'] | []                         | MTU=1500                  |
| 14f2ed53-...| sriov0    | pci-sriov | ethernet | None | [u'enp24s0f0'] | []          | [u'sriovnet1', u'oam0',    | MTU=9216                  |
|             |           |           |          |      |                |             | u'sriovnet2', u'sriovvf2', |                           |
|             |           |           |          |      |                |             | u'sriovvf1', u'mgmt0',     |                           |
|             |           |           |          |      |                |             | u'pxeboot0']               |                           |
|             |           |           |          |      |                |             |                            |                           |
| 163592bd-...| data1     | data      | ethernet | None | [u'enp24s0f1'] | []          | []                         | MTU=1500,accelerated=True |
| 1831571d-...| sriovnet2 | pci-sriov | vf       | None | []             | [u'sriov0'] | []                         | MTU=1956,max_tx_rate=100  |
| 5741318f-...| eno2      | None      | ethernet | None | [u'eno2']      | []          | []                         | MTU=1500                  |
| 5bd79fbd-...| enp26s0f0 | None      | ethernet | None | [u'enp26s0f0'] | []          | []                         | MTU=1500                  |
| 623d5494-...| oam0      | platform  | vlan     | 103  | []             | [u'sriov0'] | []                         | MTU=1500                  |
| 78b4080a-...| enp26s0f1 | None      | ethernet | None | [u'enp26s0f1'] | []          | []                         | MTU=1500                  |
| a6f1f901-...| eno1      | None      | ethernet | None | [u'eno1']      | []          | []                         | MTU=1500                  |
| f37eac1b-...| pxeboot0  | platform  | ethernet | None | []             | [u'sriov0'] | []                         | MTU=1500                  |
| f7c62216-...| sriovnet1 | pci-sriov | vf       | None | []             | [u'sriov0'] | []                         | MTU=1500,max_tx_rate=100  |
| fcbe3aca-...| sriovvf1  | pci-sriov | vf       | None | []             | [u'sriov0'] | []                         | MTU=1956,max_tx_rate=100  |
+-------------+-----------+-----------+----------+------+----------------+-------------+----------------------------+---------------------------+
#. Attach an interface to a network.
Use a command sequence of the following form:
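A minimal sketch of such a sequence follows; the host, interface, and data
network names are examples, and the option letters should be confirmed with
:command:`system help host-if-modify`:
.. code-block:: none

   ~(keystone_admin)]$ system host-lock compute-0
   ~(keystone_admin)]$ system host-if-modify -m 1500 -c data compute-0 <ethname>
   ~(keystone_admin)]$ system interface-datanetwork-assign compute-0 <ethname> <data network>
   ~(keystone_admin)]$ system host-unlock compute-0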
@ -73,7 +83,7 @@ interface.
The MTU for the interface.
.. note::
The |MTU| must be equal to or larger than the |MTU| of the data network
to which the interface is attached.
**ifclass**
@ -81,13 +91,13 @@ interface.
**data**, **pci-sriov**, and **pci-passthrough**.
**data network**
The name or ID of the network to assign the interface to.
**hostname**
The name or |UUID| of the host.
**ethname**
The name or |UUID| of the Ethernet interface to use.
**ip4\_mode**
The mode for assigning IPv4 addresses to a data interface \(static or


@ -6,10 +6,10 @@
Dynamic VXLAN
=============
|prod-os| supports a dynamic mode \(learning\) |VXLAN| implementation that has each
vSwitch instance registered on the network for a particular IP multicast group,
|MAC| addresses, and |VTEP| endpoints that are populated based on neutron
configuration data.
The IP multicast group, \(for example, 239.1.1.1\), is input when a new
neutron data network is provisioned. The selection of the IP multicast group
@ -18,25 +18,45 @@ group. The IP multicast network can work in both a single subnet \(that is,
local Layer2 environment\) or can span Layer3 segments in the customer network
for more complex routing requirements but requires IP multicast enabled routers.
.. only:: starlingx
In the dynamic |VXLAN| mode, when a VM instance sends a packet to some
destination node, the |VXLAN| implementation examines the
destination MAC address to determine how to treat the packet. If the
destination is known, a unicast packet is sent to the compute node hosting
that VM instance. If the destination is unknown or the packet is a
broadcast/multicast packet, then a multicast packet is sent to all compute
nodes. Once the destination VM instance receives the packet and responds to
the initial source compute node, it learns that the VM is hosted from that
compute node, and any future packets destined to that VM instance are
unicast to that compute node.
.. only:: partner
.. include:: ../_includes/dynamic-vxlan.rest
:start-after: vswitch-text-1-begin
:end-before: vswitch-text-1-end
.. figure:: figures/eol1510005391750.png
Multicast Endpoint Distribution
.. only:: starlingx
For broadcast and multicast packets originating from the VM instances, the
|VXLAN| implementation performs head-end replication to clone and send a copy
of the packet to each known compute node. This operation is expensive and will
negatively impact performance if the network is experiencing a high volume of
broadcast or multicast packets.
.. only:: partner
.. include:: ../_includes/dynamic-vxlan.rest
:start-after: vswitch-text-1-begin
:end-before: vswitch-text-1-end
.. _dynamic-vxlan-section-N10054-N1001F-N10001:
@ -44,20 +64,20 @@ broadcast or multicast packets.
Workflow to Configure Dynamic VXLAN Data Networks
-------------------------------------------------
Use the following workflow to create dynamic |VXLAN| data networks and add
segmentation ranges using the |CLI|.
.. _dynamic-vxlan-ol-bpj-dlb-1cb:
#. Create a VXLAN data network, see :ref:`Adding Data Networks
<adding-data-networks-using-the-cli>`.
#. Add segmentation ranges to dynamic |VXLAN| \(Multicast |VXLAN|\) data
networks, see :ref:`Adding Segmentation Ranges Using the CLI
<adding-segmentation-ranges-using-the-cli>`.
#. Configure the endpoint IP addresses of the compute nodes using the
|os-prod-hor-long| or the |CLI|:
- To configure static IP addresses for individual data interfaces, see:
@ -72,6 +92,6 @@ segmentation ranges using CLI.
#. Establish routes between the hosts, see :ref:`Adding and Maintaining Routes
for a VXLAN Network <adding-and-maintaining-routes-for-a-vxlan-network>`.
For more information on the differences between the dynamic and static |VXLAN|
modes, see :ref:`Differences Between Dynamic and Static VXLAN Modes
<differences-between-dynamic-and-static-vxlan-modes>`.
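As an illustration of step 1 in the workflow above, creating a dynamic
\(multicast\) |VXLAN| data network might look like the following; the name and
parameter values are examples, and the option names are assumptions to confirm
with :command:`system help datanetwork-add`:
.. code-block:: none

   ~(keystone_admin)]$ system datanetwork-add group0-vxlan vxlan --multicast_group 239.1.1.1 --ttl 4 --port_num 4789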


@ -26,6 +26,7 @@ Displaying data network information
displaying-data-network-information-using-horizon
displaying-data-network-information-using-the-cli
the-data-network-topology-view
vxlan-data-networks
*********************************************
Adding, assigning, and removing data networks
@ -99,3 +100,4 @@ VXLAN data network setup completion
using-ip-address-pools-for-data-interfaces
managing-ip-address-pools-using-the-cli
adding-and-maintaining-routes-for-a-vxlan-network
vxlan-data-network-setup-completion


@ -7,7 +7,7 @@ Manage Data Interface Static IP Addresses Using the CLI
=======================================================
If you prefer, you can create and manage static addresses for data interfaces
using the |CLI|.
.. rubric:: |context|
@ -17,15 +17,15 @@ For more information about using static addresses for data interfaces, see
.. rubric:: |prereq|
To make interface changes, you must lock the compute node first.
.. rubric:: |proc|
.. _managing-data-interface-static-ip-addresses-using-the-cli-steps-zkx-d1h-hr:
#. Lock the compute node.
#. Set the interface to support an IPv4 or IPv6 address, or both.
.. code-block:: none
@ -34,7 +34,7 @@ To make interface changes, you must lock the worker node first.
where
**node**
is the name or |UUID| of the compute node
**ifname**
is the name of the interface
@ -54,7 +54,7 @@ To make interface changes, you must lock the worker node first.
where
**node**
is the name or |UUID| of the compute node
**ifname**
is the name of the interface
@ -71,12 +71,12 @@ To make interface changes, you must lock the worker node first.
~(keystone_admin)]$ system host-addr-list <hostname/ID>
This displays the |UUIDs| of existing addresses, as shown in this example
below.
.. code-block:: none
~(keystone_admin)]$ system host-addr-list compute-0
+-----------------------+--------+------------------------+--------+
| uuid | ifname | address | prefix |
+-----------------------+--------+------------------------+--------+
@ -89,6 +89,6 @@ To make interface changes, you must lock the worker node first.
~(keystone_admin)]$ system host-addr-delete <uuid>
where **uuid** is the |UUID| of the address.
#. Unlock the compute node and wait for it to become available.
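Putting the steps together, a minimal end-to-end sequence might look like the
following; the host, interface, and address values are examples, and the
``--ipv4-mode`` option name is an assumption to confirm with
:command:`system help host-if-modify`:
.. code-block:: none

   ~(keystone_admin)]$ system host-lock compute-0
   ~(keystone_admin)]$ system host-if-modify compute-0 data0 --ipv4-mode static
   ~(keystone_admin)]$ system host-addr-add compute-0 data0 192.168.1.10 24
   ~(keystone_admin)]$ system host-unlock compute-0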


@ -6,20 +6,25 @@
Manage IP Address Pools Using the CLI
=====================================
You can create and manage address pools using the |CLI|:
.. contents::
:local:
:depth: 1
.. rubric:: |context|
For more information about address pools, see :ref:`Using IP Address Pools for
Data Interfaces <using-ip-address-pools-for-data-interfaces>`.
.. rubric:: |prereq|
To make interface changes, you must lock the compute node first.
.. _managing-ip-address-pools-using-the-cli-section-N1003C-N1001F-N10001:
------------------------
Creating an Address Pool
------------------------
To create an address pool, use a command of the following form:
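For example, a pool covering part of a subnet might be created as follows; the
pool name and addresses are illustrative, and the argument order can be
confirmed with :command:`system help addrpool-add`:
.. code-block:: none

   ~(keystone_admin)]$ system addrpool-add --ranges 192.168.58.2-192.168.58.50 vxlan-pool 192.168.58.0 24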


@ -6,29 +6,36 @@
Static VXLAN
============
The static unicast mode relies on the mapping of neutron ports to compute nodes
to receive the packet in order to reach the |VM|.
.. only:: starlingx
In this mode there is no multicast addressing and no multicast packets are
sent from the compute nodes, nor is there any learning. In contrast to
the dynamic |VXLAN| mode, any packets destined to unknown MAC addresses are
dropped. To ensure that there are no unknown endpoints, the system examines
the neutron port DB and gathers the list of mappings between port |MAC|/IP
addresses and the hostname on which they reside.
.. only:: partner
.. include:: ../_includes/static-vxlan.rest
:start-after: vswitch-text-1-begin
:end-before: vswitch-text-1-end
Static |VXLAN| is limited to use on one data network. If configured, it must be
enabled on all OpenStack compute nodes.
.. figure:: figures/oeg1510005898965.png
`Static Endpoint Distribution`
.. note::
In the static mode there is no dynamic endpoint learning. This means that
if a node does not have an entry for some destination |MAC| address it will
not create an entry even if it receives a packet from that device.
.. _static-vxlan-section-N1006B-N1001F-N10001:
@ -38,19 +45,19 @@ Workflow to Configure Static VXLAN Data Networks
------------------------------------------------
Use the following workflow to create static VXLAN data networks and add
segmentation ranges using the |CLI|.
.. _static-vxlan-ol-bpj-dlb-1cb:
#. Create a |VXLAN| data network, see :ref:`Adding Data Networks Using the CLI
<adding-data-networks-using-the-cli>`.
#. Add segmentation ranges to static |VXLAN| data networks, see :ref:`Adding
Segmentation Ranges Using the CLI <adding-segmentation-ranges-using-the-cli>`.
#. Establish routes between the hosts, see :ref:`Adding and Maintaining Routes
for a VXLAN Network <adding-and-maintaining-routes-for-a-vxlan-network>`.
For more information on the differences between the dynamic and static |VXLAN|
modes, see :ref:`Differences Between Dynamic and Static VXLAN Modes
<differences-between-dynamic-and-static-vxlan-modes>`.


@ -2,11 +2,11 @@
.. vkv1559818533210
.. _the-data-network-topology-view:
==========================
Data Network Topology View
==========================
The Data Network Topology view shows data networks and compute host data
interface connections for the system using a color-coded graphical display.
Active alarm information is also shown in real time. You can select individual
hosts or networks to highlight their connections and obtain more details.


@ -11,14 +11,14 @@ You can create pools of IP addresses for use with data interfaces.
.. rubric:: |context|
As an alternative to manually adding static IP addresses to data interfaces for
use with |VXLANs|, you can define pools of IP addresses and associate them with
one or more data interfaces. Each pool consists of one or more contiguous
ranges of IPv4 or IPv6 addresses. When a data interface is associated with a
pool, its IP address is allocated from the pool. The allocation may be either
random or sequential, depending on the settings for the pool.
You can use the |os-prod-hor| or the |CLI| to create and manage
address pools. For information about using the |CLI|, see :ref:`Managing IP
Address Pools Using the CLI <managing-ip-address-pools-using-the-cli>`.
.. rubric:: |prereq|


@ -0,0 +1,20 @@
.. yxz1511555520499
.. _vxlan-data-network-setup-completion:
===================================
VXLAN Data Network Setup Completion
===================================
You can complete the |VXLAN| data network setup by using the |os-prod-hor-long|
or the |CLI|.
For more information on setting up |VXLAN| Data Networks, see :ref:`VXLAN Data Networks <vxlan-data-networks>`.
- :ref:`Adding a Static IP Address to a Data Interface <adding-a-static-ip-address-to-a-data-interface>`
- :ref:`Using IP Address Pools for Data Interfaces <using-ip-address-pools-for-data-interfaces>`
- :ref:`Adding and Maintaining Routes for a VXLAN Network <adding-and-maintaining-routes-for-a-vxlan-network>`


@ -0,0 +1,50 @@
.. wic1511538154740
.. _vxlan-data-networks:
===================
VXLAN Data Networks
===================
Virtual eXtensible Local Area Network \(|VXLAN|\) data networks are an
alternative to |VLAN| data networks.
A |VXLAN| data network is implemented over a range of |VXLAN| Network
Identifiers \(|VNIs|\). This is similar to the |VLAN| option, but allows
multiple data networks to be defined over the same physical network using
unique |VNIs| defined in segmentation ranges.
Packets sent between |VMs| over virtual project networks backed by a |VXLAN|
data network are encapsulated with IP, |UDP|, and |VXLAN| headers and sent as
Layer 3 packets. The IP addresses of the source and destination compute nodes
are included in the outer IP header.
.. only:: starlingx
|prod-os| supports two configurations for |VXLANs|:
.. only:: partner
.. include:: ../_includes/vxlan-data-networks.rest
.. _vxlan-data-networks-ul-rzs-kqf-zbb:
- Dynamic |VXLAN|, see :ref:`Dynamic VXLAN <dynamic-vxlan>`
- Static |VXLAN|, see :ref:`Static VXLAN <static-vxlan>`
.. _vxlan-data-networks-section-N10067-N1001F-N10001:
.. rubric:: |prereq|
Before you can create project networks on a |VXLAN| provider network, you must
define at least one network segment range.
- :ref:`Dynamic VXLAN <dynamic-vxlan>`
- :ref:`Static VXLAN <static-vxlan>`
- :ref:`Differences Between Dynamic and Static VXLAN Modes <differences-between-dynamic-and-static-vxlan-modes>`


@ -6,130 +6,107 @@
Alarm Messages - 300s Alarm Messages - 300s
===================== =====================
The system inventory and maintenance service reports system changes with
different degrees of severity. Use the reported alarms to monitor the overall
health of the system.
For more information, see :ref:`Overview
<openstack-fault-management-overview>`.
In the following tables, the severity of the alarms is represented by one or
more letters, as follows:
.. _alarm-messages-300s-ul-jsd-jkg-vp:
- C: Critical
- M: Major
- m: Minor
- W: Warning
A slash-separated list of letters is used when the alarm can be triggered with
one of several severity levels.
An asterisk \(\*\) indicates the management-affecting severity, if any. A
management-affecting alarm is one that cannot be ignored at the indicated
severity level or higher by using relaxed alarm rules during an orchestrated
patch or upgrade operation.
.. note::
Differences exist between the terminology emitted by some alarms and that
used in the |CLI|, GUI, and elsewhere in the documentation:
.. _alarm-messages-300s-ul-dsf-dxn-bhb:
- References to provider networks in alarms refer to data networks.
- References to data networks in alarms refer to physical networks.
- References to tenant networks in alarms refer to project networks.
.. _alarm-messages-300s-table-zrd-tg5-v5:
.. table:: Table 1. Alarm Messages
:widths: auto
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| Alarm ID | Description | Severity | Proposed Repair Action |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | Entity Instance ID |
+==========+======================================================================================+==========+=====================================================================================================+
| 300.003  | Networking Agent not responding. | M\* | If condition persists, attempt to clear issue by administratively locking and unlocking the Host. |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname>.agent=<agent-uuid> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.004  | No enabled compute host with connectivity to provider network. | M\* | Enable compute hosts with required provider network connectivity. |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname>.providernet=<pnet-uuid> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.005  | Communication failure detected over provider network x% for ranges y% on host z%. | M\* | Check neighbor switch port VLAN assignments. |
|          | or | | |
|          | Communication failure detected over provider network x% on host z%. | | |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | providernet=<pnet-uuid>.host=<hostname> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.010  | ML2 Driver Agent non-reachable | M\* | Monitor and if condition persists, contact next level of support. |
|          | or | | |
|          | ML2 Driver Agent reachable but non-responsive | | |
|          | or | | |
|          | ML2 Driver Agent authentication failure | | |
|          | or | | |
|          | ML2 Driver Agent is unable to sync Neutron database | | |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname>.ml2driver=<driver> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.012  | Openflow Controller connection failed. | M\* | Check cabling and far-end port configuration and status on adjacent equipment. |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname>.openflow-controller=<uri> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.013  | No active Openflow controller connections found for this network. | C, M\* | Check cabling and far-end port configuration and status on adjacent equipment. |
|          | or | | |
|          | One or more Openflow controller connections in disconnected state for this network. | | |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname>.openflow-network=<name> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.015  | No active OVSDB connections found. | C\* | Check cabling and far-end port configuration and status on adjacent equipment. |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
| 300.016  | Dynamic routing agent x% lost connectivity to peer y% | M\* | If condition persists, fix connectivity to peer. |
+          +--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+
|          | host=<hostname>,agent=<agent-uuid>,bgp-peer=<bgp-peer> |
+----------+--------------------------------------------------------------------------------------+----------+-----------------------------------------------------------------------------------------------------+


@ -14,11 +14,11 @@ This section provides the list of OpenStack related Alarms and Customer Logs
that are monitored and reported for the |prod-os| application through the
|prod| fault management interfaces.
All fault management related interfaces for displaying alarms and logs,
suppressing/unsuppressing events, and enabling :abbr:`SNMP (Simple Network
Management Protocol)` are available on the |prod| REST APIs, :abbr:`CLIs
(Command Line Interfaces)` and/or GUIs.
.. only:: partner
.. include:: ../../_includes/openstack-fault-management-overview.rest
View File
@ -178,11 +178,23 @@ Backup and restore
Container integration
---------------------
.. toctree::
:maxdepth: 2
container_integration/kubernetes/index
-------
Updates
-------
.. toctree::
:maxdepth: 2
updates/index
---------
Reference
---------
Binary file not shown. (new image, 40 KiB)
Binary file not shown. (new image, 30 KiB)
Binary file not shown. (new image, 24 KiB)
Binary file not shown. (new image, 35 KiB)
View File
@ -93,23 +93,24 @@ For more information, see :ref:`Provision SR-IOV Interfaces using the CLI
interface.
**drivername**
An optional virtual function driver to use. Valid choices are |VFIO|
and 'netdevice'. The default value is netdevice, which will cause
|SRIOV| virtual function interfaces to appear as kernel network devices
in the container. A value of '**vfio**' will cause the device to be
bound to the vfio-pci driver. |VFIO| based devices will not appear as
kernel network interfaces, but may be used by |DPDK| based
applications.
.. note::
- Applications backed by Mellanox AVS should use the
netdevice |VF| driver
- If the driver for the |VF| interface and parent |SRIOV|
interface differ, a separate data network should be created
for each interface.
.. only:: partner
.. include:: ../../../_includes/provisioning-sr-iov-vf-interfaces-using-the-cli.rest
**networks**
A list of data networks that are attached to the interface, delimited
by quotes and separated by commas; for example,
View File
@ -6,15 +6,14 @@
Add Compute Nodes to an Existing Duplex System
==============================================
You can add up to 6 compute nodes to an existing Duplex system by following
the standard procedures for adding compute nodes to a system.
.. rubric:: |prereq|
.. only:: partner
.. include:: ../../_includes/adding-compute-nodes-to-an-existing-duplex-system.rest
Before adding compute nodes to a duplex system, you can either add and
provision platform RAM and CPU cores on the controllers or reallocate RAM and
View File
@ -0,0 +1,134 @@
.. vib1596720522530
.. _configuring-a-flavor-to-use-a-generic-pci-device:
==============================================
Configure a Flavor to Use a Generic PCI Device
==============================================
To provide |VM| access to a generic |PCI| passthrough device, you must use a flavor
with an extra specification identifying the device |PCI| alias.
The Nova scheduler attempts to schedule the |VM| on a host containing the device.
If no suitable compute node is available, the error **No valid host was found**
is reported. If a suitable compute node is available, then the scheduler
attempts to instantiate the |VM| in a |NUMA| node with direct access to the
device, subject to the **PCI NUMA Affinity** extra specification.
.. caution::
When this extra spec is used, an eligible host |NUMA| node is required for
each virtual |NUMA| node in the instance. If this requirement cannot be met,
the instantiation fails.
You can use the |os-prod-hor| interface or the |CLI| to add a |PCI| alias
extra specification. From the |os-prod-hor| interface, use the **Custom
Extra Spec** selection in the **Create Flavor Extra Spec** drop-down menu. For
the **Key**, use **pci\_passthrough:alias**.
.. image:: ../figures/kho1513370501907.png
.. note::
To edit the |PCI| alias for a QuickAssist-|SRIOV| device, you can use the
Update Flavor Metadata dialog box accessible from the Flavors page. This
supports editing for a QuickAssist-|SRIOV| |PCI| alias only. It cannot be
used to edit the |PCI| Alias for GPU devices or multiple devices.
To access the Update Flavor Metadata dialog box, go to the Flavors page,
open the **Edit Flavor** drop-down menu, and then select **Update
Metadata**.
.. rubric:: |prereq|
To be available for use by |VMs|, the device must be exposed, and it must also
have a PCI alias. To expose a device, see :ref:`Exposing a Generic PCI Device
Using the CLI <exposing-a-generic-pci-device-using-the-cli>` or :ref:`Expose
a Generic PCI Device for Use by VMs
<expose-a-generic-pci-device-for-use-by-vms>`. To assign a PCI alias, see
:ref:`Configuring a PCI Alias in Nova <configuring-a-pci-alias-in-nova>`.
.. rubric:: |proc|
- Use the :command:`openstack flavor set` command to add the extra spec.
.. code-block:: none
~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="pci_alias[:number_of_devices]"
where
**<flavor\_name>**
is the name of the flavor
**<pci\_alias>**
is the PCI alias of the device
.. note::
The parameter pci\_passthrough:alias is used for both |PCI|
passthrough devices and |SRIOV| devices.
Depending on the device type, the following default |PCI| alias options
are available:
**qat-vf**
Exposes an Intel AV-ICE02 VPN Acceleration Card for |SRIOV| access.
For more information, see :ref:`SR-IOV Encryption Acceleration
<sr-iov-encryption-acceleration>`.
The following device-specific options are available for qat-vf:
qat-dh895xcc-vf
qat-c62x-vf
.. note::
Due to driver limitations, |PCI| passthrough access for the Intel
AV-ICE02 VPN Acceleration Card \(qat-pf option\) is not
supported.
**gpu**
Exposes a graphics processing unit \(GPU\) with the |PCI|-SIG
defined class code for 'Display Controller' \(0x03\).
.. note::
On a system with multiple cards that use the same default |PCI|
alias, you must assign and use a unique |PCI| alias for each one.
**<number\_of\_devices>**
is the number of |SRIOV| or |PCI| passthrough devices to expose to the |VM|.
For example, to make two QuickAssist |SRIOV| devices available to a guest:
.. code-block:: none
~(keystone_admin)$ openstack flavor set <flavor_name> --property "pci_passthrough:alias"="qat-dh895xcc-vf:2"
To make a GPU device available to a guest:
.. code-block:: none
~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1"
To make a GPU device from a specific vendor available to a guest:
.. code-block:: none
~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="nvidia-tesla-p40:1"
To make multiple |PCI| devices available, use the following command:
.. code-block:: none
~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1, qat-c62x-vf:2"
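To confirm that the extra specification was applied, you can display the
flavor's properties; the flavor name below is a placeholder:

.. code-block:: none

    ~(keystone_admin)$ openstack flavor show <flavor_name> -c properties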
View File
@ -0,0 +1,204 @@
.. wjw1596720840345
.. _configure-pci-passthrough-ethernet-interfaces:
=============================================
Configure PCI Passthrough Ethernet Interfaces
=============================================
A passthrough Ethernet interface is a physical |PCI| Ethernet |NIC| on a compute
node to which a virtual machine is granted direct access. This minimizes packet
processing delays but at the same time demands special operational
considerations.
.. rubric:: |context|
You can specify interfaces when you launch an instance.
.. rubric:: |prereq|
.. note::
To use |PCI| passthrough or |SRIOV| devices, you must have Intel VT-x and
Intel VT-d features enabled in the BIOS.
The exercise assumes that the underlying data network **group0-data0** exists
already, and that |VLAN| ID 10 is a valid segmentation ID assigned to
**project1**.
.. rubric:: |proc|
#. Log in as the **admin** user to the |os-prod-hor| interface.
#. Lock the compute node you want to configure.
#. Configure the Ethernet interface to be used as a PCI passthrough interface.
#. Select **Admin** \> **Platform** \> **Host Inventory** from the left-hand pane.
#. Select the **Hosts** tab.
#. Click the name of the compute host.
#. Select the **Interfaces** tab.
#. Click the **Edit Interface** button associated with the interface you
want to configure.
The Edit Interface dialog appears.
.. image:: ../figures/ptj1538163621289.png
Select **pci-passthrough** from the **Interface Class** drop-down, and
then select the data network to attach the interface to.
You may also need to change the |MTU|.
The interface can also be configured from the |CLI| as illustrated below:
.. code-block:: none
~(keystone_admin)$ system host-if-modify -c pci-passthrough compute-0 enp0s3
~(keystone_admin)$ system interface-datanetwork-assign compute-0 <enp0s3_interface_uuid> <group0_data0_data_network_uuid>
#. Create the **net0** project network.
Select **Admin** \> **Network** \> **Networks**, select the Networks tab, and then click **Create Network**. Fill in the Create Network dialog box as illustrated below \(a |CLI| alternative for this step is shown after the procedure\). You must ensure that:
- **project1** has access to the project network, either assigning it as
the owner, as in the illustration \(using **Project**\), or by enabling
the shared flag.
- The segmentation ID is set to 10.
.. image:: ../figures/bek1516655307871.png
Click the **Next** button to proceed to the Subnet tab.
Click the **Next** button to proceed to the Subnet Details tab.
#. Configure the access switch. Refer to the OEM documentation to configure
the access switch.
Configure the physical port on the access switch used to connect to
Ethernet interface **enp0s3** as an access port with default |VLAN| ID of 10.
Traffic across the connection is therefore untagged, and effectively
integrated into the targeted project network.
You can also use a trunk port on the access switch so that it handles
tagged packets as well. However, this opens the possibility for guest
applications to join other project networks using tagged packets with
different |VLAN| IDs, which might compromise the security of the system.
See |os-intro-doc|: :ref:`L2 Access Switches
<network-planning-l2-access-switches>` for other details regarding the
configuration of the access switch.
#. Unlock the compute node.
#. Create a neutron port with a |VNIC| type of direct-physical.
The neutron port can also be created from the |CLI|, using the following
command. First, you must set up the environment and determine the correct
network |UUID| to use with the port.
.. code-block:: none
~(keystone_admin)$ source /etc/platform/openrc
~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
~(keystone_admin)$ openstack network list | grep net0
~(keystone_admin)$ openstack port create --network <uuid_of_net0> --vnic-type direct-physical <port_name>
You have now created a port to be used when launching the server in the
next step.
#. Launch the virtual machine, specifying the port uuid created in *Step 7*.
.. note::
You must source the credentials of the same project that was selected
when creating the 'net0' network in *step 4*.
.. code-block:: none
~(keystone_admin)$ openstack server create --flavor <flavor_name> --image <image_name> --nic port-id=<port_uuid> <name>
For more information, see the Neutron documentation at:
`https://docs.openstack.org/neutron/train/admin/config-sriov.html
<https://docs.openstack.org/neutron/train/admin/config-sriov.html>`__.
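For reference, the **net0** project network from step 4 can also be created
from the |CLI|. The following is a sketch only; it assumes the data network is
registered in Neutron as physical network **group0-data0**, and it reuses the
segmentation ID 10 and **project1** from this example:

.. code-block:: none

    ~(keystone_admin)$ source /etc/platform/openrc
    ~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
    ~(keystone_admin)$ openstack network create --project project1 --provider-network-type vlan --provider-physical-network group0-data0 --provider-segment 10 net0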
.. rubric:: |result|
The new virtual machine instance is up now. It has a PCI passthrough connection
to the **net0** project network identified with |VLAN| ID 10.
.. only:: partner
.. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest
:start-after: warning-text-begin
:end-before: warning-text-end
.. rubric:: |prereq|
Access switches must be properly configured to ensure that virtual machines
using |PCI|-passthrough or |SRIOV| Ethernet interfaces have the expected
connectivity. In a common scenario, the virtual machine using these interfaces
connects to external end points only, that is, it does not connect to other
virtual machines in the same |prod-os| cluster. In this case:
.. _configure-pci-passthrough-ethernet-interfaces-ul-pz2-w4w-rr:
- Traffic between the virtual machine and the access switch can be tagged or
untagged.
- The connecting port on the access switch is part of a port-based |VLAN|.
.. only:: partner
.. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest
:start-after: vlan-bullet-1-begin
:end-before: vlan-bullet-1-end
- The port-based |VLAN| provides the required connectivity to external
switching and routing equipment needed by guest applications to establish
connections to the intended end points.
For connectivity to other virtual machines in the |prod-os| cluster the
following configuration is also required:
.. _configure-pci-passthrough-ethernet-interfaces-ul-ngs-nvw-rr:
- The |VLAN| ID used for the project network, 10 in this example, and the
default port |VLAN| ID of the access port on the switch are the same. This
ensures that incoming traffic from the virtual machine is tagged internally by
the switch as belonging to |VLAN| ID 10, and switched to the appropriate exit
ports.
.. only:: partner
.. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest
:start-after: vlan-bullet-2-begin
:end-before: vlan-bullet-2-end
.. only:: partner
.. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest
:start-after: vlan-bullet-3-begin
:end-before: vlan-bullet-3-end
View File
@ -0,0 +1,76 @@
.. akw1596720643112
.. _expose-a-generic-pci-device-for-use-by-vms:
==========================================
Expose a Generic PCI Device for Use by VMs
==========================================
You can configure generic |PCI|-passthrough or |SRIOV| devices \(i.e. not network
interface devices/cards\) so that they are accessible to |VMs|.
.. rubric:: |context|
.. note::
For network cards, you must use the network interface settings to configure
VM access. You can do this from either the |os-prod-hor| interface or
the |CLI|. For more information, see :ref:`Configuring PCI Passthrough
Ethernet Interfaces <configure-pci-passthrough-ethernet-interfaces>`.
For generic |PCI|-passthrough or |SRIOV| devices, you must:
.. _expose-a-generic-pci-device-for-use-by-vms-ul-zgb-zpc-fcb:
- on each host where an instance of the device is installed, enable the
device. For this, you can use the |os-prod-hor| interface or the |CLI|.
- assign a system-wide |PCI| alias to the device. For this, you must use the
|CLI|.
To enable devices and assign a |PCI| alias using the |CLI|, see :ref:`Exposing a
Generic PCI Device Using the CLI
<exposing-a-generic-pci-device-using-the-cli>`.
.. rubric:: |prereq|
To edit a device, you must first lock the host.
.. rubric:: |proc|
#. Select the **Devices** tab on the Host Detail page for the host.
#. Click **Edit Device**.
.. image:: ../figures/jow1452530556357.png
#. Update the information as required.
**Name**
Sets the system inventory name for the device.
**Enabled**
Controls whether the device is exposed for use by |VMs|.
#. Repeat the above steps for other hosts where the same type of device is
installed.
#. Assign a |PCI| alias.
The |PCI| alias is a system-wide setting. It is used for all devices of the
same type across multiple hosts.
For more information, see :ref:`Configuring a PCI Alias in Nova
<configuring-a-pci-alias-in-nova>`.
.. rubric:: |postreq|
After completing the steps above, unlock the host.
To access a device from a |VM|, you must configure a flavor with a reference to
the |PCI| alias. For more information, see :ref:`Configuring a Flavor to Use a
Generic PCI Device <configuring-a-flavor-to-use-a-generic-pci-device>`.
View File
@ -0,0 +1,90 @@
.. dxo1596720611892
.. _exposing-a-generic-pci-device-using-the-cli:
=========================================
Expose a Generic PCI Device Using the CLI
=========================================
For generic |PCI|-passthrough or |SRIOV| devices \(i.e. not network interface
devices or cards\), you can configure |VM| access using the |CLI|.
.. rubric:: |context|
To expose a device for |VM| access, you must
.. _exposing-a-generic-pci-device-using-the-cli-ul-zgb-zpc-fcb:
- enable the device on each host where it is installed
- assign a system-wide |PCI| alias to the device. For more information, see
:ref:`Configuring a PCI Alias in Nova <configuring-a-pci-alias-in-nova>`.
.. rubric:: |prereq|
To edit a device, you must first lock the host.
.. rubric:: |proc|
#. List the non-|NIC| devices on the host for which |VM| access is supported. Use
``-a`` to list disabled devices.
.. code-block:: none
~(keystone_admin)$ system host-device-list compute-0 -a
+------------+----------+------+-------+-------+------+--------+--------+-----------+---------+
| name | address | class| vendor| device| class| vendor | device | numa_node | enabled |
| | | id | id | id | | name | name | | |
+------------+----------+------+-------+-------+------+--------+--------+-----------+---------+
|pci_0000_05.| 0000:05:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True |
|pci_0000_06.| 0000:06:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True |
|pci_0000_00.| 0000:00:.| 0c0. | 8086 | 8d2d | USB | Intel | C610/x9| 0 | False |
+------------+----------+------+-------+-------+------+--------+--------+-----------+---------+
This list shows the |PCI| address needed to enable a device, and the device
ID and vendor ID needed to add a |PCI| Alias.
#. On each host where the device is installed, enable the device.
.. code-block:: none
~(keystone_admin)$ system host-device-modify <hostname> <pci_address> --enable=True [--name="<devicename>"]
where
**<hostname>**
is the name of the host where the device is installed
**<pci\_address>**
is the address shown in the device list
**<devicename>**
is an optional descriptive name for display purposes
For example:
.. code-block:: none
~(keystone_admin)$ system host-device-modify --name="Encryption1" --enable=True compute-0 0000:09:00.0
#. Assign a |PCI| alias.
The |PCI| alias is a system-wide setting. It is used for all devices of the
same type across multiple hosts. For more information, see
:ref:`Configuring a PCI Alias in Nova <configuring-a-pci-alias-in-nova>`.
As the change is applied, **Config-out-of-date** alarms are raised. The
alarms are automatically cleared when the change is complete.
.. rubric:: |result|
The device is added to the list of available devices.
.. rubric:: |postreq|
To access a device from a |VM|, you must configure a flavor with a reference to
the |PCI| alias. For more information, see :ref:`Configuring a Flavor to Use a
Generic PCI Device <configuring-a-flavor-to-use-a-generic-pci-device>`.
View File
@ -0,0 +1,71 @@
.. dze1596720804160
.. _generic-pci-passthrough:
=======================
Generic PCI Passthrough
=======================
.. rubric:: |prereq|
Before you can enable a device, you must lock the compute host.
If you want to enable a device that is in the inventory for pci-passthrough,
the device must be enabled and a Nova |PCI| Alias must be configured with
vendor-id, product-id and alias name.
You can use the following command from the |CLI|, to view devices that are
automatically inventoried on a host:
.. code-block:: none
~(keystone_admin)$ system host-device-list controller-0 --all
You can use the following command from the |CLI| to list the devices for a
host, for example:
.. code-block:: none
~(keystone_admin)$ system host-device-list --all controller-0
+-------------+----------+------+-------+-------+------+--------+--------+-------------+-------+
| name | address | class| vendor| device| class| vendor | device | numa_node |enabled|
| | | id | id | id | | name | name | | |
+------------+----------+-------+-------+-------+------+--------+--------+-------------+-------+
| pci_0000_05.| 0000:05:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True |
| pci_0000_06.| 0000:06:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True |
+-------------+----------+------+-------+-------+------+--------+--------+-------------+-------+
The ``--all`` option displays both enabled and disabled devices.
.. note::
Depending on the system, not all devices in this list can be accessed via
pci-passthrough, based on hardware/driver limitations.
To enable or disable a device using the |CLI|, do the following:
.. rubric:: |prereq|
To edit a device, you must first lock the host.
.. rubric:: |proc|
#. Enable the device.
.. code-block:: none
~(keystone_admin)$ system host-device-modify <compute_node> <pci_address> --enable=True
#. Add a |PCI| alias.
For more information, see :ref:`Configuring a PCI Alias in Nova
<configuring-a-pci-alias-in-nova>`.
.. rubric:: |postreq|
Refer to :ref:`Configuring a Flavor to Use a Generic PCI Device
<configuring-a-flavor-to-use-a-generic-pci-device>` for details on how to
launch the |VM| with a |PCI| interface to this Generic |PCI| Device.
View File
@ -5,5 +5,24 @@ Contents
.. toctree::
:maxdepth: 1
node-management-overview
adding-compute-nodes-to-an-existing-duplex-system
using-labels-to-identify-openstack-nodes
-------------------------
PCI Device Access for VMs
-------------------------
.. toctree::
:maxdepth: 1
sr-iov-encryption-acceleration
configuring-pci-passthrough-ethernet-interfaces
pci-passthrough-ethernet-interface-devices
configuring-a-flavor-to-use-a-generic-pci-device
generic-pci-passthrough
pci-device-access-for-vms
pci-sr-iov-ethernet-interface-devices
exposing-a-generic-pci-device-for-use-by-vms
exposing-a-generic-pci-device-using-the-cli
View File
@ -0,0 +1,21 @@
.. zmd1590003300772
.. _node-management-overview:
========
Overview
========
You can add OpenStack compute nodes to an existing |AIO| Duplex system, and use
labels to identify OpenStack Nodes.
Guidelines for |VMs| in a Duplex system remain unchanged.
For more information on using labels to identify OpenStack Nodes, see
:ref:`Using Labels to Identify OpenStack Nodes
<using-labels-to-identify-openstack-nodes>`.
For more information on adding compute nodes to an existing Duplex system, see
:ref:`Adding Compute Nodes to an Existing Duplex System
<adding-compute-nodes-to-an-existing-duplex-system>`.
View File
@ -0,0 +1,64 @@
.. sip1596720928269
.. _pci-device-access-for-vms:
=========================
PCI Device Access for VMs
=========================
You can provide |VMs| with |PCI| passthrough or |SRIOV| access to network interface
cards and other |PCI| devices.
.. note::
To use |PCI| passthrough or |SRIOV| devices, you must have Intel VT-x and
Intel VT-d features enabled in the BIOS.
.. note::
When starting a |VM| whose interfaces have **binding\_vif\_type**, the
flavor must include the extra specification hw:mem\_page\_size=large \(see
the example after the following list\), where the page size is one of the
following:
.. _pci-device-access-for-vms-ul-cz3-mtd-z4b:
- small: Requests the smallest available size on the compute node, which
is always 4KiB of regular memory.
- large: Requests the largest available huge page size, 1GiB or 2MiB.
- any: Requests any available size, including small pages. Cloud platform
uses the largest available size, 1GiB, then 2MiB, and then 4KiB.
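A minimal sketch of setting this extra specification on a flavor; the flavor
name is a placeholder:

.. code-block:: none

    ~(keystone_admin)$ openstack flavor set <flavor_name> --property hw:mem_page_size=large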
For a network interface card, you can provide |VM| access by configuring the
network interface. For more information, see :ref:`Configuring PCI Passthrough
Ethernet Interfaces <configure-pci-passthrough-ethernet-interfaces>`.
For other types of device, you can provide |VM| access by assigning a |PCI| alias
to the device, and then referencing the |PCI| alias in a flavor extra
specification. For more information, see :ref:`Expose a Generic PCI Device
for Use by VMs <expose-a-generic-pci-device-for-use-by-vms>` and
:ref:`Configuring a Flavor to Use a Generic PCI Device
<configuring-a-flavor-to-use-a-generic-pci-device>`.
- :ref:`PCI Passthrough Ethernet Interface Devices <pci-passthrough-ethernet-interface-devices>`
- :ref:`Configuring PCI Passthrough Ethernet Interfaces <configure-pci-passthrough-ethernet-interfaces>`
- :ref:`PCI SR-IOV Ethernet Interface Devices <pci-sr-iov-ethernet-interface-devices>`
- :ref:`Generic PCI Passthrough <generic-pci-passthrough>`
- :ref:`SR-IOV Encryption Acceleration <sr-iov-encryption-acceleration>`
- :ref:`Expose a Generic PCI Device for Use by VMs <expose-a-generic-pci-device-for-use-by-vms>`
- :ref:`Exposing a Generic PCI Device Using the CLI <exposing-a-generic-pci-device-using-the-cli>`
- :ref:`Configure a Flavor to Use a Generic PCI Device <configuring-a-flavor-to-use-a-generic-pci-device>`
View File
@ -0,0 +1,64 @@
.. pqu1596720884619
.. _pci-passthrough-ethernet-interface-devices:
==========================================
PCI Passthrough Ethernet Interface Devices
==========================================
For all purposes, a |PCI| passthrough interface behaves as if it were physically
attached to the virtual machine.
Therefore, any potential throughput limitations coming from the virtualized
environment, such as the ones introduced by internal copying of data buffers,
are eliminated. However, by bypassing the virtualized environment, the use of
|PCI| passthrough Ethernet devices introduces several restrictions that must be
taken into consideration. They include:
.. _pci-passthrough-ethernet-interface-devices-ul-mjs-m52-tp:
- no support for |LAG|, |QoS|, |ACL|, or host interface monitoring
- no support for live migration
.. only:: partner
.. include:: ../../_includes/pci-passthrough-ethernet-interface-devices.rest
:start-after: avs-bullet-3-begin
:end-before: avs-bullet-3-end
.. only:: starlingx
A passthrough interface is attached directly to the provider network's
access switch. Therefore, proper routing of traffic to connect the
passthrough interface to a particular project network depends entirely on
the |VLAN| tagging options configured on both the passthrough interface and
the access port on the switch.
.. only:: partner
.. include:: ../../_includes/pci-passthrough-ethernet-interface-devices.rest
:start-after: avs-text-begin
:end-before: avs-text-end
The access switch routes incoming traffic based on a |VLAN| ID, which ultimately
determines the project network to which the traffic belongs. The |VLAN| ID is
either explicit, as found in incoming tagged packets, or implicit, as defined
by the access port's default |VLAN| ID when the incoming packets are untagged. In
both cases the access switch must be configured to process the proper |VLAN| ID,
which therefore has to be known in advance.
.. caution::
On cold migration, a |PCI| passthrough interface receives a new |MAC| address,
and therefore a new **ethx** interface. The IP address is retained.
In the following example a new virtual machine is launched by user **user1** on
project **project1**, with a passthrough interface connected to the project
network **net0** identified with |VLAN| ID 10. See :ref:`Configure PCI
Passthrough Ethernet Interfaces <configure-pci-passthrough-ethernet-interfaces>`.
View File
@ -0,0 +1,61 @@
.. vic1596720744539
.. _pci-sr-iov-ethernet-interface-devices:
=====================================
PCI SR-IOV Ethernet Interface Devices
=====================================
A |SRIOV| ethernet interface is a physical |PCI| ethernet |NIC| that implements
hardware-based virtualization mechanisms to expose multiple virtual network
interfaces that can be used by one or more virtual machines simultaneously.
The |PCI|-SIG Single Root I/O Virtualization and Sharing \(|SRIOV|\) specification
defines a standardized mechanism to create individual virtual ethernet devices
from a single physical ethernet interface. For each exposed virtual ethernet
device, formally referred to as a Virtual Function \(VF\), the |SRIOV| interface
provides separate management memory space, work queues, interrupts resources,
and |DMA| streams, while utilizing common resources behind the host interface.
Each VF therefore has direct access to the hardware and can be considered to be
an independent ethernet interface.
When compared with a |PCI| Passthrough ethernet interface, a |SRIOV| ethernet
interface:
.. _pci-sr-iov-ethernet-interface-devices-ul-tyq-ymg-rr:
- Provides benefits similar to those of a |PCI| Passthrough ethernet interface,
including lower latency packet processing.
- Scales up more easily in a virtualized environment by providing multiple
VFs that can be attached to multiple virtual machine interfaces.
- Shares the same limitations, including the lack of support for |LAG|, |QoS|,
|ACL|, and live migration.
- Has the same requirements regarding the |VLAN| configuration of the access
switches.
- Provides a similar configuration workflow when used on |prod-os|.
The configuration of a |PCI| |SRIOV| ethernet interface is identical to
:ref:`Configure PCI Passthrough ethernet Interfaces
<configure-pci-passthrough-ethernet-interfaces>` except that
.. _pci-sr-iov-ethernet-interface-devices-ul-ikt-nvz-qmb:
- you use **pci-sriov** instead of **pci-passthrough** when defining the
network type of an interface
- the segmentation ID of the project network\(s\) used is more significant
here since this identifies the particular |VF| of the |SRIOV| interface
- when creating a neutron port backed by an |SRIOV| |VF|, you must use
``--vnic-type direct``, as shown in the example below
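As an illustration only, with the network, port, flavor, and image names as
placeholders, such a port can be created and attached at launch as follows:

.. code-block:: none

    ~(keystone_admin)$ openstack port create --network net0 --vnic-type direct sriov-port0
    ~(keystone_admin)$ openstack server create --flavor <flavor_name> --image <image_name> --nic port-id=<port_uuid> <server_name>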
View File
@ -0,0 +1,33 @@
.. psa1596720683716
.. _sr-iov-encryption-acceleration:
==============================
SR-IOV Encryption Acceleration
==============================
|prod-os| supports |PCI| |SRIOV| access for encryption acceleration.
|prod-os| supports |SRIOV| access for acceleration devices based on
Intel QuickAssist™ technology, specifically Coleto Creek 8925/8950, and C62X
chipset. Other QuickAssist™ devices are currently not supported.
If acceleration devices have to be used, the devices have to be present as
virtual devices \(qat-dh895xcc-vf or qat-c62x-vf\) on the |PCI| bus. Physical
devices \(qat-pf\) are currently not supported.
If hardware is present \(for example, Intel AV-ICE02 VPN Acceleration Card\) on
an available host, you can provide |VMs| with |PCI| passthrough access to one or
more of the supported virtual |SRIOV| acceleration devices to improve
performance for encrypted communications.
.. caution::
Live migration is not supported for instances using |SRIOV| devices.
To expose the device to |VMs|, see :ref:`Exposing a Generic PCI Device for Use
by VMs <expose-a-generic-pci-device-for-use-by-vms>`.
.. note::
To use |PCI| passthrough or |SRIOV| devices, you must have Intel VT-x and
Intel VT-d features enabled in the BIOS.
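To confirm that supported QuickAssist virtual functions are present on a host
before exposing them, you can list the host device inventory \(the same
command described in :ref:`Expose a Generic PCI Device Using the CLI
<exposing-a-generic-pci-device-using-the-cli>`\) and look for the DH895XCC or
C62x virtual function entries; for example:

.. code-block:: none

    ~(keystone_admin)$ system host-device-list compute-0 -a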
View File
@ -2,25 +2,53 @@
.. rho1557409702625
.. _using-labels-to-identify-openstack-nodes:
========================================
Use Labels to Identify OpenStack Nodes
========================================
The |prod-os| application is deployed on the nodes of the |prod| based on node
labels.
.. rubric:: |context|
Prior to initially installing the |prod-os| application or when adding nodes to
a |prod-os| deployment, you need to label the nodes appropriately for their
OpenStack role.
.. _using-labels-to-identify-openstack-nodes-table-xyl-qmy-thb:
.. only:: starlingx
.. table:: Table 1. Common OpenStack Labels
:widths: auto
+-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Label | Worker/Controller | Description |
+=============================+===========================+=======================================================================================================================================================================+
| **openstack-control-plane** | - Controller | Identifies a node to deploy openstack controller services on. |
| | | |
| | - All-in-One Controller | |
+-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **openstack-compute-node** | Worker | Identifies a node to deploy openstack compute agents on. |
| | | |
| | | .. note:: |
| | | Adding or removing this label, or removing a node with this label from a cluster, triggers the regeneration and application of the helm chart override by Armada. |
+-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **sriov** | - Worker | Identifies a node as supporting sr-iov. |
| | | |
| | - All-in-One Controller | |
+-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. only:: partner
.. include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest
:start-after: table-1-of-contents-begin
:end-before: table-1-of-contents-end
For more information, see |node-doc|: :ref:`Configuring Node Labels from the CLI <assigning-node-labels-from-the-cli>`.
.. rubric:: |prereq|
@ -28,4 +56,40 @@ Nodes must be locked before labels can be assigned or removed.
.. rubric:: |proc|
.. only:: starlingx
#. To assign Kubernetes labels to identify compute-0 as a compute node with |SRIOV|, use the following command:
.. code-block:: none
~(keystone_admin)$ system host-label-assign compute-0 openstack-compute-node=enabled sriov=enabled
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | 2909d775-cd6c-4bc1-8268-27499fe38d5e |
| host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 |
| label_key | openstack-compute-node |
| label_value | enabled |
+-------------+--------------------------------------+
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| uuid | d8e29e62-4173-4445-886c-9a95b0d6fee1 |
| host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 |
| label_key | sriov |
| label_value | enabled |
+-------------+--------------------------------------+
#. To remove the labels from the host, do the following.
.. code-block:: none
~(keystone_admin)$ system host-label-remove compute-0 openstack-compute-node sriov
Deleted host label openstack-compute-node for host compute-0
Deleted host label sriov for host compute-0
.. only:: partner
.. include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest
:start-after: table-1-of-contents-end
View File
@ -31,12 +31,12 @@ You can allocate root disk storage for virtual machines using the following:
The use of Cinder volumes or ephemeral storage is determined by the **Instance
Boot Source** setting when an instance is launched. Boot from volume results in
the use of a Cinder volume, while Boot from image results in the use of
Ephemeral storage.
.. note::
On systems with one or more single-disk compute hosts configured with local
instance backing, the use of Boot from volume for all |VMs| is strongly
recommended. This helps prevent the use of local Ephemeral storage on these
hosts.
On systems without dedicated storage hosts, Cinder-backed persistent storage
@ -52,27 +52,27 @@ On systems with dedicated hosts, Cinder storage is provided using Ceph-backed
Ephemeral and Swap Disk Storage for VMs
---------------------------------------
Storage for |VM| Ephemeral and swap disks, and for Ephemeral boot disks if the
|VM| is launched from an image rather than a volume, is provided using the
**nova-local** local volume group defined on compute hosts.
The **nova-local** group provides either local Ephemeral storage using
|CoW|-image-backed storage resources on compute hosts, or remote Ephemeral
storage, using Ceph-backed resources on storage hosts. You must configure the
storage backing type at installation before you can unlock a compute host. The
default type is image-backed local Ephemeral storage. You can change the
configuration after installation.
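The mechanism for changing the instance backing after installation depends on
the release. The following is a sketch only, assuming the
:command:`system host-lvg-modify` interface with an instance-backing parameter
and a host named compute-0; the exact option name may differ in your release:

.. code-block:: none

    ~(keystone_admin)$ system host-lock compute-0
    ~(keystone_admin)$ system host-lvg-modify -b remote compute-0 nova-local
    ~(keystone_admin)$ system host-unlock compute-0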
.. xbooklink For more information, see |stor-doc|: :ref:`Working with Local Volume Groups <working-with-local-volume-groups>`.
.. caution::
On a compute node with a single disk, local Ephemeral storage uses the root
disk. This can adversely affect the disk I/O performance of the host. To
avoid this, ensure that single-disk compute nodes use remote Ceph-backed
storage if available. If Ceph storage is not available on the system, or is
not used for one or more single-disk compute nodes, then you must ensure
that all VMs on the system are booted from Cinder volumes and do not use
Ephemeral or swap disks.
On |prod-os| Simplex or Duplex systems that use a single disk, the same
consideration applies. Since the disk also provides Cinder support, adverse
@ -83,11 +83,11 @@ The backing type is set individually for each host using the **Instance
Backing** parameter on the **nova-local** local volume group.
**Local CoW Image backed**
This provides local Ephemeral storage using a |CoW| sparse-image-format
backend, to optimize launch and delete performance.
**Remote RAW Ceph storage backed**
This provides remote Ephemeral storage using a Ceph backend on a system
with storage nodes, to optimize migration capabilities. Ceph backing uses a
Ceph storage pool configured from the storage host resources.
@ -96,17 +96,18 @@ storage by setting a flavor extra specification.
.. xbooklink For more information, see OpenStack Configuration and Management: :ref:`Specifying the Storage Type for VM Ephemeral Disks <specifying-the-storage-type-for-vm-ephemeral-disks>`.
.. caution::
Unlike Cinder-based storage, Ephemeral storage does not persist if the
instance is terminated or the compute node fails.
.. _block-storage-for-virtual-machines-d29e17:
In addition, for local Ephemeral storage, migration and resizing support
depends on the storage backing type specified for the instance, as well as
the boot source selected at launch.
The **nova-local** storage type affects migration behavior. Live migration is
not always supported for |VM| disks using local Ephemeral storage.
.. xbooklink For more information, see :ref:`VM Storage Settings for Migration,
Resize, or Evacuation <vm-storage-settings-for-migration-resize-or-evacuation>`.
View File
@ -25,6 +25,7 @@ Data networks
network-planning-data-networks
physical-network-planning
resource-planning
.. toctree::
:maxdepth: 1
View File
@ -0,0 +1,84 @@
.. jow1454003783557
.. _resource-planning:
==================
Resource Placement
==================
.. only:: starlingx
For |VMs| requiring maximum determinism and throughput, the |VM| must be
placed in the same NUMA node as all of its resources, including |VM|
memory, |NICs|, and any other resource such as |SRIOV| or |PCI|-Passthrough
devices.
VNF 1 and VNF 2 in the example figure are examples of |VMs| deployed for
maximum throughput with |SRIOV|.
.. only:: starlingx
A |VM| such as VNF 6 in NUMA-REF will not have the same performance as VNF
1 and VNF 2. There are multiple ways to maximize performance for VNF 6 in
this case:
.. From NUMA-REF
.. xbooklink :ref:`VM scheduling and placement - NUMA
architecture <vm-scheduling-and-placement-numa-architecture>`
.. only:: partner
.. include:: ../../_includes/resource-planning.rest
:start-after: avs-text-1-begin
:end-before: avs-text-2-end
.. only:: partner
.. include:: ../../_includes/resource-planning.rest
:start-after: avs-text-2-begin
:end-before: avs-text-2-end
.. _resource-planning-ul-tcb-ssz-55:
.. only:: partner
.. include:: ../../_includes/resource-planning.rest
:start-after: avs-text-1-end
If accessing |PCIe| devices directly from a |VM| using |PCI|-Passthrough or
|SRIOV|, maximum performance can only be achieved by pinning the |VM| cores
to the same NUMA node as the |PCIe| device. For example, VNF1 and VNF2
will have optimum SR-IOV performance if deployed on NUMA node 0 and VNF6
will have maximum |PCI|-Passthrough performance if deployed in NUMA node 1.
Options for controlling access to |PCIe| devices are:
.. _resource-planning-ul-ogh-xsz-55:
- Use pci\_numa\_affinity flavor extra specs to force VNF6 to be scheduled on
NUMA nodes where the |PCI| device is running. This is the recommended option
because it does not require prior knowledge of which socket a |PCI| device
resides on. The affinity may be **strict** or **prefer** \(see the example after this list\):
- **Strict** affinity guarantees scheduling on the same NUMA node as a
|PCIe| Device or the VM will not be scheduled.
- **Prefer** affinity uses best effort so it will only schedule the VM on
a NUMA node if no NUMA nodes with that |PCIe| device are available. Note
that prefer mode does not provide the same performance or determinism
guarantees as strict, but may be good enough for some applications.
- Pin the VM to the NUMA node 0 with the |PCI| device using flavor extra
specs or image properties. This will force the scheduler to schedule the VM
on NUMA node 0. However, this requires knowledge of which cores the
applicable |PCIe| devices run on and does not work well unless all nodes
have that type of |PCIe| node attached to the same socket.
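As a point of reference only, upstream Nova expresses this affinity through
the ``hw:pci_numa_affinity_policy`` flavor extra specification \(with values
such as ``required`` or ``preferred``\); the exact key and values available
depend on the release in use. A sketch:

.. code-block:: none

    ~(keystone_admin)$ openstack flavor set <flavor_name> --property hw:pci_numa_affinity_policy=required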
View File
@ -17,14 +17,14 @@ Storage on controller hosts
.. _storage-configuration-storage-on-controller-hosts:
The controllers provide storage for the |prod-os|'s OpenStack Controller
Services through a combination of local container Ephemeral disk, |PVCs| backed
by Ceph and a containerized HA mariadb deployment.
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, they also provide persistent block storage for
persistent |VM| volumes \(Cinder\), storage for |VM| images \(Glance\), and
storage for |VM| remote Ephemeral volumes \(Nova\). On All-in-One Simplex or
Duplex systems, the controllers also provide nova-local storage for Ephemeral
|VM| volumes.
On systems configured for controller storage, the master/controller's root disk
@ -51,7 +51,7 @@ Glance, Cinder, and remote Nova storage
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, this small Ceph cluster on the controller provides
Glance image storage, Cinder block storage, Cinder backup storage, and Nova
remote Ephemeral block storage. For more information, see :ref:`Block Storage
for Virtual Machines <block-storage-for-virtual-machines>`.
.. _storage-configuration-storage-on-controller-hosts-section-N101BB-N10029-N10001:
@ -61,7 +61,7 @@ Nova-local storage
******************
Controllers on |prod-os| Simplex or Duplex systems incorporate the Compute
function, and therefore provide **nova-local** storage for Ephemeral disks. On
other systems, **nova-local** storage is provided by compute hosts. For more
information about this type of storage, see :ref:`Storage on Compute Hosts
<storage-on-compute-hosts>` and :ref:`Block Storage for Virtual Machines
@ -78,17 +78,16 @@ Storage on storage hosts
------------------------
|prod-os| creates default Ceph storage pools for Glance images, Cinder volumes,
Cinder backups, and Nova Ephemeral data. For more information, see the
:ref:`Platform Storage Configuration <storage-configuration-storage-resources>`
guide for details on configuring the internal Ceph cluster on either controller
or storage hosts.
.. _storage-on-compute-hosts:
------------------------
Storage on compute hosts
------------------------
Compute-labelled worker hosts can provide Ephemeral storage for |VM| disks.
.. note::
On All-in-One Simplex or Duplex systems, compute storage is provided using
View File
@ -7,7 +7,7 @@ VM Storage Settings for Migration, Resize, or Evacuation
========================================================
The migration, resize, or evacuation behavior for an instance depends on the
type of Ephemeral storage used.
.. note::
Live migration behavior can also be affected by flavor extra
@ -16,6 +16,7 @@ type of ephemeral storage used.
The following table summarizes the boot and local storage configurations needed
to support various behaviors.
.. _vm-storage-settings-for-migration-resize-or-evacuation-table-wmf-qdh-v5:
.. table::
@ -43,16 +44,16 @@ to support various behaviors.
In addition to the behavior summarized in the table, system-initiated cold
migrate \(e.g. when locking a host\) and evacuate restrictions may be applied
if a |VM| with a large root disk size exists on the host. For a Local CoW Image
Backed \(local\_image\) storage type, the VIM can cold migrate or evacuate
|VMs| with disk sizes up to 60 GB.
.. note::
The criteria for live migration are independent of disk size.
.. note::
The **Local Storage Backing** is a consideration only for instances that
use local Ephemeral or swap disks.
The boot configuration for an instance is determined by the **Instance Boot
Source** selected at launch.
View File
@ -32,3 +32,14 @@ Kubernetes
:maxdepth: 2
kubernetes/index
---------
OpenStack
---------
.. check what put here
.. toctree::
:maxdepth: 2
openstack/index
View File
@ -0,0 +1,22 @@
.. rhv1589993884379
.. _access-using-the-default-set-up:
===============================
Access Using the Default Set-up
===============================
Upon installation, you can access the system using the local |CLI|, via the
local console and/or ssh, and the |os-prod-hor-long|, the |prod-os|
administrative web service.
For details on the local |CLI|, see :ref:`Use Local CLIs <use-local-clis>`.
For convenience, the |prod-os| administrative web service, Horizon, is
initially made available on node port 31000, i.e. at URL
http://<oam-floating-ip-address>:31000.
After setting the domain name \(see :ref:`Configure Remote CLIs
<configure-remote-clis-and-clients>`\), the |os-prod-hor-long| is accessed at a
different URL.
View File
@ -0,0 +1,113 @@
.. jcc1605727727548
.. _config-and-management-using-container-backed-remote-clis-and-clients:
============================================
Use Container-backed Remote CLIs and Clients
============================================
Remote openstack |CLIs| can be used in any shell after sourcing the generated
remote |CLI|/client RC file. This RC file sets up the required environment
variables and aliases for the remote |CLI| commands.
.. rubric:: |context|
.. note::
If you specified repositories that require authentication when configuring
the container-backed remote |CLIs|, you must perform a :command:`docker
login` to that repository before using remote |CLIs| for the first time.
.. rubric:: |prereq|
.. _config-and-management-using-container-backed-remote-clis-and-clients-ul-lgr-btf-14b:
- Consider adding the following command to your .login or shell rc file, such
that your shells will automatically be initialized with the environment
variables and aliases for the remote |CLI| commands. Otherwise, execute it before
proceeding:
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# source remote_client_platform.sh
- You must have completed the configuration steps in :ref:`Configure Remote
CLIs <configure-remote-clis-and-clients>` before proceeding.
.. rubric:: |proc|
- Test workstation access to the remote OpenStack |CLI|.
Enter your OpenStack password when prompted.
.. note::
The first usage of a command will be slow as it requires that the
docker image supporting the remote clients be pulled from the remote
registry.
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# source remote_client_openstack.sh
Please enter your OpenStack Password for project admin as user admin:
root@myclient:/home/user/remote_cli_wd# openstack endpoint list
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+
| 0342460b877d4d0db407580a2bb13733 | RegionOne | glance | image | True | internal | http://glance.openstack.svc.cluster.local/ |
| 047a2a63a53a4178b8ae1d093487e99e | RegionOne | keystone | identity | True | internal | http://keystone.openstack.svc.cluster.local/v3 |
| 05d5d4bbffb842fea0f81c9eb2784f05 | RegionOne | keystone | identity | True | public | http://keystone.openstack.svc.cluster.local/v3 |
| 07195197201441f9b065dde45c94ef2b | RegionOne | keystone | identity | True | admin | http://keystone.openstack.svc.cluster.local/v3 |
| 0f5c6d0bc626409faedb207b84998e74 | RegionOne | heat-cfn | cloudformation | True | admin | http://cloudformation.openstack.svc.cluster.local/v1 |
| 16806fa22ca744298e5a7ce480bcb885 | RegionOne | cinderv2 | volumev2 | True | admin | http://cinder.openstack.svc.cluster.local/v2/%(tenant_id)s |
| 176cd2168303457fbaf24fca96c6195e | RegionOne | neutron | network | True | admin | http://neutron.openstack.svc.cluster.local/ |
| 21bd7488f8e44a9787f7b3301e666da8 | RegionOne | heat | orchestration | True | admin | http://heat.openstack.svc.cluster.local/v1/%(project_id)s |
| 356fa0758af44a72adeec421ccaf2f2a | RegionOne | nova | compute | True | admin | http://nova.openstack.svc.cluster.local/v2.1/%(tenant_id)s |
| 35a42c23cb8841958885b8b01defa839 | RegionOne | fm | faultmanagement | True | admin | http://fm.openstack.svc.cluster.local/ |
| 37dfe2902a834efdbdcd9f2b9cf2c6e7 | RegionOne | cinder | volume | True | internal | http://cinder.openstack.svc.cluster.local/v1/%(tenant_id)s |
| 3d94abf91e334a74bdb01d8fad455a38 | RegionOne | cinderv2 | volumev2 | True | public | http://cinder.openstack.svc.cluster.local/v2/%(tenant_id)s |
| 433f1e8860ff4d57a7eb64e6ae8669bd | RegionOne | cinder | volume | True | public | http://cinder.openstack.svc.cluster.local/v1/%(tenant_id)s |
| 454b21f41806464580a1f6290cb228ec | RegionOne | placement | placement | True | public | http://placement.openstack.svc.cluster.local/ |
| 561be1aa00da4e4fa64791110ed99852 | RegionOne | heat-cfn | cloudformation | True | public | http://cloudformation.openstack.svc.cluster.local/v1 |
| 6068407def6b4a38b862c89047319f77 | RegionOne | cinderv3 | volumev3 | True | admin | http://cinder.openstack.svc.cluster.local/v3/%(tenant_id)s |
| 77e886bc903a4484a25944c1e99bdf1f | RegionOne | nova | compute | True | internal | http://nova.openstack.svc.cluster.local/v2.1/%(tenant_id)s |
| 7c3e0ce3b69d45878c1152473719107c | RegionOne | fm | faultmanagement | True | internal | http://fm.openstack.svc.cluster.local/ |
+----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+
root@myclient:/home/user/remote_cli_wd# openstack volume list --all-projects
+--------------------------------------+-----------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+-----------+-----------+------+-------------+
| f2421d88-69e8-4e2f-b8aa-abd7fb4de1c5 | my-volume | available | 8 | |
+--------------------------------------+-----------+-----------+------+-------------+
root@myclient:/home/user/remote_cli_wd#
.. note::
Some commands used by remote |CLI| are designed to leave you in a shell
prompt, for example:
.. code-block:: none
root@myclient:/home/user/remote_cli_wd# openstack
In some cases the mechanism for identifying commands that should leave
you at a shell prompt does not identify them correctly. If you
encounter such scenarios, you can force-enable or disable the shell
options using the <FORCE\_SHELL> or <FORCE\_NO\_SHELL> variables before
the command.
You cannot use both variables at the same time.
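For example, a sketch of overriding the shell behaviour for a single command
\(the inline variable placement shown here is an assumption based on the
description above\):
.. code-block:: none

   root@myclient:/home/user/remote_cli_wd# FORCE_NO_SHELL=true openstack server list
   root@myclient:/home/user/remote_cli_wd# FORCE_SHELL=true openstack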
- If you need to run a remote |CLI| command that references a local file, then
that file must be copied to or created in the working directory specified on
the ./config\_client.sh command and referenced as under /wd/ in the actual
command.
For example:
.. code-block:: none
root@myclient:/home/user# cd $HOME/remote_cli_wd
root@myclient:/home/user/remote_cli_wd# openstack image create --public
--disk-format qcow2 --container-format bare --file ubuntu.qcow2
ubuntu_image

View File

@ -0,0 +1,182 @@
.. fvv1597424560931
.. _configure-remote-clis-and-clients:
=====================
Configure Remote CLIs
=====================
The |prod-os| command lines can be accessed from remote computers running
Linux, MacOS, and Windows.
.. rubric:: |context|
This functionality is made available using a docker image for connecting to the
|prod-os| remotely. This docker image is pulled as required by configuration
scripts.
.. rubric:: |prereq|
You must have Docker installed on the remote systems you connect from. For more
information on installing Docker, see `https://docs.docker.com/install/
<https://docs.docker.com/install/>`__. For Windows remote workstations, Docker
is only supported on Windows 10.
For Windows remote workstations, you must run the following commands from a
Cygwin terminal. See `https://www.cygwin.com/ <https://www.cygwin.com/>`__ for
more information about the Cygwin project.
For Windows remote workstations, you must also have :command:`winpty`
installed. Download the latest release tarball for Cygwin from
`https://github.com/rprichard/winpty/releases
<https://github.com/rprichard/winpty/releases>`__. After downloading the
tarball, extract it to any location and change the Windows <PATH> variable to
include its bin folder from the extracted winpty folder.
The following procedure shows how to configure the Container-backed Remote
|CLIs| for OpenStack remote access.
.. rubric:: |proc|
.. _configure-remote-clis-and-clients-steps-fvl-n4d-tkb:
#. Copy the remote client tarball file from |dnload-loc| to the remote
workstation, and extract its content.
- The tarball is available from the |prod-os| area on |dnload-loc|.
- You can extract the tarball's contents anywhere on your workstation system.
.. parsed-literal::
$ cd $HOME
$ tar xvf |prefix|-remote-clients-<version>.tgz
#. Download the user/tenant **openrc** file from the |os-prod-hor-long| to the
remote workstation.
#. Log in to |os-prod-hor| interface as the user and tenant that you want
to configure remote access for.
In this example, we use the 'admin' user in the 'admin' tenant.
#. Navigate to **Project** \> **API Access** \> **Download OpenStack RC File**.
#. Select **OpenStack RC File**.
The file admin-openrc.sh downloads.
#. On the remote workstation, configure the client access.
#. Change to the location of the extracted tarball.
.. parsed-literal::
$ cd $HOME/|prefix|-remote-clients-<version>/
#. Create a working directory that will be mounted by the container
implementing the remote |CLIs|.
.. code-block:: none
$ mkdir -p $HOME/remote_cli_wd
#. Run the :command:`configure\_client.sh` script.
.. parsed-literal::
$ ./configure_client.sh -t openstack -r admin-openrc.sh -w
$HOME/remote_cli_wd -p
625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1
If you specify repositories that require authentication, as shown
above, you must remember to perform a :command:`docker login` to that
repository before using remote |CLIs| for the first time.
The options for configure\_client.sh are:
**-t**
The type of client configuration. The options are platform \(for
|prod-long| |CLI| and clients\) and openstack \(for
|prod-os| application |CLI| and clients\).
The default value is platform.
**-r**
The user/tenant RC file to use for 'openstack' |CLI| commands.
The default value is admin-openrc.sh.
**-o**
The remote |CLI|/workstation RC file generated by this script.
This RC file needs to be sourced in the shell, to setup required
environment variables and aliases, before running any remote |CLI|
commands.
For the platform client setup, the default is
remote\_client\_platform.sh. For the openstack application client
setup, the default is remote\_client\_openstack.sh.
**-w**
The working directory that will be mounted by the container
implementing the remote |CLIs|. When using the remote |CLIs|, any files
passed as arguments to the remote |CLI| commands need to be in this
directory in order for the container to access the files. The
default value is the directory from which the
:command:`configure\_client.sh` command was run.
**-p**
Override the container image for the platform |CLI| and clients.
By default, the platform |CLIs| and clients container image is pulled
from docker.io/starlingx/stx-platformclients.
For example, to use the container images from the |prod| |AWS| ECR:
.. parsed-literal::
$ ./configure_client.sh -t platform -r admin-openrc.sh -k
admin-kubeconfig -w $HOME/remote_cli_wd -p
625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1
If you specify repositories that require authentication, you must
first perform a :command:`docker login` to that repository before
using remote |CLIs|.
**-a**
Override the OpenStack application image.
By default, the OpenStack |CLIs| and clients container image is
pulled from docker.io/starlingx/stx-openstackclients.
The :command:`configure-client.sh` command will generate a
remote\_client\_openstack.sh RC file. This RC file needs to be sourced
in the shell to set up required environment variables and aliases
before any remote |CLI| commands can be run.
#. Copy the file remote\_client\_platform.sh to $HOME/remote\_cli\_wd
.. rubric:: |postreq|
After configuring the |prod-os| container-backed remote |CLIs|/clients, the
remote |prod-os| |CLIs| can be used in any shell after sourcing the generated
remote |CLI|/client RC file. This RC file sets up the required environment
variables and aliases for the remote |CLI| commands.
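For example, a minimal session on the remote workstation might look like the
following \(sketch; assumes the default RC file name and the working directory
used above\):
.. code-block:: none

   root@myclient:/home/user# cd $HOME/remote_cli_wd
   root@myclient:/home/user/remote_cli_wd# source remote_client_openstack.sh
   Please enter your OpenStack Password for project admin as user admin:
   root@myclient:/home/user/remote_cli_wd# openstack server list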
.. note::
Consider adding this command to your .login or shell rc file, such that
your shells will automatically be initialized with the environment
variables and aliases for the remote |CLI| commands.
See :ref:`Use Container-backed Remote CLIs and Clients
<config-and-management-using-container-backed-remote-clis-and-clients>` for
details.

View File

@ -0,0 +1,21 @@
---------
OpenStack
---------
=================
Access the System
=================
.. toctree::
:maxdepth: 1
security-overview
access-using-the-default-set-up
use-local-clis
update-the-domain-name
configure-remote-clis-and-clients
config-and-management-using-container-backed-remote-clis-and-clients
install-a-trusted-ca-certificate
install-rest-api-and-horizon-certificate
openstack-keystone-accounts
security-system-account-password-rules

View File

@ -0,0 +1,37 @@
.. fak1590002084693
.. _install-a-trusted-ca-certificate:
================================
Install a Trusted CA Certificate
================================
A trusted |CA| certificate can be added to the |prod-os| service containers
such that the containerized OpenStack services can validate certificates of
far-end systems connecting or being connected to over HTTPS. The most common
use case here would be to enable certificate validation of clients connecting
to OpenStack service REST API endpoints.
.. rubric:: |proc|
.. _install-a-trusted-ca-certificate-steps-unordered-am5-xgt-vlb:
#. Install a trusted |CA| certificate for OpenStack using the following
command to override all OpenStack Helm Charts.
.. code-block:: none
~(keystone_admin)$ system certificate-install -m openstack_ca <certificate_file>
where <certificate\_file> contains a single |CA| certificate to be trusted.
Running the command again with a different |CA| certificate in the file will
*replace* this openstack trusted |CA| certificate.
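Before installing the certificate, you may want to confirm its subject,
issuer, and validity dates with a standard openssl command, for example
\(sketch; the file name is a placeholder\):
.. code-block:: none

   $ openssl x509 -in my-trusted-ca.pem -noout -subject -issuer -dates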
#. Apply the updated Helm chart overrides containing the certificate changes:
.. code-block:: none
~(keystone_admin)$ system application-apply wr-openstack

View File

@ -0,0 +1,43 @@
.. pmb1590001656644
.. _install-rest-api-and-horizon-certificate:
========================================
Install REST API and Horizon Certificate
========================================
.. rubric:: |context|
This certificate must be valid for the domain configured for OpenStack; see
the sections on :ref:`Accessing the System <access-using-the-default-set-up>`.
.. rubric:: |proc|
#. Install the certificate for OpenStack as Helm chart overrides.
.. code-block:: none
~(keystone_admin)$ system certificate-install -m openstack <certificate_file>
where <certificate\_file> is a pem file containing both the certificate and
private key.
.. note::
The OpenStack certificate must be created with wildcard SAN.
For example, to create a certificate for |FQDN|: west2.us.example.com,
the following entry must be included in the certificate:
.. code-block:: none
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:*.west2.us.example.com
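You can verify that the certificate file contains the expected wildcard entry
before installing it, for example \(a sketch using a standard openssl
command\):
.. code-block:: none

   $ openssl x509 -in <certificate_file> -noout -text | grep -A1 "Subject Alternative Name"
       X509v3 Subject Alternative Name:
           DNS:*.west2.us.example.com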
#. Apply the Helm chart overrides containing the certificate changes.
.. code-block:: none
~(keystone_admin)$ system application-apply wr-openstack

View File

@ -0,0 +1,22 @@
.. xdd1485354265196
.. _openstack-keystone-accounts:
===========================
OpenStack Keystone Accounts
===========================
|prod-os| uses Keystone for Identity Management which defines projects/tenants
for grouping OpenStack resources and users for managing access to these
resources.
|prod-os| provides a local SQL Backend for Keystone.
You can create OpenStack projects and users from the |os-prod-hor-long|
or the CLI. Projects and users can also be managed using the OpenStack REST
API.
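For example, a minimal sketch of creating a project and a user with the
OpenStack |CLI| \(the names, password handling, and role name are illustrative
assumptions\):
.. code-block:: none

   ~(keystone_admin)$ openstack project create --description "Example project" demo-project
   ~(keystone_admin)$ openstack user create --project demo-project --password-prompt demo-user
   ~(keystone_admin)$ openstack role add --project demo-project --user demo-user member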
.. seealso::
:ref:`System Account Password Rules <security-system-account-password-rules>`

View File

@ -0,0 +1,24 @@
.. iad1589999522755
.. _security-overview:
========
Overview
========
|prod-os| is a containerized application running on top of |prod-long|.
Many security features are not specific to |prod-os|, and are documented in
.. xbooklink :ref:`Cloud Platform Security <overview-of-starlingx-security>`.
This section covers security features that are specific to |prod-os|:
.. _security-overview-ul-qvj-22f-tlb:
- OpenStack Keystone Accounts
- Enabling Secure HTTPS Connectivity for OpenStack

View File

@ -0,0 +1,32 @@
.. tfb1485354135500
.. _security-system-account-password-rules:
=============================
System Account Password Rules
=============================
|prod-os| enforces a set of strength requirements for new or changed passwords.
The following rules apply:
.. _security-system-account-password-rules-ul-jwb-g15-zw:
- The password must be at least seven characters long.
- You cannot reuse the last 2 passwords in history.
- The password must contain:
- at least one lower-case character
- at least one upper-case character
- at least one numeric character
- at least one special character
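For example, a compliant password could be set for an existing user as follows
\(a sketch; the user name and password are placeholders\):
.. code-block:: none

   ~(keystone_admin)$ openstack user set --password 'Str0ng!pass' demo-user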

View File

@ -0,0 +1,160 @@
.. qsc1589994634309
.. _update-the-domain-name:
======================
Update the Domain Name
======================
Containerized OpenStack services in |prod-os| are deployed behind an ingress
controller \(nginx\) that listens, by default, on either port 80 \(HTTP\) or
port 443 \(HTTPS\).
.. rubric:: |context|
The ingress controller routes packets to the specific OpenStack service, such
as the Cinder service, or the Neutron service, by parsing the |FQDN| in the
packet. For example, neutron.openstack.svc.cluster.local is for the Neutron
service, cinderapi.openstack.svc.cluster.local is for the Cinder service.
This routing requires that access to OpenStack REST APIs \(directly or via
remote OpenStack |CLIs|\) must be via a |FQDN|. You cannot access OpenStack REST
APIs using an IP address.
|FQDNs| \(such as cinderapi.openstack.svc.cluster.local\) must be in a |DNS|
server that is publicly accessible.
.. note::
It is possible to wildcard a set of |FQDNs| to the same IP address in a
|DNS| server configuration so that you don't need to update the |DNS|
server every time an OpenStack service is added. Check your particular
|DNS| server for details on how to wild-card a set of |FQDNs|.
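For example, with a BIND-style zone file, a single wildcard record \(sketch;
the address is a placeholder\) can cover all of the service |FQDNs| at once:
.. parsed-literal::

   *.<my-|prefix|-domain>   IN A   10.10.10.10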
In a “real” deployment, that is, not a lab scenario, you cannot use the default
*openstack.svc.cluster.local* domain name externally. You must set a unique
domain name for your |prod-os| system. Use the :command:`system
service-parameter-add` command to configure and set the OpenStack domain name:
.. rubric:: |prereq|
.. _update-the-domain-name-ul-md1-pzx-n4b:
- You must have an external |DNS| Server for which you have authority to add
new domain name to IP address mappings \(e.g. A, AAAA or CNAME records\).
- The |DNS| server must be added to your |prod-long| |DNS| list.
- Your |DNS| server must have A, AAAA or CNAME records for the following domain
names, representing the corresponding openstack services, defined as the
|OAM| Floating IP address. Refer to the configuration manual for the
particular |DNS| server you are using on how to make these updates for the
domain you are using for the |prod-os| system.
.. note::
|prod| recommends that you not define domain names for services you
are not using.
.. parsed-literal::

   # define A record for general domain for |prod| system
   <my-|prefix|-domain>                IN A      10.10.10.10

   # define alias for general domain for horizon dashboard REST API URL
   horizon.<my-|prefix|-domain>        IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for keystone identity service REST API URLs
   keystone.<my-|prefix|-domain>       IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   keystone-api.<my-|prefix|-domain>   IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define alias for general domain for neutron networking REST API URL
   neutron.<my-|prefix|-domain>        IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for nova compute provisioning REST API URLs
   nova.<my-|prefix|-domain>           IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   placement.<my-|prefix|-domain>      IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   rest-api.<my-|prefix|-domain>       IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define novncproxy alias for VM console access through Horizon REST API URL
   novncproxy.<my-|prefix|-domain>     IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define alias for general domain for barbican secure storage REST API URL
   barbican.<my-|prefix|-domain>       IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define alias for general domain for glance VM management REST API URL
   glance.<my-|prefix|-domain>         IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for cinder block storage REST API URLs
   cinder.<my-|prefix|-domain>         IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   cinder2.<my-|prefix|-domain>        IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   cinder3.<my-|prefix|-domain>        IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for heat orchestration REST API URLs
   heat.<my-|prefix|-domain>           IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   cloudformation.<my-|prefix|-domain> IN CNAME  <my-|prefix|-domain>.<my-company>.com.

   # define aliases for general domain for starlingx REST API URLs
   # ( for fault, patching, service management, system and VIM )
   fm.<my-|prefix|-domain>             IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   patching.<my-|prefix|-domain>       IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   smapi.<my-|prefix|-domain>          IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   sysinv.<my-|prefix|-domain>         IN CNAME  <my-|prefix|-domain>.<my-company>.com.
   vim.<my-|prefix|-domain>            IN CNAME  <my-|prefix|-domain>.<my-company>.com.
.. rubric:: |proc|
#. Source the environment.
.. code-block:: none
$ source /etc/platform/openrc
~(keystone_admin)$
#. To set a unique domain name, use the :command:`system
service-parameter-add` command.
The command has the following syntax.
.. code-block:: none
system service-parameter-add openstack helm
endpoint_domain=<domain_name>
<domain\_name> should be a fully qualified domain name that you own, such
that you can configure the |DNS| Server that owns <domain\_name> with the
OpenStack service names underneath the domain.
.. xbooklink See the :ref:`prerequisites <updating-the-domain-name-prereq-FQDNs>` for a
complete list of |FQDNs|.
For example:
.. code-block:: none
~(keystone_admin)$ system service-parameter-add openstack helm
endpoint_domain=my-|prefix|-domain.mycompany.com
#. Apply the wr-openstack application.
For example:
.. code-block:: none
~(keystone_admin)$ system application-apply wr-openstack
.. rubric:: |result|
The helm charts of all OpenStack services are updated and restarted. For
example cinderapi.openstack.svc.cluster.local would be changed to
cinderapi.my-|prefix|-domain.mycompany.com, and so on for all OpenStack services.
.. note::
OpenStack Horizon is also changed to listen on
horizon.my-|prefix|-domain.mycompany.com:80 \(instead of the initial
oamfloatingip:31000\), for example,
horizon.my-wr-domain.mycompany.com:80.
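To confirm the change, you can check that the new |FQDNs| resolve and that the
OpenStack endpoints now use the configured domain, for example \(sketch; the
domain is a placeholder\):
.. code-block:: none

   $ nslookup cinder.my-wr-domain.mycompany.com
   ~(keystone_admin)$ openstack endpoint list | grep mycompany.com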

View File

@ -0,0 +1,64 @@
.. tok1566218039402
.. _use-local-clis:
==============
Use Local CLIs
==============
|prod-os| administration and other tasks can be carried out from the command
line interface \(|CLI|\).
.. rubric:: |context|
.. warning::
For security reasons, only administrative users should have |SSH| privileges.
The Local |CLI| can be accessed via the local console on the active controller
or via SSH to the active controller. This procedure illustrates how to set the
context of |CLI| commands to openstack and access openstack admin privileges.
.. rubric:: |proc|
#. Log in to the local console of the active controller, or log in via |SSH| to
the |OAM| Floating IP.
#. Set up admin credentials for the containerized openstack application.
.. code-block:: none
# source /etc/platform/openrc
# export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
.. rubric:: |result|
OpenStack |CLI| commands for the |prod-os| Cloud Application are now available
via the :command:`openstack` command.
For example:
.. code-block:: none
~(keystone_admin)$ openstack flavor list
+-----------------+------------------+------+------+-----+-------+-----------+
| ID | Name | RAM | Disk | Eph.| VCPUs | Is Public |
+-----------------+------------------+------+------+-----+-------+-----------+
| 054531c5-e74e.. | squid | 2000 | 20 | 0 | 2 | True |
| 2fa29257-8842.. | medium.2c.1G.2G | 1024 | 2 | 0 | 2 | True |
| 4151fb10-f5a6.. | large.4c.2G.4G | 2048 | 4 | 0 | 4 | True |
| 78b75c6d-93ca.. | small.1c.500M.1G | 512 | 1 | 0 | 1 | True |
| 8b9971df-6d83.. | vanilla | 1 | 1 | 0 | 1 | True |
| e94c8123-2602.. | xlarge.8c.4G.8G | 4096 | 8 | 0 | 8 | True |
+-----------------+------------------+------+------+-----+-------+-----------+
~(keystone_admin)$ openstack image list
+----------------+----------------------------------------+--------+
| ID | Name | Status |
+----------------+----------------------------------------+--------+
| 92300917-49ab..| Fedora-Cloud-Base-30-1.2.x86_64.qcow2 | active |
| 15aaf0de-b369..| opensquidbox.amd64.1.06a.iso | active |
| eeda4642-db83..| xenial-server-cloudimg-amd64-disk1.img | active |
+----------------+----------------------------------------+--------+

View File

@ -19,14 +19,15 @@
.. |CA| replace:: :abbr:`CA (Certificate Authority)` .. |CA| replace:: :abbr:`CA (Certificate Authority)`
.. |CAs| replace:: :abbr:`CAs (Certificate Authorities)` .. |CAs| replace:: :abbr:`CAs (Certificate Authorities)`
.. |CLI| replace:: :abbr:`CLI (Command Line Interface)` .. |CLI| replace:: :abbr:`CLI (Command Line Interface)`
.. |CMK| replace:: :abbr:`CMK (CPU Manager for Kubernetes)`
.. |CLIs| replace:: :abbr:`CLIs (Command Line Interfaces)` .. |CLIs| replace:: :abbr:`CLIs (Command Line Interfaces)`
.. |CMK| replace:: :abbr:`CMK (CPU Manager for Kubernetes)`
.. |CNI| replace:: :abbr:`CNI (Container Networking Interface)` .. |CNI| replace:: :abbr:`CNI (Container Networking Interface)`
.. |CoW| replace:: :abbr:`CoW (Copy on Write)` .. |CoW| replace:: :abbr:`CoW (Copy on Write)`
.. |CSK| replace:: :abbr:`CSK (Code Signing Key)` .. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
.. |CSKs| replace:: :abbr:`CSKs (Code Signing Keys)` .. |CSKs| replace:: :abbr:`CSKs (Code Signing Keys)`
.. |CVE| replace:: :abbr:`CVE (Common Vulnerabilities and Exposures)` .. |CVE| replace:: :abbr:`CVE (Common Vulnerabilities and Exposures)`
.. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protocol)` .. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protocol)`
.. |DMA| replace:: :abbr:`DMA (Direct Memory Access)`
.. |DNS| replace:: :abbr:`DNS (Domain Name System)` .. |DNS| replace:: :abbr:`DNS (Domain Name System)`
.. |DPDK| replace:: :abbr:`DPDK (Data Plane Development Kit)` .. |DPDK| replace:: :abbr:`DPDK (Data Plane Development Kit)`
.. |DRBD| replace:: :abbr:`DRBD (Distributed Replicated Block Device)` .. |DRBD| replace:: :abbr:`DRBD (Distributed Replicated Block Device)`
@ -67,6 +68,7 @@
.. |OSDs| replace:: :abbr:`OSDs (Object Storage Devices)` .. |OSDs| replace:: :abbr:`OSDs (Object Storage Devices)`
.. |PAC| replace:: :abbr:`PAC (Programmable Acceleration Card)` .. |PAC| replace:: :abbr:`PAC (Programmable Acceleration Card)`
.. |PCI| replace:: :abbr:`PCI (Peripheral Component Interconnect)` .. |PCI| replace:: :abbr:`PCI (Peripheral Component Interconnect)`
.. |PCIe| replace:: :abbr:`PCIe (Peripheral Component Interconnect Express)`
.. |PDU| replace:: :abbr:`PDU (Packet Data Unit)` .. |PDU| replace:: :abbr:`PDU (Packet Data Unit)`
.. |PEM| replace:: :abbr:`PEM (Privacy Enhanced Mail)` .. |PEM| replace:: :abbr:`PEM (Privacy Enhanced Mail)`
.. |PF| replace:: :abbr:`PF (Physical Function)` .. |PF| replace:: :abbr:`PF (Physical Function)`
@ -111,7 +113,9 @@
.. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)` .. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
.. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)` .. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
.. |UUID| replace:: :abbr:`UUID (Universally Unique Identifier)` .. |UUID| replace:: :abbr:`UUID (Universally Unique Identifier)`
.. |UUIDs| replace:: :abbr:`UUIDs (Universally Unique Identifiers)`
.. |VF| replace:: :abbr:`VF (Virtual Function)` .. |VF| replace:: :abbr:`VF (Virtual Function)`
.. |VFIO| replace:: :abbr:`VFIO (Virtual Function I/O)`
.. |VFs| replace:: :abbr:`VFs (Virtual Functions)` .. |VFs| replace:: :abbr:`VFs (Virtual Functions)`
.. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)` .. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)`
.. |VLANs| replace:: :abbr:`VLANs (Virtual Local Area Networks)` .. |VLANs| replace:: :abbr:`VLANs (Virtual Local Area Networks)`
@ -121,9 +125,11 @@
.. |VNF| replace:: :abbr:`VNF (Virtual Network Function)` .. |VNF| replace:: :abbr:`VNF (Virtual Network Function)`
.. |VNFs| replace:: :abbr:`VNFs (Virtual Network Functions)` .. |VNFs| replace:: :abbr:`VNFs (Virtual Network Functions)`
.. |VNI| replace:: :abbr:`VNI (VXLAN Network Interface)` .. |VNI| replace:: :abbr:`VNI (VXLAN Network Interface)`
.. |VNIC| replace:: :abbr:`VNIC (Virtual Network Interface Card)`
.. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Interfaces)` .. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Interfaces)`
.. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)` .. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)`
.. |vRAN| replace:: :abbr:`vRAN (virtualized Radio Access Network)` .. |vRAN| replace:: :abbr:`vRAN (virtualized Radio Access Network)`
.. |VTEP| replace:: :abbr:`VTEP (Virtual Tunnel End Point)`
.. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)` .. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)`
.. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)` .. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)`
.. |XML| replace:: :abbr:`XML (eXtensible Markup Language)` .. |XML| replace:: :abbr:`XML (eXtensible Markup Language)`

Binary file not shown.


View File

@ -19,4 +19,7 @@ and the requirements of the system.
OpenStack OpenStack
--------- ---------
Coming soon. .. toctree::
:maxdepth: 2
openstack/index

View File

@ -0,0 +1,217 @@
.. cic1603143369680
.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster:
============================================================
Ceph Placement Group Number Dimensioning for Storage Cluster
============================================================
Ceph pools are created automatically by |prod-long|, |prod-long| applications,
or by |prod-long| supported optional applications. By default, no
pools are created after the Ceph cluster is provisioned \(monitor\(s\) enabled
and |OSDs| defined\) until it is created by an application or the Rados Gateway
\(RADOS GW\) is configured.
The following is a list of pools created by |prod-os|, and Rados Gateway applications.
.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-table-gvc-3h5-jnb:
.. table:: Table 1. List of Pools
:widths: auto
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| Service/Application | Pool Name | Role | PG Count | Created |
+==================================+=====================+===============================================================+==========+========================================================================================+
| Platform Integration Application | kube-rbd | Kubernetes RBD provisioned PVCs | 64 | When the platform automatically upload/applies after the Ceph cluster is provisioned |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| Wind River OpenStack | images | - glance image file storage | 256 | When the user applies the application for the first time |
| | | | | |
| | | - used for VM boot disk images | | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| | ephemeral | - ephemeral object storage | 256 | |
| | | | | |
| | | - used for VM ephemeral disks | | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| | cinder-volumes | - persistent block storage | 512 | |
| | | | | |
| | | - used for VM boot disk volumes | | |
| | | | | |
| | | - used as aditional disk volumes for VMs booted from images | | |
| | | | | |
| | | - snapshots and persistent backups for volumes | | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| | cinder.backups | backup cinder volumes | 256 | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| Rados Gateway | rgw.root | Ceph Object Gateway data | 64 | When the user enables the RADOS GW through the :command:`system service-parameter` CLI |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| | default.rgw.control | Ceph Object Gateway control | 64 | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| | default.rgw.meta | Ceph Object Gateway metadata | 64 | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
| | default.rgw.log | Ceph Object Gateway log | 64 | |
+----------------------------------+---------------------+---------------------------------------------------------------+----------+----------------------------------------------------------------------------------------+
.. note::
   Since the number of PGs per |OSD| must be less than 2048, the default PG
   values are calculated based on a setup with one storage replication group
   and up to 5 |OSDs| per node.
.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-section-vkx-qmt-jnb:
---------------
Recommendations
---------------
For more information on how placement group numbers \(pg\_num\) can be set
based on the number of |OSDs| in the cluster, see the Ceph PGs per pool
calculator: `https://ceph.com/pgcalc/ <https://ceph.com/pgcalc/>`__.

You must collect the current pool information \(replicated size, number of
|OSDs| in the cluster\) and enter it into the calculator to determine the
required placement group numbers \(pg\_num\), taking into account the pg\_calc
algorithm, estimated |OSD| growth, and data percentage, so that Ceph stays
balanced as the number of |OSDs| scales.
When balancing placement groups for each individual pool, consider the following:
.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-ul-vmq-g4t-jnb:
- pgs per osd
- pgs per pool
- pools per osd
- replication
- the crush map \(Ceph |OSD| tree\)
Running the command, :command:`ceph -s`, displays one of the following
**HEALTH\_WARN** messages:
.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-ul-sdd-v4t-jnb:
- too few pgs per osd
- too few pgs per pool
- too many pgs per osd
Each of the health warning messages requires manual adjustment of placement
groups for individual pools:
.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-ul-dny-15t-jnb:
- To list all the pools in the cluster, use the following command,
:command:`ceph osd lspools`.
- To list all the pools with their pg\_num values, use the following command,
:command:`ceph osd dump`.
- To get only the pg\_num / pgp\_num value, use the following command,
  :command:`ceph osd pool get <pool-name\> pg\_num`.
**Too few PGs per OSD**
Occurs when a new disk is added to the cluster. For more information on how
to add a disk as an |OSD|, see, |stor-doc|: :ref:`Provisioning Storage on a
Storage Host Using the CLI
<provision-storage-on-a-storage-host-using-the-cli>`.
To fix this warning, the number of placement groups should be increased, using
the following commands:
.. code-block:: none
~(keystone_admin)$ ceph osd pool set <pool-name> pg_num <new_pg_num>
.. code-block:: none
~(keystone_admin)$ ceph osd pool set <pool-name> pgp_num <new_pg_num>
.. note::
   Increasing the pg\_num of a pool must be done in increments of 64 per
   |OSD|; otherwise, the above commands are rejected. If this happens,
   decrease the pg\_num value, retry, and wait for the cluster to be
   **HEALTH\_OK** before proceeding to the next step. Multiple incremental
   steps may be required to achieve the targeted values.
**Too few PGs per Pool**
This indicates that the pool has many more objects per PG than average
\(too few PGs allocated\). This warning is addressed by increasing the
pg\_num of that pool, using the following commands:
.. code-block:: none
~(keystone_admin)$ ceph osd pool set <pool-name> pg_num <new_pg_num>
.. code-block:: none
~(keystone_admin)$ ceph osd pool set <pool-name> pgp_num <new_pg_num>
.. note::
pgp\_num should be equal to pg\_num.
Otherwise, Ceph will issue a warning:
.. code-block:: none
~(keystone_admin)$ ceph -s
cluster:
id: 92bfd149-37c2-43aa-8651-eec2b3e36c17
health: HEALTH_WARN
1 pools have pg_num > pgp_num
**Too many PGs per OSD**
This warning indicates that the maximum number of 300 PGs per |OSD| is
exceeded. The number of PGs cannot be reduced after the pool is created.
Pools that do not contain any data can safely be deleted and then recreated
with a lower number of PGs. Where pools already contain data, the only
solution is to add OSDs to the cluster so that the ratio of PGs per |OSD|
becomes lower.
.. caution::
Pools have to be created with the exact same properties.
To get these properties, use :command:`ceph osd dump`, or use the following commands:
.. code-block:: none
~(keystone_admin)$ ceph osd pool get cinder-volumes crush_rule
crush_rule: storage_tier_ruleset
.. code-block:: none
~(keystone_admin)$ ceph osd pool get cinder-volumes pg_num
pg_num: 512
.. code-block:: none
~(keystone_admin)$ ceph osd pool get cinder-volumes pgp_num
pgp_num: 512
Before you delete a pool, use the following properties to recreate the pool;
pg\_num, pgp\_num, crush\_rule.
To delete a pool, use the following command:
.. code-block:: none
~(keystone_admin)$ ceph osd pool delete <pool-name> <<pool-name>>
To create a pool, use the parameters from ceph osd dump, and run the following command:
.. code-block:: none
~(keystone_admin)$ ceph osd pool create {pool-name} {pg-num} {pgp-num} replicated <<crush-ruleset-name>>
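For example, a sketch of recreating an empty cinder-volumes pool with the
properties captured above and a lower PG count \(note that Ceph requires the
pool name to be repeated and the --yes-i-really-really-mean-it flag to confirm
deletion; the PG value of 256 is illustrative\):
.. code-block:: none

   ~(keystone_admin)$ ceph osd pool delete cinder-volumes cinder-volumes --yes-i-really-really-mean-it
   ~(keystone_admin)$ ceph osd pool create cinder-volumes 256 256 replicated storage_tier_ruleset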

View File

@ -0,0 +1,65 @@
.. mkh1590590274215
.. _configuration-and-management-storage-on-controller-hosts:
===========================
Storage on Controller Hosts
===========================
The controllers provide storage for the OpenStack Controller Services through a
combination of local container ephemeral disk, Persistent Volume Claims backed
by Ceph and a containerized HA mariadb deployment.
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, the controllers also provide persistent block storage for
persistent |VM| volumes \(Cinder\), storage for |VM| images \(Glance\), and
storage for |VM| remote ephemeral volumes \(Nova\). On All-in-One Simplex or
Duplex systems, the controllers also provide nova-local storage for ephemeral
|VM| volumes.
On systems configured for controller storage, the master/controller's root disk
is reserved for system use, and additional disks are required to support the
small Ceph cluster. On an All-in-One Simplex or Duplex system, you have the
option to partition the root disk for the nova-local storage \(to realize a
two-disk controller\) or use a third disk for nova-local storage.
.. _configuration-and-management-storage-on-controller-hosts-section-rvx-vwc-vlb:
--------------------------------------
Underlying Platform Filesystem Storage
--------------------------------------
See the :ref:`platform Planning <overview-of-starlingx-planning>` documentation
for details.
To pass the disk-space checks, any replacement disks must be installed before
the allotments are changed.
.. _configuration-and-management-storage-on-controller-hosts-section-wgm-gxc-vlb:
---------------------------------------
Glance, Cinder, and Remote Nova storage
---------------------------------------
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, this small Ceph cluster on the controller provides
Glance image storage, Cinder block storage, Cinder backup storage, and Nova
remote ephemeral block storage. For more information, see :ref:`Block Storage
for Virtual Machines <block-storage-for-virtual-machines>`.
.. _configuration-and-management-storage-on-controller-hosts-section-gpw-kxc-vlb:
------------------
Nova-local Storage
------------------
Controllers on |prod-os| Simplex or Duplex systems incorporate the Compute
function, and therefore provide **nova-local** storage for ephemeral disks. On
other systems, **nova-local** storage is provided by compute hosts. For more
information about this type of storage, see :ref:`Storage on Compute Hosts
<storage-on-compute-hosts>` and :ref:`Block Storage for Virtual Machines
<block-storage-for-virtual-machines>`.

View File

@ -0,0 +1,160 @@
.. ble1606166239734
.. _configure-an-optional-cinder-file-system:
========================================
Configure an Optional Cinder File System
========================================
By default, **qcow2** to raw **image-conversion** is done using the
**docker\_lv** file system. To avoid filling up the **docker\_lv** file system,
you can create a new file system dedicated for image conversion as described in
this section.
**Prerequisites**:
.. _configure-an-optional-cinder-file-system-ul-sbz-3zn-tnb:
- The requested size of the **image-conversion** file system should be big
enough to accommodate any image that is uploaded to Glance.
- The recommended size for the file system must be at least twice as large as
the largest converted image from qcow2 to raw.
- The conversion file system can be added before or after |prefix|-openstack
is applied.
- The conversion file system must be added on both controllers. Otherwise,
|prefix|-openstack will not use the new file system.
- If the conversion file system is added after |prefix|-openstack is applied,
changes to |prefix|-openstack will only take effect once the application is
reapplied.
The **image-conversion** file system can only be added on the controllers, and
must be added, with the same size, to both controllers. Alarms will be raised,
if:
.. _configure-an-optional-cinder-file-system-ul-dtd-fb4-tnb:
- The conversion file system is not added on both controllers.
- The size of the file system is not the same on both controllers.
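If either condition occurs, the raised alarms can be reviewed from the
platform |CLI|, for example:
.. code-block:: none

   ~(keystone_admin)]$ fm alarm-list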
.. _configure-an-optional-cinder-file-system-section-uk1-rwn-tnb:
--------------------------------------------
Adding a New Filesystem for Image-Conversion
--------------------------------------------
.. _configure-an-optional-cinder-file-system-ol-zjs-1xn-tnb:
#. Use the :command:`host-fs-add` command to add a file system dedicated to
qcow2 to raw **image-conversion**.
.. code-block:: none
~(keystone_admin)]$ system host-fs-add <<hostname or id>> <<fs-name=size>>
Where:
**hostname or id**
is the location where the file system will be added
**fs-name**
is the file system name
**size**
is an integer indicating the file system size in Gigabytes
For example:
.. code-block:: none
~(keystone_admin)]$ system host-fs-add controller-0 image-conversion=8
+----------------+--------------------------------------+
| Property | Value |
+----------------+--------------------------------------+
| uuid | 52bfd1c6-93b8-4175-88eb-a8ee5566ce71 |
| name | image-conversion |
| size | 8 |
| logical_volume | conversion-lv |
| created_at | 2020-09-18T17:08:54.413424+00:00 |
| updated_at | None |
+----------------+--------------------------------------+
#. When the **image-conversion** filesystem is added, a new partition
/opt/conversion is created and mounted.
#. Use the following command to list the file systems.
.. code-block:: none
~(keystone_admin)]$ system host-fs-list controller-0
+--------------------+------------------+-------------+----------------+
| UUID | FS Name | Size in GiB | Logical Volume |
+--------------------+------------------+-------------+----------------+
| b5ffb565-4af2-4f26 | backup | 25 | backup-lv |
| a52c5c9f-ec3d-457c | docker | 30 | docker-lv |
| 52bfd1c6-93b8-4175 | image-conversion | 8 | conversion-lv |
| a2fabab2-054d-442d | kubelet | 10 | kubelet-lv |
| 2233ccf4-6426-400c | scratch | 16 | scratch-lv |
+--------------------+------------------+-------------+----------------+
.. _configure-an-optional-cinder-file-system-section-txm-qzn-tnb:
------------------------
Resizing the File System
------------------------
You can change the size of the **image-conversion** file system at runtime
using the following command:
.. code-block:: none
~(keystone_admin)]$ system host-fs-modify <hostname or id> <fs-name=size>
For example:
.. code-block:: none
~(keystone_admin)]$ system host-fs-modify controller-0 image-conversion=8
.. _configure-an-optional-cinder-file-system-section-ubp-f14-tnb:
------------------------
Removing the File System
------------------------
.. _configure-an-optional-cinder-file-system-ol-nmb-pg4-tnb:
#. You can remove an **image-conversion** file system dedicated to qcow2
**image-conversion** using the following command:
.. code-block:: none
~(keystone_admin)]$ system host-fs-delete <<hostname or id>> <<fs-name>>
#. When the **image-conversion** file system is removed from the system, the
/opt/conversion partition is also removed.
.. note::
You cannot delete an **image-conversion** file system when
|prefix|-openstack is in the **applying**, **applied**, or **removing**
state.
You cannot add or remove any other file systems using these commands.

View File

@ -0,0 +1,190 @@
.. pcs1565033493776
.. _create-or-change-the-size-of-nova-local-storage:
===================================================
Create or Change the Size of Nova-local Storage
===================================================
You must configure the storage resources on a host before you can unlock it.
You can use either the Web administration interface or the |CLI|; this
procedure uses the |CLI|.
.. rubric:: |context|
You can use entire disks or disk partitions on compute hosts for use as
**nova-local** storage. You can add multiple disks or disk partitions. Once a
disk is added and configuration is persisted through a lock/unlock, the disk
can no longer be removed.
.. caution::
If a root-disk partition on *any* compute host is used for local storage,
then for performance reasons, *all* VMs on the system must be booted from
Cinder volumes, and must not use ephemeral or swap disks. For more
information, see :ref:`Storage on Compute Hosts
<storage-on-compute-hosts>`.
.. rubric:: |proc|
#. Lock the compute node.
.. code-block:: none
~(keystone_admin)$ system host-lock compute-0
#. Log in to the active controller as the Keystone **admin** user.
#. Review the available disk space and capacity.
.. code-block:: none
~(keystone_admin)$ system host-disk-list compute-0
+--------------------------------------+-------------+---------------+
| uuid                                 | device_node | available_gib |
+--------------------------------------+-------------+---------------+
| 5dcb3a0e-c677-4363-a030-58e245008504 | /dev/sda    | 12216         |
| c2932691-1b46-4faf-b823-2911a9ecdb9b | /dev/sdb    | 20477         |
+--------------------------------------+-------------+---------------+
#. During initial set-up, add the **nova-local** local volume group.
.. code-block:: none
~(keystone_admin)$ system host-lvg-add compute-0 nova-local
+-----------------+------------------------------------------------------------+
| Property | Value |
+-----------------+------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 5b8f0792-25b5-4e43-8058-d274bf8fa51c |
| ihost_uuid | 327b2136-ffb6-4cd5-8fed-d2ec545302aa |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gb | 0 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2015-12-23T16:30:25.524251+00:00 |
| updated_at | None |
| parameters | {u'instance_backing': u'lvm', u'instances_lv_size_mib': 0} |
+-----------------+------------------------------------------------------------+
#. Obtain the |UUID| of the disk or partition to use for **nova-local** storage.
To obtain the |UUIDs| for available disks, use the :command:`system
host-disk-list` command as shown earlier.
To obtain the |UUIDs| for available partitions on a disk, use
:command:`system host-disk-partition-list`.
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-list compute-0 --disk <disk_uuid>
For example:
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-list compute-0 --disk
c2932691-1b46-4faf-b823-2911a9ecdb9b
+--------------------------------------+-----------------------------+--------------+----------+----------------------+
| uuid | device_path | device_node | size_gib | status |
| | | | | |
+--------------------------------------+-----------------------------+--------------+----------+----------------------+
| 08fd8b75-a99e-4a8e-af6c-7aab2a601e68 | /dev/disk/by-path/pci-0000: | /dev/sdb1 | 1024 | Creating (on unlock) |
| | 00:01.1-ata-1.1-part1 | | | |
| | | | | |
| | | | | |
+--------------------------------------+-----------------------------+--------------+----------+----------------------+
#. Create a partition to add to the volume group.
If you plan on using an entire disk, you can skip this step.
Do this using the :command:`host-disk-partition-add` command. The syntax is:
.. code-block:: none
system host-disk-partition-add [-t <partition_type>]
<hostname_or_id> <disk_path_or_uuid>
<partition_size_in_GiB>
For example:
.. code-block:: none
~(keystone_admin)$ system host-disk-partition-add compute-0 \
c2932691-1b46-4faf-b823-2911a9ecdb9b 1
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:01.1-ata-1.1-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 1024 |
| uuid | 6a194050-2328-40af-b313-22dbfa6bab87 |
| ihost_uuid | 0acf8e83-e74c-486e-9df4-00ce1441a899 |
| idisk_uuid | c2932691-1b46-4faf-b823-2911a9ecdb9b |
| ipv_uuid | None |
| status | Creating (on unlock) |
| created_at | 2018-01-24T20:25:41.852388+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
#. Obtain the |UUID| of the partition to use for **nova-local** storage as
described in step 5, above.
.. xbooklink :ref:`5 <creating-or-changing-the-size-of-nova-local-storage-uuid>`.
#. Add a disk or partition to the **nova-local** group, using a command of the
following form:
.. note::
The host must be locked
.. code-block:: none
~(keystone_admin)$ system host-pv-add compute-0 nova-local <uuid>
where <uuid> is the |UUID| of the disk or partition, obtained using
:command:`system host-partition-list`, or of the disk, obtained using
:command:`system host-disk-list`.
For example:
.. code-block:: none
~(keystone_admin)$ system host-pv-add compute-0 nova-local \
08fd8b75-a99e-4a8e-af6c-7aab2a601e68
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 8eea6ca7-5192-4ee0-bd7b-7d7fa7c637f1 |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 08fd8b75-a99e-4a8e-af6c-7aab2a601e68 |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:01.1-ata-1.1-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 0acf8e83-e74c-486e-9df4-00ce1441a899 |
| created_at | 2018-01-25T18:20:14.423947+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
.. note::
Multiple disks/partitions can be added to nova-local by repeating steps
5-8, above.
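Once all required disks or partitions have been added, unlock the host so that
the configuration takes effect \(sketch\):
.. code-block:: none

   ~(keystone_admin)$ system host-unlock compute-0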

View File

@ -0,0 +1,23 @@
========
Contents
========
.. check what put here
.. toctree::
:maxdepth: 1
storage-configuration-and-management-overview
storage-configuration-and-management-storage-resources
config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster
configuration-and-management-storage-on-controller-hosts
configure-an-optional-cinder-file-system
create-or-change-the-size-of-nova-local-storage
replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system
nova-ephemeral-storage
replacing-a-nova-local-disk
storage-on-compute-hosts
storage-configuration-and-management-storage-on-storage-hosts
specifying-the-storage-type-for-vm-ephemeral-disks
storage-configuring-and-management-storage-related-cli-commands
storage-configuration-and-management-storage-utilization-display

View File

@ -0,0 +1,78 @@
.. ugv1564682723675
.. _nova-ephemeral-storage:
======================
Nova Ephemeral Storage
======================
.. contents::
:local:
:depth: 1
This is the default OpenStack storage option used for creating VMs. Virtual
machine instances are typically created with at least one ephemeral disk,
which holds the VM guest operating system and boot partition.
Ephemeral storage for VMs, which includes swap disk storage, ephemeral disk
storage, and root disk storage if the VM is configured for boot-from-image, is
implemented by the **nova** service. For flexibility and scalability, this
storage is defined using a **nova-local** local volume group created on the
compute hosts.
The nova-local group can be backed locally by one or more disks or partitions
on the compute host, or remotely by resources on the internal Ceph cluster \(on
controller or storage hosts\). If it is backed locally on the compute host,
then it uses CoW-image storage backing. For more information about
**nova-local** backing options, see Cloud Platform Storage Configuration:
:ref:`Block Storage for Virtual Machines <block-storage-for-virtual-machines>`.
Compute hosts are grouped into host aggregates based on whether they offer CoW
or remote Ceph-backed local storage. The host aggregates are used for
instantiation scheduling.
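For example, you can inspect the host aggregates and their membership with the
OpenStack |CLI| \(sketch; aggregate names vary by system\):
.. code-block:: none

   ~(keystone_admin)$ openstack aggregate list
   ~(keystone_admin)$ openstack aggregate show <aggregate-name>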
.. _nova-ephemeral-storage-section-N10149-N1001F-N10001:
------------
Instances LV
------------
For storage on compute hosts, CoW-image backing uses an instances logical
volume, or **Instances LV**. This contains the /etc/nova/instances file system,
and is used for the following:
.. _nova-ephemeral-storage-ul-mrd-bxv-q5:
- the nova image cache, containing images downloaded from Glance
- various small nova control and log files, such as the libvirt.xml file,
which is used to pass parameters to **libvirt** at launch, and the console.log
file
For CoW-image-backed local storage, **Instances LV** is also used to hold
CoW-image disk files for use as VM disk storage. It is the only volume in
**nova-local**.
By default, no size is specified for the **Instances LV**. For non-root disks,
the minimum required space is 2 GB for a **nova-local** volume group with a
total size less than 80 GB, and 5 GB for a **nova-local** volume group greater
than or equal to 80 GB; you must specify at least this amount. You can allocate
more **Instances LV** space to support the anticipated number of
boot-from-image VMs, up to 50% of the maximum available storage of the local
volume group. At least 50% free space in the volume group is required to
provide space for allocating logical volume disks for launched instances. The
value provided for the **Instance LV Size** is limited by this maximum.
Instructions for allocating the **Instances LV Size** using the Web
administration interface or the CLI are included in as part of configuring the
compute nodes. Suggested sizes are indicated in the Web administration
interface.
.. caution::
If less than the minimum required space is available, the compute host
cannot be unlocked.

View File

@ -0,0 +1,21 @@
.. tjr1539798511628
.. _replacing-a-nova-local-disk:
===========================
Replace a Nova-Local Disk
===========================
You can replace failed nova-local disks on compute nodes.
.. rubric:: |context|
To replace a nova-local storage disk on a compute node, follow the instructions
in Cloud Platform Node Management: :ref:`Changing Hardware Components for a
Worker Host <changing-hardware-components-for-a-worker-host>`.
To avoid reconfiguration, ensure that the replacement disk is assigned to the
same location on the host, and is the same size as the original. The new disk
is automatically provisioned for nova-local storage based on the existing
system configuration.

View File

@ -0,0 +1,66 @@
.. syu1590591059068
.. _replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system:
========================================================================
Replace Nova-local Storage Disk on a Cloud Platform Simplex System
========================================================================
On a |prod-long| Simplex system, a special procedure is
recommended for replacing or upgrading the nova-local storage device, to allow
for the fact that |VMs| cannot be migrated.
.. rubric:: |context|
For this procedure, you must use the |CLI|.
.. note::
The volume group will be rebuilt as part of the disk replacement procedure.
You can select a replacement disk of any size provided that the ephemeral
storage requirements for all |VMs| are met.
.. rubric:: |proc|
#. Delete all |VMs|.
#. Lock the controller.
.. code-block:: none
~(keystone_admin)$ system host-lock controller-0
#. Delete the nova-local volume group.
.. code-block:: none
~(keystone_admin)$ system host-lvg-delete controller-0 nova-local
#. Shut down the controller.
.. code-block:: none
~(keystone_admin)$ sudo shutdown -h now
#. Replace the physical device.
#. Power down the physical device.
#. Replace the drive with an equivalent or larger drive.
#. Power up the device.
Wait for the node to boot to the command prompt.
#. Source the environment, as shown in the example following this procedure.
#. Unlock the node.
.. code-block:: none
~(keystone_admin)$ system host-unlock controller-0
#. Relaunch the deleted |VMs|.
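For the *Source the environment* step, the admin credentials are typically
sourced as shown below; the path assumes a default |prod-long| installation.
.. code-block:: none
$ source /etc/platform/openrc
~(keystone_admin)$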

View File

@ -0,0 +1,66 @@
.. zjx1464641246986
.. _specifying-the-storage-type-for-vm-ephemeral-disks:
==================================================
Specify the Storage Type for VM Ephemeral Disks
==================================================
You can specify the ephemeral storage type for virtual machines \(|VMs|\) by
using a flavor with the appropriate extra specification.
.. rubric:: |context|
.. note::
On a system with one or more single-disk compute hosts, do not use
ephemeral disks for *any* |VMs| unless *all* single-disk compute hosts are
configured to use remote Ceph backing. For more information, see
|os-intro-doc|:
.. xbooklink :ref:`Storage on Storage Hosts <storage-configuration-storage-on-storage-hosts>`.
Each new flavor is automatically assigned a Storage Type extra spec that
specifies, as the default, instantiation on compute hosts configured for
image-backed local storage \(Local |CoW| Image Backed\). You can change the extra
spec to specify instantiation on compute hosts configured for Ceph-backed
remote storage, if this is available \(Remote Storage Backed\). Ceph-backed
remote storage is available only on systems configured with a Ceph storage
backend.
The designated storage type is used for ephemeral disk and swap disk space, and
for the root disk if the virtual machine is launched using boot-from-image.
Local storage is allocated from the Local Volume Group on the host, and does
not persist when the instance is terminated. Remote storage is allocated from a
Ceph storage pool configured on the storage host resources, and persists until
the pool resources are reallocated for other purposes. The choice of storage
type affects migration behavior; for more information, see Cloud Platform
Storage Configuration: :ref:`VM Storage Settings for Migration, Resize, or
Evacuation <vm-storage-settings-for-migration-resize-or-evacuation>`.
If the instance is configured to boot from volume, the root disk is implemented
using persistent Cinder-based storage allocated from the controller \(for a
system using LVM\) or from storage hosts \(for a system using Ceph\) by
default. On a system that offers both LVM and Ceph storage backends for Cinder
storage, you can specify to use the LVM backend when you launch an instance.
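As a sketch of booting from a Cinder volume, the following creates a bootable
volume from an image and launches an instance from it. All names \(image,
flavor, network, volume type, volume, and server\) are placeholders, and the
``--type`` argument applies only where multiple volume types/backends are
configured.
.. code-block:: none
~(keystone_admin)$ openstack volume create --size 20 --image ubuntu-20.04 --type lvm boot-vol
~(keystone_admin)$ openstack server create --volume boot-vol --flavor m1.small --network net1 vm1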
To specify the type of storage offered by a compute host, see Cloud Platform
Storage Configuration: :ref:`Work with Local Volume Groups
<work-with-local-volume-groups>`.
.. rubric:: |context|
.. caution::
Unlike Cinder-based storage, ephemeral storage does not persist if the
instance is terminated or the compute node fails.
.. _specifying-the-storage-type-for-vm-ephemeral-disks-d29e17:
In addition, for local ephemeral storage, migration and resizing support
depends on the storage backing type specified for the instance, as well as the
boot source selected at launch.
To change the storage type using the Web administration interface, click
**Edit** for the existing **Storage Type** extra specification, and select from
the **Storage** drop-down menu.
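Alternatively, the extra specification can be set from the CLI. The following
is a sketch only; the flavor name is a placeholder, and the extra spec key and
value shown \(``aggregate_instance_extra_specs:storage`` set to ``remote``\)
are assumptions that should be checked against the Storage Type values
displayed in Horizon for your release.
.. code-block:: none
~(keystone_admin)$ openstack flavor set m1.small --property aggregate_instance_extra_specs:storage=remote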

View File

@ -0,0 +1,18 @@
.. fxm1589998951395
.. _storage-configuration-and-management-overview:
========
Overview
========
|prod-os| is a containerized application running on top of the |prod-long|.
The storage management of hosts is not specific to |prod-os|. For more
information, see |prod-long| System Configuration:
.. xbooklink :ref:`System Configuration Management Overview <system-configuration-management-overview>`.
This chapter covers concepts and additional considerations related to storage
management that are specific to |prod-os|.

View File

@ -0,0 +1,15 @@
.. tfu1590592352767
.. _storage-configuration-and-management-storage-on-storage-hosts:
========================
Storage on Storage Hosts
========================
|prod-os| creates default Ceph storage pools for Glance images, Cinder volumes,
Cinder backups, and Nova ephemeral data.
See the :ref:`Cloud Platform Storage Configuration
<storage-configuration-storage-resources>` guide for details on configuring the
internal Ceph cluster on either controller or storage hosts.

View File

@ -0,0 +1,138 @@
.. fhe1590514169842
.. _storage-configuration-and-management-storage-resources:
=================
Storage Resources
=================
|prod-os| uses storage resources on the controller-labelled master hosts, the
compute-labelled worker hosts, and on storage hosts if they are present.
The storage configuration for |prod-os| is very flexible. The specific
configuration depends on the type of system installed, and the requirements of
the system.
.. _storage-configuration-and-management-storage-resources-section-j2k-5mw-5lb:
.. contents::
:local:
:depth: 1
-----------------------------
Storage Services and Backends
-----------------------------
The figure below shows the storage options and backends for |prod-os|.
.. image:: ../figures/zpk1486667625575.png
Each service can use different storage backends.
**Ceph**
This provides storage managed by the internal Ceph cluster. Depending on
the deployment configuration, the internal Ceph cluster is provided through
OSDs on |prod-os| master / controller hosts or storage hosts.
.. _storage-configuration-and-management-storage-resources-table-djz-14w-5lb:
.. table::
:widths: auto
+---------+----------------------------------------------------------------+---------------------------------------------------------------+
| Service | Description | Available Backends |
+=========+================================================================+===============================================================+
| Cinder | - persistent block storage | - Internal Ceph on master/controller hosts or storage hosts |
| | | |
| | - used for VM boot disk volumes | |
| | | |
| | - used as additional disk volumes for VMs booted from images | |
| | | |
| | - snapshots and persistent backups for volumes | |
+---------+----------------------------------------------------------------+---------------------------------------------------------------+
| Glance | - image file storage | - Internal Ceph on master/controller hosts or storage hosts |
| | | |
| | - used for VM boot disk images | |
+---------+----------------------------------------------------------------+---------------------------------------------------------------+
| Nova | - ephemeral object storage | - CoW-Image on Compute Nodes |
| | | |
| | - used for VM ephemeral disks | - Internal Ceph on master/controller hosts or storage hosts |
+---------+----------------------------------------------------------------+---------------------------------------------------------------+
.. _storage-configuration-and-management-storage-resources-section-erw-5mw-5lb:
--------------------
Uses of Disk Storage
--------------------
**|prod-os| System**
The |prod-os| system containers use a combination of local container
ephemeral disk, Persistent Volume Claims backed by Ceph, and a containerized
HA MariaDB deployment for configuration and database files.
**VM Ephemeral Boot Disk Volumes**
When booting from an image, virtual machines by default use local ephemeral
disk storage on computes for Nova ephemeral local boot disk volumes built
from images. These virtual disk volumes are created when the VM instances
are launched. These virtual volumes are destroyed when the VM instances are
terminated.
A host can be configured to instead support 'remote' ephemeral disk
storage, backed by Ceph. These virtual volumes are still destroyed when the
VM instances are terminated, but provide better live migration performance
as they are remote.
**VM Persistent Boot Disk Volumes**
When booting from Cinder volumes, virtual machines can use the Ceph-backed
storage cluster for backing Cinder boot disk volumes. This provides
permanent storage for the VM root disks, facilitating faster machine
startup, faster live migration support, but requiring more storage
resources.
**VM Additional Disk**
Virtual machines can optionally use local or remote ephemeral disk storage
on computes for additional virtual disks, such as swap disks. These disks
are ephemeral; they are created when a VM instance is launched, and
destroyed when the VM instance is terminated.
**VM block storage backups**
Cinder volumes can be backed up for long term storage in a separate Ceph
pool.
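As a sketch, a Cinder volume can be backed up to this pool and the result
verified with the standard OpenStack client; the volume and backup names are
placeholders.
.. code-block:: none
~(keystone_admin)$ openstack volume backup create --name vol1-backup vol1
~(keystone_admin)$ openstack volume backup list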
.. _storage-configuration-and-management-storage-resources-section-mhx-5mw-5lb:
-----------------
Storage Locations
-----------------
The various storage locations for |prod-os| include:
**Controller Hosts**
In the Standard with Controller Storage deployment option, one or more
disks can be used on controller hosts to provide a small Ceph-based cluster
for providing the storage backend for Cinder volumes, Cinder backups,
Glance images, and remote Nova ephemeral volumes.
**Compute Hosts**
One or more disks can be used on compute hosts to provide local Nova
ephemeral storage for virtual machines.
**Combined Controller-Compute Hosts**
One or more disks can be used on combined hosts in Simplex or Duplex
systems to provide local Nova Ephemeral Storage for virtual machines and a
small Ceph-backed storage cluster for backing Cinder, Glance, and Remote
Nova Ephemeral storage.
**Storage Hosts**
One or more disks are used on storage hosts to provide a large scale
Ceph-backed storage cluster for backing Cinder, Glance, and Remote Nova
Ephemeral storage. Storage hosts are used only on |prod-os| with Dedicated
Storage systems.

View File

@ -0,0 +1,17 @@
.. akj1590593707486
.. _storage-configuration-and-management-storage-utilization-display:
===========================
Storage Utilization Display
===========================
|prod-long| provides enhanced backend storage usage details through the |os-prod-hor-long|.
Upstream storage utilization display is limited to the hypervisor statistics,
which include only local storage utilization on the worker nodes. Cloud
Platform provides enhanced storage utilization statistics for the Ceph and
controller-fs backends. The statistics are available using the |CLI| and Horizon.
In |os-prod-hor-long|, the Storage Overview panel includes storage Services and Usage with storage details.
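For a quick view of Ceph backend utilization from the CLI, the generic
``ceph df`` command can be run on a controller. This is standard Ceph tooling
rather than a |prod-os|-specific command, and the output varies by deployment.
.. code-block:: none
~(keystone_admin)$ ceph df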

View File

@ -0,0 +1,129 @@
.. jem1464901298578
.. _storage-configuring-and-management-storage-related-cli-commands:
============================
Storage-Related CLI Commands
============================
You can use |CLI| commands when working with storage specific to OpenStack.
For more information, see :ref:`Cloud Platform Storage Configuration <storage-configuration-storage-resources>`.
.. _storage-configuring-and-management-storage-related-cli-commands-section-N10044-N1001C-N10001:
.. contents::
:local:
:depth: 1
----------------------------------------
Add, Modify, or Display Storage Backends
----------------------------------------
To list the storage backend types installed on a system:
.. code-block:: none
~(keystone_admin)$ system storage-backend-list
+--------+---------------+--------+-----------+----+--------+------------+
| uuid | name |backend |state |task|services|capabilities|
+--------+---------------+--------+-----------+----+--------+------------+
| 27e... |ceph-store |ceph |configured |None| None |min_repli.:1|
| | | | | | |replicati.:1|
| 502... |shared_services|external|configured |None| glance | |
+--------+---------------+--------+-----------+----+--------+------------+
To show details for a storage backend:
.. code-block:: none
~(keystone_admin)$ system storage-backend-show <name>
For example:
.. code-block:: none
~(keystone_admin)$ system storage-backend-show ceph-store
+--------------------+-------------------------------------+
|Property | Value |
+--------------------+-------------------------------------+
|backend | ceph |
|name | ceph-store |
|state | configured |
|task | None |
|services | None |
|capabilities | min_replication: 1 |
| | replication: 1 |
|object_gateway | False |
|ceph_total_space_gib| 198 |
|object_pool_gib | None |
|cinder_pool_gib | None |
|kube_pool_gib | None |
|glance_pool_gib | None |
|ephemeral_pool_gib | None |
|tier_name | storage |
|tier_uuid | d3838363-a527-4110-9345-00e299e6a252|
|created_at | 2019-08-12T21:08:50.166006+00:00 |
|updated_at | None |
+--------------------+-------------------------------------+
.. _storage-configuring-and-management-storage-related-cli-commands-section-N10086-N1001C-N10001:
------------------
List Glance Images
------------------
You can use this command to identify the storage backend type for Glance
images. \(The column headers in the following example have been modified
slightly to fit the page.\)
.. code-block:: none
~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
~(keystone_admin)$ openstack image list
+----+------+-------+--------+-----------+------+--------+------------+-----------+
| ID | Name | Store | Disk | Container | Size | Status | Cache Size | Raw Cache |
| | | | Format | Format | | | | |
+----+------+-------+--------+-----------+------+--------+------------+-----------+
| .. | img1 | rbd | raw | bare | 1432 | active | | |
| .. | img2 | file | raw | bare | 1432 | active | | |
+----+------+-------+--------+-----------+------+--------+------------+-----------+
.. _storage-configuring-and-management-storage-related-cli-commands-ul-jvc-dnx-jnb:
- The value **rbd** indicates a Ceph backend.
- You can use the ``--long`` option to show additional information, as shown in the example below.
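For example, assuming the standard OpenStack client:
.. code-block:: none
~(keystone_admin)$ openstack image list --long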
.. _storage-configuring-and-management-storage-related-cli-commands-section-N100A1-N1001C-N10001:
-----------------
Show Glance Image
-----------------
You can use this command to obtain information about a Glance image.
.. code-block:: none
~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
~(keystone_admin)$ openstack image show <image-id>
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | c11edf9e31b416c46125600ddef1a8e8 |
| name | ubuntu-14.014.img |
| store | rbd |
| owner | 05be70a23c81420180c51e9740dc730a |
+------------------+--------------------------------------+
The Glance **store** value can be either **file** or **rbd**. The **rbd** value indicates a Ceph backend.

View File

@ -0,0 +1,17 @@
.. jow1443470081421
.. _storage-on-compute-hosts:
========================
Storage on Compute Hosts
========================
Compute-labelled worker hosts can provide ephemeral storage for Virtual Machine
\(VM\) disks.
.. note::
On All-in-One Simplex or Duplex systems, compute storage is provided using
resources on the combined host. For more information, see |os-intro-doc|:
:ref:`Storage on Controller Hosts <storage-configuration-storage-on-controller-hosts>`.

View File

@ -18,7 +18,7 @@ using the Horizon Web interface see
<configuring-ptp-service-using-horizon>`.
You can also specify the |PTP| service for **clock\_synchronization** using
the |os-prod-hor| interface.
.. xbooklink For more information, see |node-doc|: `Host Inventory <hosts-tab>`.

View File

@ -2,11 +2,11 @@
.. gkr1591372948568
.. _adding-configuration-rpc-response-max-timeout-in-neutron-conf:
=============================================================
Add Configuration rpc\_response\_max\_timeout in neutron.conf
=============================================================
You can add the rpc\_response\_max\_timeout to neutron.conf using Helm
overrides.
.. rubric:: |context|
@ -14,17 +14,17 @@ overrides.
Maximum rpc timeout is now configurable by rpc\_response\_max\_timeout from
Neutron config instead of being calculated as 10 \* rpc\_response\_timeout.
This configuration can be used to change the maximum rpc timeout. If maximum
rpc timeout is too big, some requests which should fail will be held for a long
time before the server returns failure. If this value is too small and the
server is very busy, the requests may need more time than maximum rpc timeout
and the requests will fail though they can succeed with a bigger maximum rpc
timeout.
.. rubric:: |proc|
#. Create a yaml file to add configuration rpc\_response\_max\_timeout in
neutron.conf.
.. code-block:: none
@ -35,15 +35,15 @@ maximum rpc timeout.
rpc_response_max_timeout: 600
EOF
#. Update the neutron overrides and apply to |prefix|-openstack.
.. parsed-literal::
~(keystone_admin)]$ system helm-override-update |prefix|-openstack neutron openstack --values neutron-overrides.yaml
~(keystone_admin)]$ system application-apply |prefix|-openstack
#. Verify that configuration rpc\_response\_max\_timeout has been added in
neutron.conf.
.. code-block:: none

View File

@ -6,7 +6,7 @@
Assign a Dedicated VLAN ID to a Target Project Network
======================================================
To assign a dedicated |VLAN| segment ID you must first enable the Neutron
**segments** plugin.
.. rubric:: |proc|
@ -73,6 +73,10 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
| | |
+--------------------+---------------------------------------------------------------------------------------------------------------------+
.. note::
The value for DEFAULT is folded onto two lines in the example above for
display purposes.
#. Apply the |prefix|-openstack application.
@ -80,7 +84,7 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
~(keystone_admin)]$ system application-apply |prefix|-openstack
#. You can now assign the |VLAN| network type to a datanetwork.
#. Identify the name of the data network to assign.
@ -158,17 +162,21 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
| faf63edf-63f0-4e9b-b930-5fa8f43b5484 | None | 865b9576-1815-4734-a7e4-c2d0dd31d19c | vlan | 2001 |
+--------------------------------------+--------------------------------------------+--------------------------------------+--------------+---------+
.. note::
The name **test1-st-segement01-mx6fa5eonzrr** has been folded onto
two lines in the sample output above for display purposes.
#. List subnets.
.. code-block:: none
~(keystone_admin)]$ openstack subnet list
+--------------------------------------+---------------------+--------------------------------------+------------------+
| ID                                   | Name                | Network                              | Subnet           |
+--------------------------------------+---------------------+--------------------------------------+------------------+
| 0f64c277-82d7-4161-aa47-fc4cfadacf2f | external01-subnet   | 6bbd3e4e-9419-49c6-a68a-ed51fbc1cab7 | 10.10.10.0/24    |
| bb9848b6-63f0-4e9b-b930-5fa8f43b5ddc | subnet-temp         | 865b9576-1815-4734-a7e4-c2d0dd31d19c | 192.168.17.0/24  |
+--------------------------------------+---------------------+--------------------------------------+------------------+
In this example, the subnet external01-subnet uses a dedicated segment ID.
@ -176,14 +184,9 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
.. code-block:: none
~(keystone_admin)]$ openstack subnet show 0f64c277-82d7-4161-aa47-fc4cfadacf2f | grep segment
| segment_id | 502e3f4f-6187-4737-b1f5-1be7fd3fc45e |
.. note::
Dedicated segment IDs should not be in the range created using the

Some files were not shown because too many files have changed in this diff