diff --git a/doc/source/_includes/adding-compute-nodes-to-an-existing-duplex-system.rest b/doc/source/_includes/adding-compute-nodes-to-an-existing-duplex-system.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/configuring-pci-passthrough-ethernet-interfaces.rest b/doc/source/_includes/configuring-pci-passthrough-ethernet-interfaces.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/dynamic-vxlan.rest b/doc/source/_includes/dynamic-vxlan.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/pci-passthrough-ethernet-interface-devices.rest b/doc/source/_includes/pci-passthrough-ethernet-interface-devices.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/provisioning-sr-iov-vf-interfaces-using-the-cli.rest b/doc/source/_includes/provisioning-sr-iov-vf-interfaces-using-the-cli.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/resource-planning.rest b/doc/source/_includes/resource-planning.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/restore-openstack-from-a-backup.rest b/doc/source/_includes/restore-openstack-from-a-backup.rest new file mode 100644 index 000000000..65147fd34 --- /dev/null +++ b/doc/source/_includes/restore-openstack-from-a-backup.rest @@ -0,0 +1,2 @@ +.. [#] :ref:`Back up OpenStack ` + diff --git a/doc/source/_includes/static-vxlan.rest b/doc/source/_includes/static-vxlan.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_includes/using-labels-to-identify-openstack-nodes.rest b/doc/source/_includes/using-labels-to-identify-openstack-nodes.rest index 7e4930d70..e69de29bb 100644 --- a/doc/source/_includes/using-labels-to-identify-openstack-nodes.rest +++ b/doc/source/_includes/using-labels-to-identify-openstack-nodes.rest @@ -1,31 +0,0 @@ -- To remove labels from the host, do the following. - - .. code-block:: none - - ~(keystone)admin)$ system host-label-remove compute-0 openstack-compute-node sriov - Deleted host label openstack-compute-node for host compute-0 - Deleted host label SRIOV for host compute-0 - -- To assign Kubernetes labels to identify compute-0 as a compute node with - |SRIOV|, use the following command: - - .. 
code-block:: none - - ~(keystone)admin)$ system host-label-assign compute-0 openstack-compute-node=enabled sriov=enabled - +-------------+--------------------------------------+ - | Property | Value | - +-------------+--------------------------------------+ - | uuid | 2909d775-cd6c-4bc1-8268-27499fe38d5e | - | host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 | - | label_key | openstack-compute-node | - | label_value | enabled | - +-------------+--------------------------------------+ - +-------------+--------------------------------------+ - | Property | Value | - +-------------+--------------------------------------+ - | uuid | d8e29e62-4173-4445-886c-9a95b0d6fee1 | - | host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 | - | label_key | SRIOV | - | label_value | enabled | - +-------------+--------------------------------------+ - diff --git a/doc/source/_includes/vxlan-data-networks.rest b/doc/source/_includes/vxlan-data-networks.rest new file mode 100644 index 000000000..e69de29bb diff --git a/doc/source/_vendor/vendor_strings.txt b/doc/source/_vendor/vendor_strings.txt index 510cb1676..2c9c51518 100755 --- a/doc/source/_vendor/vendor_strings.txt +++ b/doc/source/_vendor/vendor_strings.txt @@ -12,6 +12,8 @@ .. |prod-os| replace:: StarlingX OpenStack .. |prod-dc| replace:: Distributed Cloud .. |prod-p| replace:: StarlingX Platform +.. |os-prod-hor-long| replace:: OpenStack Horizon Web Interface +.. |os-prod-hor| replace:: OpenStack Horizon .. Guide names; will be formatted in italics by default. .. |node-doc| replace:: :title:`StarlingX Node Configuration and Management` @@ -28,7 +30,7 @@ .. |usertasks-doc| replace:: :title:`StarlingX User Tasks` .. |admintasks-doc| replace:: :title:`StarlingX Administrator Tasks` .. |datanet-doc| replace:: :title:`StarlingX Data Networks` - +.. |os-intro-doc| replace:: :title:`OpenStack Introduction` .. Name of downloads location diff --git a/doc/source/backup/index.rst b/doc/source/backup/index.rst index 12193ba3b..72f4c7e58 100644 --- a/doc/source/backup/index.rst +++ b/doc/source/backup/index.rst @@ -1,30 +1,25 @@ -.. Backup and Restore file, created by - sphinx-quickstart on Thu Sep 3 15:14:59 2020. - You can adapt this file completely to your liking, but it should at least - contain the root `toctree` directive. - ================== Backup and Restore ================== -------------- -System backup -------------- +---------- +Kubernetes +---------- + +.. check what put here .. toctree:: - :maxdepth: 1 + :maxdepth: 2 - backing-up-starlingx-system-data - running-ansible-backup-playbook-locally-on-the-controller - running-ansible-backup-playbook-remotely + kubernetes/index --------------------------- -System and storage restore --------------------------- +--------- +OpenStack +--------- + +.. check what put here .. 
toctree:: - :maxdepth: 1 - - restoring-starlingx-system-data-and-storage - running-restore-playbook-locally-on-the-controller - system-backup-running-ansible-restore-playbook-remotely + :maxdepth: 2 + + openstack/index \ No newline at end of file diff --git a/doc/source/backup/backing-up-starlingx-system-data.rst b/doc/source/backup/kubernetes/backing-up-starlingx-system-data.rst similarity index 100% rename from doc/source/backup/backing-up-starlingx-system-data.rst rename to doc/source/backup/kubernetes/backing-up-starlingx-system-data.rst diff --git a/doc/source/backup/index.rs1 b/doc/source/backup/kubernetes/index.rs1 similarity index 100% rename from doc/source/backup/index.rs1 rename to doc/source/backup/kubernetes/index.rs1 diff --git a/doc/source/backup/kubernetes/index.rst b/doc/source/backup/kubernetes/index.rst new file mode 100644 index 000000000..fd3f73d53 --- /dev/null +++ b/doc/source/backup/kubernetes/index.rst @@ -0,0 +1,34 @@ +.. Backup and Restore file, created by + sphinx-quickstart on Thu Sep 3 15:14:59 2020. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +---------- +Kubernetes +---------- + +================== +Backup and Restore +================== + +------------- +System backup +------------- + +.. toctree:: + :maxdepth: 1 + + backing-up-starlingx-system-data + running-ansible-backup-playbook-locally-on-the-controller + running-ansible-backup-playbook-remotely + +-------------------------- +System and storage restore +-------------------------- + +.. toctree:: + :maxdepth: 1 + + restoring-starlingx-system-data-and-storage + running-restore-playbook-locally-on-the-controller + system-backup-running-ansible-restore-playbook-remotely diff --git a/doc/source/backup/restoring-starlingx-system-data-and-storage.rst b/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst similarity index 100% rename from doc/source/backup/restoring-starlingx-system-data-and-storage.rst rename to doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst diff --git a/doc/source/backup/running-ansible-backup-playbook-locally-on-the-controller.rst b/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst similarity index 100% rename from doc/source/backup/running-ansible-backup-playbook-locally-on-the-controller.rst rename to doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst diff --git a/doc/source/backup/running-ansible-backup-playbook-remotely.rst b/doc/source/backup/kubernetes/running-ansible-backup-playbook-remotely.rst similarity index 100% rename from doc/source/backup/running-ansible-backup-playbook-remotely.rst rename to doc/source/backup/kubernetes/running-ansible-backup-playbook-remotely.rst diff --git a/doc/source/backup/running-restore-playbook-locally-on-the-controller.rst b/doc/source/backup/kubernetes/running-restore-playbook-locally-on-the-controller.rst similarity index 100% rename from doc/source/backup/running-restore-playbook-locally-on-the-controller.rst rename to doc/source/backup/kubernetes/running-restore-playbook-locally-on-the-controller.rst diff --git a/doc/source/backup/system-backup-running-ansible-restore-playbook-remotely.rst b/doc/source/backup/kubernetes/system-backup-running-ansible-restore-playbook-remotely.rst similarity index 100% rename from doc/source/backup/system-backup-running-ansible-restore-playbook-remotely.rst rename to 
doc/source/backup/kubernetes/system-backup-running-ansible-restore-playbook-remotely.rst
diff --git a/doc/source/backup/openstack/back-up-openstack.rst b/doc/source/backup/openstack/back-up-openstack.rst
new file mode 100644
index 000000000..5f3b4d151
--- /dev/null
+++ b/doc/source/backup/openstack/back-up-openstack.rst
@@ -0,0 +1,52 @@
+
+.. mdt1596804427371
+.. _back-up-openstack:
+
+=================
+Back up OpenStack
+=================
+
+|prod-os| is backed up using the |prod| back-up facilities.
+
+.. rubric:: |context|
+
+The backup playbook produces an OpenStack backup tarball in addition to the
+platform tarball. This can be used to perform |prod-os| restores independently
+of restoring the underlying platform.
+
+.. note::
+
+    Data stored in Ceph, such as Glance images, Cinder volumes, volume
+    backups, and Rados objects \(images stored in Ceph\), is not backed up
+    automatically.
+
+
+.. _back-up-openstack-ul-ohv-x3k-qmb:
+
+- To back up Glance images, use the image\_backup.sh script. For example:
+
+  .. code-block:: none
+
+     ~(keystone_admin)$ image-backup export
+
+- To back up other Ceph data, such as Cinder volumes, backups in Ceph, or
+  Rados objects, use the :command:`rbd export` command for the data in the
+  OpenStack pools cinder-volumes, cinder-backup, and rados.
+
+  For example, to export a Cinder volume with the ID
+  611157b9-78a4-4a26-af16-f9ff75a85e1b, use the following command:
+
+  .. code-block:: none
+
+     ~(keystone_admin)$ rbd export -p cinder-volumes \
+     611157b9-78a4-4a26-af16-f9ff75a85e1b \
+     /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b
+
+  To see the Cinder volumes, use the :command:`openstack volume list`
+  command.
+
+  After export, copy the data off-box for safekeeping.
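+
+  For illustration, the per-volume exports can be scripted. The following is
+  a minimal sketch; it assumes only the pool name and commands shown above
+  and should be verified on your system before use:
+
+  .. code-block:: none
+
+     # Export every Cinder volume known to OpenStack into /tmp.
+     ~(keystone_admin)$ for vol in $(openstack volume list -f value -c ID); do
+         rbd export -p cinder-volumes "${vol}" "/tmp/${vol}"
+     done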
+
+For details on performing a |prod| back-up, see
+:ref:`System Backup and Restore `.
+
diff --git a/doc/source/backup/openstack/index.rst b/doc/source/backup/openstack/index.rst
new file mode 100644
index 000000000..cb2d4305d
--- /dev/null
+++ b/doc/source/backup/openstack/index.rst
@@ -0,0 +1,15 @@
+
+---------
+OpenStack
+---------
+
+==================
+Backup and Restore
+==================
+
+.. toctree::
+   :maxdepth: 1
+
+   back-up-openstack
+   restore-openstack-from-a-backup
+   openstack-backup-considerations
\ No newline at end of file
diff --git a/doc/source/backup/openstack/openstack-backup-considerations.rst b/doc/source/backup/openstack/openstack-backup-considerations.rst
new file mode 100644
index 000000000..4c03ab49e
--- /dev/null
+++ b/doc/source/backup/openstack/openstack-backup-considerations.rst
@@ -0,0 +1,13 @@
+
+.. tye1591106946243
+.. _openstack-backup-considerations:
+
+=============================================
+Containerized OpenStack Backup Considerations
+=============================================
+
+Backup of the containerized OpenStack application is performed as part of the
+|prod-long| backup procedures.
+
+See :ref:`System Backup and Restore `.
+
diff --git a/doc/source/backup/openstack/restore-openstack-from-a-backup.rst b/doc/source/backup/openstack/restore-openstack-from-a-backup.rst
new file mode 100644
index 000000000..1f776945d
--- /dev/null
+++ b/doc/source/backup/openstack/restore-openstack-from-a-backup.rst
@@ -0,0 +1,120 @@
+
+.. gmx1612810318507
+.. _restore-openstack-from-a-backup:
+
+===============================
+Restore OpenStack from a Backup
+===============================
+
+You can restore |prod-os| from a backup with or without Ceph.
+
+.. rubric:: |prereq|
+
+.. _restore-openstack-from-a-backup-ul-ylc-brc-s4b:
+
+- You must have a backup of your |prod-os| installation as described in
+  :ref:`Back up OpenStack `.
+
+- You must have an operational |prod-long| deployment.
+
+.. rubric:: |proc|
+
+#. Delete the old OpenStack application and upload it again.
+
+   .. note::
+
+      Images and volumes will remain in Ceph.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system application-remove wr-openstack
+      ~(keystone_admin)$ system application-delete wr-openstack
+      ~(keystone_admin)$ system application-upload wr-openstack.tgz
+
+#. Restore |prod-os|.
+
+   You can choose either of the following options; both procedures are given
+   below:
+
+   - Restore only the |prod-os| system data. This option will not restore the
+     Ceph data \(that is, it will not run commands like :command:`rbd
+     import`\). This procedure will preserve any existing Ceph data at
+     restore-time.
+
+   - Restore the |prod-os| system data, Cinder volumes, and Glance images.
+     Run this option if the Ceph data was wiped after the backup was taken.
+
+   **Restore only OpenStack application system data:**
+
+   #. Run the following command:
+
+      .. code-block:: none
+
+         ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/ \
+         restore_openstack.yml \
+         -e 'initial_backup_dir= \
+         ansible_become_pass= \
+         admin_password= \
+         backup_filename=wr-openstack_backup.tgz'
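+
+      For illustration only, the same invocation with sample values filled in
+      \(the directory and passwords here are placeholders, not defaults\)
+      might look like:
+
+      .. code-block:: none
+
+         ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/ \
+         restore_openstack.yml \
+         -e 'initial_backup_dir=/opt/backups \
+         ansible_become_pass=sysadminpassword \
+         admin_password=adminpassword \
+         backup_filename=wr-openstack_backup.tgz'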
+
+   **Restore OpenStack application system data, Cinder volumes, and Glance
+   images:**
+
+   #. Run the following command:
+
+      .. code-block:: none
+
+         ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/ \
+         restore_openstack.yml \
+         -e 'restore_cinder_glance_data=true \
+         initial_backup_dir= \
+         ansible_become_pass= \
+         admin_password= \
+         backup_filename=wr-openstack_backup.tgz'
+
+      When this step has completed, the Cinder, Glance, and MariaDB services
+      will be up, and the MariaDB data restored.
+
+   #. Restore Ceph data.
+
+      #. Restore Cinder volumes using the :command:`rbd import` command.
+
+         For example:
+
+         .. code-block:: none
+
+            ~(keystone_admin)$ rbd import -p cinder-volumes /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b
+
+         Where /tmp/611157b9-78a4-4a26-af16-f9ff75a85e1b is a file saved
+         earlier in the backup procedure, as described in [#]_ .
+
+      #. Restore Glance images using the :command:`image-backup` script.
+
+         For example, if we have an archive named
+         image\_3f30adc2-3e7c-45bf-9d4b-a4c1e191d879.tgz in the /opt/backups
+         directory, we can restore it using the following command:
+
+         .. code-block:: none
+
+            ~(keystone_admin)$ sudo image-backup.sh import image_3f30adc2-3e7c-45bf-9d4b-a4c1e191d879.tgz
+
+      #. Use the :command:`tidy\_storage\_post\_restore` utility to detect
+         any discrepancy between the Cinder/Glance database and the rbd
+         pools:
+
+         .. code-block:: none
+
+            ~(keystone_admin)$ tidy_storage_post_restore
+
+         After the script finishes, command output is written to a log file
+         that helps reconcile discrepancies between the |prod-os| database
+         and the Ceph data.
+
+   #. Run the playbook again with the restore\_openstack\_continue flag set
+      to true to bring up the remaining OpenStack services.
+
+      .. code-block:: none
+
+         ~(keystone_admin)$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/ \
+         restore_openstack.yml \
+         -e 'restore_openstack_continue=true \
+         initial_backup_dir= \
+         ansible_become_pass= \
+         admin_password= \
+         backup_filename=wr-openstack_backup.tgz'
+
+.. include:: ../../_includes/restore-openstack-from-a-backup.rest
\ No newline at end of file
diff --git a/doc/source/datanet/adding-a-static-ip-address-to-a-data-interface.rst b/doc/source/datanet/adding-a-static-ip-address-to-a-data-interface.rst
index 8adb7e789..e850cfc5a 100644
--- a/doc/source/datanet/adding-a-static-ip-address-to-a-data-interface.rst
+++ b/doc/source/datanet/adding-a-static-ip-address-to-a-data-interface.rst
@@ -12,31 +12,31 @@ administration interface or the CLI.
 
 .. rubric:: |context|
 
 For VXLAN connectivity between VMs, you must add appropriate endpoint IP
-addresses to the worker node interfaces. You can add individual static
+addresses to the compute node interfaces. You can add individual static
 addresses, or you can assign addresses from a pool associated with the data
 interface. For more about using address pools, see :ref:`Using IP Address
 Pools for Data Interfaces `.
 
-To add a static IP address using the web administration interface, refer to the
+To add a static IP address using the |os-prod-hor|, refer to the
 following steps. To use the CLI, see :ref:`Managing Data Interface Static IP
 Addresses Using the CLI `.
 
 .. rubric:: |prereq|
 
-To make interface changes, you must lock the worker host first.
+To make interface changes, you must lock the compute host first.
 
 .. rubric:: |proc|
 
 .. _adding-a-static-ip-address-to-a-data-interface-steps-zkx-d1h-hr:
 
-#. Lock the worker host.
+#. Lock the compute host.
 
 #. Set the interface to support an IPv4 or IPv6 address, or both.
 
    #. Select **Admin** \> **Platform** \> **Host Inventory** to open the Host
      Inventory page.
 
-   #. Select the **Host** tab, and then double-click the worker host to open
+   #. Select the **Host** tab, and then double-click the compute host to open
      the Host Detail page.
 
   #. Select the **Interfaces** tab and click **Edit Interface** for the data
@@ -63,7 +63,7 @@ To make interface changes, you must lock the worker host first.
 
    The new address is added to the **Address List**.
 
-#. Unlock the worker node and wait for it to become available.
+#. Unlock the compute node and wait for it to become available.
For more information, see :ref:`Managing Data Interface Static IP Addresses Using the CLI ` \ No newline at end of file diff --git a/doc/source/datanet/adding-and-maintaining-routes-for-a-vxlan-network.rst b/doc/source/datanet/adding-and-maintaining-routes-for-a-vxlan-network.rst index 4142724e8..ea1eef22b 100644 --- a/doc/source/datanet/adding-and-maintaining-routes-for-a-vxlan-network.rst +++ b/doc/source/datanet/adding-and-maintaining-routes-for-a-vxlan-network.rst @@ -11,7 +11,7 @@ the CLI. .. rubric:: |prereq| -The worker node must be locked. +The compute node must be locked. .. rubric:: |proc| @@ -24,7 +24,7 @@ To add routes, use the following command. where **node** - is the name or UUID of the worker node + is the name or UUID of the compute node **ifname** is the name of the interface @@ -53,4 +53,4 @@ To list existing routes, including their UUIDs, use the following command. .. code-block:: none - ~(keystone_admin)]$ system host-route-list worker-0 \ No newline at end of file + ~(keystone_admin)]$ system host-route-list compute-0 \ No newline at end of file diff --git a/doc/source/datanet/adding-segmentation-ranges-using-the-cli.rst b/doc/source/datanet/adding-segmentation-ranges-using-the-cli.rst index 21acf9984..a2bd80e5d 100644 --- a/doc/source/datanet/adding-segmentation-ranges-using-the-cli.rst +++ b/doc/source/datanet/adding-segmentation-ranges-using-the-cli.rst @@ -32,6 +32,7 @@ You can use the CLI to add segmentation ranges to data networks. --physical-network data-net-a \ --network-type vlan \ --minimum 623 + --maximum 623 ~(keystone_admin)]$ openstack network segment range create segment-a-project2 \ --private \ @@ -71,6 +72,8 @@ You can use the CLI to add segmentation ranges to data networks. **maximum** is the maximum value of the segmentation range. +.. rubric:: |result| + You can also obtain information about segmentation ranges using the following command: .. code-block:: none diff --git a/doc/source/datanet/assigning-a-data-network-to-an-interface.rst b/doc/source/datanet/assigning-a-data-network-to-an-interface.rst index bf5a82b74..b467f7d14 100644 --- a/doc/source/datanet/assigning-a-data-network-to-an-interface.rst +++ b/doc/source/datanet/assigning-a-data-network-to-an-interface.rst @@ -8,12 +8,9 @@ Assign a Data Network to an Interface In order to associate the L2 Network definition of a Data Network with a physical network, the Data Network must be mapped to an Ethernet or Aggregated -Ethernet interface on a worker node. - -.. rubric:: |context| +Ethernet interface on a compute node. The command for performing the mapping has the format: -.. code-block:: none - - system interface‐datanetwork‐assign \ No newline at end of file +:command:`system interface‐datanetwork‐assign` + \ No newline at end of file diff --git a/doc/source/datanet/changing-the-mtu-of-a-data-interface-using-the-cli.rst b/doc/source/datanet/changing-the-mtu-of-a-data-interface-using-the-cli.rst index 9d159e00a..1956b03b3 100644 --- a/doc/source/datanet/changing-the-mtu-of-a-data-interface-using-the-cli.rst +++ b/doc/source/datanet/changing-the-mtu-of-a-data-interface-using-the-cli.rst @@ -6,20 +6,18 @@ Change the MTU of a Data Interface Using the CLI ================================================ -You can change the MTU value for a data interface from the OpenStack Horizon -Web interface or the CLI. +You can change the |MTU| value for a data interface from the |os-prod-hor-long| +or the |CLI|. .. rubric:: |context| -The MTU must be changed while the worker host is locked. 
-
-You can use CLI commands to lock and unlock hosts, and to modify the MTU
-on the hosts.
+You can use |CLI| commands to lock and unlock hosts, and to modify the |MTU| on
+the hosts.
 
 .. code-block:: none
 
     ~(keystone_admin)]$ system host-lock 
-    ~(keystone_admin)]$ system host-if-modify  --imtu 
+    ~(keystone_admin)]$ system host-if-modify  --imtu 
     ~(keystone_admin)]$ system host-unlock 
 
 where:
 
@@ -31,15 +29,16 @@ where:
    is the name of the interface
 
 ****
-   is the new MTU value
+   is the new |MTU| value
 
 For example:
 
 .. code-block:: none
 
-   ~(keystone_admin)]$ system host-if-modify worker-0 enp0s8 --imtu 1496
+   ~(keystone_admin)]$ system host-if-modify compute-0 enp0s8 --imtu 1496
 
 .. note::
-    You cannot set the MTU on an openstack-compute-labeled worker node
-    interface to a value smaller than the largest MTU used on its data
+
+    You cannot set the |MTU| on an openstack-compute-labeled compute node
+    interface to a value smaller than the largest |MTU| used on its data
     networks.
\ No newline at end of file
diff --git a/doc/source/datanet/changing-the-mtu-of-a-data-interface.rst b/doc/source/datanet/changing-the-mtu-of-a-data-interface.rst
index 116053eee..d50575df7 100644
--- a/doc/source/datanet/changing-the-mtu-of-a-data-interface.rst
+++ b/doc/source/datanet/changing-the-mtu-of-a-data-interface.rst
@@ -6,18 +6,14 @@
 Change the MTU of a Data Interface
 ==================================
 
-You can change the MTU value for a data interface within limits determined by
+You can change the |MTU| value for a data interface within limits determined by
 the data network to which the interface is attached.
 
 .. rubric:: |context|
 
-The data interface MTU must be equal to or greater than the MTU of the data
+The data interface |MTU| must be equal to or greater than the |MTU| of the data
 network.
 
-.. rubric:: |prereq|
-
-You must lock the host for the interface on which you want to change the MTU.
-
 .. rubric:: |proc|
 
 .. _changing-the-mtu-of-a-data-interface-steps-hfm-5nb-p5:
@@ -29,7 +25,7 @@
 
 #. From the **Edit** menu for the standby controller, select **Lock Host**.
 
-#. On all the hosts, edit the interface to change the MTU value.
+#. On all the hosts, edit the interface to change the |MTU| value.
 
    #. Click the name of the host, and then select the **Interfaces** tab and
      click **Edit** for the interface you want to change.
@@ -41,4 +37,4 @@
 
    From the **Edit** menu for the host, select **Unlock Host**.
 
-   The network MTU is updated with the new value.
\ No newline at end of file
+   The network |MTU| is updated with the new value.
\ No newline at end of file
diff --git a/doc/source/datanet/configuring-data-interfaces-for-vxlans.rst b/doc/source/datanet/configuring-data-interfaces-for-vxlans.rst
index c6456ae67..3eefacac7 100644
--- a/doc/source/datanet/configuring-data-interfaces-for-vxlans.rst
+++ b/doc/source/datanet/configuring-data-interfaces-for-vxlans.rst
@@ -7,10 +7,8 @@ Configure Data Interfaces for VXLANs
 ====================================
 
 For data interfaces attached to VXLAN-based data networks, endpoint IP
-addresses, \(static or dynamic from a IP Address pool\) and possibly IP
-Routes are additionally required on the host data interfaces.
+addresses \(static or dynamic from an IP address pool\) and possibly IP routes
+are additionally required on the host data interfaces.
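+
+For illustration, a static endpoint address could be added with a command of
+the following form; the arguments \(host, interface, address, prefix\) are
+assumptions based on the procedures linked below:
+
+.. code-block:: none
+
+   ~(keystone_admin)]$ system host-addr-add compute-0 data0 192.168.1.10 24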
-You can complete the VXLAN data network setup by using the web -administration interface or the CLI. For more information on setting up -VXLAN Data Networks, see tasks related to :ref:`VXLAN data network setup -completion `. \ No newline at end of file +See :ref:`VXLAN Data Network Setup Completion +` for details on this configuration. \ No newline at end of file diff --git a/doc/source/datanet/configuring-data-interfaces.rst b/doc/source/datanet/configuring-data-interfaces.rst index 7e28c98ce..0cfd340d1 100644 --- a/doc/source/datanet/configuring-data-interfaces.rst +++ b/doc/source/datanet/configuring-data-interfaces.rst @@ -24,8 +24,7 @@ underlying network for OpenStack Neutron Tenant/Project Networks. .. xreflink - :ref:`Configuring VLAN Interfaces ` For each of the above procedures, configure the node interface specifying the -``ifclass`` as ``data`` and assign one or more data networks to the node -interface. +'ifclass' as 'data' and assign one or more data networks to the node interface. .. xreflink As an example for an Ethernet interface, repeat the procedure in |node-doc|: :ref:`Configuring Ethernet Interfaces @@ -38,24 +37,35 @@ interface. #. List the attached interfaces. To list all interfaces, use the :command:`system host-if-list` command and - include the -a flag. + include the ``-a`` flag. .. code-block:: none ~(keystone_admin)]$ system host-if-list -a controller-0 - +---...+----------+----------+...+---------------+...+-------------------+ - | uuid | name | class | | ports | | data networks | - +---...+----------+----------+...+---------------+...+-------------------+ - | 68...| ens787f3 | None | | [u'ens787f3'] | | [] | - | 79...| data0 | data | | [u'ens787f0'] | | [u'group0-data0'] | - | 78...| cluster0 | platform | | [] | | [] | - | 89...| ens513f3 | None | | [u'ens513f3'] | | [] | - | 97...| ens803f1 | None | | [u'ens803f1'] | | [] | - | d6...| pxeboot0 | platform | | [u'eno2'] | | [] | - | d6...| mgmt0 | platform | | [] | | [] | - +---...+----------+----------+...+---------------+...+-------------------+ + +-------------+-----------+-----------+----------+------+----------------+-------------+----------------------------+---------------------------+ + | uuid | name | class | type | vlan | ports | uses i/f | used by i/f | attributes | + | | | | | id | | | | | + +-------------+-----------+-----------+----------+------+----------------+-------------+----------------------------+---------------------------+ + | 0aa20d82-...| sriovvf2 | pci-sriov | vf | None | [] | [u'sriov0'] | [] | MTU=1500,max_tx_rate=100 | + | 0e5f162d-...| mgmt0 | platform | vlan | 163 | [] | [u'sriov0'] | [] | MTU=1500 | + | 14f2ed53-...| sriov0 | pci-sriov | ethernet | None | [u'enp24s0f0'] | [] | [u'sriovnet1', u'oam0', | MTU=9216 | + | | | | | | | | u'sriovnet2', u'sriovvf2', | | + | | | | | | | | u'sriovvf1', u'mgmt0', | | + | | | | | | | | u'pxeboot0'] | | + | | | | | | | | | | + | 163592bd-...| data1 | data | ethernet | None | [u'enp24s0f1'] | [] | [] | MTU=1500,accelerated=True | + | 1831571d-...| sriovnet2 | pci-sriov | vf | None | [] | [u'sriov0'] | [] | MTU=1956,max_tx_rate=100 | + | 5741318f-...| eno2 | None | ethernet | None | [u'eno2'] | [] | [] | MTU=1500 | + | 5bd79fbd-...| enp26s0f0 | None | ethernet | None | [u'enp26s0f0'] | [] | [] | MTU=1500 | + | 623d5494-...| oam0 | platform | vlan | 103 | [] | [u'sriov0'] | [] | MTU=1500 | + | 78b4080a-...| enp26s0f1 | None | ethernet | None | [u'enp26s0f1'] | [] | [] | MTU=1500 | + | a6f1f901-...| eno1 | None | ethernet | None | [u'eno1'] | [] 
| [] | MTU=1500 | + | f37eac1b-...| pxeboot0 | platform | ethernet | None | [] | [u'sriov0'] | [] | MTU=1500 | + | f7c62216-...| sriovnet1 | pci-sriov | vf | None | [] | [u'sriov0'] | [] | MTU=1500,max_tx_rate=100 | + | fcbe3aca-...| sriovvf1 | pci-sriov | vf | None | [] | [u'sriov0'] | [] | MTU=1956,max_tx_rate=100 | + +-------------+-----------+-----------+----------+------+----------------+-------------+----------------------------+---------------------------+ -#. Attach an interface to a data network. +#. Attach an interface to a network. Use a command sequence of the following form: @@ -73,7 +83,7 @@ interface. The MTU for the interface. .. note:: - The MTU must be equal to or larger than the MTU of the data network + The |MTU| must be equal to or larger than the |MTU| of the data network to which the interface is attached. **ifclass** @@ -81,13 +91,13 @@ interface. **data**, **pci-sriov**, and **pci-passthrough**. **data network** - The name or ID of the data network to assign the interface to. + The name or ID of the network to assign the interface to. **hostname** - The name or UUID of the host. + The name or |UUID| of the host. **ethname** - The name or UUID of the Ethernet interface to use. + The name or |UUID| of the Ethernet interface to use. **ip4\_mode** The mode for assigning IPv4 addresses to a data interface \(static or diff --git a/doc/source/datanet/dynamic-vxlan.rst b/doc/source/datanet/dynamic-vxlan.rst index fbee51df4..dea4d302b 100644 --- a/doc/source/datanet/dynamic-vxlan.rst +++ b/doc/source/datanet/dynamic-vxlan.rst @@ -6,10 +6,10 @@ Dynamic VXLAN ============= -|prod-os| supports dynamic mode \(learning\) VXLAN implementation that has -each vSwitch instance registered on the network for a particular IP -multicast group, MAC addresses, and VTEP endpoints that are populated based on -neutron configuration data. +|prod-os| supports dynamic mode \(learning\) VXLAN implementation that has each +vSwitch instance registered on the network for a particular IP multicast group, +|MAC| addresses, and |VTEP| endpoints that are populated based on neutron +configuration data. The IP multicast group, \(for example, 239.1.1.1\), is input when a new neutron data network is provisioned. The selection of the IP multicast group @@ -18,25 +18,45 @@ group. The IP multicast network can work in both a single subnet \(that is, local Layer2 environment\) or can span Layer3 segments in the customer network for more complex routing requirements but requires IP multicast enabled routers. -In the dynamic VXLAN mode, when a VM instance sends a packet to some destination -node the vSwitch VXLAN implementation examines the destination MAC address to -determine how to treat the packet. If the destination is known, a unicast packet -is sent to the worker node hosting that VM instance. If the destination is -unknown or the packet is a broadcast/multicast packet then a multicast packet -is sent to all worker nodes. Once the destination VM instance receives the -packet and responds to the initial source worker node, it learns that the VM -is hosted from that worker node, and any future packets destined to that VM -instance are unicasted to that worker node. +.. only:: starlingx + + In the dynamic |VXLAN| mode, when a VM instance sends a packet to some + destination node the |VXLAN| implementation examines the + destination MAC address to determine how to treat the packet. If the + destination is known, a unicast packet is sent to the compute node hosting + that VM instance. 
If the destination is unknown or the packet is a + broadcast/multicast packet then a multicast packet is sent to all compute + nodes. Once the destination VM instance receives the packet and responds to + the initial source compute node, it learns that the VM is hosted from that + compute node, and any future packets destined to that VM instance are + unicasted to that compute node. + +.. only:: partner + + .. include:: ../_includes/dynamic-vxlan.rest + + :start-after: vswitch-text-1-begin + :end-before: vswitch-text-1-end .. figure:: figures/eol1510005391750.png - `Multicast Endpoint Distribution` +Multicast Endpoint Distribution -For broadcast and multicast packets originating from the VM instances the -vSwitch implements head-end replication to clone and send a copy of the -packet to each known worker node. This operation is expensive and will -negatively impact performance if the network is experiencing high volume of -broadcast or multicast packets. +.. only:: starlingx + + For broadcast and multicast packets originating from the VM instances + implements head-end replication to clone and send a copy of the packet to + each known compute node. This operation is expensive and will negatively + impact performance if the network is experiencing high volume of broadcast + or multicast packets. + +.. only:: partner + + .. include:: ../_includes/dynamic-vxlan.rest + + :start-after: vswitch-text-1-begin + :end-before: vswitch-text-1-end + .. _dynamic-vxlan-section-N10054-N1001F-N10001: @@ -44,20 +64,20 @@ broadcast or multicast packets. Workflow to Configure Dynamic VXLAN Data Networks ------------------------------------------------- -Use the following workflow to create dynamic VXLAN data networks and add -segmentation ranges using CLI. +Use the following workflow to create dynamic |VXLAN| data networks and add +segmentation ranges using the |CLI|. .. _dynamic-vxlan-ol-bpj-dlb-1cb: #. Create a VXLAN data network, see :ref:`Adding Data Networks `. -#. Add segmentation ranges to dynamic VXLAN \(Multicast VXLAN\) data networks, - see :ref:`Adding Segmentation Ranges Using the CLI +#. Add segmentation ranges to dynamic |VXLAN| \(Multicast |VXLAN|\) data + networks, see :ref:`Adding Segmentation Ranges Using the CLI `. -#. Configure the endpoint IP addresses of the worker nodes using the web - administration interface or the CLI: +#. Configure the endpoint IP addresses of the compute nodes using the + |os-prod-hor-long| or the |CLI|: - To configure static IP addresses for individual data interfaces, see: @@ -72,6 +92,6 @@ segmentation ranges using CLI. #. Establish routes between the hosts, see :ref:`Adding and Maintaining Routes for a VXLAN Network `. -For more information on the differences between the dynamic and static VXLAN +For more information on the differences between the dynamic and static |VXLAN| modes, see :ref:`Differences Between Dynamic and Static VXLAN Modes `. 
\ No newline at end of file diff --git a/doc/source/datanet/index.rst b/doc/source/datanet/index.rst index 59b42d22f..5392839ca 100644 --- a/doc/source/datanet/index.rst +++ b/doc/source/datanet/index.rst @@ -26,6 +26,7 @@ Displaying data network information displaying-data-network-information-using-horizon displaying-data-network-information-using-the-cli the-data-network-topology-view + vxlan-data-networks ********************************************* Adding, assigning, and removing data networks @@ -98,4 +99,5 @@ VXLAN data network setup completion managing-data-interface-static-ip-addresses-using-the-cli using-ip-address-pools-for-data-interfaces managing-ip-address-pools-using-the-cli - adding-and-maintaining-routes-for-a-vxlan-network \ No newline at end of file + adding-and-maintaining-routes-for-a-vxlan-network + vxlan-data-network-setup-completion \ No newline at end of file diff --git a/doc/source/datanet/managing-data-interface-static-ip-addresses-using-the-cli.rst b/doc/source/datanet/managing-data-interface-static-ip-addresses-using-the-cli.rst index 5f6390f69..83aead1e0 100644 --- a/doc/source/datanet/managing-data-interface-static-ip-addresses-using-the-cli.rst +++ b/doc/source/datanet/managing-data-interface-static-ip-addresses-using-the-cli.rst @@ -7,7 +7,7 @@ Manage Data Interface Static IP Addresses Using the CLI ======================================================= If you prefer, you can create and manage static addresses for data interfaces -using the CLI. +using the |CLI|. .. rubric:: |context| @@ -17,15 +17,15 @@ For more information about using static addresses for data interfaces, see .. rubric:: |prereq| -To make interface changes, you must lock the worker node first. +To make interface changes, you must lock the compute node first. .. rubric:: |proc| .. _managing-data-interface-static-ip-addresses-using-the-cli-steps-zkx-d1h-hr: -1. Lock the worker node. +#. Lock the compute node. -2. Set the interface to support an IPv4 or IPv6 address, or both. +#. Set the interface to support an IPv4 or IPv6 address, or both. .. code-block:: none @@ -34,7 +34,7 @@ To make interface changes, you must lock the worker node first. where **node** - is the name or UUID of the worker node + is the name or |UUID| of the compute node **ifname** is the name of the interface @@ -54,7 +54,7 @@ To make interface changes, you must lock the worker node first. where **node** - is the name or UUID of the worker node + is the name or |UUID| of the compute node **ifname** is the name of the interface @@ -71,12 +71,12 @@ To make interface changes, you must lock the worker node first. ~(keystone_admin)]$ system host-addr-list - This displays the UUIDs of existing addresses, as shown in this example + This displays the |UUIDs| of existing addresses, as shown in this example below. .. code-block:: none - ~(keystone_admin)]$ system host-addr-list worker-0 + ~(keystone_admin)]$ system host-addr-list compute-0 +-----------------------+--------+------------------------+--------+ | uuid | ifname | address | prefix | +-----------------------+--------+------------------------+--------+ @@ -89,6 +89,6 @@ To make interface changes, you must lock the worker node first. ~(keystone_admin)]$ system host-addr-delete - where **uuid** is the UUID of the address. + where **uuid** is the |UUID| of the address. -#. Unlock the worker node and wait for it to become available. \ No newline at end of file +#. Unlock the compute node and wait for it to become available. 
\ No newline at end of file
diff --git a/doc/source/datanet/managing-ip-address-pools-using-the-cli.rst b/doc/source/datanet/managing-ip-address-pools-using-the-cli.rst
index 3f553232b..0b4146b1b 100644
--- a/doc/source/datanet/managing-ip-address-pools-using-the-cli.rst
+++ b/doc/source/datanet/managing-ip-address-pools-using-the-cli.rst
@@ -6,20 +6,25 @@
 Manage IP Address Pools Using the CLI
 =====================================
 
-You can create and manage address pools using the CLI:
+You can create and manage address pools using the |CLI|:
 
 .. contents::
    :local:
    :depth: 1
 
+.. rubric:: |context|
+
+For more information about address pools, see :ref:`Using IP Address Pools for
+Data Interfaces `.
+
 .. rubric:: |prereq|
 
-To make interface changes, you must lock the worker node first.
+To make interface changes, you must lock the compute node first.
 
 .. _managing-ip-address-pools-using-the-cli-section-N1003C-N1001F-N10001:
 
 ------------------------
-Creating an address pool
+Creating an Address Pool
 ------------------------
 
 To create an address pool, use a command of the following form:
diff --git a/doc/source/datanet/static-vxlan.rst b/doc/source/datanet/static-vxlan.rst
index de5bad077..5f5c6b825 100644
--- a/doc/source/datanet/static-vxlan.rst
+++ b/doc/source/datanet/static-vxlan.rst
@@ -6,29 +6,36 @@
 Static VXLAN
 ============
 
-The static unicast mode relies on the mapping of neutron ports to worker nodes
-to receive the packet in order to reach the VM.
+The static unicast mode relies on the mapping of neutron ports to compute nodes
+to receive the packet in order to reach the |VM|.
 
-In this mode there is no multicast addressing or multicast packets sent from
-the worker nodes, neither is there any learning. In contrast to the dynamic
-VXLAN mode, any packets destined to unknown MAC addresses are dropped. To
-ensure that there are no unknown endpoints the system examines the neutron
-port DB and gathers the list of mappings between port MAC/IP addresses and the
-hostname on which they reside. This information is then propagated throughout
-the system to pre-provision endpoint entries into all vSwitch instances. This
-ensures that each vSwitch knows how to reach all VM instances that are related
-to any local VM instances.
+.. only:: starlingx
 
-Static VXLAN is limited to use on one data network. If configured, it must be
-enabled on all OpenStack worker nodes.
+   In this mode there is no multicast addressing and no multicast packets are
+   sent from the compute nodes, nor is there any learning. In contrast to
+   the dynamic |VXLAN| mode, any packets destined to unknown MAC addresses are
+   dropped. To ensure that there are no unknown endpoints the system examines
+   the neutron port DB and gathers the list of mappings between port |MAC|/IP
+   addresses and the hostname on which they reside.
+
+.. only:: partner
+
+   .. include:: ../_includes/static-vxlan.rest
+      :start-after: vswitch-text-1-begin
+      :end-before: vswitch-text-1-end
+
+
+Static |VXLAN| is limited to use on one data network. If configured, it must be
+enabled on all OpenStack compute nodes.
 
 .. figure:: figures/oeg1510005898965.png
 
-   `Static Endpoint Distribution`
+   `Static Endpoint Distribution`
 
 .. note::
     In the static mode there is no dynamic endpoint learning. This means that
-    if a node does not have an entry for some destination MAC address it will
+    if a node does not have an entry for some destination |MAC| address it will
     not create an entry even if it receives a packet from that device.
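+
+For illustration, step 1 of the workflow below could look like the following
+sketch. The command form and the --mode flag are assumptions based on typical
+usage; check :command:`system help datanetwork-add` on your release:
+
+.. code-block:: none
+
+   ~(keystone_admin)]$ system datanetwork-add data-net-vx vxlan --mode static

.. 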
_static-vxlan-section-N1006B-N1001F-N10001: @@ -38,19 +45,19 @@ Workflow to Configure Static VXLAN Data Networks ------------------------------------------------ Use the following workflow to create static VXLAN data networks and add -segmentation ranges using the CLI. +segmentation ranges using the |CLI|. .. _static-vxlan-ol-bpj-dlb-1cb: -#. Create a VXLAN data network, see :ref:`Adding Data Networks Using the CLI +#. Create a |VXLAN| data network, see :ref:`Adding Data Networks Using the CLI `. -#. Add segmentation ranges to static VXLAN data networks, see :ref:`Adding +#. Add segmentation ranges to static |VXLAN| data networks, see :ref:`Adding Segmentation Ranges Using the CLI `. #. Establish routes between the hosts, see :ref:`Adding and Maintaining Routes for a VXLAN Network `. -For more information on the differences between the dynamic and static VXLAN +For more information on the differences between the dynamic and static |VXLAN| modes, see :ref:`Differences Between Dynamic and Static VXLAN Modes `. \ No newline at end of file diff --git a/doc/source/datanet/the-data-network-topology-view.rst b/doc/source/datanet/the-data-network-topology-view.rst index 280b045ca..2b8f1198a 100644 --- a/doc/source/datanet/the-data-network-topology-view.rst +++ b/doc/source/datanet/the-data-network-topology-view.rst @@ -2,11 +2,11 @@ .. vkv1559818533210 .. _the-data-network-topology-view: -============================== -The Data Network Topology View -============================== +========================== +Data Network Topology View +========================== -The Data Network Topology view shows data networks and worker host data +The Data Network Topology view shows data networks and compute host data interface connections for the system using a color-coded graphical display. Active alarm information is also shown in real time. You can select individual hosts or networks to highlight their connections and obtain more details. diff --git a/doc/source/datanet/using-ip-address-pools-for-data-interfaces.rst b/doc/source/datanet/using-ip-address-pools-for-data-interfaces.rst index dad8f44b9..66fdd0eda 100644 --- a/doc/source/datanet/using-ip-address-pools-for-data-interfaces.rst +++ b/doc/source/datanet/using-ip-address-pools-for-data-interfaces.rst @@ -11,14 +11,14 @@ You can create pools of IP addresses for use with data interfaces. .. rubric:: |context| As an alternative to manually adding static IP addresses to data interfaces for -use with VXLANs, you can define pools of IP addresses and associate them with +use with |VXLANs|, you can define pools of IP addresses and associate them with one or more data interfaces. Each pool consists of one or more contiguous ranges of IPv4 or IPv6 addresses. When a data interface is associated with a pool, its IP address is allocated from the pool. The allocation may be either random or sequential, depending on the settings for the pool. -You can use the web administration interface or the CLI to create and manage -address pools. For information about using the CLI, see :ref:`Managing IP +You can use the |os-prod-hor| or the |CLI| to create and manage +address pools. For information about using the |CLI|, see :ref:`Managing IP Address Pools Using the CLI `. .. rubric:: |prereq| diff --git a/doc/source/datanet/vxlan-data-network-setup-completion.rst b/doc/source/datanet/vxlan-data-network-setup-completion.rst new file mode 100644 index 000000000..e78bcddce --- /dev/null +++ b/doc/source/datanet/vxlan-data-network-setup-completion.rst @@ -0,0 +1,20 @@ + +.. 
yxz1511555520499
+.. _vxlan-data-network-setup-completion:
+
+===================================
+VXLAN Data Network Setup Completion
+===================================
+
+You can complete the |VXLAN| data network setup by using the |os-prod-hor-long|
+or the |CLI|.
+
+For more information on setting up |VXLAN| Data Networks, see :ref:`VXLAN Data Networks `.
+
+- :ref:`Adding a Static IP Address to a Data Interface `
+
+- :ref:`Using IP Address Pools for Data Interfaces `
+
+- :ref:`Adding and Maintaining Routes for a VXLAN Network `
+
+
diff --git a/doc/source/datanet/vxlan-data-networks.rst b/doc/source/datanet/vxlan-data-networks.rst
new file mode 100644
index 000000000..d801021fb
--- /dev/null
+++ b/doc/source/datanet/vxlan-data-networks.rst
@@ -0,0 +1,50 @@
+
+.. wic1511538154740
+.. _vxlan-data-networks:
+
+===================
+VXLAN Data Networks
+===================
+
+Virtual eXtensible Local Area Network \(|VXLAN|\) data networks are an
+alternative to |VLAN| data networks.
+
+A |VXLAN| data network is implemented over a range of |VXLAN| Network
+Identifiers \(|VNIs|\). This is similar to the |VLAN| option, but allows
+multiple data networks to be defined over the same physical network using
+unique |VNIs| defined in segmentation ranges.
+
+Packets sent between |VMs| over virtual project networks backed by a |VXLAN|
+data network are encapsulated with IP, |UDP|, and |VXLAN| headers and sent as
+Layer 3 packets. The IP addresses of the source and destination compute nodes
+are included in the outer IP header.
+
+.. only:: starlingx
+
+   |prod-os| supports two configurations for |VXLANs|:
+
+.. only:: partner
+
+   .. include:: ../_includes/vxlan-data-networks.rest
+
+.. _vxlan-data-networks-ul-rzs-kqf-zbb:
+
+- Dynamic |VXLAN|, see :ref:`Dynamic VXLAN `
+
+- Static |VXLAN|, see :ref:`Static VXLAN `
+
+
+.. _vxlan-data-networks-section-N10067-N1001F-N10001:
+
+.. rubric:: |prereq|
+
+Before you can create project networks on a |VXLAN| provider network, you must
+define at least one network segment range.
+
+- :ref:`Dynamic VXLAN `
+
+- :ref:`Static VXLAN `
+
+- :ref:`Differences Between Dynamic and Static VXLAN Modes `
+
+
diff --git a/doc/source/fault-mgmt/openstack/openstack-alarm-messages-300s.rst b/doc/source/fault-mgmt/openstack/openstack-alarm-messages-300s.rst
index 4bcd21ed4..55ebce581 100644
--- a/doc/source/fault-mgmt/openstack/openstack-alarm-messages-300s.rst
+++ b/doc/source/fault-mgmt/openstack/openstack-alarm-messages-300s.rst
@@ -6,130 +6,107 @@
 Alarm Messages - 300s
 =====================
 
-.. include:: ../../_includes/openstack-alarm-messages-xxxs.rest
+The system inventory and maintenance service reports system changes with
+different degrees of severity. Use the reported alarms to monitor the overall
+health of the system.
+
+For more information, see :ref:`Overview
+`.
+
+In the following table, the severity of the alarms is represented by one or
+more letters, as follows:
+
+.. _alarm-messages-300s-ul-jsd-jkg-vp:
+
+- C: Critical
+
+- M: Major
+
+- m: Minor
+
+- W: Warning
+
+A slash-separated list of letters is used when the alarm can be triggered with
+one of several severity levels.
+
+An asterisk \(\*\) indicates the management-affecting severity, if any. A
+management-affecting alarm is one that cannot be ignored at the indicated
+severity level or higher by using relaxed alarm rules during an orchestrated
+patch or upgrade operation.
+
+.. note::
+
+    Differences exist between the terminology emitted by some alarms and that
+    used in the |CLI|, GUI, and elsewhere in the documentation:
+
+.. _alarm-messages-300s-ul-dsf-dxn-bhb:
+
+- References to provider networks in alarms refer to data networks.
+
+- References to data networks in alarms refer to physical networks.
+
+- References to tenant networks in alarms refer to project networks.
+
 
 .. _alarm-messages-300s-table-zrd-tg5-v5:
 
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.003**
-     - Networking Agent not responding.
-   * - Entity Instance
-     - host=.agent=
-   * - Severity:
-     - M\*
-   * - Proposed Repair Action
-     - If condition persists, attempt to clear issue by administratively locking and unlocking the Host.
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.004**
-     - No enabled compute host with connectivity to provider network.
-   * - Entity Instance
-     - host=.providernet=
-   * - Severity:
-     - M\*
-   * - Proposed Repair Action
-     - Enable compute hosts with required provider network connectivity.
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.005**
-     - Communication failure detected over provider network x% for ranges y% on host z%.
-
-       or
-
-       Communication failure detected over provider network x% on host z%.
-   * - Entity Instance
-     - providernet=.host=
-   * - Severity:
-     - M\*
-   * - Proposed Repair Action
-     - Check neighbor switch port VLAN assignments.
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.010**
-     - ML2 Driver Agent non-reachable
-
-       or
-
-       ML2 Driver Agent reachable but non-responsive
-
-       or
-
-       ML2 Driver Agent authentication failure
-
-       or
-
-       ML2 Driver Agent is unable to sync Neutron database
-   * - Entity Instance
-     - host=.ml2driver=
-   * - Severity:
-     - M\*
-   * - Proposed Repair Action
-     - Monitor and if condition persists, contact next level of support.
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.012**
-     - Openflow Controller connection failed.
-   * - Entity Instance
-     - host=.openflow-controller=
-   * - Severity:
-     - M\*
-   * - Proposed Repair Action
-     - Check cabling and far-end port configuration and status on adjacent equipment.
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.013**
-     - No active Openflow controller connections found for this network.
-
-       or
-
-       One or more Openflow controller connections in disconnected state for this network.
-   * - Entity Instance
-     - host=.openflow-network=
-   * - Severity:
-     - C, M\*
-   * - Proposed Repair Action
-     - host=.openflow-network=
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.015**
-     - No active OVSDB connections found.
-   * - Entity Instance
-     - host=
-   * - Severity:
-     - C\*
-   * - Proposed Repair Action
-     - Check cabling and far-end port configuration and status on adjacent equipment.
-
-.. list-table::
-   :widths: 6 15
-   :header-rows: 0
-
-   * - **Alarm ID: 300.016**
-     - Dynamic routing agent x% lost connectivity to peer y%
-   * - Entity Instance
-     - host=,agent=,bgp-peer=
-   * - Severity:
-     - M\*
-   * - Proposed Repair Action
-     - If condition persists, fix connectivity to peer.
\ No newline at end of file + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | Alarm ID | Description | Severity | Proposed Repair Action | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | Entity Instance ID | + +==========+=====================================================================================+==========+===================================================================================================+ + | 300.003 | Networking Agent not responding. | M\* | If condition persists, attempt to clear issue by administratively locking and unlocking the Host. | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host=.agent= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.004 | No enabled compute host with connectivity to provider network. | M\* | Enable compute hosts with required provider network connectivity. | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host=.providernet= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.005 | Communication failure detected over provider network x% for ranges y% on host z%. | M\* | Check neighbor switch port VLAN assignments. | + | | | | | + | | or | | | + | | | | | + | | Communication failure detected over provider network x% on host z%. | | | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | providernet=.host= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.010 | ML2 Driver Agent non-reachable | M\* | Monitor and if condition persists, contact next level of support. | + | | | | | + | | or | | | + | | | | | + | | ML2 Driver Agent reachable but non-responsive | | | + | | | | | + | | or | | | + | | | | | + | | ML2 Driver Agent authentication failure | | | + | | | | | + | | or | | | + | | | | | + | | ML2 Driver Agent is unable to sync Neutron database | | | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host=.ml2driver= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.012 | Openflow Controller connection failed. 
| M\* | Check cabling and far-end port configuration and status on adjacent equipment. | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host=.openflow-controller= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.013 | No active Openflow controller connections found for this network. | C, M\* | Check cabling and far-end port configuration and status on adjacent equipment. | + | | | | | + | | or | | | + | | | | | + | | One or more Openflow controller connections in disconnected state for this network. | | | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host=.openflow-network= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.015 | No active OVSDB connections found. | C\* | Check cabling and far-end port configuration and status on adjacent equipment. | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | 300.016 | Dynamic routing agent x% lost connectivity to peer y% | M\* | If condition persists, fix connectivity to peer | + + +-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ + | | host=,agent=,bgp-peer= | + +----------+-------------------------------------------------------------------------------------+----------+---------------------------------------------------------------------------------------------------+ diff --git a/doc/source/fault-mgmt/openstack/openstack-fault-management-overview.rst b/doc/source/fault-mgmt/openstack/openstack-fault-management-overview.rst index 6e2faf456..bd2f8244e 100644 --- a/doc/source/fault-mgmt/openstack/openstack-fault-management-overview.rst +++ b/doc/source/fault-mgmt/openstack/openstack-fault-management-overview.rst @@ -14,11 +14,11 @@ This section provides the list of OpenStack related Alarms and Customer Logs that are monitored and reported for the |prod-os| application through the |prod| fault management interfaces. -All Fault Management related interfaces for displaying alarms and logs, +All fault management related interfaces for displaying alarms and logs, suppressing/unsuppressing events, and enabling :abbr:`SNMP (Simple Network Management Protocol)` are available on the |prod| REST APIs, :abbr:`CLIs (Command Line Interfaces)` and/or GUIs. -.. :only: partner +.. only:: partner .. 
include:: ../../_includes/openstack-fault-management-overview.rest \ No newline at end of file diff --git a/doc/source/index.rst b/doc/source/index.rst index 34812186a..940e92e3f 100755 --- a/doc/source/index.rst +++ b/doc/source/index.rst @@ -178,11 +178,23 @@ Backup and restore Container integration --------------------- + .. toctree:: :maxdepth: 2 container_integration/kubernetes/index + + +------- +Updates +------- + +.. toctree:: + :maxdepth: 2 + + updates/index + --------- Reference --------- diff --git a/doc/source/node_management/figures/bek1516655307871.png b/doc/source/node_management/figures/bek1516655307871.png new file mode 100644 index 000000000..56cc2c924 Binary files /dev/null and b/doc/source/node_management/figures/bek1516655307871.png differ diff --git a/doc/source/node_management/figures/jow1452530556357.png b/doc/source/node_management/figures/jow1452530556357.png new file mode 100644 index 000000000..477b79bed Binary files /dev/null and b/doc/source/node_management/figures/jow1452530556357.png differ diff --git a/doc/source/node_management/figures/kho1513370501907.png b/doc/source/node_management/figures/kho1513370501907.png new file mode 100644 index 000000000..89de2972f Binary files /dev/null and b/doc/source/node_management/figures/kho1513370501907.png differ diff --git a/doc/source/node_management/figures/ptj1538163621289.png b/doc/source/node_management/figures/ptj1538163621289.png new file mode 100644 index 000000000..ab890a2c5 Binary files /dev/null and b/doc/source/node_management/figures/ptj1538163621289.png differ diff --git a/doc/source/node_management/kubernetes/node_interfaces/provisioning-sr-iov-vf-interfaces-using-the-cli.rst b/doc/source/node_management/kubernetes/node_interfaces/provisioning-sr-iov-vf-interfaces-using-the-cli.rst index 14e768098..e185144e5 100644 --- a/doc/source/node_management/kubernetes/node_interfaces/provisioning-sr-iov-vf-interfaces-using-the-cli.rst +++ b/doc/source/node_management/kubernetes/node_interfaces/provisioning-sr-iov-vf-interfaces-using-the-cli.rst @@ -93,23 +93,24 @@ For more information, see :ref:`Provision SR-IOV Interfaces using the CLI interface. **drivername** - An optional virtual function driver to use. Valid choices are 'vfio' + An optional virtual function driver to use. Valid choices are |VFIO| and 'netdevice'. The default value is netdevice, which will cause |SRIOV| virtual function interfaces to appear as kernel network devices' in the container. A value of '**vfio**' will cause the device to be - bound to the vfio-pci driver. Vfio based devices will not appear as + bound to the vfio-pci driver. |VFIO| based devices will not appear as kernel network interfaces, but may be used by |DPDK| based applications. .. note:: - - Applications backed by Mellanox AVS should use the - netdevice |VF| driver - - If the driver for the |VF| interface and parent |SRIOV| interface differ, a separate data network should be created for each interface. + .. only:: partner + + .. 
include:: ../../../_includes/provisioning-sr-iov-vf-interfaces-using-the-cli.rest + **networks** A list of data networks that are attached to the interface, delimited by quotes and separated by commas; for example, diff --git a/doc/source/node_management/openstack/adding-compute-nodes-to-an-existing-duplex-system.rst b/doc/source/node_management/openstack/adding-compute-nodes-to-an-existing-duplex-system.rst index 0e0035253..3b8a204db 100644 --- a/doc/source/node_management/openstack/adding-compute-nodes-to-an-existing-duplex-system.rst +++ b/doc/source/node_management/openstack/adding-compute-nodes-to-an-existing-duplex-system.rst @@ -6,15 +6,14 @@ Add Compute Nodes to an Existing Duplex System ============================================== -You can add up to 4 compute nodes to an existing Duplex system by following +You can add up to 6 compute nodes to an existing Duplex system by following the standard procedures for adding compute nodes to a system. .. rubric:: |prereq| -Before adding compute nodes to an existing duplex-direct system, you must -convert the system to use switch-based network connections. +.. only:: partner -.. xbooklink For more information, see |sysconf-doc|: `Converting a Duplex System to Switch-Based Connection `. + .. include:: ../../_includes/adding-compute-nodes-to-an-existing-duplex-system.rest Before adding compute nodes to a duplex system, you can either add and provision platform RAM and CPU cores on the controllers or reallocate RAM and diff --git a/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst b/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst new file mode 100644 index 000000000..7f80904d3 --- /dev/null +++ b/doc/source/node_management/openstack/configuring-a-flavor-to-use-a-generic-pci-device.rst @@ -0,0 +1,134 @@ + +.. vib1596720522530 +.. _configuring-a-flavor-to-use-a-generic-pci-device: + +============================================== +Configure a Flavor to Use a Generic PCI Device +============================================== + +To provide |VM| access to a generic |PCI| passthrough device, you must use a flavor +with an extra specification identifying the device |PCI| alias. + + +The Nova scheduler attempts to schedule the |VM| on a host containing the device. +If no suitable compute node is available, the error **No valid host was found** +is reported. If a suitable compute node is available, then the scheduler +attempts to instantiate the |VM| in a |NUMA| node with direct access to the +device, subject to the **PCI NUMA Affinity** extra specification. + +.. caution:: + + When this extra spec is used, an eligible host |NUMA| node is required for + each virtual |NUMA| node in the instance. If this requirement cannot be met, + the instantiation fails. + +You can use the |os-prod-hor| interface or the |CLI| to add a |PCI| alias +extra specification. From the |os-prod-hor| interface, use the **Custom +Extra Spec** selection in the **Create Flavor Extra Spec** drop-down menu. For +the **Key**, use **pci\_passthrough:alias**. + +.. image:: ../figures/kho1513370501907.png + + + +.. note:: + + To edit the |PCI| alias for a QuickAssist-|SRIOV| device, you can use the + Update Flavor Metadata dialog box accessible from the Flavors page. This + supports editing for a QuickAssist-|SRIOV| |PCI| alias only. It cannot be + used to edit the |PCI| Alias for GPU devices or multiple devices. 
+ + To access the Update Flavor Metadata dialog box, go to the Flavors page, + open the **Edit Flavor** drop-down menu, and then select **Update + Metadata**. + +.. rubric:: |prereq| + +To be available for use by |VMs|, the device must be exposed, and it must also +have a PCI alias. To expose a device, see :ref:`Exposing a Generic PCI Device +Using the CLI ` or :ref:`Expose +a Generic PCI Device for Use by VMs +`. To assign a PCI alias, see +:ref:`Configuring a PCI Alias in Nova ` + +.. rubric:: |proc| + +- Use the :command:`openstack flavor set` command to add the extra spec. + + .. code-block:: none + + ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="pci_alias[:number_of_devices]" + + where + + **** + is the name of the flavor + + **** + is the PCI alias of the device + + .. note:: + + The parameter pci\_passthrough:alias is used for both |PCI| + passthrough devices and |SRIOV| devices. + + Depending on the device type, the following default |PCI| alias options + are available: + + **qat-vf** + Exposes an Intel AV-ICE02 VPN Acceleration Card for |SRIOV| access. + For more information, see :ref:`SR-IOV Encryption Acceleration + `. + + The following device specific options are available for qat-vf: + + qat-dh895xcc-vf + + qat-c62x-vf + + .. note:: + + Due to driver limitations, |PCI| passthrough access for the Intel + AV-ICE02 VPN Acceleration Card \(qat-pf option\) is not + supported. + + **gpu** + Exposes a graphical processing unit \(gpu\) with the |PCI|-SIG + defined class code for 'Display Controller' \(0x03\). + + .. note:: + + On a system with multiple cards that use the same default |PCI| + alias, you must assign and use a unique |PCI| alias for each one. + + **** + is the number of |SRIOV| or |PCI| passthrough devices to expose to the VM + + For example, to make two QuickAssist |SRIOV| devices available to a guest: + + .. code-block:: none + + ~(keystone_admin)$ openstack flavor set --property "pci_passthrough:alias"="qat-dh895xcc-vf:2" + + To make a GPU device available to a guest: + + .. code-block:: none + + ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1" + + + To make a GPU device from a specific vendor available to a guest: + + .. code-block:: none + + ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="nvidia-tesla-p40:1" + + + To make multiple |PCI| devices available, use the following command: + + .. code-block:: none + + ~(keystone_admin)$ openstack flavor set flavor_name --property "pci_passthrough:alias"="gpu:1, qat-c62x-vf:2" + + + diff --git a/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst b/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst new file mode 100644 index 000000000..4947a6671 --- /dev/null +++ b/doc/source/node_management/openstack/configuring-pci-passthrough-ethernet-interfaces.rst @@ -0,0 +1,204 @@ + +.. wjw1596720840345 +.. _configure-pci-passthrough-ethernet-interfaces: + +============================================= +Configure PCI Passthrough Ethernet Interfaces +============================================= + +A passthrough Ethernet interface is a physical |PCI| Ethernet |NIC| on a compute +node to which a virtual machine is granted direct access. This minimizes packet +processing delays but at the same time demands special operational +considerations. + +.. rubric:: |context| + +You can specify interfaces when you launch an instance. + +.. 
rubric:: |prereq| + +.. note:: + + To use |PCI| passthrough or |SRIOV| devices, you must have Intel VT-x and + Intel VT-d features enabled in the BIOS. + +The exercise assumes that the underlying data network **group0-data0** exists +already, and that |VLAN| ID 10 is a valid segmentation ID assigned to +**project1**. + +.. rubric:: |proc| + +#. Log in as the **admin** user to the |os-prod-hor| interface. + +#. Lock the compute node you want to configure. + +#. Configure the Ethernet interface to be used as a PCI passthrough interface. + + + #. Select **Admin** \> **Platform** \> **Host Inventory** from the left-hand pane. + + #. Select the **Hosts** tab. + + #. Click the name of the compute host. + + #. Select the **Interfaces** tab. + + #. Click the **Edit Interface** button associated with the interface you + want to configure. + + + The Edit Interface dialog appears. + + .. image:: ../figures/ptj1538163621289.png + + + + Select **pci-passthrough**, from the **Interface Class** drop-down, and + then select the data network to attach the interface. + + You may also need to change the |MTU|. + + The interface can also be configured from the |CLI| as illustrated below: + + .. code-block:: none + + ~(keystone_admin)$ system host-if-modify -c pci-passthrough compute-0 enp0s3 + ~(keystone_admin)$ system interface-datanetwork-assign compute-0 + +#. Create the **net0** project network + + Select **Admin** \> **Network** \> **Networks**, select the Networks tab, and then click **Create Network**. Fill in the Create Network dialog box as illustrated below. You must ensure that: + + + - **project1** has access to the project network, either assigning it as + the owner, as in the illustration \(using **Project**\), or by enabling + the shared flag. + + - The segmentation ID is set to 10. + + + .. image:: ../figures/bek1516655307871.png + + + + Click the **Next** button to proceed to the Subnet tab. + + Click the **Next** button to proceed to the Subnet Details tab. + +#. Configure the access switch. Refer to the OEM documentation to configure + the access switch. + + Configure the physical port on the access switch used to connect to + Ethernet interface **enp0s3** as an access port with default |VLAN| ID of 10. + Traffic across the connection is therefore untagged, and effectively + integrated into the targeted project network. + + You can also use a trunk port on the access switch so that it handles + tagged packets as well. However, this opens the possibility for guest + applications to join other project networks using tagged packets with + different |VLAN| IDs, which might compromise the security of the system. + See |os-intro-doc|: :ref:`L2 Access Switches + ` for other details regarding the + configuration of the access switch. + +#. Unlock the compute node. + +#. Create a neutron port with a |VNIC| type, direct-physical. + + The neutron port can also be created from the |CLI|, using the following + command. First, you must set up the environment and determine the correct + network |UUID| to use with the port. + + .. code-block:: none + + ~(keystone_admin)$ source /etc/platform/openrc + ~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3 + ~(keystone_admin)$ openstack network list | grep net0 + ~(keystone_admin)$ openstack port create --network --vnic-type direct-physical + + You have now created a port to be used when launching the server in the + next step. + +#. Launch the virtual machine, specifying the port uuid created in *Step 7*. + + .. 
note:: + + You will need to source to the same project selected in the Create + Network 'net0' in *step 4*. + + .. code-block:: none + + ~(keystone_admin)$ openstack server create --flavor --image --nic port-id= + + For more information, see the Neutron documentation at: + `https://docs.openstack.org/neutron/train/admin/config-sriov.html + `__. + +.. rubric:: |result| + +The new virtual machine instance is up now. It has a PCI passthrough connection +to the **net0** project network identified with |VLAN| ID 10. + +.. only:: partner + + .. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest + + :start-after: warning-text-begin + :end-before: warning-text-end + +.. rubric:: |prereq| + +Access switches must be properly configured to ensure that virtual machines +using |PCI|-passthrough or |SRIOV| Ethernet interfaces have the expected +connectivity. In a common scenario, the virtual machine using these interfaces +connects to external end points only, that is, it does not connect to other +virtual machines in the same |prod-os| cluster. In this case: + + +.. _configure-pci-passthrough-ethernet-interfaces-ul-pz2-w4w-rr: + +- Traffic between the virtual machine and the access switch can be tagged or + untagged. + +- The connecting port on the access switch is part of a port-based |VLAN|. + +.. only:: partner + + .. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest + + :start-after: vlan-bullet-1-begin + :end-before: vlan-bullet-1-end + +- The port-based |VLAN| provides the required connectivity to external + switching and routing equipment needed by guest applications to establish + connections to the intended end points. + + +For connectivity to other virtual machines in the |prod-os| cluster the +following configuration is also required: + + +.. _configure-pci-passthrough-ethernet-interfaces-ul-ngs-nvw-rr: + +- The |VLAN| ID used for the project network, 10 in this example, and the + default port |VLAN| ID of the access port on the switch are the same. This + ensures that incoming traffic from the virtual machine is tagged internally by + the switch as belonging to |VLAN| ID 10, and switched to the appropriate exit + ports. + +.. only:: partner + + .. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest + + :start-after: vlan-bullet-2-begin + :end-before: vlan-bullet-2-end + +.. only:: partner + + .. include:: ../../_includes/configuring-pci-passthrough-ethernet-interfaces.rest + + :start-after: vlan-bullet-3-begin + :end-before: vlan-bullet-3-end + + + diff --git a/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst b/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst new file mode 100644 index 000000000..c0f6ef9ed --- /dev/null +++ b/doc/source/node_management/openstack/exposing-a-generic-pci-device-for-use-by-vms.rst @@ -0,0 +1,76 @@ + +.. akw1596720643112 +.. _expose-a-generic-pci-device-for-use-by-vms: + +========================================== +Expose a Generic PCI Device for Use by VMs +========================================== + +You can configure generic |PCI|-passthrough or |SRIOV| devices \(i.e. not network +interface devices/cards\) so that they are accessible to |VMs|. + +.. rubric:: |context| + +.. note:: + + For network cards, you must use the network interface settings to configure + VM access. You can do this from either the |os-prod-hor| interface or + the |CLI|. 
For more information, see :ref:`Configuring PCI Passthrough + Ethernet Interfaces `. + +For generic |PCI|-passthrough or SR-IOV devices, you must + + +.. _expose-a-generic-pci-device-for-use-by-vms-ul-zgb-zpc-fcb: + +- on each host where an instance of the device is installed, enable the + device For this, you can use the |os-prod-hor| interface or the |CLI|. + +- assign a system-wide |PCI| alias to the device. For this, you must use the + |CLI|. + + +To enable devices and assign a |PCI| alias using the |CLI|, see :ref:`Exposing a +Generic PCI Device Using the CLI +`. + +.. rubric:: |prereq| + +To edit a device, you must first lock the host. + +.. rubric:: |proc| + +#. Select the **Devices** tab on the Host Detail page for the host. + +#. Click **Edit Device**. + + .. image:: ../figures/jow1452530556357.png + + +#. Update the information as required. + + **Name** + Sets the system inventory name for the device. + + **Enabled** + Controls whether the device is exposed for use by |VMs|. + +#. Repeat the above steps for other hosts where the same type of device is + installed. + +#. Assign a |PCI| alias. + + The |PCI| alias is a system-wide setting. It is used for all devices of the + same type across multiple hosts. + + For more information, see :ref:`Configuring a PCI Alias in Nova + `. + +.. rubric:: |postreq| + +After completing the steps above, unlock the host. + +To access a device from a |VM|, you must configure a flavor with a reference to +the |PCI| alias. For more information, see :ref:`Configuring a Flavor to Use a +Generic PCI Device `. + diff --git a/doc/source/node_management/openstack/exposing-a-generic-pci-device-using-the-cli.rst b/doc/source/node_management/openstack/exposing-a-generic-pci-device-using-the-cli.rst new file mode 100644 index 000000000..e2700c5c0 --- /dev/null +++ b/doc/source/node_management/openstack/exposing-a-generic-pci-device-using-the-cli.rst @@ -0,0 +1,90 @@ + +.. dxo1596720611892 +.. _exposing-a-generic-pci-device-using-the-cli: + +========================================= +Expose a Generic PCI Device Using the CLI +========================================= + +For generic PCI-passthrough or |SRIOV| devices \(i.e not network interface +devices or cards\), you can configure |VM| access using the |CLI|. + +.. rubric:: |context| + +To expose a device for |VM| access, you must + + +.. _exposing-a-generic-pci-device-using-the-cli-ul-zgb-zpc-fcb: + +- enable the device on each host where it is installed + +- assign a system-wide |PCI| alias to the device. For more information, see + :ref:`Configuring a PCI Alias in Nova `. + +.. rubric:: |prereq| + +To edit a device, you must first lock the host. + +.. rubric:: |proc| + +#. List the non-|NIC| devices on the host for which |VM| access is supported. Use + ``-a`` to list disabled devices. + + .. code-block:: none + + ~(keystone_admin)$ system host-device-list compute-0 -a + +------------+----------+------+-------+-------+------+--------+--------+-----------+---------+ + | name | address | class| vendor| device| class| vendor | device | numa_node | enabled | + | | | id | id | id | | name | name | | | + +------------+----------+------+-------+-------+------+--------+--------+-----------+---------+ + |pci_0000_05.| 0000:05:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True | + |pci_0000_06.| 0000:06:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True | + |pci_0000_00.| 0000:00:.| 0c0. 
| 8086 | 8d2d | USB | Intel | C610/x9| 0 | False | + +------------+----------+------+-------+-------+------+--------+--------+-----------+---------+ + + This list shows the |PCI| address needed to enable a device, and the device + ID and vendor ID needed to add a |PCI| Alias. + +#. On each host where the device is installed, enable the device. + + .. code-block:: none + + ~(keystone_admin)$system host-device-modify + --enable=True [--name=""] + + where + + **** + is the name of the host where the device is installed + + **** + is the address shown in the device list + + **** + is an optional descriptive name for display purposes + + For example: + + .. code-block:: none + + ~(keystone_admin)$ system host-device-modify --name="Encryption1" --enable=True compute-0 0000:09:00.0 + +#. Assign a |PCI| alias. + + The |PCI| alias is a system-wide setting. It is used for all devices of the + same type across multiple hosts. For more information, see + :ref:`Configuring a PCI Alias in Nova `. + + As the change is applied, **Config-out-of-date** alarms are raised. The + alarms are automatically cleared when the change is complete. + +.. rubric:: |result| + +The device is added to the list of available devices. + +.. rubric:: |postreq| + +To access a device from a |VM|, you must configure a flavor with a reference to +the |PCI| alias. For more information, see :ref:`Configuring a Flavor to Use a +Generic PCI Device `. + diff --git a/doc/source/node_management/openstack/generic-pci-passthrough.rst b/doc/source/node_management/openstack/generic-pci-passthrough.rst new file mode 100644 index 000000000..58940c5e8 --- /dev/null +++ b/doc/source/node_management/openstack/generic-pci-passthrough.rst @@ -0,0 +1,71 @@ + +.. dze1596720804160 +.. _generic-pci-passthrough: + +======================= +Generic PCI Passthrough +======================= + +.. rubric:: |prereq| + +Before you can enable a device, you must lock the compute host. + +If you want to enable a device that is in the inventory for pci-passthrough, +the device must be enabled and a Nova |PCI| Alias must be configured with +vendor-id, product-id and alias name. + +You can use the following command from the |CLI|, to view devices that are +automatically inventoried on a host: + +.. code-block:: none + + ~(keystone_admin)$ system host-device-list controller-0 --all + + +You can use the following command from the |CLI| to list the devices for a +host, for example: + +.. code-block:: none + + ~(keystone_admin)$ system host-device-list --all controller-0 + +-------------+----------+------+-------+-------+------+--------+--------+-------------+-------+ + | name | address | class| vendor| device| class| vendor | device | numa_node |enabled| + | | | id | id | id | | name | name | | | + +------------+----------+-------+-------+-------+------+--------+--------+-------------+-------+ + | pci_0000_05.| 0000:05:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True | + | pci_0000_06.| 0000:06:.| 030. | 10de | 13f2 | VGA. | NVIDIA.| GM204GL| 0 | True | + +-------------+----------+------+-------+-------+------+--------+--------+-------------+-------+ + +The ``--alloption`` displays both enabled and disabled devices. + +.. note:: + + Depending on the system, not all devices in this list can be accessed via + pci-passthrough, based on hardware/driver limitations. + +To enable or disable a device using the |CLI|, do the following: + +.. rubric:: |prereq| + +To edit a device, you must first lock the host. + +.. rubric:: |proc| + +#. Enable the device. + + .. 
code-block:: none + + ~(keystone_admin)$ system host-device-modify + --enable=True + +#. Add a |PCI| alias. + + For more information, see :ref:`Configuring a PCI Alias in Nova + `. + +.. rubric:: |postreq| + +Refer to :ref:`Configuring a Flavor to Use a Generic PCI Device +` for details on how to +launch the |VM| with a |PCI| interface to this Generic |PCI| Device. + diff --git a/doc/source/node_management/openstack/index.rst b/doc/source/node_management/openstack/index.rst index 70ea170df..f714cc0f1 100644 --- a/doc/source/node_management/openstack/index.rst +++ b/doc/source/node_management/openstack/index.rst @@ -5,5 +5,24 @@ Contents .. toctree:: :maxdepth: 1 + node-management-overview adding-compute-nodes-to-an-existing-duplex-system using-labels-to-identify-openstack-nodes + + +------------------------- +PCI Device Access for VMs +------------------------- + +.. toctree:: + :maxdepth: 1 + + sr-iov-encryption-acceleration + configuring-pci-passthrough-ethernet-interfaces + pci-passthrough-ethernet-interface-devices + configuring-a-flavor-to-use-a-generic-pci-device + generic-pci-passthrough + pci-device-access-for-vms + pci-sr-iov-ethernet-interface-devices + exposing-a-generic-pci-device-for-use-by-vms + exposing-a-generic-pci-device-using-the-cli \ No newline at end of file diff --git a/doc/source/node_management/openstack/node-management-overview.rst b/doc/source/node_management/openstack/node-management-overview.rst new file mode 100644 index 000000000..b9567c07b --- /dev/null +++ b/doc/source/node_management/openstack/node-management-overview.rst @@ -0,0 +1,21 @@ + +.. zmd1590003300772 +.. _node-management-overview: + +======== +Overview +======== + +You can add OpenStack compute nodes to an existing |AIO| Duplex system, and use +labels to identify OpenStack Nodes. + +Guidelines for |VMs| in a Duplex system remain unchanged. + +For more information on using labels to identify OpenStack Nodes, see +:ref:`Using Labels to Identify OpenStack Nodes +`. + +For more information on adding compute nodes to an existing Duplex system, see +:ref:`Adding Compute Nodes to an Existing Duplex System +`. + diff --git a/doc/source/node_management/openstack/pci-device-access-for-vms.rst b/doc/source/node_management/openstack/pci-device-access-for-vms.rst new file mode 100644 index 000000000..32f269254 --- /dev/null +++ b/doc/source/node_management/openstack/pci-device-access-for-vms.rst @@ -0,0 +1,64 @@ + +.. sip1596720928269 +.. _pci-device-access-for-vms: + +========================= +PCI Device Access for VMs +========================= + +You can provide |VMs| with |PCI| passthrough or |SRIOV| access to network interface +cards and other |PCI| devices. + +.. note:: + + To use |PCI| passthrough or |SRIOV| devices, you must have Intel-VTx and + Intel VT-d features enabled in the BIOS. + +.. note:: + + When starting a |VM| where interfaces have **binding\_vif\_type**, the + following parameter is required for the |VM| flavor, hw:mem\_page\_size=large + enabled + + where, page size is one of the following: + + +.. _pci-device-access-for-vms-ul-cz3-mtd-z4b: + +- small: Requests the smallest available size on the compute node, which + is always 4KiB of regular memory. + +- large: Requests the largest available huge page size, 1GiB or 2MiB. + +- any: Requests any available size, including small pages. Cloud platform + uses the largest available size, 1GiB, then 2MiB, and then 4KiB. + + +For a network interface card, you can provide |VM| access by configuring the +network interface. 
For more information, see :ref:`Configuring PCI Passthrough +Ethernet Interfaces `. + +For other types of device, you can provide |VM| access by assigning a |PCI| alias +to the device, and then referencing the |PCI| alias in a flavor extra +specification. For more information, see :ref:`Expose a Generic PCI Device +for Use by VMs ` and +:ref:`Configuring a Flavor to Use a Generic PCI Device +`. + +- :ref:`PCI Passthrough Ethernet Interface Devices ` + +- :ref:`Configuring PCI Passthrough Ethernet Interfaces ` + +- :ref:`PCI SR-IOV Ethernet Interface Devices ` + +- :ref:`Generic PCI Passthrough ` + +- :ref:`SR-IOV Encryption Acceleration ` + +- :ref:`Expose a Generic PCI Device for Use by VMs ` + +- :ref:`Exposing a Generic PCI Device Using the CLI ` + +- :ref:`Configure a Flavor to Use a Generic PCI Device ` + + diff --git a/doc/source/node_management/openstack/pci-passthrough-ethernet-interface-devices.rst b/doc/source/node_management/openstack/pci-passthrough-ethernet-interface-devices.rst new file mode 100644 index 000000000..45cafda2e --- /dev/null +++ b/doc/source/node_management/openstack/pci-passthrough-ethernet-interface-devices.rst @@ -0,0 +1,64 @@ + +.. pqu1596720884619 +.. _pci-passthrough-ethernet-interface-devices: + +========================================== +PCI Passthrough Ethernet Interface Devices +========================================== + +For all purposes, a |PCI| passthrough interface behaves as if it were physically +attached to the virtual machine. + +Therefore, any potential throughput limitations coming from the virtualized +environment, such as the ones introduced by internal copying of data buffers, +are eliminated. However, by bypassing the virtualized environment, the use of +|PCI| passthrough Ethernet devices introduces several restrictions that must be +taken into consideration. They include: + + +.. _pci-passthrough-ethernet-interface-devices-ul-mjs-m52-tp: + +- no support for |LAG|, |QoS|, |ACL|, or host interface monitoring + +- no support for live migration + +.. only:: partner + + .. include:: ../../_includes/pci-passthrough-ethernet-interface-devices.rest + + :start-after: avs-bullet-3-begin + :end-before: avs-bullet-3-end + +.. only:: starlingx + + A passthrough interface is attached directly to the provider network's + access switch. Therefore, proper routing of traffic to connect the + passthrough interface to a particular project network depends entirely on + the |VLAN| tagging options configured on both the passthrough interface and + the access port on the switch. + +.. only:: partner + + .. include:: ../../_includes/pci-passthrough-ethernet-interface-devices.rest + + :start-after: avs-text-begin + :end-before: avs-text-end + + +The access switch routes incoming traffic based on a |VLAN| ID, which ultimately +determines the project network to which the traffic belongs. The |VLAN| ID is +either explicit, as found in incoming tagged packets, or implicit, as defined +by the access port's default |VLAN| ID when the incoming packets are untagged. In +both cases the access switch must be configured to process the proper |VLAN| ID, +which therefore has to be known in advance. + +.. caution:: + + On cold migration, a |PCI| passthrough interface receives a new |MAC| address, + and therefore a new **eth** x interface. The IP address is retained. + +In the following example a new virtual machine is launched by user **user1** on +project **project1**, with a passthrough interface connected to the project +network **net0** identified with |VLAN| ID 10. 
See :ref:`Configure PCI +Passthrough ethernet Interfaces ` + diff --git a/doc/source/node_management/openstack/pci-sr-iov-ethernet-interface-devices.rst b/doc/source/node_management/openstack/pci-sr-iov-ethernet-interface-devices.rst new file mode 100644 index 000000000..8f38b9be3 --- /dev/null +++ b/doc/source/node_management/openstack/pci-sr-iov-ethernet-interface-devices.rst @@ -0,0 +1,61 @@ + +.. vic1596720744539 +.. _pci-sr-iov-ethernet-interface-devices: + +===================================== +PCI SR-IOV Ethernet Interface Devices +===================================== + +A |SRIOV| ethernet interface is a physical |PCI| ethernet |NIC| that implements +hardware-based virtualization mechanisms to expose multiple virtual network +interfaces that can be used by one or more virtual machines simultaneously. + +The |PCI|-SIG Single Root I/O Virtualization and Sharing \(|SRIOV|\) specification +defines a standardized mechanism to create individual virtual ethernet devices +from a single physical ethernet interface. For each exposed virtual ethernet +device, formally referred to as a Virtual Function \(VF\), the |SRIOV| interface +provides separate management memory space, work queues, interrupts resources, +and |DMA| streams, while utilizing common resources behind the host interface. +Each VF therefore has direct access to the hardware and can be considered to be +an independent ethernet interface. + +When compared with a |PCI| Passthrough ethernet interface, a |SRIOV| ethernet +interface: + + +.. _pci-sr-iov-ethernet-interface-devices-ul-tyq-ymg-rr: + +- Provides benefits similar to those of a |PCI| Passthrough ethernet interface, + including lower latency packet processing. + +- Scales up more easily in a virtualized environment by providing multiple + VFs that can be attached to multiple virtual machine interfaces. + +- Shares the same limitations, including the lack of support for |LAG|, |QoS|, + |ACL|, and live migration. + +- Has the same requirements regarding the |VLAN| configuration of the access + switches. + +- Provides a similar configuration workflow when used on |prod-os|. + + +The configuration of a |PCI| |SRIOV| ethernet interface is identical to +:ref:`Configure PCI Passthrough ethernet Interfaces +` except that + + +.. _pci-sr-iov-ethernet-interface-devices-ul-ikt-nvz-qmb: + +- you use **pci-sriov** instead of **pci-passthrough** when defining the + network type of an interface + +- the segmentation ID of the project network\(s\) used is more significant + here since this identifies the particular |VF| of the |SRIOV| interface + +- when creating the neutron port, you must use ``--vnic-typedirect`` + +- when creating a neutron port backed by an |SRIOV| |VF|, you must use + ``--vnic-type direct`` + + diff --git a/doc/source/node_management/openstack/sr-iov-encryption-acceleration.rst b/doc/source/node_management/openstack/sr-iov-encryption-acceleration.rst new file mode 100644 index 000000000..098dacc2e --- /dev/null +++ b/doc/source/node_management/openstack/sr-iov-encryption-acceleration.rst @@ -0,0 +1,33 @@ + +.. psa1596720683716 +.. _sr-iov-encryption-acceleration: + +============================== +SR-IOV Encryption Acceleration +============================== + +|prod-os| supports |PCI| |SRIOV| access for encryption acceleration. + +|prod-os| supports |SRIOV| access for acceleration devices based on +Intel QuickAssist™ technology, specifically Coleto Creek 8925/8950, and C62X +chipset. Other QuickAssist™ devices are currently not supported. 
+ +If acceleration devices have to be used, the devices have to be present as +virtual devices \(qat-dh895xcc-vfor qat-c62x-vf\) on the |PCI| bus. Physical +devices \(qat-pf\) are currently not supported. + +If hardware is present \(for example, Intel AV-ICE02 VPN Acceleration Card\) on +an available host, you can provide |VMs| with |PCI| passthrough access to one or +more of the supported virtual |SRIOV| acceleration devices to improve +performance for encrypted communications. + +.. caution:: + Live migration is not supported for instances using |SRIOV| devices. + +To expose the device to |VMs|, see :ref:`Exposing a Generic PCI Device for Use +by VMs `. + +.. note:: + To use |PCI| passthrough or |SRIOV| devices, you must have Intel VT-x and + Intel VT-d features enabled in the BIOS. + diff --git a/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst b/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst index fe5f78e51..3a4df182b 100644 --- a/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst +++ b/doc/source/node_management/openstack/using-labels-to-identify-openstack-nodes.rst @@ -2,25 +2,53 @@ .. rho1557409702625 .. _using-labels-to-identify-openstack-nodes: -====================================== +======================================== Use Labels to Identify OpenStack Nodes -====================================== +======================================== The |prod-os| application is deployed on the nodes of the |prod| based on node labels. +.. rubric:: |context| + Prior to initially installing the |prod-os| application or when adding nodes to a |prod-os| deployment, you need to label the nodes appropriately for their OpenStack role. .. _using-labels-to-identify-openstack-nodes-table-xyl-qmy-thb: -.. Common OpenStack labels -.. include:: ../../_includes/common-openstack-labels.rest -For more information, see |node-doc|: :ref:`Configure Node Labels from The CLI -`. +.. only:: starlingx + + .. table:: Table 1. Common OpenStack Labels + :widths: auto + + +-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | Label | Worker/Controller | Description | + +=============================+===========================+=======================================================================================================================================================================+ + | **openstack-control-plane** | - Controller | Identifies a node to deploy openstack controller services on. | + | | | | + | | - All-in-One Controller | | + +-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | **openstack-compute-node** | Worker | Identifies a node to deploy openstack compute agents on. | + | | | | + | | | .. note:: | + | | | Adding or removing this label, or removing a node with this label from a cluster, triggers the regeneration and application of the helm chart override by Armada. 
| + +-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + | **sriov** | - Worker | Identifies a node as supporting sr-iov. | + | | | | + | | - All-in-One Controller | | + +-----------------------------+---------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+ + +.. only:: partner + + .. include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest + + :start-after: table-1-of-contents-begin + :end-before: table-1-of-contents-end + +For more information. see |node-doc|: :ref:`Configuring Node Labels from The CLI `. .. rubric:: |prereq| @@ -28,4 +56,40 @@ Nodes must be locked before labels can be assigned or removed. .. rubric:: |proc| -.. include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest +.. only:: starlingx + + #. To assign Kubernetes labels to identify compute-0 as a compute node with and SRIOV, use the following command: + + .. code-block:: none + + ~(keystone)admin)$ system host-label-assign compute-0 openstack-compute-node=enabled sriov=enabled + +-------------+--------------------------------------+ + | Property | Value | + +-------------+--------------------------------------+ + | uuid | 2909d775-cd6c-4bc1-8268-27499fe38d5e | + | host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 | + | label_key | openstack-compute-node | + | label_value | enabled | + +-------------+--------------------------------------+ + +-------------+--------------------------------------+ + | Property | Value | + +-------------+--------------------------------------+ + | uuid | d8e29e62-4173-4445-886c-9a95b0d6fee1 | + | host_uuid | 1f00d8a4-f520-41ee-b608-1b50054b1cd8 | + | label_key | sriov | + | label_value | enabled | + +-------------+--------------------------------------+ + + #. To remove the labels from the host, do the following. + + .. code-block:: none + + ~(keystone)admin)$ system host-label-remove compute-0 openstack-compute-node sriov + Deleted host label openstack-compute-node for host compute-0 + Deleted host label sriov for host compute-0 + +.. only:: partner + + .. include:: ../../_includes/using-labels-to-identify-openstack-nodes.rest + + :start-after: table-1-of-contents-end diff --git a/doc/source/planning/openstack/block-storage-for-virtual-machines.rst b/doc/source/planning/openstack/block-storage-for-virtual-machines.rst index a7312dfda..e5feeb2dc 100755 --- a/doc/source/planning/openstack/block-storage-for-virtual-machines.rst +++ b/doc/source/planning/openstack/block-storage-for-virtual-machines.rst @@ -31,12 +31,12 @@ You can allocate root disk storage for virtual machines using the following: The use of Cinder volumes or ephemeral storage is determined by the **Instance Boot Source** setting when an instance is launched. Boot from volume results in the use of a Cinder volume, while Boot from image results in the use of -ephemeral storage. +Ephemeral storage. .. note:: On systems with one or more single-disk compute hosts configured with local instance backing, the use of Boot from volume for all |VMs| is strongly - recommended. This helps prevent the use of local ephemeral storage on these + recommended. This helps prevent the use of local Ephemeral storage on these hosts. 
On systems without dedicated storage hosts, Cinder-backed persistent storage @@ -52,27 +52,27 @@ On systems with dedicated hosts, Cinder storage is provided using Ceph-backed Ephemeral and Swap Disk Storage for VMs --------------------------------------- -Storage for |VM| ephemeral and swap disks, and for ephemeral boot disks if the +Storage for |VM| Ephemeral and swap disks, and for Ephemeral boot disks if the |VM| is launched from an image rather than a volume, is provided using the **nova-local** local volume group defined on compute hosts. -The **nova-local** group provides either local ephemeral storage using -|CoW|-image-backed storage resources on compute hosts, or remote ephemeral +The **nova-local** group provides either local Ephemeral storage using +|CoW|-image-backed storage resources on compute hosts, or remote Ephemeral storage, using Ceph-backed resources on storage hosts. You must configure the storage backing type at installation before you can unlock a compute host. The -default type is image-backed local ephemeral storage. You can change the +default type is image-backed local Ephemeral storage. You can change the configuration after installation. .. xbooklink For more information, see |stor-doc|: :ref:`Working with Local Volume Groups `. .. caution:: - On a compute node with a single disk, local ephemeral storage uses the root + On a compute node with a single disk, local Ephemeral storage uses the root disk. This can adversely affect the disk I/O performance of the host. To avoid this, ensure that single-disk compute nodes use remote Ceph-backed storage if available. If Ceph storage is not available on the system, or is not used for one or more single-disk compute nodes, then you must ensure that all VMs on the system are booted from Cinder volumes and do not use - ephemeral or swap disks. + Ephemeral or swap disks. On |prod-os| Simplex or Duplex systems that use a single disk, the same consideration applies. Since the disk also provides Cinder support, adverse @@ -83,11 +83,11 @@ The backing type is set individually for each host using the **Instance Backing** parameter on the **nova-local** local volume group. **Local CoW Image backed** - This provides local ephemeral storage using a |CoW| sparse-image-format + This provides local Ephemeral storage using a |CoW| sparse-image-format backend, to optimize launch and delete performance. **Remote RAW Ceph storage backed** - This provides remote ephemeral storage using a Ceph backend on a system + This provides remote Ephemeral storage using a Ceph backend on a system with storage nodes, to optimize migration capabilities. Ceph backing uses a Ceph storage pool configured from the storage host resources. @@ -96,17 +96,18 @@ storage by setting a flavor extra specification. .. xbooklink For more information, see OpenStack Configuration and Management: :ref:`Specifying the Storage Type for VM Ephemeral Disks `. -.. _block-storage-for-virtual-machines-d29e17: - .. caution:: - Unlike Cinder-based storage, ephemeral storage does not persist if the + Unlike Cinder-based storage, Ephemeral storage does not persist if the instance is terminated or the compute node fails. + + .. _block-storage-for-virtual-machines-d29e17: - In addition, for local ephemeral storage, migration and resizing support + In addition, for local Ephemeral storage, migration and resizing support depends on the storage backing type specified for the instance, as well as the boot source selected at launch. 
The **nova-local** storage type affects migration behavior. Live migration is -not always supported for |VM| disks using local ephemeral storage. +not always supported for |VM| disks using local Ephemeral storage. -.. xbooklink For more information, see :ref:`VM Storage Settings for Migration, Resize, or Evacuation `. +.. xbooklink For more information, see :ref:`VM Storage Settings for Migration, + Resize, or Evacuation `. diff --git a/doc/source/planning/openstack/index.rst b/doc/source/planning/openstack/index.rst index 3e738889b..d96ba0645 100755 --- a/doc/source/planning/openstack/index.rst +++ b/doc/source/planning/openstack/index.rst @@ -25,6 +25,7 @@ Data networks network-planning-data-networks physical-network-planning + resource-planning .. toctree:: :maxdepth: 1 diff --git a/doc/source/planning/openstack/resource-planning.rst b/doc/source/planning/openstack/resource-planning.rst new file mode 100644 index 000000000..cc733acde --- /dev/null +++ b/doc/source/planning/openstack/resource-planning.rst @@ -0,0 +1,84 @@ + +.. jow1454003783557 +.. _resource-planning: + +================== +Resource Placement +================== + +.. only:: starlingx + + For |VMs| requiring maximum determinism and throughput, the |VM| must be + placed in the same NUMA node as all of its resources, including |VM| + memory, |NICs|, and any other resource such as |SRIOV| or |PCI|-Passthrough + devices. + + VNF 1 and VNF 2 in the example figure are examples of |VMs| deployed for + maximum throughput with |SRIOV|. + +.. only:: starlingx + + A |VM| such as VNF 6 in NUMA-REF will not have the same performance as VNF + 1 and VNF 2. There are multiple ways to maximize performance for VNF 6 in + this case: + +.. From NUMA-REF +.. xbooklink :ref:`VM scheduling and placement - NUMA + architecture ` + +.. only:: partner + + .. include:: ../../_includes/resource-planning.rest + + :start-after: avs-text-1-begin + :end-before: avs-text-2-end + +.. only:: partner + + .. include:: ../../_includes/resource-planning.rest + + :start-after: avs-text-2-begin + :end-before: avs-text-2-end + + +.. _resource-planning-ul-tcb-ssz-55: + +.. only:: partner + + .. include:: ../../_includes/resource-planning.rest + + :start-after: avs-text-1-end + + +If accessing |PCIe| devices directly from a |VM| using |PCI|-Passthrough or +|SRIOV|, maximum performance can only be achieved by pinning the |VM| cores +to the same NUMA node as the |PCIe| device. For example, VNF1 and VNF2 +will have optimum SR-IOV performance if deployed on NUMA node 0 and VNF6 +will have maximum |PCI|-Passthrough performance if deployed in NUMA node 1. +Options for controlling access to |PCIe| devices are: + + +.. _resource-planning-ul-ogh-xsz-55: + +- Use pci\_numa\_affinity flavor extra specs to force VNF6 to be scheduled on + NUMA nodes where the |PCI| device is running. This is the recommended option + because it does not require prior knowledge of which socket a |PCI| device + resides on. The affinity may be **strict** or **prefer**: + + +- **Strict** affinity guarantees scheduling on the same NUMA node as a + |PCIe| Device or the VM will not be scheduled. + +- **Prefer** affinity uses best effort so it will only schedule the VM on + a NUMA node if no NUMA nodes with that |PCIe| device are available. Note + that prefer mode does not provide the same performance or determinism + guarantees as strict, but may be good enough for some applications. + + +- Pin the VM to the NUMA node 0 with the |PCI| device using flavor extra + specs or image properties. 
This will force the scheduler to schedule the VM + on NUMA node 0. However, this requires knowledge of which cores the + applicable |PCIe| devices run on and does not work well unless all nodes + have that type of |PCIe| node attached to the same socket. + + diff --git a/doc/source/planning/openstack/storage-configuration-storage-on-hosts.rst b/doc/source/planning/openstack/storage-configuration-storage-on-hosts.rst index 52725367b..0b347654b 100755 --- a/doc/source/planning/openstack/storage-configuration-storage-on-hosts.rst +++ b/doc/source/planning/openstack/storage-configuration-storage-on-hosts.rst @@ -17,14 +17,14 @@ Storage on controller hosts .. _storage-configuration-storage-on-controller-hosts: The controllers provide storage for the |prod-os|'s OpenStack Controller -Services through a combination of local container ephemeral disk, |PVCs| backed +Services through a combination of local container Ephemeral disk, |PVCs| backed by Ceph and a containerized HA mariadb deployment. On systems configured for controller storage with a small Ceph cluster on the master/controller nodes, they also provide persistent block storage for persistent |VM| volumes \(Cinder\), storage for |VM| images \(Glance\), and -storage for |VM| remote ephemeral volumes \(Nova\). On All-in-One Simplex or -Duplex systems, the controllers also provide nova-local storage for ephemeral +storage for |VM| remote Ephemeral volumes \(Nova\). On All-in-One Simplex or +Duplex systems, the controllers also provide nova-local storage for Ephemeral |VM| volumes. On systems configured for controller storage, the master/controller's root disk @@ -51,7 +51,7 @@ Glance, Cinder, and remote Nova storage On systems configured for controller storage with a small Ceph cluster on the master/controller nodes, this small Ceph cluster on the controller provides Glance image storage, Cinder block storage, Cinder backup storage, and Nova -remote ephemeral block storage. For more information, see :ref:`Block Storage +remote Ephemeral block storage. For more information, see :ref:`Block Storage for Virtual Machines `. .. _storage-configuration-storage-on-controller-hosts-section-N101BB-N10029-N10001: @@ -61,7 +61,7 @@ Nova-local storage ****************** Controllers on |prod-os| Simplex or Duplex systems incorporate the Compute -function, and therefore provide **nova-local** storage for ephemeral disks. On +function, and therefore provide **nova-local** storage for Ephemeral disks. On other systems, **nova-local** storage is provided by compute hosts. For more information about this type of storage, see :ref:`Storage on Compute Hosts ` and :ref:`Block Storage for Virtual Machines @@ -78,17 +78,16 @@ Storage on storage hosts ------------------------ |prod-os| creates default Ceph storage pools for Glance images, Cinder volumes, -Cinder backups, and Nova ephemeral data. - -.. xbooklink For more information, see the :ref:`Platform Storage Configuration ` guide for details on configuring the internal Ceph cluster on either controller or storage hosts. - -.. _storage-on-compute-hosts: +Cinder backups, and Nova Ephemeral data. For more information, see the +:ref:`Platform Storage Configuration ` +guide for details on configuring the internal Ceph cluster on either controller +or storage hosts. ------------------------ Storage on compute hosts ------------------------ -Compute-labelled worker hosts can provide ephemeral storage for |VM| disks. +Compute-labelled worker hosts can provide Ephemeral storage for |VM| disks. .. 
note:: On All-in-One Simplex or Duplex systems, compute storage is provided using diff --git a/doc/source/planning/openstack/vm-storage-settings-for-migration-resize-or-evacuation.rst b/doc/source/planning/openstack/vm-storage-settings-for-migration-resize-or-evacuation.rst index a78c94f9c..506a2e3c3 100755 --- a/doc/source/planning/openstack/vm-storage-settings-for-migration-resize-or-evacuation.rst +++ b/doc/source/planning/openstack/vm-storage-settings-for-migration-resize-or-evacuation.rst @@ -7,7 +7,7 @@ VM Storage Settings for Migration, Resize, or Evacuation ======================================================== The migration, resize, or evacuation behavior for an instance depends on the -type of ephemeral storage used. +type of Ephemeral storage used. .. note:: Live migration behavior can also be affected by flavor extra @@ -16,6 +16,7 @@ type of ephemeral storage used. The following table summarizes the boot and local storage configurations needed to support various behaviors. + .. _vm-storage-settings-for-migration-resize-or-evacuation-table-wmf-qdh-v5: .. table:: @@ -43,16 +44,16 @@ to support various behaviors. In addition to the behavior summarized in the table, system-initiated cold migrate \(e.g. when locking a host\) and evacuate restrictions may be applied -if a |VM| with a large root disk size exists on the host. For a Local |CoW| -Image Backed \(local\_image\) storage type, the VIM can cold migrate or -evacuate |VMs| with disk sizes up to 60 GB +if a |VM| with a large root disk size exists on the host. For a Local CoW Image +Backed \(local\_image\) storage type, the VIM can cold migrate or evacuate +|VMs| with disk sizes up to 60 GB .. note:: The criteria for live migration are independent of disk size. .. note:: The **Local Storage Backing** is a consideration only for instances that - use local ephemeral or swap disks. + use local Ephemeral or swap disks. The boot configuration for an instance is determined by the **Instance Boot Source** selected at launch. diff --git a/doc/source/security/index.rst b/doc/source/security/index.rst index 7e7c84e18..ae30964c9 100644 --- a/doc/source/security/index.rst +++ b/doc/source/security/index.rst @@ -32,3 +32,14 @@ Kubernetes :maxdepth: 2 kubernetes/index + +--------- +OpenStack +--------- + +.. check what put here + +.. toctree:: + :maxdepth: 2 + + openstack/index \ No newline at end of file diff --git a/doc/source/security/openstack/access-using-the-default-set-up.rst b/doc/source/security/openstack/access-using-the-default-set-up.rst new file mode 100644 index 000000000..575e9215d --- /dev/null +++ b/doc/source/security/openstack/access-using-the-default-set-up.rst @@ -0,0 +1,22 @@ + +.. rhv1589993884379 +.. _access-using-the-default-set-up: + +=============================== +Access Using the Default Set-up +=============================== + +Upon installation, you can access the system using either the local |CLI| via +the local console and/or ssh, and |os-prod-hor-long|, the WRO administrative +web service. + +For details on the local |CLI|, see :ref:`Use Local CLIs `. + +For convenience, the |prod-os| administrative web service, Horizon, is +initially made available on node port 31000, i.e. at URL +http://:31000. + +After setting the domain name, see :ref:`Configure Remote CLIs +`, |os-prod-hor-long| is accessed by a +different URL. 
+ diff --git a/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst b/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst new file mode 100644 index 000000000..3ec01e05c --- /dev/null +++ b/doc/source/security/openstack/config-and-management-using-container-backed-remote-clis-and-clients.rst @@ -0,0 +1,113 @@ + +.. jcc1605727727548 +.. _config-and-management-using-container-backed-remote-clis-and-clients: + +============================================ +Use Container-backed Remote CLIs and Clients +============================================ + +Remote openstack |CLIs| can be used in any shell after sourcing the generated +remote |CLI|/client RC file. This RC file sets up the required environment +variables and aliases for the remote |CLI| commands. + +.. rubric:: |context| + +.. note:: + If you specified repositories that require authentication when configuring + the container-backed remote |CLIs|, you must perform a :command:`docker + login` to that repository before using remote |CLIs| for the first time + +.. rubric:: |prereq| + +.. _config-and-management-using-container-backed-remote-clis-and-clients-ul-lgr-btf-14b: + +- Consider adding the following command to your .login or shell rc file, such + that your shells will automatically be initialized with the environment + variables and aliases for the remote |CLI| commands. Otherwise, execute it before + proceeding: + + .. code-block:: none + + root@myclient:/home/user/remote_cli_wd# source remote_client_platform.sh + +- You must have completed the configuration steps in :ref:`Configure Remote + CLIs ` before proceeding. + +.. rubric:: |proc| + +- Test workstation access to the remote OpenStack |CLI|. + + Enter your OpenStack password when prompted. + + .. note:: + The first usage of a command will be slow as it requires that the + docker image supporting the remote clients be pulled from the remote + registry. + + .. 
code-block:: none + + root@myclient:/home/user/remote_cli_wd# source remote_client_openstack.sh + Please enter your OpenStack Password for project admin as user admin: + root@myclient:/home/user/remote_cli_wd# openstack endpoint list + +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+ + | ID | Region | Service Name | Service Type | Enabled | Interface | URL | + +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+ + | 0342460b877d4d0db407580a2bb13733 | RegionOne | glance | image | True | internal | http://glance.openstack.svc.cluster.local/ | + | 047a2a63a53a4178b8ae1d093487e99e | RegionOne | keystone | identity | True | internal | http://keystone.openstack.svc.cluster.local/v3 | + | 05d5d4bbffb842fea0f81c9eb2784f05 | RegionOne | keystone | identity | True | public | http://keystone.openstack.svc.cluster.local/v3 | + | 07195197201441f9b065dde45c94ef2b | RegionOne | keystone | identity | True | admin | http://keystone.openstack.svc.cluster.local/v3 | + | 0f5c6d0bc626409faedb207b84998e74 | RegionOne | heat-cfn | cloudformation | True | admin | http://cloudformation.openstack.svc.cluster.local/v1 | + | 16806fa22ca744298e5a7ce480bcb885 | RegionOne | cinderv2 | volumev2 | True | admin | http://cinder.openstack.svc.cluster.local/v2/%(tenant_id)s | + | 176cd2168303457fbaf24fca96c6195e | RegionOne | neutron | network | True | admin | http://neutron.openstack.svc.cluster.local/ | + | 21bd7488f8e44a9787f7b3301e666da8 | RegionOne | heat | orchestration | True | admin | http://heat.openstack.svc.cluster.local/v1/%(project_id)s | + | 356fa0758af44a72adeec421ccaf2f2a | RegionOne | nova | compute | True | admin | http://nova.openstack.svc.cluster.local/v2.1/%(tenant_id)s | + | 35a42c23cb8841958885b8b01defa839 | RegionOne | fm | faultmanagement | True | admin | http://fm.openstack.svc.cluster.local/ | + | 37dfe2902a834efdbdcd9f2b9cf2c6e7 | RegionOne | cinder | volume | True | internal | http://cinder.openstack.svc.cluster.local/v1/%(tenant_id)s | + | 3d94abf91e334a74bdb01d8fad455a38 | RegionOne | cinderv2 | volumev2 | True | public | http://cinder.openstack.svc.cluster.local/v2/%(tenant_id)s | + | 433f1e8860ff4d57a7eb64e6ae8669bd | RegionOne | cinder | volume | True | public | http://cinder.openstack.svc.cluster.local/v1/%(tenant_id)s | + | 454b21f41806464580a1f6290cb228ec | RegionOne | placement | placement | True | public | http://placement.openstack.svc.cluster.local/ | + | 561be1aa00da4e4fa64791110ed99852 | RegionOne | heat-cfn | cloudformation | True | public | http://cloudformation.openstack.svc.cluster.local/v1 | + | 6068407def6b4a38b862c89047319f77 | RegionOne | cinderv3 | volumev3 | True | admin | http://cinder.openstack.svc.cluster.local/v3/%(tenant_id)s | + | 77e886bc903a4484a25944c1e99bdf1f | RegionOne | nova | compute | True | internal | http://nova.openstack.svc.cluster.local/v2.1/%(tenant_id)s | + | 7c3e0ce3b69d45878c1152473719107c | RegionOne | fm | faultmanagement | True | internal | http://fm.openstack.svc.cluster.local/ | + +----------------------------------+-----------+--------------+-----------------+---------+-----------+------------------------------------------------------------+ + root@myclient:/home/user/remote_cli_wd# openstack volume list --all-projects + +--------------------------------------+-----------+-----------+------+-------------+ + | ID | Name | 
Status    | Size | Attached to |
+      +--------------------------------------+-----------+-----------+------+-------------+
+      | f2421d88-69e8-4e2f-b8aa-abd7fb4de1c5 | my-volume | available | 8    |             |
+      +--------------------------------------+-----------+-----------+------+-------------+
+      root@myclient:/home/user/remote_cli_wd#
+
+  .. note::
+      Some commands used by remote |CLI| are designed to leave you in a shell
+      prompt, for example:
+
+      .. code-block:: none
+
+          root@myclient:/home/user/remote_cli_wd# openstack
+
+      In some cases the mechanism for identifying commands that should leave
+      you at a shell prompt does not identify them correctly. If you
+      encounter such scenarios, you can force-enable or disable the shell
+      options using the FORCE\_SHELL=true or FORCE\_NO\_SHELL=true variables
+      before the command.
+
+      You cannot use both variables at the same time.
+
+- If you need to run a remote |CLI| command that references a local file, then
+  that file must be copied to or created in the working directory specified on
+  the ./configure\_client.sh command and referenced under /wd/ in the actual
+  command.
+
+  For example:
+
+  .. code-block:: none
+
+      root@myclient:/home/user# cd $HOME/remote_cli_wd
+      root@myclient:/home/user/remote_cli_wd# openstack image create --public \
+      --disk-format qcow2 --container-format bare --file ubuntu.qcow2 \
+      ubuntu_image
+
+
diff --git a/doc/source/security/openstack/configure-remote-clis-and-clients.rst b/doc/source/security/openstack/configure-remote-clis-and-clients.rst
new file mode 100644
index 000000000..bea3c06e0
--- /dev/null
+++ b/doc/source/security/openstack/configure-remote-clis-and-clients.rst
@@ -0,0 +1,182 @@
+
+.. fvv1597424560931
+.. _configure-remote-clis-and-clients:
+
+=====================
+Configure Remote CLIs
+=====================
+
+The |prod-os| command lines can be accessed from remote computers running
+Linux, MacOS, and Windows.
+
+.. rubric:: |context|
+
+This functionality is made available using a docker image for connecting to the
+|prod-os| remotely. This docker image is pulled as required by configuration
+scripts.
+
+.. rubric:: |prereq|
+
+You must have Docker installed on the remote systems you connect from. For more
+information on installing Docker, see `https://docs.docker.com/install/
+<https://docs.docker.com/install/>`__. For Windows remote workstations, Docker
+is only supported on Windows 10.
+
+For Windows remote workstations, you must run the following commands from a
+Cygwin terminal. See `https://www.cygwin.com/ <https://www.cygwin.com/>`__ for
+more information about the Cygwin project.
+
+For Windows remote workstations, you must also have :command:`winpty`
+installed. Download the latest release tarball for Cygwin from
+`https://github.com/rprichard/winpty/releases
+<https://github.com/rprichard/winpty/releases>`__. After downloading the
+tarball, extract it to any location and change the Windows **PATH** variable to
+include the bin folder from the extracted winpty folder.
+
+The following procedure shows how to configure the Container-backed Remote
+|CLIs| for OpenStack remote access.
+
+.. rubric:: |proc|
+
+.. _configure-remote-clis-and-clients-steps-fvl-n4d-tkb:
+
+#. Copy the remote client tarball file from |dnload-loc| to the remote
+   workstation, and extract its content.
+
+
+   - The tarball is available from the |prod-os| area on |dnload-loc|.
+
+   - You can extract the tarball's contents anywhere on your workstation
+     system.
+
+
+   .. parsed-literal::
+
+      $ cd $HOME
+      $ tar xvf |prefix|-remote-clients-<version>.tgz
+
+#. Download the user/tenant **openrc** file from the |os-prod-hor-long| to the
+   remote workstation.
+
+
+   #. Log in to the |os-prod-hor| interface as the user and tenant that you
+      want to configure remote access for.
+
+      In this example, we use the 'admin' user in the 'admin' tenant.
+
+   #. Navigate to **Project** \> **API Access** \> **Download OpenStack RC
+      File**.
+
+   #. Select **OpenStack RC File**.
+
+      The file admin-openrc.sh downloads.
+
+
+#. On the remote workstation, configure the client access.
+
+
+   #. Change to the location of the extracted tarball.
+
+      .. parsed-literal::
+
+         $ cd $HOME/|prefix|-remote-clients-<version>/
+
+   #. Create a working directory that will be mounted by the container
+      implementing the remote |CLIs|.
+
+      .. code-block:: none
+
+         $ mkdir -p $HOME/remote_cli_wd
+
+   #. Run the :command:`configure\_client.sh` script.
+
+      .. parsed-literal::
+
+         $ ./configure_client.sh -t openstack -r admin-openrc.sh -w \
+         $HOME/remote_cli_wd -p \
+         625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1
+
+      If you specify repositories that require authentication, as shown
+      above, you must remember to perform a :command:`docker login` to that
+      repository before using remote |CLIs| for the first time.
+
+      The options for configure\_client.sh are:
+
+      **-t**
+          The type of client configuration. The options are platform \(for
+          |prod-long| |CLI| and clients\) and openstack \(for
+          |prod-os| application |CLI| and clients\).
+
+          The default value is platform.
+
+      **-r**
+          The user/tenant RC file to use for 'openstack' |CLI| commands.
+
+          The default value is admin-openrc.sh.
+
+      **-o**
+          The remote |CLI|/workstation RC file generated by this script.
+
+          This RC file needs to be sourced in the shell, to set up the
+          required environment variables and aliases, before running any
+          remote |CLI| commands.
+
+          For the platform client setup, the default is
+          remote\_client\_platform.sh. For the openstack application client
+          setup, the default is remote\_client\_openstack.sh.
+
+      **-w**
+          The working directory that will be mounted by the container
+          implementing the remote |CLIs|. When using the remote |CLIs|, any
+          files passed as arguments to the remote |CLI| commands need to be
+          in this directory in order for the container to access the files.
+          The default value is the directory from which the
+          :command:`configure\_client.sh` command was run.
+
+      **-p**
+          Override the container image for the platform |CLI| and clients.
+
+          By default, the platform |CLIs| and clients container image is
+          pulled from docker.io/starlingx/stx-platformclients.
+
+          For example, to use the container images from the |prod| |AWS| ECR:
+
+          .. parsed-literal::
+
+             $ ./configure_client.sh -t platform -r admin-openrc.sh -k \
+             admin-kubeconfig -w $HOME/remote_cli_wd -p \
+             625619392498.dkr.ecr.us-west-2.amazonaws.com/docker.io/starlingx/stx-platformclients:stx.4.0-v1.3.0-|prefix|.1
+
+          If you specify repositories that require authentication, you must
+          first perform a :command:`docker login` to that repository before
+          using remote |CLIs|.
+
+      **-a**
+          Override the OpenStack application image.
+
+          By default, the OpenStack |CLIs| and clients container image is
+          pulled from docker.io/starlingx/stx-openstackclients.
+
+      The :command:`configure-client.sh` command will generate a
+      remote\_client\_openstack.sh RC file. This RC file needs to be sourced
+      in the shell to set up the required environment variables and aliases
+      before any remote |CLI| commands can be run.
+
+   #. Copy the file remote\_client\_platform.sh to $HOME/remote\_cli\_wd
+
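+If the repository you specified with ``-p`` requires authentication, a one-time
+login is needed before first use of the remote |CLIs|. A sketch, using the
+example registry above:
+
+.. code-block:: none
+
+   $ docker login 625619392498.dkr.ecr.us-west-2.amazonaws.com
+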
+.. rubric:: |postreq|
+
+After configuring the |prod-os| container-backed remote |CLIs|/clients, the
+remote |prod-os| |CLIs| can be used in any shell after sourcing the generated
+remote |CLI|/client RC file. This RC file sets up the required environment
+variables and aliases for the remote |CLI| commands.
+
+.. note::
+    Consider adding this command to your .login or shell rc file, such that
+    your shells will automatically be initialized with the environment
+    variables and aliases for the remote |CLI| commands.
+
+See :ref:`Use Container-backed Remote CLIs and Clients
+<config-and-management-using-container-backed-remote-clis-and-clients>` for
+details.
+
diff --git a/doc/source/security/openstack/index.rst b/doc/source/security/openstack/index.rst
new file mode 100644
index 000000000..ee1113572
--- /dev/null
+++ b/doc/source/security/openstack/index.rst
@@ -0,0 +1,21 @@
+---------
+OpenStack
+---------
+
+=================
+Access the System
+=================
+
+.. toctree::
+   :maxdepth: 1
+
+   security-overview
+   access-using-the-default-set-up
+   use-local-clis
+   update-the-domain-name
+   configure-remote-clis-and-clients
+   config-and-management-using-container-backed-remote-clis-and-clients
+   install-a-trusted-ca-certificate
+   install-rest-api-and-horizon-certificate
+   openstack-keystone-accounts
+   security-system-account-password-rules
\ No newline at end of file
diff --git a/doc/source/security/openstack/install-a-trusted-ca-certificate.rst b/doc/source/security/openstack/install-a-trusted-ca-certificate.rst
new file mode 100644
index 000000000..7ed03765b
--- /dev/null
+++ b/doc/source/security/openstack/install-a-trusted-ca-certificate.rst
@@ -0,0 +1,37 @@
+
+.. fak1590002084693
+.. _install-a-trusted-ca-certificate:
+
+================================
+Install a Trusted CA Certificate
+================================
+
+A trusted |CA| certificate can be added to the |prod-os| service containers
+such that the containerized OpenStack services can validate certificates of
+far-end systems connecting or being connected to over HTTPS. The most common
+use case here would be to enable certificate validation of clients connecting
+to OpenStack service REST API endpoints.
+
+.. rubric:: |proc|
+
+.. _install-a-trusted-ca-certificate-steps-unordered-am5-xgt-vlb:
+
+#. Install a trusted |CA| certificate for OpenStack using the following
+   command to override all OpenStack Helm Charts.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system certificate-install -m openstack_ca <pathname>
+
+   where <pathname> contains a single |CA| certificate to be trusted.
+
+   Running the command again with a different |CA| certificate in the file
+   will *replace* this openstack trusted |CA| certificate.
+
+#. Apply the updated Helm chart overrides containing the certificate changes:
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system application-apply wr-openstack
+
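+As an optional sanity check before step 1, you can inspect the certificate
+file with :command:`openssl`; a sketch, assuming the file is named
+openstack-ca.pem \(an illustrative name\):
+
+.. code-block:: none
+
+   ~(keystone_admin)$ openssl x509 -in openstack-ca.pem -noout -subject -issuer -dates
+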
diff --git a/doc/source/security/openstack/install-rest-api-and-horizon-certificate.rst b/doc/source/security/openstack/install-rest-api-and-horizon-certificate.rst
new file mode 100644
index 000000000..2f224cb31
--- /dev/null
+++ b/doc/source/security/openstack/install-rest-api-and-horizon-certificate.rst
@@ -0,0 +1,43 @@
+
+.. pmb1590001656644
+.. _install-rest-api-and-horizon-certificate:
+
+========================================
+Install REST API and Horizon Certificate
+========================================
+
+.. rubric:: |context|
+
+This certificate must be valid for the domain configured for OpenStack; see
+:ref:`Access Using the Default Set-up <access-using-the-default-set-up>` and
+:ref:`Update the Domain Name <update-the-domain-name>`.
+
+.. rubric:: |proc|
+
+#. Install the certificate for OpenStack as Helm chart overrides.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system certificate-install -m openstack <pathname>
+
+   where <pathname> is a .pem file containing both the certificate and
+   private key.
+
+   .. note::
+       The OpenStack certificate must be created with a wildcard SAN.
+
+       For example, to create a certificate for the |FQDN|
+       west2.us.example.com, the following entry must be included in the
+       certificate:
+
+       .. code-block:: none
+
+          X509v3 extensions:
+              X509v3 Subject Alternative Name:
+                  DNS:*.west2.us.example.com
+
+#. Apply the Helm chart overrides containing the certificate changes.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system application-apply wr-openstack
+
diff --git a/doc/source/security/openstack/openstack-keystone-accounts.rst b/doc/source/security/openstack/openstack-keystone-accounts.rst
new file mode 100644
index 000000000..7fa72f5a8
--- /dev/null
+++ b/doc/source/security/openstack/openstack-keystone-accounts.rst
@@ -0,0 +1,22 @@
+
+.. xdd1485354265196
+.. _openstack-keystone-accounts:
+
+===========================
+OpenStack Keystone Accounts
+===========================
+
+|prod-os| uses Keystone for Identity Management, which defines projects/tenants
+for grouping OpenStack resources, and users for managing access to these
+resources.
+
+|prod-os| provides a local SQL backend for Keystone.
+
+You can create OpenStack projects and users from the |os-prod-hor-long|
+or the |CLI|. Projects and users can also be managed using the OpenStack REST
+API.
+
+.. seealso::
+    :ref:`System Account Password Rules <security-system-account-password-rules>`
+
+
diff --git a/doc/source/security/openstack/security-overview.rst b/doc/source/security/openstack/security-overview.rst
new file mode 100644
index 000000000..79ea313b1
--- /dev/null
+++ b/doc/source/security/openstack/security-overview.rst
@@ -0,0 +1,24 @@
+
+.. iad1589999522755
+.. _security-overview:
+
+========
+Overview
+========
+
+|prod-os| is a containerized application running on top of |prod-long|.
+
+Many security features are not specific to |prod-os|, and are documented in
+
+.. xbooklink :ref:`Cloud Platform Security `.
+
+This section covers security features that are specific to |prod-os|:
+
+
+.. _security-overview-ul-qvj-22f-tlb:
+
+- OpenStack Keystone Accounts
+
+- Enabling Secure HTTPS Connectivity for OpenStack
+
+
diff --git a/doc/source/security/openstack/security-system-account-password-rules.rst b/doc/source/security/openstack/security-system-account-password-rules.rst
new file mode 100644
index 000000000..83555e59a
--- /dev/null
+++ b/doc/source/security/openstack/security-system-account-password-rules.rst
@@ -0,0 +1,32 @@
+
+.. tfb1485354135500
+.. _security-system-account-password-rules:
+
+=============================
+System Account Password Rules
+=============================
+
+|prod-os| enforces a set of strength requirements for new or changed passwords.
+
+The following rules apply:
+
+
+.. _security-system-account-password-rules-ul-jwb-g15-zw:
+
+- The password must be at least seven characters long.
+
+- You cannot reuse the last 2 passwords in history.
+
+- The password must contain:
+
+
+  - at least one lower-case character
+
+  - at least one upper-case character
+
+  - at least one numeric character
+
+  - at least one special character
+
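+
+A compliant password can be set for your own Keystone account from the |CLI|;
+a minimal sketch \(the command prompts for the current and new passwords\):
+
+.. code-block:: none
+
+   ~(keystone_admin)$ openstack user password set
+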
+
diff --git a/doc/source/security/openstack/update-the-domain-name.rst b/doc/source/security/openstack/update-the-domain-name.rst
new file mode 100644
index 000000000..e1ec33747
--- /dev/null
+++ b/doc/source/security/openstack/update-the-domain-name.rst
@@ -0,0 +1,160 @@
+
+.. qsc1589994634309
+.. _update-the-domain-name:
+
+======================
+Update the Domain Name
+======================
+
+Containerized OpenStack services in |prod-os| are deployed behind an ingress
+controller \(nginx\) that listens, by default, on either port 80 \(HTTP\) or
+port 443 \(HTTPS\).
+
+.. rubric:: |context|
+
+The ingress controller routes packets to the specific OpenStack service, such
+as the Cinder service or the Neutron service, by parsing the |FQDN| in the
+packet. For example, neutron.openstack.svc.cluster.local is for the Neutron
+service, cinder-api.openstack.svc.cluster.local is for the Cinder service.
+
+This routing requires that access to OpenStack REST APIs \(directly or via
+remote OpenStack |CLIs|\) must be via a |FQDN|. You cannot access OpenStack
+REST APIs using an IP address.
+
+|FQDNs| \(such as cinder-api.openstack.svc.cluster.local\) must be in a |DNS|
+server that is publicly accessible.
+
+.. note::
+    It is possible to wildcard a set of |FQDNs| to the same IP address in a
+    |DNS| server configuration so that you don't need to update the |DNS|
+    server every time an OpenStack service is added. Check your particular
+    |DNS| server for details on how to wildcard a set of |FQDNs|.
+
+In a “real” deployment, that is, not a lab scenario, you cannot use the default
+*openstack.svc.cluster.local* domain name externally. You must set a unique
+domain name for your |prod-os| system. Use the :command:`system
+service-parameter-add` command to configure and set the OpenStack domain name.
+
+.. rubric:: |prereq|
+
+.. _update-the-domain-name-ul-md1-pzx-n4b:
+
+- You must have an external |DNS| server for which you have authority to add
+  new domain name to IP address mappings \(e.g. A, AAAA or CNAME records\).
+
+- The |DNS| server must be added to your |prod-long| |DNS| list.
+
+- Your |DNS| server must have A, AAAA or CNAME records for the following
+  domain names, representing the corresponding OpenStack services, defined as
+  the |OAM| floating IP address. Refer to the configuration manual for the
+  particular |DNS| server you are using on how to make these updates for the
+  domain you are using for the |prod-os| system.
+
+  .. note::
+
+      |prod| recommends that you not define domain names for services you
+      are not using.
+
+  The records below use the illustrative domain
+  my-|prefix|-domain.mycompany.com; substitute your own domain.
+
+  .. parsed-literal::
+
+      # define A record for general domain for |prod| system
+      my-|prefix|-domain.mycompany.com. IN A 10.10.10.10
+
+      # define alias for general domain for horizon dashboard REST API URL
+      horizon.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define aliases for general domain for keystone identity service REST API URLs
+      keystone.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      keystone-api.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define alias for general domain for neutron networking REST API URL
+      neutron.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define aliases for general domain for nova compute provisioning REST API URLs
+      nova.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      placement.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      rest-api.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define novnc proxy alias for VM console access through Horizon REST API URL
+      novncproxy.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define alias for general domain for barbican secure storage REST API URL
+      barbican.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define alias for general domain for glance VM management REST API URL
+      glance.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define aliases for general domain for cinder block storage REST API URLs
+      cinder.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      cinder2.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      cinder3.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define aliases for general domain for heat orchestration REST API URLs
+      heat.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      cloudformation.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+
+      # define aliases for general domain for starlingx REST API URLs
+      # \( for fault, patching, service management, system and VIM \)
+      fm.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      patching.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      smapi.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      sysinv.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
+      vim.my-|prefix|-domain.mycompany.com. IN CNAME my-|prefix|-domain.mycompany.com.
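+
+  Alternatively, where the |DNS| server supports wildcard records, a single
+  entry can cover all of the service |FQDNs| above; a BIND-style sketch using
+  the same illustrative domain:
+
+  .. parsed-literal::
+
+      *.my-|prefix|-domain.mycompany.com. IN A 10.10.10.10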
+
+.. rubric:: |proc|
+
+#. Source the environment.
+
+   .. code-block:: none
+
+      $ source /etc/platform/openrc
+      ~(keystone_admin)$
+
+#. To set a unique domain name, use the :command:`system
+   service-parameter-add` command.
+
+   The command has the following syntax.
+
+   .. code-block:: none
+
+      system service-parameter-add openstack helm endpoint_domain=<domain-name>
+
+   <domain-name> should be a fully qualified domain name that you own, such
+   that you can configure the |DNS| server that owns <domain-name> with the
+   OpenStack service names underneath the domain.
+
+   .. xbooklink See the :ref:`prerequisites <update-the-domain-name-ul-md1-pzx-n4b>`
+      for a complete list of |FQDNs|.
+
+   For example:
+
+   .. parsed-literal::
+
+      ~(keystone_admin)$ system service-parameter-add openstack helm \
+      endpoint_domain=my-|prefix|-domain.mycompany.com
+
+#. Apply the wr-openstack application.
+
+   For example:
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system application-apply wr-openstack
+
+.. rubric:: |result|
+
+The helm charts of all OpenStack services are updated and restarted. For
+example, cinder-api.openstack.svc.cluster.local would be changed to
+cinder-api.my-|prefix|-domain.mycompany.com, and so on for all OpenStack
+services.
+
+.. note::
+    OpenStack Horizon is also changed to listen on
+    horizon.my-|prefix|-domain.mycompany.com:80 \(instead of the initial
+    oam-floating-ip:31000\), for example,
+    horizon.my-wr-domain.mycompany.com:80.
+
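+Once the application has been re-applied, you can spot-check that the service
+endpoints now carry the new domain; a sketch, using the illustrative domain
+from above:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ openstack endpoint list | grep mycompany.com
+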
diff --git a/doc/source/security/openstack/use-local-clis.rst b/doc/source/security/openstack/use-local-clis.rst
new file mode 100644
index 000000000..ba6a4bcfc
--- /dev/null
+++ b/doc/source/security/openstack/use-local-clis.rst
@@ -0,0 +1,64 @@
+
+.. tok1566218039402
+.. _use-local-clis:
+
+==============
+Use Local CLIs
+==============
+
+|prod-os| administration and other tasks can be carried out from the command
+line interface \(|CLI|\).
+
+.. rubric:: |context|
+
+.. warning::
+    For security reasons, only administrative users should have |SSH|
+    privileges.
+
+The local |CLI| can be accessed via the local console on the active controller
+or via |SSH| to the active controller. This procedure illustrates how to set
+the context of |CLI| commands to OpenStack and access OpenStack admin
+privileges.
+
+.. rubric:: |proc|
+
+#. Log in to the local console of the active controller, or log in via |SSH|
+   to the |OAM| floating IP.
+
+#. Set up admin credentials for the containerized OpenStack application.
+
+   .. code-block:: none
+
+      # source /etc/platform/openrc
+      # export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
+
+
+.. rubric:: |result|
+
+OpenStack |CLI| commands for the |prod-os| Cloud Application are now available
+via the :command:`openstack` command.
+
+For example:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ openstack flavor list
+   +-----------------+------------------+------+------+-----+-------+-----------+
+   | ID              | Name             | RAM  | Disk | Eph.| VCPUs | Is Public |
+   +-----------------+------------------+------+------+-----+-------+-----------+
+   | 054531c5-e74e.. | squid            | 2000 | 20   | 0   | 2     | True      |
+   | 2fa29257-8842.. | medium.2c.1G.2G  | 1024 | 2    | 0   | 2     | True      |
+   | 4151fb10-f5a6.. | large.4c.2G.4G   | 2048 | 4    | 0   | 4     | True      |
+   | 78b75c6d-93ca.. | small.1c.500M.1G | 512  | 1    | 0   | 1     | True      |
+   | 8b9971df-6d83.. | vanilla          | 1    | 1    | 0   | 1     | True      |
+   | e94c8123-2602.. | xlarge.8c.4G.8G  | 4096 | 8    | 0   | 8     | True      |
+   +-----------------+------------------+------+------+-----+-------+-----------+
+
+   ~(keystone_admin)$ openstack image list
+   +----------------+----------------------------------------+--------+
+   | ID             | Name                                   | Status |
+   +----------------+----------------------------------------+--------+
+   | 92300917-49ab..| Fedora-Cloud-Base-30-1.2.x86_64.qcow2  | active |
+   | 15aaf0de-b369..| opensquidbox.amd64.1.06a.iso           | active |
+   | eeda4642-db83..| xenial-server-cloudimg-amd64-disk1.img | active |
+   +----------------+----------------------------------------+--------+
+
+
diff --git a/doc/source/shared/abbrevs.txt b/doc/source/shared/abbrevs.txt
index f677e1e96..4609cc096 100755
--- a/doc/source/shared/abbrevs.txt
+++ b/doc/source/shared/abbrevs.txt
@@ -19,14 +19,15 @@
 .. |CA| replace:: :abbr:`CA (Certificate Authority)`
 .. |CAs| replace:: :abbr:`CAs (Certificate Authorities)`
 .. |CLI| replace:: :abbr:`CLI (Command Line Interface)`
-.. |CMK| replace:: :abbr:`CMK (CPU Manager for Kubernetes)`
 .. |CLIs| replace:: :abbr:`CLIs (Command Line Interfaces)`
+.. |CMK| replace:: :abbr:`CMK (CPU Manager for Kubernetes)`
 .. |CNI| replace:: :abbr:`CNI (Container Networking Interface)`
 .. |CoW| replace:: :abbr:`CoW (Copy on Write)`
 .. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
 .. |CSKs| replace:: :abbr:`CSKs (Code Signing Keys)`
 .. |CVE| replace:: :abbr:`CVE (Common Vulnerabilities and Exposures)`
 .. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protocol)`
+.. |DMA| replace:: :abbr:`DMA (Direct Memory Access)`
 .. |DNS| replace:: :abbr:`DNS (Domain Name System)`
 .. |DPDK| replace:: :abbr:`DPDK (Data Plane Development Kit)`
 .. |DRBD| replace:: :abbr:`DRBD (Distributed Replicated Block Device)`
@@ -67,6 +68,7 @@
 .. |OSDs| replace:: :abbr:`OSDs (Object Storage Devices)`
 .. |PAC| replace:: :abbr:`PAC (Programmable Acceleration Card)`
 .. |PCI| replace:: :abbr:`PCI (Peripheral Component Interconnect)`
+.. |PCIe| replace:: :abbr:`PCIe (Peripheral Component Interconnect Express)`
 .. |PDU| replace:: :abbr:`PDU (Packet Data Unit)`
 .. |PEM| replace:: :abbr:`PEM (Privacy Enhanced Mail)`
 .. |PF| replace:: :abbr:`PF (Physical Function)`
@@ -111,7 +113,9 @@
 .. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
 .. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
 .. |UUID| replace:: :abbr:`UUID (Universally Unique Identifier)`
+.. |UUIDs| replace:: :abbr:`UUIDs (Universally Unique Identifiers)`
 .. |VF| replace:: :abbr:`VF (Virtual Function)`
+.. |VFIO| replace:: :abbr:`VFIO (Virtual Function I/O)`
 .. |VFs| replace:: :abbr:`VFs (Virtual Functions)`
 .. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)`
 .. |VLANs| replace:: :abbr:`VLANs (Virtual Local Area Networks)`
@@ -121,9 +125,11 @@
 .. |VNF| replace:: :abbr:`VNF (Virtual Network Function)`
 .. |VNFs| replace:: :abbr:`VNFs (Virtual Network Functions)`
 .. |VNI| replace:: :abbr:`VNI (VXLAN Network Interface)`
+.. |VNIC| replace:: :abbr:`VNIC (Virtual Network Interface Card)`
 .. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Interfaces)`
 .. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)`
 .. |vRAN| replace:: :abbr:`vRAN (virtualized Radio Access Network)`
+.. |VTEP| replace:: :abbr:`VTEP (Virtual Tunnel End Point)`
 .. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)`
 .. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)`
 .. |XML| replace:: :abbr:`XML (eXtensible Markup Language)`
diff --git a/doc/source/storage/figures/zpk1486667625575.png b/doc/source/storage/figures/zpk1486667625575.png
new file mode 100755
index 000000000..e237094cc
Binary files /dev/null and b/doc/source/storage/figures/zpk1486667625575.png differ
diff --git a/doc/source/storage/index.rst b/doc/source/storage/index.rst
index 9622ace2d..8cb6ee872 100644
--- a/doc/source/storage/index.rst
+++ b/doc/source/storage/index.rst
@@ -19,4 +19,7 @@ and the requirements of the system.
 OpenStack
 ---------
 
-Coming soon.
\ No newline at end of file
+.. toctree::
+   :maxdepth: 2
+
+   openstack/index
\ No newline at end of file
diff --git a/doc/source/storage/openstack/config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster.rst b/doc/source/storage/openstack/config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster.rst
new file mode 100644
index 000000000..71a781f6c
--- /dev/null
+++ b/doc/source/storage/openstack/config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster.rst
@@ -0,0 +1,217 @@
+
+.. cic1603143369680
+.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster:
+
+============================================================
+Ceph Placement Group Number Dimensioning for Storage Cluster
+============================================================
+
+Ceph pools are created automatically by |prod-long|, |prod-long| applications,
+or by |prod-long| supported optional applications. By default, no pools are
+created after the Ceph cluster is provisioned \(monitor\(s\) enabled and
+|OSDs| defined\) until one is created by an application or the Rados Gateway
+\(RADOS GW\) is configured.
+
+The following is a list of pools created by the |prod-os| and Rados Gateway
+applications.
+
+
+.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-table-gvc-3h5-jnb:
+
+
+.. table:: Table 1. List of Pools
+    :widths: auto
+
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    | Service/Application              | Pool Name           | Role                                                          | PG Count | Created                                         |
+    +==================================+=====================+===============================================================+==========+=================================================+
+    | Platform Integration Application | kube-rbd            | Kubernetes RBD provisioned PVCs                               | 64       | When the platform automatically uploads/applies |
+    |                                  |                     |                                                               |          | after the Ceph cluster is provisioned           |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    | Wind River OpenStack             | images              | - glance image file storage                                   | 256      | When the user applies the application for the   |
+    |                                  |                     |                                                               |          | first time                                      |
+    |                                  |                     | - used for VM boot disk images                                |          |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    |                                  | ephemeral           | - ephemeral object storage                                    | 256      |                                                 |
+    |                                  |                     |                                                               |          |                                                 |
+    |                                  |                     | - used for VM ephemeral disks                                 |          |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    |                                  | cinder-volumes      | - persistent block storage                                    | 512      |                                                 |
+    |                                  |                     |                                                               |          |                                                 |
+    |                                  |                     | - used for VM boot disk volumes                               |          |                                                 |
+    |                                  |                     |                                                               |          |                                                 |
+    |                                  |                     | - used as additional disk volumes for VMs booted from images  |          |                                                 |
+    |                                  |                     |                                                               |          |                                                 |
+    |                                  |                     | - snapshots and persistent backups for volumes                |          |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    |                                  | cinder.backups      | backup cinder volumes                                         | 256      |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    | Rados Gateway                    | rgw.root            | Ceph Object Gateway data                                      | 64       | When the user enables the RADOS GW through the  |
+    |                                  |                     |                                                               |          | :command:`system service-parameter` CLI        |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    |                                  | default.rgw.control | Ceph Object Gateway control                                   | 64       |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    |                                  | default.rgw.meta    | Ceph Object Gateway metadata                                  | 64       |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+    |                                  | default.rgw.log     | Ceph Object Gateway log                                       | 64       |                                                 |
+    +----------------------------------+---------------------+---------------------------------------------------------------+----------+-------------------------------------------------+
+
+.. note::
+    Since the PG count per |OSD| has to be less than 2048, the default PG
+    values are calculated based on a setup with one storage replication group
+    and up to 5 |OSDs| per node.
+
+
+.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-section-vkx-qmt-jnb:
+
+---------------
+Recommendations
+---------------
+
+For more information on how placement group numbers \(pg\_num\) can be set
+based on how many |OSDs| are in the cluster, see the Ceph PGs per pool
+calculator: `https://ceph.com/pgcalc/ <https://ceph.com/pgcalc/>`__.
+
+Collect the current pool information \(replicated size, number of |OSDs| in
+the cluster\) and enter it into the calculator, together with estimates of
+|OSD| growth and the data percentage per pool, to calculate the placement
+group numbers \(pg\_num\) required by the pg\_calc algorithm to keep Ceph
+balanced as the number of |OSDs| scales.
+
+When balancing placement groups for each individual pool, consider the
+following:
+
+
+.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-ul-vmq-g4t-jnb:
+
+- pgs per osd
+
+- pgs per pool
+
+- pools per osd
+
+- replication
+
+- the crush map \(Ceph |OSD| tree\)
+
+
+If placement groups need adjusting, running the :command:`ceph -s` command
+displays one of the following **HEALTH\_WARN** messages:
+
+
+.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-ul-sdd-v4t-jnb:
+
+- too few pgs per osd
+
+- too few pgs per pool
+
+- too many pgs per osd
+
+
+Each of the health warning messages requires manual adjustment of placement
+groups for individual pools:
+
+
+.. _config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster-ul-dny-15t-jnb:
+
+- To list all the pools in the cluster, use the command :command:`ceph osd
+  lspools`.
+
+- To list all the pools with their pg\_num values, use the command
+  :command:`ceph osd dump`.
+
+- To get only the pg\_num / pgp\_num value for a pool, use the command
+  :command:`ceph osd pool get <pool-name> pg\_num`.
+
+
+**Too few PGs per OSD**
+    Occurs when a new disk is added to the cluster. For more information on
+    how to add a disk as an |OSD|, see |stor-doc|: :ref:`Provisioning Storage
+    on a Storage Host Using the CLI
+    <provisioning-storage-on-a-storage-host-using-the-cli>`.
+
+To fix this warning, the number of placement groups should be increased, using
+the following commands:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool set <pool-name> pg_num <new-pg-num>
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool set <pool-name> pgp_num <new-pg-num>
+
+.. note::
+
+    Increasing pg\_num of a pool has to be done in increments of 64/|OSD|;
+    otherwise, the above commands are rejected. If this happens, decrease the
+    pg\_num number, retry, and wait for the cluster to be **HEALTH\_OK**
+    before proceeding to the next step. Multiple incremental steps may be
+    required to achieve the targeted values.
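+
+As an illustrative sketch, if cinder-volumes \(512 PGs by default\) needed to
+grow after |OSDs| were added, you might increase it one 64-PG increment at a
+time, keeping pgp\_num in step with pg\_num:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool set cinder-volumes pg_num 576
+    ~(keystone_admin)$ ceph osd pool set cinder-volumes pgp_num 576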
+
+**Too few PGs per Pool**
+    This indicates that the pool has many more objects per PG than average
+    \(too few PGs allocated\). This warning is addressed by increasing the
+    pg\_num of that pool, using the following commands:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool set <pool-name> pg_num <new-pg-num>
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool set <pool-name> pgp_num <new-pg-num>
+
+.. note::
+    pgp\_num should be equal to pg\_num.
+
+Otherwise, Ceph will issue a warning:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph -s
+      cluster:
+        id:     92bfd149-37c2-43aa-8651-eec2b3e36c17
+        health: HEALTH_WARN
+                1 pools have pg_num > pgp_num
+
+**Too many PGs per OSD**
+    This warning indicates that the maximum number of 300 PGs per |OSD| is
+    exceeded. The number of PGs cannot be reduced after the pool is created.
+    Pools that do not contain any data can safely be deleted and then
+    recreated with a lower number of PGs. Where pools already contain data,
+    the only solution is to add |OSDs| to the cluster so that the ratio of
+    PGs per |OSD| becomes lower.
+
+.. caution::
+
+    Pools have to be recreated with the exact same properties.
+
+To get these properties, use :command:`ceph osd dump`, or use the following
+commands:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool get cinder-volumes crush_rule
+    crush_rule: storage_tier_ruleset
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool get cinder-volumes pg_num
+    pg_num: 512
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool get cinder-volumes pgp_num
+    pgp_num: 512
+
+Before you delete a pool, record the following properties so that you can
+recreate it: pg\_num, pgp\_num, and crush\_rule.
+
+To delete a pool, use the following command:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
+
+To create a pool, use the parameters from :command:`ceph osd dump`, and run
+the following command:
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool create {pool-name} {pg-num} {pgp-num} replicated {crush-rule-name}
+
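+A hedged end-to-end sketch for an empty, recreatable pool \(pool name, counts,
+and rule are illustrative\):
+
+.. code-block:: none
+
+    ~(keystone_admin)$ ceph osd pool get images crush_rule
+    ~(keystone_admin)$ ceph osd pool get images pg_num
+    ~(keystone_admin)$ ceph osd pool delete images images --yes-i-really-really-mean-it
+    ~(keystone_admin)$ ceph osd pool create images 256 256 replicated storage_tier_ruleset
+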
diff --git a/doc/source/storage/openstack/configuration-and-management-storage-on-controller-hosts.rst b/doc/source/storage/openstack/configuration-and-management-storage-on-controller-hosts.rst
new file mode 100644
index 000000000..30ce453fb
--- /dev/null
+++ b/doc/source/storage/openstack/configuration-and-management-storage-on-controller-hosts.rst
@@ -0,0 +1,65 @@
+
+.. mkh1590590274215
+.. _configuration-and-management-storage-on-controller-hosts:
+
+===========================
+Storage on Controller Hosts
+===========================
+
+The controllers provide storage for the OpenStack Controller Services through
+a combination of local container ephemeral disk, Persistent Volume Claims
+backed by Ceph and a containerized HA mariadb deployment.
+
+On systems configured for controller storage with a small Ceph cluster on the
+master/controller nodes, they also provide persistent block storage for
+persistent |VM| volumes \(Cinder\), storage for |VM| images \(Glance\), and
+storage for |VM| remote ephemeral volumes \(Nova\). On All-in-One Simplex or
+Duplex systems, the controllers also provide nova-local storage for ephemeral
+|VM| volumes.
+
+On systems configured for controller storage, the master/controller's root
+disk is reserved for system use, and additional disks are required to support
+the small Ceph cluster. On an All-in-One Simplex or Duplex system, you have
+the option to partition the root disk for the nova-local storage \(to realize
+a two-disk controller\) or use a third disk for nova-local storage.
+
+
+.. _configuration-and-management-storage-on-controller-hosts-section-rvx-vwc-vlb:
+
+--------------------------------------
+Underlying Platform Filesystem Storage
+--------------------------------------
+
+See the :ref:`platform Planning ` documentation for details.
+
+To pass the disk-space checks, any replacement disks must be installed before
+the allotments are changed.
+
+
+.. _configuration-and-management-storage-on-controller-hosts-section-wgm-gxc-vlb:
+
+---------------------------------------
+Glance, Cinder, and Remote Nova storage
+---------------------------------------
+
+On systems configured for controller storage with a small Ceph cluster on the
+master/controller nodes, this small Ceph cluster on the controller provides
+Glance image storage, Cinder block storage, Cinder backup storage, and Nova
+remote ephemeral block storage. For more information, see :ref:`Block Storage
+for Virtual Machines <block-storage-for-virtual-machines>`.
+
+
+.. _configuration-and-management-storage-on-controller-hosts-section-gpw-kxc-vlb:
+
+------------------
+Nova-local Storage
+------------------
+
+Controllers on |prod-os| Simplex or Duplex systems incorporate the Compute
+function, and therefore provide **nova-local** storage for ephemeral disks. On
+other systems, **nova-local** storage is provided by compute hosts. For more
+information about this type of storage, see :ref:`Storage on Compute Hosts
+<storage-on-compute-hosts>` and :ref:`Block Storage for Virtual Machines
+<block-storage-for-virtual-machines>`.
+
diff --git a/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst b/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst
new file mode 100644
index 000000000..dc2163ac7
--- /dev/null
+++ b/doc/source/storage/openstack/configure-an-optional-cinder-file-system.rst
@@ -0,0 +1,160 @@
+
+.. ble1606166239734
+.. _configure-an-optional-cinder-file-system:
+
+========================================
+Configure an Optional Cinder File System
+========================================
+
+By default, **qcow2** to raw **image-conversion** is done using the
+**docker\_lv** file system. To avoid filling up the **docker\_lv** file
+system, you can create a new file system dedicated to image conversion as
+described in this section.
+
+**Prerequisites**:
+
+
+.. _configure-an-optional-cinder-file-system-ul-sbz-3zn-tnb:
+
+- The requested size of the **image-conversion** file system should be big
+  enough to accommodate any image that is uploaded to Glance.
+
+- The recommended size for the file system must be at least twice as large as
+  the largest converted image from qcow2 to raw.
+
+- The conversion file system can be added before or after |prefix|-openstack
+  is applied.
+
+- The conversion file system must be added on both controllers. Otherwise,
+  |prefix|-openstack will not use the new file system.
+
+- If the conversion file system is added after |prefix|-openstack is applied,
+  changes to |prefix|-openstack will only take effect once the application is
+  reapplied.
+
+
+The **image-conversion** file system can only be added on the controllers, and
+must be added, with the same size, to both controllers. Alarms will be raised
+if:
+
+
+.. _configure-an-optional-cinder-file-system-ul-dtd-fb4-tnb:
+
+- The conversion file system is not added on both controllers.
+
+- The size of the file system is not the same on both controllers.
+
+
+
+.. _configure-an-optional-cinder-file-system-section-uk1-rwn-tnb:
+
+--------------------------------------------
+Adding a New Filesystem for Image-Conversion
+--------------------------------------------
+
+
+.. _configure-an-optional-cinder-file-system-ol-zjs-1xn-tnb:
+
+#. Use the :command:`host-fs-add` command to add a file system dedicated to
+   qcow2 to raw **image-conversion**.
+
+   .. code-block:: none
+
+      ~(keystone_admin)]$ system host-fs-add <hostname> <fs-name>=<size>
+
+   Where:
+
+   **hostname or id**
+       is the location where the file system will be added
+
+   **fs-name**
+       is the file system name
+
+   **size**
+       is an integer indicating the file system size in Gigabytes
+
+   For example:
+
+   .. code-block:: none
+
+      ~(keystone_admin)]$ system host-fs-add controller-0 image-conversion=8
+      +----------------+--------------------------------------+
+      | Property       | Value                                |
+      +----------------+--------------------------------------+
+      | uuid           | 52bfd1c6-93b8-4175-88eb-a8ee5566ce71 |
+      | name           | image-conversion                     |
+      | size           | 8                                    |
+      | logical_volume | conversion-lv                        |
+      | created_at     | 2020-09-18T17:08:54.413424+00:00     |
+      | updated_at     | None                                 |
+      +----------------+--------------------------------------+
+
+#. When the **image-conversion** file system is added, a new partition
+   /opt/conversion is created and mounted.
+
+#. Use the following command to list the file systems.
+
+   .. code-block:: none
+
+      ~(keystone_admin)]$ system host-fs-list controller-0
+      +--------------------+------------------+-------------+----------------+
+      | UUID               | FS Name          | Size in GiB | Logical Volume |
+      +--------------------+------------------+-------------+----------------+
+      | b5ffb565-4af2-4f26 | backup           | 25          | backup-lv      |
+      | a52c5c9f-ec3d-457c | docker           | 30          | docker-lv      |
+      | 52bfd1c6-93b8-4175 | image-conversion | 8           | conversion-lv  |
+      | a2fabab2-054d-442d | kubelet          | 10          | kubelet-lv     |
+      | 2233ccf4-6426-400c | scratch          | 16          | scratch-lv     |
+      +--------------------+------------------+-------------+----------------+
+
+
+
+.. _configure-an-optional-cinder-file-system-section-txm-qzn-tnb:
+
+------------------------
+Resizing the File System
+------------------------
+
+You can change the size of the **image-conversion** file system at runtime
+using the following command:
+
+.. code-block:: none
+
+    ~(keystone_admin)]$ system host-fs-modify <hostname> <fs-name>=<size>
+
+For example:
+
+.. code-block:: none
+
+    ~(keystone_admin)]$ system host-fs-modify controller-0 image-conversion=8
+
+
+
+.. _configure-an-optional-cinder-file-system-section-ubp-f14-tnb:
+
+------------------------
+Removing the File System
+------------------------
+
+
+.. _configure-an-optional-cinder-file-system-ol-nmb-pg4-tnb:
+
+#. You can remove an **image-conversion** file system dedicated to qcow2
+   **image-conversion** using the following command:
+
+   .. code-block:: none
+
+      ~(keystone_admin)]$ system host-fs-delete <hostname> <fs-name>
+
+#. When the **image-conversion** file system is removed from the system, the
+   /opt/conversion partition is also removed.
+
+
+.. note::
+
+    You cannot delete an **image-conversion** file system when
+    |prefix|-openstack is in the **applying**, **applied**, or **removing**
+    state.
+
+    You cannot add or remove any other file systems using these commands.
+
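+For example, to remove the conversion file system from controller-0 \(a
+sketch; the host must satisfy the state restrictions noted above\):
+
+.. code-block:: none
+
+    ~(keystone_admin)]$ system host-fs-delete controller-0 image-conversion
+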
diff --git a/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst b/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst
new file mode 100644
index 000000000..8e44f9f96
--- /dev/null
+++ b/doc/source/storage/openstack/create-or-change-the-size-of-nova-local-storage.rst
@@ -0,0 +1,190 @@
+
+.. pcs1565033493776
+.. _create-or-change-the-size-of-nova-local-storage:
+
+===============================================
+Create or Change the Size of Nova-local Storage
+===============================================
+
+You must configure the storage resources on a host before you can unlock it.
+This procedure uses the |CLI|.
+
+.. rubric:: |context|
+
+You can use entire disks or disk partitions on compute hosts for use as
+**nova-local** storage. You can add multiple disks or disk partitions. Once a
+disk is added and the configuration is persisted through a lock/unlock, the
+disk can no longer be removed.
+
+.. caution::
+
+    If a root-disk partition on *any* compute host is used for local storage,
+    then for performance reasons, *all* VMs on the system must be booted from
+    Cinder volumes, and must not use ephemeral or swap disks. For more
+    information, see :ref:`Storage on Compute Hosts
+    <storage-on-compute-hosts>`.
+
+.. rubric:: |proc|
+
+#. Lock the compute node.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-lock compute-0
+
+#. Log in to the active controller as the Keystone **admin** user.
+
+#. Review the available disk space and capacity.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-disk-list compute-0
+      +--------------------------------------+--------------+---------------+
+      | uuid                                 | device_node  | available_gib |
+      +--------------------------------------+--------------+---------------+
+      | 5dcb3a0e-c677-4363-a030-58e245008504 | /dev/sda     | 12216         |
+      | c2932691-1b46-4faf-b823-2911a9ecdb9b | /dev/sdb     | 20477         |
+      +--------------------------------------+--------------+---------------+
+
+#. During initial set-up, add the **nova-local** local volume group.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-lvg-add compute-0 nova-local
+      +-----------------+------------------------------------------------------------+
+      | Property        | Value                                                      |
+      +-----------------+------------------------------------------------------------+
+      | lvm_vg_name     | nova-local                                                 |
+      | vg_state        | adding                                                     |
+      | uuid            | 5b8f0792-25b5-4e43-8058-d274bf8fa51c                       |
+      | ihost_uuid      | 327b2136-ffb6-4cd5-8fed-d2ec545302aa                       |
+      | lvm_vg_access   | None                                                       |
+      | lvm_max_lv      | 0                                                          |
+      | lvm_cur_lv      | 0                                                          |
+      | lvm_max_pv      | 0                                                          |
+      | lvm_cur_pv      | 0                                                          |
+      | lvm_vg_size_gb  | 0                                                          |
+      | lvm_vg_total_pe | 0                                                          |
+      | lvm_vg_free_pe  | 0                                                          |
+      | created_at      | 2015-12-23T16:30:25.524251+00:00                           |
+      | updated_at      | None                                                       |
+      | parameters      | {u'instance_backing': u'lvm', u'instances_lv_size_mib': 0} |
+      +-----------------+------------------------------------------------------------+
+
+#. Obtain the |UUID| of the disk or partition to use for **nova-local**
+   storage.
+
+   To obtain the |UUIDs| for available disks, use the :command:`system
+   host-disk-list` command as shown earlier.
+
+   To obtain the |UUIDs| for available partitions on a disk, use
+   :command:`system host-disk-partition-list`.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-disk-partition-list compute-0 --disk <disk-uuid>
+
+   For example:
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-disk-partition-list compute-0 --disk \
+      c2932691-1b46-4faf-b823-2911a9ecdb9b
+      +--------------------------------------+-----------------------------+--------------+----------+----------------------+
+      | uuid                                 | device_path                 | device_node  | size_gib | status               |
+      +--------------------------------------+-----------------------------+--------------+----------+----------------------+
+      | 08fd8b75-a99e-4a8e-af6c-7aab2a601e68 | /dev/disk/by-path/pci-0000: | /dev/sdb1    | 1024     | Creating (on unlock) |
+      |                                      | 00:01.1-ata-1.1-part1       |              |          |                      |
+      |                                      |                             |              |          |                      |
+      +--------------------------------------+-----------------------------+--------------+----------+----------------------+
+
+#. Create a partition to add to the volume group.
+
+   If you plan on using an entire disk, you can skip this step.
+
+   Do this using the :command:`host-disk-partition-add` command. The syntax
+   is:
+
+   .. code-block:: none
+
+      system host-disk-partition-add [-t <partition-type>] <host> <disk-uuid> <size-in-gib>
+
+   For example:
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-disk-partition-add compute-0 \
+      c2932691-1b46-4faf-b823-2911a9ecdb9b 1
+      +-------------+--------------------------------------------------+
+      | Property    | Value                                            |
+      +-------------+--------------------------------------------------+
+      | device_path | /dev/disk/by-path/pci-0000:00:01.1-ata-1.1-part1 |
+      | device_node | /dev/sdb1                                        |
+      | type_guid   | ba5eba11-0000-1111-2222-000000000001             |
+      | type_name   | None                                             |
+      | start_mib   | None                                             |
+      | end_mib     | None                                             |
+      | size_mib    | 1024                                             |
+      | uuid        | 6a194050-2328-40af-b313-22dbfa6bab87             |
+      | ihost_uuid  | 0acf8e83-e74c-486e-9df4-00ce1441a899             |
+      | idisk_uuid  | c2932691-1b46-4faf-b823-2911a9ecdb9b             |
+      | ipv_uuid    | None                                             |
+      | status      | Creating (on unlock)                             |
+      | created_at  | 2018-01-24T20:25:41.852388+00:00                 |
+      | updated_at  | None                                             |
+      +-------------+--------------------------------------------------+
+
+#. Obtain the |UUID| of the partition to use for **nova-local** storage, as
+   described in step 5.
+
+#. Add a disk or partition to the **nova-local** group, using a command of
+   the following form:
+
+   .. note::
+       The host must be locked.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-pv-add compute-0 nova-local <uuid>
+
+   where <uuid> is the |UUID| of the disk or partition, obtained using
+   :command:`system host-disk-partition-list`, or of the disk, obtained using
+   :command:`system host-disk-list`.
+
+   For example:
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-pv-add compute-0 nova-local \
+      08fd8b75-a99e-4a8e-af6c-7aab2a601e68
+      +--------------------------+--------------------------------------------------+
+      | Property                 | Value                                            |
+      +--------------------------+--------------------------------------------------+
+      | uuid                     | 8eea6ca7-5192-4ee0-bd7b-7d7fa7c637f1             |
+      | pv_state                 | adding                                           |
+      | pv_type                  | partition                                        |
+      | disk_or_part_uuid        | 08fd8b75-a99e-4a8e-af6c-7aab2a601e68             |
+      | disk_or_part_device_node | /dev/sdb1                                        |
+      | disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:01.1-ata-1.1-part1 |
+      | lvm_pv_name              | /dev/sdb1                                        |
+      | lvm_vg_name              | nova-local                                       |
+      | lvm_pv_uuid              | None                                             |
+      | lvm_pv_size_gib          | 0.0                                              |
+      | lvm_pe_total             | 0                                                |
+      | lvm_pe_alloced           | 0                                                |
+      | ihost_uuid               | 0acf8e83-e74c-486e-9df4-00ce1441a899             |
+      | created_at               | 2018-01-25T18:20:14.423947+00:00                 |
+      | updated_at               | None                                             |
+      +--------------------------+--------------------------------------------------+
+
+   .. note::
+       Multiple disks/partitions can be added to nova-local by repeating
+       steps 5-8, above.
+
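+After the disk or partition has been added, unlock the host to apply the
+configuration; a sketch, mirroring the lock in step 1:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ system host-unlock compute-0
+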
diff --git a/doc/source/storage/openstack/index.rst b/doc/source/storage/openstack/index.rst
new file mode 100644
index 000000000..30cc9d158
--- /dev/null
+++ b/doc/source/storage/openstack/index.rst
@@ -0,0 +1,23 @@
+========
+Contents
+========
+
+.. check what put here
+
+.. toctree::
+   :maxdepth: 1
+
+   storage-configuration-and-management-overview
+   storage-configuration-and-management-storage-resources
+   config-and-management-ceph-placement-group-number-dimensioning-for-storage-cluster
+   configuration-and-management-storage-on-controller-hosts
+   configure-an-optional-cinder-file-system
+   create-or-change-the-size-of-nova-local-storage
+   replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system
+   nova-ephemeral-storage
+   replacing-a-nova-local-disk
+   storage-on-compute-hosts
+   storage-configuration-and-management-storage-on-storage-hosts
+   specifying-the-storage-type-for-vm-ephemeral-disks
+   storage-configuring-and-management-storage-related-cli-commands
+   storage-configuration-and-management-storage-utilization-display
diff --git a/doc/source/storage/openstack/nova-ephemeral-storage.rst b/doc/source/storage/openstack/nova-ephemeral-storage.rst
new file mode 100644
index 000000000..cdcf8ad9e
--- /dev/null
+++ b/doc/source/storage/openstack/nova-ephemeral-storage.rst
@@ -0,0 +1,78 @@
+
+.. ugv1564682723675
+.. _nova-ephemeral-storage:
+
+======================
+Nova Ephemeral Storage
+======================
+
+.. contents::
+   :local:
+   :depth: 1
+
+This is the default OpenStack storage option used for creating VMs. Virtual
+machine instances are typically created with at least one ephemeral disk,
+which is used to run the VM guest operating system and boot partition.
+
+Ephemeral storage for VMs, which includes swap disk storage, ephemeral disk
+storage, and root disk storage if the VM is configured for boot-from-image, is
+implemented by the **nova** service. For flexibility and scalability, this
+storage is defined using a **nova-local** local volume group created on the
+compute hosts.
+
+The nova-local group can be backed locally by one or more disks or partitions
+on the compute host, or remotely by resources on the internal Ceph cluster
+\(on controller or storage hosts\). If it is backed locally on the compute
+host, then it uses CoW-image storage backing. For more information about
+**nova-local** backing options, see Cloud Platform Storage Configuration:
+:ref:`Block Storage for Virtual Machines <block-storage-for-virtual-machines>`.
+
+Compute hosts are grouped into host aggregates based on whether they offer CoW
+or remote Ceph-backed local storage. The host aggregates are used for
+instantiation scheduling.
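+
+You can inspect this grouping from the |CLI|; a sketch \(aggregate names vary
+by system\):
+
+.. code-block:: none
+
+   ~(keystone_admin)$ openstack aggregate list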
+You can allocate more **Instances LV** space to support the anticipated number
+of boot-from-image VMs, up to 50% of the maximum available storage of the
+local volume group. At least 50% free space in the volume group is required to
+provide space for allocating logical volume disks for launched instances. The
+value provided for the **Instances LV Size** is limited by this maximum.
+
+Instructions for allocating the **Instances LV Size** using the Web
+administration interface or the CLI are included as part of configuring the
+compute nodes. Suggested sizes are indicated in the Web administration
+interface.
+
+.. caution::
+
+   If less than the minimum required space is available, the compute host
+   cannot be unlocked.
+
diff --git a/doc/source/storage/openstack/replacing-a-nova-local-disk.rst b/doc/source/storage/openstack/replacing-a-nova-local-disk.rst
new file mode 100644
index 000000000..952901980
--- /dev/null
+++ b/doc/source/storage/openstack/replacing-a-nova-local-disk.rst
@@ -0,0 +1,21 @@
+
+.. tjr1539798511628
+.. _replacing-a-nova-local-disk:
+
+=========================
+Replace a Nova-Local Disk
+=========================
+
+You can replace failed nova-local disks on compute nodes.
+
+.. rubric:: |context|
+
+To replace a nova-local storage disk on a compute node, follow the instructions
+in Cloud Platform Node Management: :ref:`Changing Hardware Components for a
+Worker Host `.
+
+To avoid reconfiguration, ensure that the replacement disk is assigned to the
+same location on the host, and is the same size as the original. The new disk
+is automatically provisioned for nova-local storage based on the existing
+system configuration.
+
diff --git a/doc/source/storage/openstack/replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system.rst b/doc/source/storage/openstack/replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system.rst
new file mode 100644
index 000000000..a784cc86e
--- /dev/null
+++ b/doc/source/storage/openstack/replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system.rst
@@ -0,0 +1,66 @@
+
+.. syu1590591059068
+.. _replacing-the-nova-local-storage-disk-on-a-cloud-platform-simplex-system:
+
+==================================================================
+Replace Nova-local Storage Disk on a Cloud Platform Simplex System
+==================================================================
+
+On a |prod-long| Simplex system, a special procedure is recommended for
+replacing or upgrading the nova-local storage device, to allow for the fact
+that |VMs| cannot be migrated.
+
+.. rubric:: |context|
+
+For this procedure, you must use the |CLI|.
+
+.. note::
+   The volume group will be rebuilt as part of the disk replacement procedure.
+   You can select a replacement disk of any size provided that the ephemeral
+   storage requirements for all |VMs| are met.
+
+.. rubric:: |proc|
+
+#. Delete all |VMs|.
+
+#. Lock the controller.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-lock controller-0
+
+#. Delete the nova-local volume group.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-lvg-delete controller-0 nova-local
+
+#. Shut down the controller.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ sudo shutdown -h now
+
+#. Replace the physical device.
+
+   #. Power down the physical device.
+
+   #. Replace the drive with an equivalent or larger drive.
+
+   #. Power up the device.
+
+   Wait for the node to boot to the command prompt.
+
+#. Source the environment.
+
+#. Unlock the node.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system host-unlock controller-0
+
+#. Relaunch the deleted |VMs|.
+
diff --git a/doc/source/storage/openstack/specifying-the-storage-type-for-vm-ephemeral-disks.rst b/doc/source/storage/openstack/specifying-the-storage-type-for-vm-ephemeral-disks.rst
new file mode 100644
index 000000000..4f7a16ede
--- /dev/null
+++ b/doc/source/storage/openstack/specifying-the-storage-type-for-vm-ephemeral-disks.rst
@@ -0,0 +1,66 @@
+
+.. zjx1464641246986
+.. _specifying-the-storage-type-for-vm-ephemeral-disks:
+
+===============================================
+Specify the Storage Type for VM Ephemeral Disks
+===============================================
+
+You can specify the ephemeral storage type for virtual machines \(|VMs|\) by
+using a flavor with the appropriate extra specification.
+
+.. rubric:: |context|
+
+.. note::
+   On a system with one or more single-disk compute hosts, do not use
+   ephemeral disks for *any* |VMs| unless *all* single-disk compute hosts are
+   configured to use remote Ceph backing. For more information, see
+   |os-intro-doc|:
+
+.. xbooklink :ref:`Storage on Storage Hosts `.
+
+Each new flavor is automatically assigned a Storage Type extra spec that
+specifies, as the default, instantiation on compute hosts configured for
+image-backed local storage \(Local |CoW| Image Backed\). You can change the
+extra spec to specify instantiation on compute hosts configured for
+Ceph-backed remote storage, if this is available \(Remote Storage Backed\).
+Ceph-backed remote storage is available only on systems configured with a Ceph
+storage backend.
+
+The designated storage type is used for ephemeral disk and swap disk space, and
+for the root disk if the virtual machine is launched using boot-from-image.
+Local storage is allocated from the Local Volume Group on the host, and does
+not persist when the instance is terminated. Remote storage is allocated from a
+Ceph storage pool configured on the storage host resources, and persists until
+the pool resources are reallocated for other purposes. The choice of storage
+type affects migration behavior; for more information, see Cloud Platform
+Storage Configuration: :ref:`VM Storage Settings for Migration, Resize, or
+Evacuation `.
+
+If the instance is configured to boot from volume, the root disk is implemented
+using persistent Cinder-based storage allocated from the controller \(for a
+system using LVM\) or from storage hosts \(for a system using Ceph\) by
+default. On a system that offers both LVM and Ceph storage backends for Cinder
+storage, you can specify to use the LVM backend when you launch an instance.
+
+To specify the type of storage offered by a compute host, see Cloud Platform
+Storage Configuration: :ref:`Work with Local Volume Groups
+`.
+
+.. caution::
+   Unlike Cinder-based storage, ephemeral storage does not persist if the
+   instance is terminated or the compute node fails.
+
+
+.. _specifying-the-storage-type-for-vm-ephemeral-disks-d29e17:
+
+In addition, for local ephemeral storage, migration and resizing support
+depends on the storage backing type specified for the instance, as well as the
+boot source selected at launch.
+
+To change the storage type using the Web administration interface, click
+**Edit** for the existing **Storage Type** extra specification, and select from
+the **Storage** drop-down menu.
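+
+The same change can be made from the |CLI| by updating the flavor extra
+specification. The following is a minimal sketch only; the
+**aggregate_instance_extra_specs:storage** key and the **remote** value are
+assumptions based on the host-aggregate storage types described above, and
+**my-flavor** is an illustrative flavor name:
+
+.. code-block:: none
+
+   # request Ceph-backed remote storage for instances using this flavor
+   ~(keystone_admin)$ openstack flavor set my-flavor \
+   --property aggregate_instance_extra_specs:storage=remote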
+
diff --git a/doc/source/storage/openstack/storage-configuration-and-management-overview.rst b/doc/source/storage/openstack/storage-configuration-and-management-overview.rst
new file mode 100644
index 000000000..a848c7be3
--- /dev/null
+++ b/doc/source/storage/openstack/storage-configuration-and-management-overview.rst
@@ -0,0 +1,18 @@
+
+.. fxm1589998951395
+.. _storage-configuration-and-management-overview:
+
+========
+Overview
+========
+
+|prod-os| is a containerized application running on top of the |prod-long|.
+
+The storage management of hosts is not specific to |prod-os|. For more
+information, see |prod-long| System Configuration:
+
+.. xbooklink :ref:`System Configuration Management Overview `.
+
+This chapter covers concepts and additional considerations related to storage
+management that are specific to |prod-os|.
+
diff --git a/doc/source/storage/openstack/storage-configuration-and-management-storage-on-storage-hosts.rst b/doc/source/storage/openstack/storage-configuration-and-management-storage-on-storage-hosts.rst
new file mode 100644
index 000000000..db192b7d9
--- /dev/null
+++ b/doc/source/storage/openstack/storage-configuration-and-management-storage-on-storage-hosts.rst
@@ -0,0 +1,15 @@
+
+.. tfu1590592352767
+.. _storage-configuration-and-management-storage-on-storage-hosts:
+
+========================
+Storage on Storage Hosts
+========================
+
+|prod-os| creates default Ceph storage pools for Glance images, Cinder volumes,
+Cinder backups, and Nova ephemeral data.
+
+For details on configuring the internal Ceph cluster on either controller or
+storage hosts, see the :ref:`Cloud Platform Storage Configuration
+` guide.
+
diff --git a/doc/source/storage/openstack/storage-configuration-and-management-storage-resources.rst b/doc/source/storage/openstack/storage-configuration-and-management-storage-resources.rst
new file mode 100644
index 000000000..f7fdd2a0b
--- /dev/null
+++ b/doc/source/storage/openstack/storage-configuration-and-management-storage-resources.rst
@@ -0,0 +1,138 @@
+
+.. fhe1590514169842
+.. _storage-configuration-and-management-storage-resources:
+
+=================
+Storage Resources
+=================
+
+|prod-os| uses storage resources on the controller-labelled master hosts, the
+compute-labelled worker hosts, and on storage hosts if they are present.
+
+The storage configuration for |prod-os| is very flexible. The specific
+configuration depends on the type of system installed, and the requirements of
+the system.
+
+
+.. _storage-configuration-and-management-storage-resources-section-j2k-5mw-5lb:
+
+.. contents::
+   :local:
+   :depth: 1
+
+-----------------------------
+Storage Services and Backends
+-----------------------------
+
+The figure below shows the storage options and backends for |prod-os|.
+
+.. image:: ../figures/zpk1486667625575.png
+
+Each service can use different storage backends.
+
+**Ceph**
+   This provides storage managed by the internal Ceph cluster. Depending on
+   the deployment configuration, the internal Ceph cluster is provided through
+   OSDs on |prod-os| master / controller hosts or storage hosts.
+
+
+.. _storage-configuration-and-management-storage-resources-table-djz-14w-5lb:
+
+
+.. table::
+   :widths: auto
+
+   +---------+---------------------------------------------------------------+--------------------------------------------------------------+
+   | Service | Description                                                   | Available Backends                                           |
+   +=========+===============================================================+==============================================================+
+   | Cinder  | - persistent block storage                                    | - Internal Ceph on master/controller hosts or storage hosts |
+   |         |                                                               |                                                              |
+   |         | - used for VM boot disk volumes                               |                                                              |
+   |         |                                                               |                                                              |
+   |         | - used as additional disk volumes for VMs booted from images  |                                                              |
+   |         |                                                               |                                                              |
+   |         | - snapshots and persistent backups for volumes                |                                                              |
+   +---------+---------------------------------------------------------------+--------------------------------------------------------------+
+   | Glance  | - image file storage                                          | - Internal Ceph on master/controller hosts or storage hosts |
+   |         |                                                               |                                                              |
+   |         | - used for VM boot disk images                                |                                                              |
+   +---------+---------------------------------------------------------------+--------------------------------------------------------------+
+   | Nova    | - ephemeral object storage                                    | - CoW-Image on Compute Nodes                                 |
+   |         |                                                               |                                                              |
+   |         | - used for VM ephemeral disks                                 | - Internal Ceph on master/controller hosts or storage hosts |
+   +---------+---------------------------------------------------------------+--------------------------------------------------------------+
+
+
+.. _storage-configuration-and-management-storage-resources-section-erw-5mw-5lb:
+
+--------------------
+Uses of Disk Storage
+--------------------
+
+**|prod-os| System**
+   The |prod-os| system containers use a combination of local container
+   ephemeral disk, Persistent Volume Claims backed by Ceph, and a containerized
+   HA mariadb deployment for configuration and database files.
+
+**VM Ephemeral Boot Disk Volumes**
+   When booting from an image, virtual machines by default use local ephemeral
+   disk storage on computes for Nova ephemeral local boot disk volumes built
+   from images. These virtual disk volumes are created when the VM instances
+   are launched, and destroyed when the VM instances are terminated.
+
+   A host can be configured to instead support 'remote' ephemeral disk
+   storage, backed by Ceph. These virtual volumes are still destroyed when the
+   VM instances are terminated, but provide better live migration performance
+   as they are remote.
+
+**VM Persistent Boot Disk Volumes**
+   When booting from Cinder volumes, virtual machines can use the Ceph-backed
+   storage cluster for backing Cinder boot disk volumes. This provides
+   permanent storage for the VM root disks, facilitating faster machine
+   startup and faster live migration, but requiring more storage resources.
+
+**VM Additional Disk**
+   Virtual machines can optionally use local or remote ephemeral disk storage
+   on computes for additional virtual disks, such as swap disks. These disks
+   are ephemeral; they are created when a VM instance is launched, and
+   destroyed when the VM instance is terminated.
+
+**VM block storage backups**
+   Cinder volumes can be backed up for long-term storage in a separate Ceph
+   pool.
+
+
+.. _storage-configuration-and-management-storage-resources-section-mhx-5mw-5lb:
+
+-----------------
+Storage Locations
+-----------------
+
+The various storage locations for |prod-os| include:
+
+**Controller Hosts**
+   In the Standard with Controller Storage deployment option, one or more
+   disks can be used on controller hosts to provide a small Ceph-based cluster
+   providing the storage backend for Cinder volumes, Cinder backups,
+   Glance images, and remote Nova ephemeral volumes.
+
+**Compute Hosts**
+   One or more disks can be used on compute hosts to provide local Nova
+   ephemeral storage for virtual machines.
+
+**Combined Controller-Compute Hosts**
+   One or more disks can be used on combined hosts in Simplex or Duplex
+   systems to provide local Nova ephemeral storage for virtual machines and a
+   small Ceph-backed storage cluster for backing Cinder, Glance, and remote
+   Nova ephemeral storage.
+
+**Storage Hosts**
+   One or more disks are used on storage hosts to provide a large scale
+   Ceph-backed storage cluster for backing Cinder, Glance, and remote Nova
+   ephemeral storage. Storage hosts are used only on |prod-os| with Dedicated
+   Storage systems.
+
diff --git a/doc/source/storage/openstack/storage-configuration-and-management-storage-utilization-display.rst b/doc/source/storage/openstack/storage-configuration-and-management-storage-utilization-display.rst
new file mode 100644
index 000000000..31f2e02aa
--- /dev/null
+++ b/doc/source/storage/openstack/storage-configuration-and-management-storage-utilization-display.rst
@@ -0,0 +1,17 @@
+
+.. akj1590593707486
+.. _storage-configuration-and-management-storage-utilization-display:
+
+===========================
+Storage Utilization Display
+===========================
+
+|prod-long| provides enhanced backend storage usage details through the
+|os-prod-hor-long|.
+
+Upstream storage utilization display is limited to the hypervisor statistics,
+which include only local storage utilization on the worker nodes. Cloud
+Platform provides enhanced storage utilization statistics for the Ceph and
+controller-fs backends. The statistics are available using the |CLI| and
+Horizon.
+
+In the |os-prod-hor-long|, the Storage Overview panel includes storage
+Services and Usage details.
+
diff --git a/doc/source/storage/openstack/storage-configuring-and-management-storage-related-cli-commands.rst b/doc/source/storage/openstack/storage-configuring-and-management-storage-related-cli-commands.rst
new file mode 100644
index 000000000..60fe308d5
--- /dev/null
+++ b/doc/source/storage/openstack/storage-configuring-and-management-storage-related-cli-commands.rst
@@ -0,0 +1,129 @@
+
+.. jem1464901298578
+.. _storage-configuring-and-management-storage-related-cli-commands:
+
+============================
+Storage-Related CLI Commands
+============================
+
+You can use |CLI| commands when working with storage specific to OpenStack.
+
+For more information, see :ref:`Cloud Platform Storage Configuration `.
+
+.. _storage-configuring-and-management-storage-related-cli-commands-section-N10044-N1001C-N10001:
+
+.. contents::
+   :local:
+   :depth: 1
+
+----------------------------------------
+Add, Modify, or Display Storage Backends
+----------------------------------------
+
+To list the storage backend types installed on a system:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ system storage-backend-list
+
+   +--------+---------------+--------+-----------+----+--------+------------+
+   | uuid   | name          |backend |state      |task|services|capabilities|
+   +--------+---------------+--------+-----------+----+--------+------------+
+   | 27e... |ceph-store     |ceph    |configured |None| None   |min_repli.:1|
+   |        |               |        |           |    |        |replicati.:1|
+   | 502... |shared_services|external|configured |None| glance |            |
+   +--------+---------------+--------+-----------+----+--------+------------+
+
+To show details for a storage backend:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ system storage-backend-show <name>
+
+For example:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ system storage-backend-show ceph-store
+   +--------------------+-------------------------------------+
+   |Property            | Value                               |
+   +--------------------+-------------------------------------+
+   |backend             | ceph                                |
+   |name                | ceph-store                          |
+   |state               | configured                          |
+   |task                | None                                |
+   |services            | None                                |
+   |capabilities        | min_replication: 1                  |
+   |                    | replication: 1                      |
+   |object_gateway      | False                               |
+   |ceph_total_space_gib| 198                                 |
+   |object_pool_gib     | None                                |
+   |cinder_pool_gib     | None                                |
+   |kube_pool_gib       | None                                |
+   |glance_pool_gib     | None                                |
+   |ephemeral_pool_gib  | None                                |
+   |tier_name           | storage                             |
+   |tier_uuid           | d3838363-a527-4110-9345-00e299e6a252|
+   |created_at          | 2019-08-12T21:08:50.166006+00:00    |
+   |updated_at          | None                                |
+   +--------------------+-------------------------------------+
+
+
+.. _storage-configuring-and-management-storage-related-cli-commands-section-N10086-N1001C-N10001:
+
+------------------
+List Glance Images
+------------------
+
+You can use this command to identify the storage backend type for Glance
+images. \(The column headers in the following example have been modified
+slightly to fit the page.\)
+
+.. code-block:: none
+
+   ~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
+   ~(keystone_admin)$ openstack image list
+   +----+------+-------+--------+-----------+------+--------+------------+-----------+
+   | ID | Name | Store | Disk   | Container | Size | Status | Cache Size | Raw Cache |
+   |    |      |       | Format | Format    |      |        |            |           |
+   +----+------+-------+--------+-----------+------+--------+------------+-----------+
+   | .. | img1 | rbd   | raw    | bare      | 1432 | active |            |           |
+   | .. | img2 | file  | raw    | bare      | 1432 | active |            |           |
+   +----+------+-------+--------+-----------+------+--------+------------+-----------+
+
+
+.. _storage-configuring-and-management-storage-related-cli-commands-ul-jvc-dnx-jnb:
+
+- The value **rbd** indicates a Ceph backend.
+
+- You can use the --long option to show additional information.
+
+
+
+.. _storage-configuring-and-management-storage-related-cli-commands-section-N100A1-N1001C-N10001:
+
+-----------------
+Show Glance Image
+-----------------
+
+You can use this command to obtain information about a Glance image.
+
+.. code-block:: none
+
+   ~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
+   ~(keystone_admin)$ openstack image show <image>
+   +------------------+--------------------------------------+
+   | Property         | Value                                |
+   +------------------+--------------------------------------+
+   | checksum         | c11edf9e31b416c46125600ddef1a8e8     |
+   | name             | ubuntu-14.014.img                    |
+   | store            | rbd                                  |
+   | owner            | 05be70a23c81420180c51e9740dc730a     |
+   +------------------+--------------------------------------+
+
+The Glance **store** value can be either file or rbd. The rbd value indicates
+a Ceph backend.
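+
+A similar check can be made for Cinder volumes. This is a sketch only: the
+**os-vol-host-attr:host** attribute is a standard Cinder admin-only field,
+and the volume name and output shown are illustrative. The backend name
+appears after the **@** in the host value:
+
+.. code-block:: none
+
+   ~(keystone_admin)$ OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
+   ~(keystone_admin)$ openstack volume show my-volume | grep os-vol-host-attr
+   | os-vol-host-attr:host | controller@ceph-store#ceph-store |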
+
diff --git a/doc/source/storage/openstack/storage-on-compute-hosts.rst b/doc/source/storage/openstack/storage-on-compute-hosts.rst
new file mode 100644
index 000000000..e64dbacbf
--- /dev/null
+++ b/doc/source/storage/openstack/storage-on-compute-hosts.rst
@@ -0,0 +1,17 @@
+
+.. jow1443470081421
+.. _storage-on-compute-hosts:
+
+========================
+Storage on Compute Hosts
+========================
+
+Compute-labelled worker hosts can provide ephemeral storage for Virtual Machine
+\(VM\) disks.
+
+.. note::
+
+   On All-in-One Simplex or Duplex systems, compute storage is provided using
+   resources on the combined host. For more information, see |os-intro-doc|:
+   :ref:`Storage on Controller Hosts `.
+
diff --git a/doc/source/system_configuration/kubernetes/configuring-ptp-service-using-the-cli.rst b/doc/source/system_configuration/kubernetes/configuring-ptp-service-using-the-cli.rst
index 639afc37d..7f5381ed8 100644
--- a/doc/source/system_configuration/kubernetes/configuring-ptp-service-using-the-cli.rst
+++ b/doc/source/system_configuration/kubernetes/configuring-ptp-service-using-the-cli.rst
@@ -18,7 +18,7 @@ using the Horizon Web interface see
 `.
 
 You can also specify the |PTP| service for **clock\_synchronization** using
-the web administration interface.
+the |os-prod-hor| interface.
 
 .. xbooklink For more information, see |node-doc|: `Host Inventory `.
diff --git a/doc/source/system_configuration/openstack/adding-configuration-rpc-response-max-timeout-in-neutron-conf.rst b/doc/source/system_configuration/openstack/adding-configuration-rpc-response-max-timeout-in-neutron-conf.rst
index 85110216d..9efd7e8e1 100644
--- a/doc/source/system_configuration/openstack/adding-configuration-rpc-response-max-timeout-in-neutron-conf.rst
+++ b/doc/source/system_configuration/openstack/adding-configuration-rpc-response-max-timeout-in-neutron-conf.rst
@@ -2,11 +2,11 @@
 .. gkr1591372948568
 .. _adding-configuration-rpc-response-max-timeout-in-neutron-conf:
 
-========================================================
-Add Configuration rpc\_response\_max\_timeout in Neutron
-========================================================
+=============================================================
+Add Configuration rpc\_response\_max\_timeout in neutron.conf
+=============================================================
 
-You can add the rpc\_response\_max\_timeout to Neutron using Helm
+You can add the rpc\_response\_max\_timeout to neutron.conf using Helm
 overrides.
 
 .. rubric:: |context|
@@ -14,17 +14,17 @@ overrides.
 Maximum rpc timeout is now configurable by rpc\_response\_max\_timeout from
 Neutron config instead of being calculated as 10 \* rpc\_response\_timeout.
 
-This configuration can be used to change the maximum rpc timeout. If the
-maximum rpc timeout is too big, some requests which should fail will be held
-for a long time before the server returns failure. If this value is too small
-and the server is very busy, the requests may need more time than maximum rpc
-timeout and the requests will fail though they can succeed with a bigger
-maximum rpc timeout.
+This configuration can be used to change the maximum rpc timeout. If the
+maximum rpc timeout is too big, some requests that should fail will be held
+for a long time before the server returns failure. If this value is too small
+and the server is very busy, the requests may need more time than the maximum
+rpc timeout, and will fail even though they could succeed with a bigger
+maximum rpc timeout.
 
 .. rubric:: |proc|
 
-1. Create a yaml file to add configuration rpc\_response\_max\_timeout in
-   Neutron.
+#. Create a yaml file to add configuration rpc\_response\_max\_timeout in
+   neutron.conf.
 
    .. code-block:: none
 
@@ -35,15 +35,15 @@ maximum rpc timeout.
        rpc_response_max_timeout: 600
        EOF
 
-2. Update the neutron overrides and apply to |prefix|-openstack.
+#. Update the neutron overrides and apply to |prefix|-openstack.
 
    .. parsed-literal::
 
      ~(keystone_admin)]$ system helm-override-update |prefix|-openstack neutron openstack --values neutron-overrides.yaml
     ~(keystone_admin)]$ system application-apply |prefix|-openstack
 
-3. Verify that configuration rpc\_response\_max\_time has been added in
-   Neutron.
+#. Verify that configuration rpc\_response\_max\_timeout has been added in
+   neutron.conf.
 
    .. code-block:: none
diff --git a/doc/source/system_configuration/openstack/assigning-a-dedicated-vlan-id-to-a-target-project-network.rst b/doc/source/system_configuration/openstack/assigning-a-dedicated-vlan-id-to-a-target-project-network.rst
index 3c83b05f1..342e7e688 100644
--- a/doc/source/system_configuration/openstack/assigning-a-dedicated-vlan-id-to-a-target-project-network.rst
+++ b/doc/source/system_configuration/openstack/assigning-a-dedicated-vlan-id-to-a-target-project-network.rst
@@ -6,7 +6,7 @@
 Assign a Dedicated VLAN ID to a Target Project Network
 ======================================================
 
-To assign a dedicated VLAN segment ID you must first enable the Neutron
+To assign a dedicated |VLAN| segment ID you must first enable the Neutron
 **segments** plugin.
 
 .. rubric:: |proc|
@@ -73,6 +73,10 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
 |                    |                                                                                                                     |
 +--------------------+---------------------------------------------------------------------------------------------------------------------+
 
+.. note::
+
+   The value for DEFAULT is folded onto two lines in the example above for
+   display purposes.
 
 #. Apply the |prefix|-openstack application.
 
@@ -80,7 +84,7 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
 
       ~(keystone_admin)]$ system application-apply |prefix|-openstack
 
-#. You can now assign the VLAN network type to a datanetwork.
+#. You can now assign the |VLAN| network type to a datanetwork.
 
 #. Identify the name of the data network to assign.
 
@@ -158,17 +162,21 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
 | faf63edf-63f0-4e9b-b930-5fa8f43b5484 | None | 865b9576-1815-4734-a7e4-c2d0dd31d19c | vlan | 2001 |
 +--------------------------------------+--------------------------------------------+--------------------------------------+--------------+---------+
 
+   .. note::
+
+      The name **test1-st-segement01-mx6fa5eonzrr** has been folded onto
+      two lines in the sample output above for display purposes.
 
 #. List subnets.
 
    .. code-block:: none
 
      ~(keystone_admin)]$ openstack subnet list
-      +------------...----+---------------------+---------------...-----+------------------+
-      | ID ...            | Name                | Network ...           | Subnet           |
-      +------------...----+---------------------+---------------...-----+------------------+
-      | 0f64c277-82...f2f | external01-subnet   | 6bbd3e4e-9419-...cab7 | 10.10.10.0/24    |
-      | bb9848b6-4b...ddc | subnet-temp         | 865b9576-1815-...d19c | 192.168.17.0/24  |
-      +------------...----+---------------------+-----------------------+------------------+
+      +--------------------------------------+---------------------+--------------------------------------+------------------+
+      | ID                                   | Name                | Network                              | Subnet           |
+      +--------------------------------------+---------------------+--------------------------------------+------------------+
+      | 0f64c277-82d7-4161-aa47-fc4cfadacf2f | external01-subnet   | 6bbd3e4e-9419-49c6-a68a-ed51fbc1cab7 | 10.10.10.0/24    |
+      | bb9848b6-63f0-4e9b-b930-5fa8f43b5ddc | subnet-temp         | 865b9576-1815-4734-a7e4-c2d0dd31d19c | 192.168.17.0/24  |
+      +--------------------------------------+---------------------+--------------------------------------+------------------+
 
    In this example, the subnet external01-subnet uses a dedicated segment ID.
 
@@ -176,14 +184,9 @@ To assign a dedicated VLAN segment ID you must first enable the Neutron
 
    .. code-block:: none
 
-      ~(keystone_admin)]$ openstack subnet show 0f64c277-82d7-4161-aa47-fc4cfadacf2f
-
-      The output from this command is a row from ascii table output, it
-      displays the following:
-
-      .. code-block:: none
-
-      |grep segment | segment_id | 502e3f4f-6187-4737-b1f5-1be7fd3fc45e |
+      ~(keystone_admin)]$ openstack subnet show 0f64c277-82d7-4161-aa47-fc4cfadacf2f | grep segment
+      | segment_id | 502e3f4f-6187-4737-b1f5-1be7fd3fc45e |
 
 .. note::
    Dedicated segment IDs should not be in the range created using the
diff --git a/doc/source/system_configuration/openstack/configuring-a-live-migration-completion-timeout-in-nova.rst b/doc/source/system_configuration/openstack/configuring-a-live-migration-completion-timeout-in-nova.rst
index 3be6271c9..5ba827325 100644
--- a/doc/source/system_configuration/openstack/configuring-a-live-migration-completion-timeout-in-nova.rst
+++ b/doc/source/system_configuration/openstack/configuring-a-live-migration-completion-timeout-in-nova.rst
@@ -9,6 +9,8 @@ Configure a Live Migration Completion Timeout in Nova
 You can configure how long to allow for a compute live migration to complete
 before the operation is aborted.
 
+.. rubric:: |context|
+
 The following example applies a timeout of 300 seconds to all hosts.
 
 The same basic workflow of *creating an overrides file*, then
@@ -29,12 +31,25 @@ to apply other Nova overrides globally.
       live_migration_completion_timeout: 300
       EOF
 
+
 #. Update the Helm overrides using the new configuration file.
 
    .. parsed-literal::
 
      ~(keystone_admin)]$ system helm-override-update --values ./nova_override.yaml |prefix|-openstack nova openstack --reuse-values
 
+#. Confirm that the user\_overrides section lists the correct live migration
+   completion timeout.
+
+   .. parsed-literal::
+
+      ~(keystone_admin)$ system helm-override-show |prefix|-openstack nova openstack
+
+   The output should include the following:
+
+   .. code-block:: none
+
+      live_migration_completion_timeout: 300
+
 #. Apply the changes.
 
    .. parsed-literal::
diff --git a/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst b/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst
index cd3889b2f..a2ca7ec5a 100644
--- a/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst
+++ b/doc/source/system_configuration/openstack/configuring-the-rpc-response-timeout-in-cinder.rst
@@ -28,6 +28,19 @@ override.
 
       ~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/cinder-overrides.yaml |prefix|-openstack cinder openstack --reuse-values
 
+#. Confirm that the user\_overrides section lists the correct rpc response
+   timeout.
+
+   .. parsed-literal::
+
+      ~(keystone_admin)$ system helm-override-show |prefix|-openstack cinder openstack
+
+
+   The output should include the following:
+
+   .. code-block:: none
+
+      rpc_response_timeout: 30
+
 #. Update |prefix|-openstack to apply the update.
 
    .. parsed-literal::
diff --git a/doc/source/system_configuration/openstack/creating-optional-telemetry-services.rst b/doc/source/system_configuration/openstack/creating-optional-telemetry-services.rst
index 9ee91d4e3..5c9225237 100644
--- a/doc/source/system_configuration/openstack/creating-optional-telemetry-services.rst
+++ b/doc/source/system_configuration/openstack/creating-optional-telemetry-services.rst
@@ -11,6 +11,8 @@ services are optional and includes Ceilometer \(Data collection service\),
 Panko \(Event storage service\), Gnocchi \(Time series metric storage
 service\), and Aodh \(Alarming service\).
 
+.. rubric:: |context|
+
 You can use the following procedure to enable these optional telemetry
 services on the active controller.
 
@@ -26,10 +28,10 @@ services on the active controller.
 
     [--enabled ]
 
-   Modify helm chart attributes. This function is provided to modify system
-   behavioral attributes related to a chart. This does not modify a chart, nor
-   does it modify chart overrides which are managed through the helm-override-
-   update command.
+   Modify helm chart attributes. This function is provided to modify
+   system behavioral attributes related to a chart. This does not modify
+   a chart, nor does it modify chart overrides which are managed through
+   the helm-override-update command.
 
    Positional arguments:
     Name of the application
@@ -97,40 +99,40 @@ services on the active controller.
 
 .. parsed-literal::
 
     ~(keystone_admin)]$ system helm-override-list |prefix|-openstack -l
-   +---------------------+--------------------------------+---------------+
-   | chart name          | overrides namespaces           | chart enabled |
-   +---------------------+--------------------------------+---------------+
-   | aodh                | [u'openstack']                 | [True]        |
-   | barbican            | [u'openstack']                 | [False]       |
-   | ceilometer          | [u'openstack']                 | [True]        |
-   | ceph-rgw            | [u'openstack']                 | [False]       |
-   | cinder              | [u'openstack']                 | [True]        |
-   | dcdbsync            | [u'openstack']                 | [True]        |
-   | fm-rest-api         | [u'openstack']                 | [True]        |
-   | garbd               | [u'openstack']                 | [True]        |
-   | glance              | [u'openstack']                 | [True]        |
-   | gnocchi             | [u'openstack']                 | [True]        |
-   | heat                | [u'openstack']                 | [True]        |
-   | helm-toolkit        | []                             | []            |
-   | horizon             | [u'openstack']                 | [True]        |
-   | ingress             | [u'kube-system', u'openstack'] | [True, True]  |
-   | ironic              | [u'openstack']                 | [False]       |
-   | keystone            | [u'openstack']                 | [True]        |
-   | keystone-api-proxy  | [u'openstack']                 | [True]        |
-   | libvirt             | [u'openstack']                 | [True]        |
-   | mariadb             | [u'openstack']                 | [True]        |
-   | memcached           | [u'openstack']                 | [True]        |
-   | networking-avs      | [u'openstack']                 | [True]        |
-   | neutron             | [u'openstack']                 | [True]        |
-   | nginx-ports-control | []                             | []            |
-   | nova                | [u'openstack']                 | [True]        |
-   | nova-api-proxy      | [u'openstack']                 | [True]        |
-   | openvswitch         | [u'openstack']                 | [True]        |
-   | panko               | [u'openstack']                 | [True]        |
-   | placement           | [u'openstack']                 | [True]        |
-   | rabbitmq            | [u'openstack']                 | [True]        |
-   | version_check       | []                             | []            |
-   +---------------------+--------------------------------+---------------+
+   +---------------------------+--------------------------------+---------------+
+   | chart name                | overrides namespaces           | chart enabled |
+   +---------------------------+--------------------------------+---------------+
+   | aodh                      | [u'openstack']                 | [True]        |
+   | barbican                  | [u'openstack']                 | [False]       |
+   | ceilometer                | [u'openstack']                 | [True]        |
+   | ceph-rgw                  | [u'openstack']                 | [False]       |
+   | cinder                    | [u'openstack']                 | [True]        |
+   | dcdbsync                  | [u'openstack']                 | [True]        |
+   | fm-rest-api               | [u'openstack']                 | [False]       |
+   | garbd                     | [u'openstack']                 | [True]        |
+   | glance                    | [u'openstack']                 | [True]        |
+   | gnocchi                   | [u'openstack']                 | [True]        |
+   | heat                      | [u'openstack']                 | [True]        |
+   | horizon                   | [u'openstack']                 | [True]        |
+   | ingress                   | [u'kube-system', u'openstack'] | [True, True]  |
+   | ironic                    | [u'openstack']                 | [False]       |
+   | keystone                  | [u'openstack']                 | [True]        |
+   | keystone-api-proxy        | [u'openstack']                 | [True]        |
+   | libvirt                   | [u'openstack']                 | [True]        |
+   | mariadb                   | [u'openstack']                 | [True]        |
+   | memcached                 | [u'openstack']                 | [True]        |
+   | networking-avs            | [u'openstack']                 | [True]        |
+   | neutron                   | [u'openstack']                 | [True]        |
+   | nginx-ports-control       | []                             | []            |
+   | nova                      | [u'openstack']                 | [True]        |
+   | nova-api-proxy            | [u'openstack']                 | [True]        |
+   | openstack-helm-toolkit    | []                             | []            |
+   | openstack-psp-rolebinding | [u'openstack']                 | [True]        |
+   | openvswitch               | [u'openstack']                 | [True]        |
+   | panko                     | [u'openstack']                 | [True]        |
+   | placement                 | [u'openstack']                 | [True]        |
+   | rabbitmq                  | [u'openstack']                 | [True]        |
+   +---------------------------+--------------------------------+---------------+
 
 #. To reapply these changes to the |prefix|-openstack application, run the
    following command.
 
   .. parsed-literal::
 
      ~(keystone_admin)]$ system application-apply |prefix|-openstack
 
    Once |prefix|-openstack is applied successfully, telemetry services
-    will be available.
+    should be available.
 
 #. Run the following helm command to verify the updates.
 
    .. code-block:: none
 
-   ~(keystone_admin)]$ helm list | grep -E ceilometer|gnocchi|panko|aodh
+   ~(keystone_admin)]$ helm list | grep -E 'ceilometer|gnocchi|panko|aodh'
diff --git a/doc/source/system_configuration/openstack/customize-openstack-horizon-and-login-banner-branding.rst b/doc/source/system_configuration/openstack/customize-openstack-horizon-and-login-banner-branding.rst
new file mode 100644
index 000000000..45375a83b
--- /dev/null
+++ b/doc/source/system_configuration/openstack/customize-openstack-horizon-and-login-banner-branding.rst
@@ -0,0 +1,26 @@
+
+.. ugk1563906611679
+.. _customize-openstack-horizon-and-login-banner-branding:
+
+=====================================================
+Customize OpenStack Horizon and Login Banner Branding
+=====================================================
+
+Use the following instructions and examples to create and apply a tarball
+containing a custom |os-prod-hor-long| theme and associated branding files,
+and to customize pre-login messages \(issue\) and post-login messages.
+
+You can modify the existing style sheet, font, and image files to develop your
+own branding, package it, and then apply the branding by installing the tarball
+that includes the modified files along with a manifest. For more information,
+see |sysconf-doc|: :ref:`Creating a Custom Branding
+Tarball `.
+
+You can also customize pre-login messages \(issue\) and post-login messages of
+the day \(motd\) across |prod-os| during system commissioning and installation.
+For more information, see |sysconf-doc|: :ref:`Branding
+the Login Banner During Commissioning
+` and :ref:`Branding the Login
+Banner on a Commissioned System
+`.
+
diff --git a/doc/source/system_configuration/openstack/index.rst b/doc/source/system_configuration/openstack/index.rst
index 664e72be8..109500932 100644
--- a/doc/source/system_configuration/openstack/index.rst
+++ b/doc/source/system_configuration/openstack/index.rst
@@ -21,3 +21,4 @@ Configure OpenStack Services Using Helm Chart Overrides
    using-helm-overrides-to-enable-internal-dns
    adding-configuration-rpc-response-max-timeout-in-neutron-conf
    assigning-a-dedicated-vlan-id-to-a-target-project-network
+   customize-openstack-horizon-and-login-banner-branding
diff --git a/doc/source/updates/index.rst b/doc/source/updates/index.rst
new file mode 100644
index 000000000..1eb643ec9
--- /dev/null
+++ b/doc/source/updates/index.rst
@@ -0,0 +1,13 @@
+=======
+Updates
+=======
+
+---------
+OpenStack
+---------
+
+.. check what put here
+
+.. toctree::
+   :maxdepth: 2
+
+   openstack/index
diff --git a/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst b/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst
new file mode 100644
index 000000000..0f1e7ad0a
--- /dev/null
+++ b/doc/source/updates/openstack/apply-update-to-the-stx-openstack-application.rst
@@ -0,0 +1,115 @@
+
+.. uqi1590003050708
+.. _apply-update-to-the-stx-openstack-application:
+
+=========================================
+Apply Update to the OpenStack Application
+=========================================
+
+|prod-os| is managed using the StarlingX Application Package Manager.
+
+.. rubric:: |context|
+
+Use the StarlingX Application Package Manager :command:`application-update`
+command to perform an update.
+
+.. code-block:: none
+
+   ~(keystone_admin)$ system application-update [-n <app_name> | --app-name <app_name>]
+      [-v <app_version> | --app-version <app_version>] <tar_file>
+
+where the following are optional arguments:
+
+**<app_name>**
+   The name of the application to update.
+
+   You can look up the name of an application using the
+   :command:`application-list` command:
+
+   .. parsed-literal::
+
+      ~(keystone_admin)$ system application-list
+      +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
+      | application              | version  | manifest name                 | manifest file             | status   | progress  |
+      +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
+      | cert-manager             | 20.06-5  | cert-manager-manifest         | certmanager-manifest.yaml | applied  | completed |
+      | nginx-ingress-controller | 20.06-0  | nginx-ingress-controller-     | nginx_ingress_controller  | applied  | completed |
+      |                          |          | -manifest                     | _manifest.yaml            |          |           |
+      | oidc-auth-apps           | 20.06-28 | oidc-auth-manifest            | manifest.yaml             | uploaded | completed |
+      | platform-integ-apps      | 20.06-11 | platform-integration-manifest | manifest.yaml             | applied  | completed |
+      | |prefix|-openstack |s|   | 20.10-0- | armada-manifest               | |prefix|-openstack.yaml |s| | applied | completed |
+      |                          | centos-  |                               |                           |          |           |
+      |                          | stable-  |                               |                           |          |           |
+      |                          | versioned|                               |                           |          |           |
+      +--------------------------+----------+-------------------------------+---------------------------+----------+-----------+
+
+   The output indicates that the currently installed version of
+   |prefix|-openstack is 20.10-0.
+
+**<app_version>**
+   The version to update the application to.
+
+and the following is a positional argument which must come last:
+
+**<tar_file>**
+   The tar file containing the application manifest, Helm charts and
+   configuration file.
+
+.. note::
+
+   In a |prod-dc| configuration, the |prod-dc| System Controllers should be
+   upgraded before the subclouds.
+
+.. rubric:: |proc|
+
+
+.. _apply-update-to-the-stx-openstack-application-steps-inn-llt-kmb:
+
+#. Retrieve the latest |prod-os| application tarball,
+   |prefix|-openstack-<major.minor>-<patch>.tgz, from |dnload-loc|.
+
+   .. note::
+      The major-minor version is based on the current product release
+      version. The patch version will change within the release based on
+      incremental updates.
+
+#. Source the environment.
+
+   .. code-block:: none
+
+      $ source /etc/platform/openrc
+      ~(keystone_admin)$
+
+#. Update the application.
+
+   This will upload the software version and automatically apply it to the
+   system.
+
+   For example:
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system application-update |prefix|-openstack-20.10-1.tgz
+
+#. Monitor the status of the application-apply operation until it has
+   completed successfully.
+
+   .. code-block:: none
+
+      ~(keystone_admin)$ system application-show |prefix|-openstack
+      +---------------+----------------------------------+
+      | Property      | Value                            |
+      +---------------+----------------------------------+
+      | active        | True                             |
+      | app_version   | 20.06-1                          |
+      | created_at    | 2020-05-02T17:11:48.718963+00:00 |
+      | manifest_file | |prefix|-openstack.yaml |s|      |
+      | manifest_name | openstack-armada-manifest        |
+      | name          | |prefix|-openstack |s|           |
+      | progress      | completed                        |
+      | status        | applied                          |
+      | updated_at    | 2020-05-02T17:44:40.152201+00:00 |
+      +---------------+----------------------------------+
+
+
diff --git a/doc/source/updates/openstack/index.rst b/doc/source/updates/openstack/index.rst
new file mode 100644
index 000000000..5127221de
--- /dev/null
+++ b/doc/source/updates/openstack/index.rst
@@ -0,0 +1,14 @@
+
+---------
+OpenStack
+---------
+
+===============
+Software Update
+===============
+
+.. toctree::
+   :maxdepth: 1
+
+   apply-update-to-the-stx-openstack-application
+   software-updates-and-upgrades-overview
\ No newline at end of file
diff --git a/doc/source/updates/openstack/software-updates-and-upgrades-overview.rst b/doc/source/updates/openstack/software-updates-and-upgrades-overview.rst
new file mode 100644
index 000000000..f2b205a38
--- /dev/null
+++ b/doc/source/updates/openstack/software-updates-and-upgrades-overview.rst
@@ -0,0 +1,26 @@
+
+.. dqn1590002648435
+.. _software-updates-and-upgrades-overview:
+
+========
+Overview
+========
+
+The system application-update -n |prefix|-openstack -v <version>
+command is used for corrective content \(bug fix\) type updates to the
+running containerized OpenStack application.
+
+This means that the system application-update -n |prefix|-openstack command is
+**not** used for upgrading between OpenStack releases \(e.g. Train to Ussuri\).
+The :command:`system application-update` assumes that there are no data schema
+changes or data migrations required in order to update to the new OpenStack
+container image\(s\).
+
+The system application-update -n |prefix|-openstack effectively performs a
+helm upgrade of one or more of the OpenStack Helm chart releases within the
+Armada Application. One or all of the containerized OpenStack deployments will
+be updated according to their deployment specification.
+
+.. note::
+   Compute nodes do not need to be reset, and hosted application |VMs| are not impacted.
+
diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-access-overview.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-access-overview.rst
index 5b7eef561..f4a0bac16 100644
--- a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-access-overview.rst
+++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-access-overview.rst
@@ -7,7 +7,7 @@
 Access Overview
 ===============
 
 A general user of |prod| can access the system using remote
-**kubectl**/**helm** :abbr:`CLIs (Command Line Interfaces)` and the Kubernetes
+**kubectl**/**helm** |CLIs| and the Kubernetes
 Dashboard.
 
 Your |prod| administrator must setup a user account \(that is, either a local
diff --git a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host.rst b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host.rst
index a2b2836cc..cb90fc331 100644
--- a/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host.rst
+++ b/doc/source/usertasks/kubernetes/kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host.rst
@@ -7,9 +7,8 @@
 Install Kubectl and Helm Clients Directly on a Host
 ===================================================
 
-As an alternative to using the container-backed Remote :abbr:`CLIs (Command
-Line Interfaces)` for kubectl and helm, you can install these commands
-directly on your remote host.
+As an alternative to using the container-backed Remote |CLIs| for kubectl and
+helm, you can install these commands directly on your remote host.
 
 .. rubric:: |context|
 
@@ -30,7 +29,7 @@ You will need the following information from your |prod| administrator:
 
 .. _kubernetes-user-tutorials-installing-kubectl-and-helm-clients-directly-on-a-host-ul-nlr-1pq-nlb:
 
-- the floating OAM IP address of the |prod|
+- the floating |OAM| IP address of the |prod|
 
 - login credential information; in this example, it is the "TOKEN" for a
   local Kubernetes ServiceAccount.
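+
+For orientation only, the following sketch shows how these values are
+typically wired into a local kubeconfig; the IP address, port, and account
+names shown are illustrative, not part of the documented procedure:
+
+.. code-block:: none
+
+   % kubectl config set-cluster mycluster --server=https://10.10.10.2:6443 \
+   --insecure-skip-tls-verify=true
+   % kubectl config set-credentials admin-user --token=$TOKEN
+   % kubectl config set-context admin-context --cluster=mycluster --user=admin-user
+   % kubectl config use-context admin-context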
diff --git a/doc/source/usertasks/kubernetes/remote-cli-access.rst b/doc/source/usertasks/kubernetes/remote-cli-access.rst
index a5ae38867..acf2276ac 100644
--- a/doc/source/usertasks/kubernetes/remote-cli-access.rst
+++ b/doc/source/usertasks/kubernetes/remote-cli-access.rst
@@ -6,8 +6,8 @@
 Remote CLI Access
 =================
 
-You can access the system :abbr:`CLIs (Command Line Interfaces)` from a
-remote workstation using one of the two methods.
+You can access the system |CLIs| from a remote workstation using one of two
+methods.
 
 .. xreflink .. note::
     To use the remote Windows Active Directory server for authentication of
@@ -20,7 +20,7 @@ remote workstation using one of the two methods.
     Interface)` tarball from |dnload-loc| to install a set of container-backed
     remote CLIs for accessing a remote |prod-long|. This provides access to the
     kubernetes-related CLIs \(kubectl, helm\). This approach is
-    simple to install, portable across Linux, OSX and Windows, and provides
+    simple to install, portable across Linux, macOS, and Windows, and provides
     access to all |prod-long| CLIs. However, commands such as those that
     reference local files or require a shell are awkward to run in this
     environment.
diff --git a/doc/source/usertasks/kubernetes/usertask-using-container-backed-remote-clis-and-clients.rst b/doc/source/usertasks/kubernetes/usertask-using-container-backed-remote-clis-and-clients.rst
index 039a403e0..55cc82694 100644
--- a/doc/source/usertasks/kubernetes/usertask-using-container-backed-remote-clis-and-clients.rst
+++ b/doc/source/usertasks/kubernetes/usertask-using-container-backed-remote-clis-and-clients.rst
@@ -6,9 +6,9 @@
 Use Container-backed Remote CLIs
 ================================
 
-Remote platform :abbr:`CLIs (Command Line Interfaces)` can be used in any shell
-after sourcing the generated remote CLI/client RC file. This RC file sets up
-the required environment variables and aliases for the remote CLI commands.
+Remote platform |CLIs| can be used in any shell after sourcing the generated
+remote CLI/client RC file. This RC file sets up the required environment
+variables and aliases for the remote CLI commands.
 
 .. contents:: The following topics are discussed below:
    :local:
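+
+For example, assuming the generated RC file is named
+**remote_client_platform.sh** \(the file name is illustrative\), a session
+could look like the following sketch:
+
+.. code-block:: none
+
+   $ source remote_client_platform.sh
+   $ system host-list
+   $ kubectl -n kube-system get pods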