Merge "Align r5 and r6 bare metal procedures"

commit 80bc959a99 — Zuul, 2021-09-15 19:35:55 +00:00, committed by Gerrit Code Review
15 changed files with 107 additions and 224 deletions

View File

@@ -92,13 +92,13 @@ Configure worker nodes
.. important::
-**These steps are required only if the StarlingX OpenStack application
-(|prefix|-openstack) will be installed.**
+These steps are required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prefix|-openstack manifest and helm-charts later.
-.. parsed-literal
+.. parsed-literal::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled

@@ -111,7 +111,7 @@ Configure worker nodes
If using |OVS-DPDK| vswitch, run the following commands:
Default recommendation for worker node is to use two cores on numa-node 0
-for |OVS-DPDK| vSwitch; physical NICs are typically on first numa-node.
+for |OVS-DPDK| vSwitch; physical |NICs| are typically on first numa-node.
This should have been automatically configured, if not run the following
command.

@@ -124,7 +124,6 @@ Configure worker nodes
done
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
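For reference, the huge-page assignment described above is done with :command:`system host-memory-modify`, as the controller hunks later in this change show verbatim; a minimal sketch for the two worker nodes, assuming two |NUMA| nodes per host:

.. code-block:: bash

   # Reserve 1x 1G huge page for vSwitch memory on each NUMA node (0 and 1)
   for NODE in worker-0 worker-1; do
     system host-memory-modify -f vswitch -1G 1 $NODE 0
     system host-memory-modify -f vswitch -1G 1 $NODE 1
   done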

View File

@@ -254,8 +254,8 @@ Configure controller-0
.. important::
-**These steps are required only if the StarlingX OpenStack application
-(|prefix|-openstack) will be installed.**
+These steps are required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prefix|-openstack manifest and helm-charts later.

View File

@@ -216,6 +216,7 @@ The newly installed controller needs to be configured.
To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.
#. Configure |NTP| servers for network time synchronization:
::
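For reference, the literal block elided by this hunk is the |NTP| configuration command; a minimal sketch, with the pool servers as placeholder values:

.. code-block:: bash

   # Point the platform at your NTP servers (comma-separated, no spaces)
   system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org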
@@ -235,11 +236,11 @@ The newly installed controller needs to be configured.
.. important::
-**These steps are required only if the StarlingX OpenStack application
-(|prereq|-openstack) will be installed.**
+These steps are required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
-support of installing the |prereq|-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx

@@ -275,7 +276,7 @@ The newly installed controller needs to be configured.
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
-* Runs in a container; defined within the helm charts of |prereq|-openstack
+* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -311,7 +312,6 @@ The newly installed controller needs to be configured.
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-0
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|

@@ -331,7 +331,6 @@ The newly installed controller needs to be configured.
# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 1
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use

@@ -356,7 +355,7 @@ The newly installed controller needs to be configured.
locking and unlocking controller-0 to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume
-group, which is needed for |prereq|-openstack nova ephemeral disks.
+group, which is needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

@@ -583,8 +582,8 @@ machine.
.. only:: openstack
* **For OpenStack only:** Due to the additional openstack services
-containers running on the controller host, the size of the docker filesystem
-needs to be increased from the default size of 30G to 60G.
+containers running on the controller host, the size of the docker
+filesystem needs to be increased from the default size of 30G to 60G.
.. code-block:: bash

View File

@@ -281,7 +281,7 @@ Configure controller-0
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
-support of installing the |prereq|-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
::

@@ -293,7 +293,7 @@ Configure controller-0
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
-* Runs in a container; defined within the helm charts of |prereq|-openstack
+* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -312,7 +312,7 @@ Configure controller-0
system modify --vswitch_type none
This does not run any vSwitch directly on the host, instead, it uses
-the containerized |OVS| defined in the helm charts of |prereq|-openstack
+the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.
To deploy |OVS-DPDK|, run the following command:
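The deploy command referenced here appears in full in a later file of this change; a minimal sketch (the docs use the |ovs-dpdk| substitution in place of the literal value):

.. code-block:: bash

   # Replace the default containerized OVS with OVS-DPDK running on the host
   system modify --vswitch_type ovs-dpdk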
@@ -481,7 +481,6 @@ Configure controller-1
To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.
#. The MGMT interface is partially set up by the network install procedure;
configuring the port used for network install as the MGMT port and
specifying the attached network of "mgmt".
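For reference, completing the MGMT interface on controller-1 is typically a single assignment of the remaining attached network; a sketch, assuming the interface is named mgmt0 as in these guides:

.. code-block:: bash

   # The mgmt network is already attached by the network install;
   # also attach the cluster-host network to the same interface
   system interface-network-assign controller-1 mgmt0 cluster-host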
@@ -506,7 +505,7 @@ Configure controller-1
(|prefix|-openstack) will be installed.
**For OpenStack only:** Assign OpenStack host labels to controller-1 in
-support of installing the |prereq|-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
::

@@ -526,7 +525,6 @@ Unlock controller-1 in order to bring it into service:
system host-unlock controller-1
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

@@ -624,7 +622,7 @@ Configure worker nodes
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
-support of installing the |prereq|-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. parsed-literal::

@@ -698,7 +696,7 @@ Configure worker nodes
done
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
-needed for |prereq|-openstack nova ephemeral disks.
+needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

View File

@@ -277,9 +277,9 @@ Configure worker nodes
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
-support of installing the |prereq|-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
-.. parsed-literal
+.. parsed-literal::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled

@@ -302,6 +302,7 @@ Configure worker nodes
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 2 $NODE
done
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on

@@ -349,7 +350,7 @@ Configure worker nodes
done
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
-needed for |prereq|-openstack nova ephemeral disks.
+needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

View File

@@ -63,7 +63,7 @@ standard configuration, either:
This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute-labeled worker node, with the
-StarlingX OpenStack application (|prereq|-openstack) applied.
+StarlingX OpenStack application (|prefix|-openstack) applied.
.. toctree::
:maxdepth: 1

View File

@@ -29,7 +29,7 @@ Install software on worker nodes
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
-| 2 | controller-0 | controller | unlocked | enabled | available |
+| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | None | None | locked | disabled | offline |
| 4 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+

@@ -93,10 +93,10 @@ Configure worker nodes
.. important::
**These steps are required only if the StarlingX OpenStack application
-(stx-openstack) will be installed.**
+(|prefix|-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. parsed-literal::

@@ -108,10 +108,10 @@ Configure worker nodes
#. **For OpenStack only:** Configure the host settings for the vSwitch.
-**If using OVS-DPDK vswitch, run the following commands:**
+If using |OVS-DPDK| vswitch, run the following commands:
Default recommendation for worker node is to use two cores on numa-node 0
-for |OVS|-|DPDK| vSwitch; physical |NICs| are typically on first numa-node.
+for |OVS-DPDK| vSwitch; physical |NICs| are typically on first numa-node.
This should have been automatically configured, if not run the following
command.

@@ -124,9 +124,9 @@ Configure worker nodes
done
-When using |OVS|-|DPDK|, configure 1G of huge pages for vSwitch memory on
+When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
-page (-1G 1) for vSwitch memory on each |NUMA| node on host.
+page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M

@@ -148,7 +148,7 @@ Configure worker nodes
.. important::
-|VMs| created in an |OVS|-|DPDK| environment must be configured to use
+|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
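The flavor property named here is set with the standard OpenStack client; a minimal sketch, with the flavor name as a placeholder:

.. code-block:: bash

   # Require huge-page-backed memory for all instances using this flavor
   openstack flavor set <flavor-name> --property hw:mem_page_size=large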
@@ -168,30 +168,8 @@ Configure worker nodes
done
-#. **For OpenStack Only:** Optionally configure the number of host CPUs in
-NOVAs dedicated CPU Pool for this host. By default, all remaining host
-CPUs, outside of platform and vswitch host CPUs, are assigned to NOVAs
-shared CPU Pool for this host. List the number of host cpus and function
-assignments and configure the required dedicated host CPUs.
-.. code-block:: bash
-# list the number and function assignments for hosts CPUs
-# application function → in NOVAs shared CPU Pool
-# application-isolated function → in NOVAs dedicated CPU Pool
-~(keystone)admin)$ system host-cpu-list worker-0
-~(keystone)admin)$ system host-cpu-list worker-1
-# Configure the required number of host CPUs in NOVAs dedicated CPU Pool for each processor/socket
-for NODE in worker-0 worker-1; do
-system host-cpu-modify -f application-isolated -p0 10 $NODE
-system host-cpu-modify -f application-isolated -p1 10 $NODE
-done
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
-needed for stx-openstack nova ephemeral disks.
+needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

@@ -205,7 +183,7 @@ Configure worker nodes
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
-# system host-show ${NODE} --nowrap | fgrep rootfs )
+# system host-show ${NODE} | fgrep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

@@ -216,7 +194,7 @@ Configure worker nodes
# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30
-system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
+system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
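These partition steps assume the nova-local volume group already exists on the node; in these guides it is created beforehand, roughly:

.. code-block:: bash

   # Create the nova-local local volume group that the new partition joins
   system host-lvg-add ${NODE} nova-local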

View File

@@ -255,19 +255,19 @@ Configure controller-0
.. important::
-**These steps are required only if the StarlingX OpenStack application
-(stx-openstack) will be installed.**
+These steps are required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx
-::
+.. parsed-literal::
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
-system host-label-assign controller-0 openvswitch=enabled
+system host-label-assign controller-0 |vswitch-label|
system host-label-assign controller-0 sriov=enabled
.. only:: partner

@@ -294,7 +294,7 @@ Configure controller-0
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
-* Runs in a container; defined within the helm charts of stx-openstack
+* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -313,8 +313,8 @@ Configure controller-0
system modify --vswitch_type none
This does not run any vSwitch directly on the host, instead, it uses
-the containerized |OVS| defined in the helm charts of stx-openstack
-manifest.
+the containerized |OVS| defined in the helm charts of
+|prefix|-openstack manifest.
To deploy |OVS-DPDK|, run the following command:

@@ -380,25 +380,9 @@ Configure controller-0
After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
-#. **For OpenStack Only:** Optionally configure the number of host CPUs in
-NOVAs dedicated CPU Pool for this host. By default, all remaining host
-CPUs, outside of platform and vswitch host CPUs, are assigned to NOVAs
-shared CPU Pool for this host. List the number of host cpus and function
-assignments and configure the required dedicated host CPUs.
-.. code-block:: bash
-# list the number and function assignments for hosts CPUs
-# application function → in NOVAs shared CPU Pool
-# application-isolated function → in NOVAs dedicated CPU Pool
-~(keystone)admin)$ system host-cpu-list controller-0
-# Configure the required number of host CPUs in NOVAs dedicated CPU Pool for each processor/socket
-~(keystone)admin)$ system host-cpu-modify -f application-isolated -p0 10 controller-0
-~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 10 controller-0
#. **For OpenStack only:** Set up disk partition for nova-local volume
-group, which is needed for stx-openstack nova ephemeral disks.
+group, which is needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

@@ -464,8 +448,6 @@ Configure controller-0
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
-system datanetwork-add ${DATANET0} vlan
-system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}

@@ -619,19 +601,18 @@ Unlock controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
-# 80G, you will need to add a new disk partition to cgts-vg.
+# 60G, you will need to add a new disk partition to cgts-vg.
+# There must be at least 20GB of available space after the docker
+# filesystem is increased.
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk )
# Get device path of ROOT DISK
-system host-show controller-0 --nowrap | fgrep rootfs
+system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partitions uuid in response
# Use a partition size such that youll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30
-system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
+system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
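Once the partition lands in cgts-vg, the docker filesystem itself still has to be grown; a sketch of the resize the 30G-to-60G comments refer to, assuming the host-fs-modify semantics used elsewhere in these guides:

.. code-block:: bash

   # Grow the docker filesystem from the default 30G to 60G
   system host-fs-modify controller-0 docker=60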
@@ -734,7 +715,7 @@ Configure controller-1
system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
-system host-label-assign controller-1 openvswitch=enabled
+system host-label-assign controller-1 |vswitch-label|
system host-label-assign controller-1 sriov=enabled
.. only:: partner

@@ -757,7 +738,7 @@ Configure controller-1
#. **For OpenStack only:** Configure the host settings for the vSwitch.
-If using |OVS-DPDK| vSwitch, run the following commands:
+If using |OVS-DPDK| vswitch, run the following commands:
Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch. This should have been automatically configured,

@@ -770,8 +751,9 @@ Configure controller-1
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
-each |NUMA| node on the host. It is recommended to configure 1x 1G huge
-page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
+each |NUMA| node on the host. It is recommended
+to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
+node on the host.
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M

@@ -784,7 +766,7 @@ Configure controller-1
# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 0
-# assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
+# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 1

@@ -806,25 +788,9 @@ Configure controller-1
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
system host-memory-modify -f application -1G 10 controller-1 1
-#. **For OpenStack Only:** Optionally configure the number of host CPUs in
-NOVAs dedicated CPU Pool for this host. By default, all remaining host
-CPUs, outside of platform and vswitch host CPUs, are assigned to NOVAs
-shared CPU Pool for this host. List the number of host cpus and function
-assignments and configure the required dedicated host CPUs.
-.. code-block:: bash
-# list the number and function assignments for hosts CPUs
-# application function → in NOVAs shared CPU Pool
-# application-isolated function → in NOVAs dedicated CPU Pool
-~(keystone)admin)$ system host-cpu-list controller-1
-# Configure the required number of host CPUs in NOVAs dedicated CPU Pool for each processor/socket
-~(keystone)admin)$ system host-cpu-modify -f application-isolated -p0 10 controller-1
-~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 10 controller-1
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
-which is needed for stx-openstack nova ephemeral disks.
+which is needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

@@ -1012,8 +978,8 @@ machine.
.. only:: openstack
* **For OpenStack only:** Due to the additional openstack services containers
-running on the controller host, the size of the docker filesystem needs to
-be increased from the default size of 30G to 60G.
+running on the controller host, the size of the docker filesystem needs to be
+increased from the default size of 30G to 60G.
.. code-block:: bash

View File

@@ -116,8 +116,9 @@ Bootstrap system on controller-0
.. only:: starlingx
-In either of the above options, the bootstrap playbooks default values
-will pull all container images required for the |prod-p| from Docker hub.
+In either of the above options, the bootstrap playbooks default
+values will pull all container images required for the |prod-p| from
+Docker hub
If you have setup a private Docker registry to use for bootstrapping
then you will need to add the following lines in $HOME/localhost.yml:
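The localhost.yml lines referred to are the docker registry overrides; a minimal sketch, with the mirror URL and credentials as placeholder values:

.. code-block:: yaml

   # $HOME/localhost.yml -- pull bootstrap images from a private mirror
   docker_registries:
     quay.io:
       url: myprivateregistry.abc.com:9001/quay.io
     docker.io:
       url: myprivateregistry.abc.com:9001/docker.io
     defaults:
       type: docker
       username: <username>
       password: <password>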
@@ -235,11 +236,11 @@ The newly installed controller needs to be configured.
.. important::
-**These steps are required only if the StarlingX OpenStack application
-(stx-openstack) will be installed.**
+These steps are required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx

@@ -274,7 +275,7 @@ The newly installed controller needs to be configured.
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
-* Runs in a container; defined within the helm charts of stx-openstack
+* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -303,7 +304,7 @@ The newly installed controller needs to be configured.
system modify --vswitch_type |ovs-dpdk|
Default recommendation for an |AIO|-controller is to use a single core
-for |OVS-DPDK| vswitch.
+for |OVS-DPDK| vSwitch.
.. code-block:: bash

@@ -311,8 +312,9 @@ The newly installed controller needs to be configured.
system host-cpu-modify -f vswitch -p0 1 controller-0
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
-each |NUMA| node on the host. It is recommended to configure 1x 1G huge
-page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
+each |NUMA| node on the host. It is recommended
+to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
+node on the host.
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M

@@ -351,25 +353,8 @@ The newly installed controller needs to be configured.
After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
-#. **For OpenStack Only:** Optionally configure the number of host CPUs in
-NOVAs dedicated CPU Pool for this host. By default, all remaining host
-CPUs, outside of platform and vswitch host CPUs, are assigned to NOVAs
-shared CPU Pool for this host. List the number of host cpus and function
-assignments and configure the required dedicated host CPUs.
-.. code-block:: bash
-# list the number and function assignments for hosts CPUs
-# application function → in NOVAs shared CPU Pool
-# application-isolated function → in NOVAs dedicated CPU Pool
-~(keystone)admin)$ system host-cpu-list controller-0
-# Configure the required number of host CPUs in NOVAs dedicated CPU Pool for each processor/socket
-~(keystone)admin)$ system host-cpu-modify -f application-isolated -p0 10 controller-0
-~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 10 controller-0
#. **For OpenStack only:** Set up disk partition for nova-local volume
-group, which is needed for stx-openstack nova ephemeral disks.
+group, which is needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

@@ -383,7 +368,7 @@ The newly installed controller needs to be configured.
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
-# system host-show ${NODE} --nowrap | fgrep rootfs )
+# system host-show ${NODE} | fgrep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

@@ -705,4 +690,4 @@ machine.
.. only:: partner
.. include:: /_includes/72hr-to-license.rest

View File

@@ -175,9 +175,9 @@ Bootstrap system on controller-0
docker_no_proxy:
- 1.2.3.4
-Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
-for information on additional Ansible bootstrap configurations for advanced
-Ansible bootstrap scenarios.
+Refer to :ref:`Ansible Bootstrap Configurations
+<ansible_bootstrap_configs_r6>` for information on additional Ansible
+bootstrap configurations for advanced Ansible bootstrap scenarios.
#. Run the Ansible bootstrap playbook:

@@ -281,7 +281,7 @@ Configure controller-0
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
::

@@ -293,7 +293,7 @@ Configure controller-0
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
-* Runs in a container; defined within the helm charts of stx-openstack
+* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -312,7 +312,7 @@ Configure controller-0
system modify --vswitch_type none
This does not run any vSwitch directly on the host, instead, it uses
-the containerized |OVS| defined in the helm charts of stx-openstack
+the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.
To deploy |OVS-DPDK|, run the following command:

@@ -324,7 +324,7 @@ Configure controller-0
Once vswitch_type is set to |OVS-DPDK|, any subsequent |AIO|-controller
or worker nodes created will default to automatically assigning 1 vSwitch
core for |AIO| controllers and 2 vSwitch cores (both on numa-node 0;
-physical NICs are typically on first numa-node) for compute-labeled
+physical |NICs| are typically on first numa-node) for compute-labeled
worker nodes.
.. note::

@@ -505,7 +505,7 @@ Configure controller-1
(|prefix|-openstack) will be installed.
**For OpenStack only:** Assign OpenStack host labels to controller-1 in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
::

@@ -622,13 +622,13 @@ Configure worker nodes
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
-.. code-block:: bash
+.. parsed-literal::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
-system host-label-assign $NODE openvswitch=enabled
+system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled
done

@@ -695,30 +695,8 @@ Configure worker nodes
done
-#. **For OpenStack Only:** Optionally configure the number of host CPUs in
-NOVAs dedicated CPU Pool for this host. By default, all remaining host
-CPUs, outside of platform and vswitch host CPUs, are assigned to NOVAs
-shared CPU Pool for this host. List the number of host cpus and function
-assignments and configure the required dedicated host CPUs.
-.. code-block:: bash
-# list the number and function assignments for hosts CPUs
-# application function → in NOVAs shared CPU Pool
-# application-isolated function → in NOVAs dedicated CPU Pool
-~(keystone)admin)$ system host-cpu-list worker-0
-~(keystone)admin)$ system host-cpu-list worker-1
-# Configure the required number of host CPUs in NOVAs dedicated CPU Pool for each processor/socket
-for NODE in worker-0 worker-1; do
-system host-cpu-modify -f application-isolated -p0 10 $NODE
-system host-cpu-modify -f application-isolated -p1 10 $NODE
-done
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
-needed for stx-openstack nova ephemeral disks.
+needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash

@@ -732,7 +710,7 @@ Configure worker nodes
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
-# system host-show ${NODE} --nowrap | fgrep rootfs )
+# system host-show ${NODE} | fgrep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

@@ -811,7 +789,7 @@ Optionally Configure PCI-SRIOV Interfaces
* Configure the pci-sriov interfaces for worker nodes.
-.. code-block::
+.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
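The elided body of this block assigns the pci-sriov class to the chosen interfaces; a sketch of the key command, with the interface UUID, MTU, and VF count as placeholders:

.. code-block:: bash

   # Configure an SR-IOV interface and the number of VFs to create on it
   system host-if-modify -m 1500 -n sriov0 -c pci-sriov -N <num_vfs> ${NODE} <sriov-if-uuid>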
@@ -934,4 +912,4 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers
.. only:: partner
.. include:: /_includes/72hr-to-license.rest

View File

@@ -277,7 +277,7 @@ Configure worker nodes
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
-support of installing the stx-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. parsed-literal::

@@ -291,9 +291,10 @@ Configure worker nodes
If using |OVS-DPDK| vSwitch, run the following commands:
-Default recommendation for worker node is to use two cores on numa-node 0
-for |OVS-DPDK| vSwitch. This should have been automatically configured,
-if not run the following command.
+Default recommendation for worker node is to use node two cores on
+numa-node 0 for |OVS-DPDK| vSwitch; physical NICs are typically on first
+numa-node. This should have been automatically configured, if not run
+the following command.
.. code-block:: bash

@@ -348,28 +349,6 @@ Configure worker nodes
done
-#. **For OpenStack Only:** Optionally configure the number of host CPUs in
-NOVAs dedicated CPU Pool for this host. By default, all remaining host
-CPUs, outside of platform and vswitch host CPUs, are assigned to NOVAs
-shared CPU Pool for this host. List the number of host cpus and function
-assignments and configure the required dedicated host CPUs.
-.. code-block:: bash
-# list the number and function assignments for hosts CPUs
-# application function → in NOVAs shared CPU Pool
-# application-isolated function → in NOVAs dedicated CPU Pool
-~(keystone)admin)$ system host-cpu-list worker-0
-~(keystone)admin)$ system host-cpu-list worker-1
-# Configure the required number of host CPUs in NOVAs dedicated CPU Pool for each processor/socket
-for NODE in worker-0 worker-1; do
-system host-cpu-modify -f application-isolated -p0 10 $NODE
-system host-cpu-modify -f application-isolated -p1 10 $NODE
-done
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for stx-openstack nova ephemeral disks.

@@ -385,7 +364,7 @@ Configure worker nodes
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
-# system host-show ${NODE} --nowrap | fgrep rootfs )
+# system host-show ${NODE} | fgrep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

View File

@@ -33,10 +33,10 @@ installed. Download the latest release tarball for Cygwin from
tarball, extract it to any location and change the Windows <PATH> variable to
include its bin folder from the extracted winpty folder.
-For access to remote CLI, it is required to set the DNS in the cluster using the
-:command:`system service-parameter-add openstack helm endpoint_domain=domain_name`
-command and reapply OpenStack using **system application-apply |prefix|-openstack**
-command.
+For access to remote CLI, it is required to set the DNS in the cluster using
+the :command:`system service-parameter-add openstack helm
+endpoint_domain=domain_name` command and reapply OpenStack using the system
+application-apply |prefix|-openstack command.
The following procedure shows how to configure the Container-backed Remote
|CLIs| for OpenStack remote access.
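Spelled out as commands, the sequence described above would look like this (the domain is a placeholder, and the application name depends on the |prefix| substitution):

.. code-block:: bash

   # Set the endpoint domain used by the remote CLIs ...
   system service-parameter-add openstack helm endpoint_domain=mycompany.com
   # ... then reapply OpenStack to pick it up
   system application-apply <prefix>-openstack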

View File

@@ -138,7 +138,7 @@ serviceparameter-add` command to configure and set the OpenStack domain name:
.. note::
If an error occurs, remove the following ingress parameters, **nova-cluster-fqdn**
-and **nova-namespace-fqdn** and reapply OpenStack using :command:`system application-apply |prefix|-openstack`.
+and **nova-namespace-fqdn** and reapply OpenStack using system application-apply |prefix|-openstack.
#. Apply the |prefix|-openstack application.

View File

@@ -2,11 +2,11 @@
.. important::
-**This step is required only if the StarlingX OpenStack application
-(|prereq|-openstack) will be installed.**
+This step is required only if the StarlingX OpenStack application
+(|prefix-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
-support of installing the |prereq|-openstack manifest and helm-charts later.
+support of installing the |prefix|-openstack manifest and helm-charts later.
.. parsed-literal::

@@ -21,7 +21,7 @@
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
-* Runs in a container; defined within the helm charts of |prereq|-openstack
+* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -40,7 +40,7 @@
system modify --vswitch_type none
This does not run any vSwitch directly on the host, instead, it uses
-the containerized |OVS| defined in the helm charts of |prereq|-openstack
+the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.
To deploy |OVS-DPDK|, run the following command:

@@ -94,7 +94,7 @@
controllers) to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume
-group, which is needed for |prereq|-openstack nova ephemeral disks.
+group, which is needed for |prefix|-openstack nova ephemeral disks.
::

View File

@@ -195,7 +195,7 @@ commands to manage containerized applications provided as part of |prod|.
For example:
-.. code-block:: none
+.. parsed-literal::
~(keystone_admin)]$ system helm-override-show |prefix|-openstack glance openstack

@@ -329,7 +329,7 @@ commands to manage containerized applications provided as part of |prod|.
**mode**
An application-specific mode controlling how the manifest is
applied. This option is used to back-up and restore the
-**|prefix|-openstack** application.
+|prefix|-openstack application.
and the following is a positional argument:

@@ -369,7 +369,7 @@ commands to manage containerized applications provided as part of |prod|.
For example:
-.. code-block:: none
+.. parsed-literal::
~(keystone_admin)]$ system application-abort |prefix|-openstack
Application abort request has been accepted. If the previous operation has not