Align r5 and r6 bare metal procedures
Bring r6 procedures into alignment with updates on r5.

Fix substitutions in unexpandable contexts. |prereq| > |prefix| as required.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>

Change-Id: I4a13d8cb447c014f311228943551e117f8f3b773
parent 7555b32373
commit 6abab2e712
@@ -92,13 +92,13 @@ Configure worker nodes

.. important::

**These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.**
These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prefix|-openstack manifest and helm-charts later.

.. parsed-literal
.. parsed-literal::

for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
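For context, the full worker label-assignment loop that this hunk truncates appears in a later hunk of the same change. Assuming |vswitch-label| resolves to openvswitch=enabled for the default kernel-based OVS vSwitch, the expanded commands would look roughly like this sketch:

.. code-block:: bash

    # label each worker so the OpenStack compute, vSwitch and SR-IOV services are scheduled on it
    for NODE in worker-0 worker-1; do
      system host-label-assign $NODE openstack-compute-node=enabled
      system host-label-assign $NODE openvswitch=enabled
      system host-label-assign $NODE sriov=enabled
    done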
@@ -111,7 +111,7 @@ Configure worker nodes

If using |OVS-DPDK| vswitch, run the following commands:

Default recommendation for worker node is to use two cores on numa-node 0
for |OVS-DPDK| vSwitch; physical NICs are typically on first numa-node.
for |OVS-DPDK| vSwitch; physical |NICs| are typically on first numa-node.
This should have been automatically configured, if not run the following
command.
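The command referenced here is shown further down in the change set; a typical invocation for a pair of worker nodes, using the two-core default recommended above, is sketched below:

.. code-block:: bash

    # assign 2 cores on processor/numa-node 0 of each worker node to vswitch
    for NODE in worker-0 worker-1; do
      system host-cpu-modify -f vswitch -p0 2 $NODE
    done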
@@ -124,7 +124,6 @@ Configure worker nodes

done

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
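A sketch of the corresponding huge page assignment, following the same pattern used for the controllers elsewhere in this change (one 1G page per |NUMA| node):

.. code-block:: bash

    # assign 1x 1G huge page on processor/numa-node 0 and 1 of each worker node to vswitch
    for NODE in worker-0 worker-1; do
      system host-memory-modify -f vswitch -1G 1 $NODE 0
      system host-memory-modify -f vswitch -1G 1 $NODE 1
    done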
@@ -254,8 +254,8 @@ Configure controller-0

.. important::

**These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.**
These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prefix|-openstack manifest and helm-charts later.
@@ -216,6 +216,7 @@ The newly installed controller needs to be configured.

To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.

#. Configure |NTP| servers for network time synchronization:

::
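The literal block introduced by the ``::`` marker is cut off by this hunk; a typical invocation, with placeholder NTP pool hostnames, is:

.. code-block:: bash

    system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org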
@@ -235,11 +236,11 @@ The newly installed controller needs to be configured.

.. important::

**These steps are required only if the StarlingX OpenStack application
(|prereq|-openstack) will be installed.**
These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prereq|-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. only:: starlingx

@@ -275,7 +276,7 @@ The newly installed controller needs to be configured.

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of |prereq|-openstack
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -311,7 +312,6 @@ The newly installed controller needs to be configured.

# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-0

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|

@@ -331,7 +331,6 @@ The newly installed controller needs to be configured.

# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 1

.. important::

|VMs| created in an |OVS-DPDK| environment must be configured to use
@@ -356,7 +355,7 @@ The newly installed controller needs to be configured.

locking and unlocking controller-0 to apply the change.

#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for |prereq|-openstack nova ephemeral disks.
group, which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash
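The bash block this step introduces is truncated by the hunk. Based on the partition commands shown later in this change, the nova-local setup follows roughly this sequence (a sketch; <disk-uuid> and <new-partition-uuid> are placeholders, and the host-lvg-add step is an assumption about how the volume group is first created):

.. code-block:: bash

    export NODE=controller-0

    # create the nova-local volume group
    system host-lvg-add ${NODE} nova-local

    # list disks and note the UUID of the disk to partition
    system host-disk-list ${NODE}

    # create an LVM partition and add it to nova-local
    PARTITION_SIZE=30
    system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
    system host-pv-add ${NODE} nova-local <new-partition-uuid>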
@@ -583,8 +582,8 @@ machine.

.. only:: openstack

* **For OpenStack only:** Due to the additional openstack services’
containers running on the controller host, the size of the docker filesystem
needs to be increased from the default size of 30G to 60G.
containers running on the controller host, the size of the docker
filesystem needs to be increased from the default size of 30G to 60G.

.. code-block:: bash
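The resize commands themselves fall outside this hunk. A minimal sketch, assuming the docker filesystem is grown with host-fs-modify and that cgts-vg has enough free space:

.. code-block:: bash

    # check available space in the cgts-vg volume group backing the docker filesystem
    system host-lvg-list controller-0

    # increase the docker filesystem from the default 30G to 60G
    system host-fs-modify controller-0 docker=60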
@@ -281,7 +281,7 @@ Configure controller-0

(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prereq|-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

::

@@ -293,7 +293,7 @@ Configure controller-0

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of |prereq|-openstack
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -312,7 +312,7 @@ Configure controller-0

system modify --vswitch_type none

This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of |prereq|-openstack
the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.

To deploy |OVS-DPDK|, run the following command:
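The command referred to at the end of this hunk appears in a later hunk as ``system modify --vswitch_type |ovs-dpdk|``; assuming the |ovs-dpdk| substitution resolves to the literal value, it expands to:

.. code-block:: bash

    system modify --vswitch_type ovs-dpdk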
@@ -481,7 +481,6 @@ Configure controller-1

To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.

#. The MGMT interface is partially set up by the network install procedure;
configuring the port used for network install as the MGMT port and
specifying the attached network of "mgmt".

@@ -506,7 +505,7 @@ Configure controller-1

(|prefix|-openstack) will be installed.

**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the |prereq|-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

::

@@ -526,7 +525,6 @@ Unlock controller-1 in order to bring it into service:

system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
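Once controller-1 is back in service, its state can be confirmed with the same host listing used earlier in the guide:

.. code-block:: bash

    # controller-1 should report unlocked/enabled/available when it has rejoined the cluster
    system host-list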
@@ -624,7 +622,7 @@ Configure worker nodes

(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prereq|-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. parsed-literal::

@@ -698,7 +696,7 @@ Configure worker nodes

done

#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for |prereq|-openstack nova ephemeral disks.
needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -277,9 +277,9 @@ Configure worker nodes

(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prereq|-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. parsed-literal
.. parsed-literal::

for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled

@@ -302,6 +302,7 @@ Configure worker nodes

# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 2 $NODE

done

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on

@@ -349,7 +350,7 @@ Configure worker nodes

done

#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for |prereq|-openstack nova ephemeral disks.
needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -63,7 +63,7 @@ standard configuration, either:

This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute-labeled worker node, with the
StarlingX OpenStack application (|prereq|-openstack) applied.
StarlingX OpenStack application (|prefix|-openstack) applied.

.. toctree::
:maxdepth: 1

@@ -29,7 +29,7 @@ Install software on worker nodes

| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
@@ -93,10 +93,10 @@ Configure worker nodes

.. important::

**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
(|prefix|-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. parsed-literal::

@@ -108,10 +108,10 @@ Configure worker nodes

#. **For OpenStack only:** Configure the host settings for the vSwitch.

**If using OVS-DPDK vswitch, run the following commands:**
If using |OVS-DPDK| vswitch, run the following commands:

Default recommendation for worker node is to use two cores on numa-node 0
for |OVS|-|DPDK| vSwitch; physical |NICs| are typically on first numa-node.
for |OVS-DPDK| vSwitch; physical |NICs| are typically on first numa-node.
This should have been automatically configured, if not run the following
command.

@@ -124,9 +124,9 @@ Configure worker nodes

done

When using |OVS|-|DPDK|, configure 1G of huge pages for vSwitch memory on
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on host.
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M

@@ -148,7 +148,7 @@ Configure worker nodes

.. important::

|VMs| created in an |OVS|-|DPDK| environment must be configured to use
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large

@@ -168,30 +168,8 @@ Configure worker nodes

done

#. **For OpenStack Only:** Optionally configure the number of host CPUs in
NOVA’s dedicated CPU Pool for this host. By default, all remaining host
CPUs, outside of platform and vswitch host CPUs, are assigned to NOVA’s
shared CPU Pool for this host. List the number of host cpus and function
assignments and configure the required dedicated host CPUs.

.. code-block:: bash

# list the number and function assignments for host’s CPUs
# ‘application’ function → in NOVA’s shared CPU Pool
# ‘application-isolated’ function → in NOVA’s dedicated CPU Pool
~(keystone)admin)$ system host-cpu-list worker-0
~(keystone)admin)$ system host-cpu-list worker-1

# Configure the required number of host CPUs in NOVA’s dedicated CPU Pool for each processor/socket
for NODE in worker-0 worker-1; do

system host-cpu-modify -f application-isolated -p0 10 $NODE
system host-cpu-modify -f application-isolated -p1 10 $NODE

done

#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for stx-openstack nova ephemeral disks.
needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -205,7 +183,7 @@ Configure worker nodes

# List host’s disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
# ‘system host-show ${NODE} | fgrep rootfs’ )

# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

@@ -216,7 +194,7 @@ Configure worker nodes

# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30

system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

# Add new partition to ‘nova-local’ local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
@@ -255,19 +255,19 @@ Configure controller-0

.. important::

**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. only:: starlingx

::
.. parsed-literal::

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
system host-label-assign controller-0 |vswitch-label|
system host-label-assign controller-0 sriov=enabled

.. only:: partner

@@ -294,7 +294,7 @@ Configure controller-0

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of stx-openstack
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -313,8 +313,8 @@ Configure controller-0

system modify --vswitch_type none

This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of stx-openstack
manifest.
the containerized |OVS| defined in the helm charts of
|prefix|-openstack manifest.

To deploy |OVS-DPDK|, run the following command:

@@ -380,25 +380,9 @@ Configure controller-0

After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.

#. **For OpenStack Only:** Optionally configure the number of host CPUs in
NOVA’s dedicated CPU Pool for this host. By default, all remaining host
CPUs, outside of platform and vswitch host CPUs, are assigned to NOVA’s
shared CPU Pool for this host. List the number of host cpus and function
assignments and configure the required dedicated host CPUs.

.. code-block:: bash

# list the number and function assignments for host’s CPUs
# ‘application’ function → in NOVA’s shared CPU Pool
# ‘application-isolated’ function → in NOVA’s dedicated CPU Pool
~(keystone)admin)$ system host-cpu-list controller-0

# Configure the required number of host CPUs in NOVA’s dedicated CPU Pool for each processor/socket
~(keystone)admin)$ system host-cpu-modify -f application-isolated -p0 10 controller-0
~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 10 controller-0

#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
group, which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -464,8 +448,6 @@ Configure controller-0

# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan

# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
@@ -619,19 +601,18 @@ Unlock controller-0

# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
# 80G, you will need to add a new disk partition to cgts-vg.
# There must be at least 20GB of available space after the docker
# filesystem is increased.
# 60G, you will need to add a new disk partition to cgts-vg.

# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk )
# Get device path of ROOT DISK
system host-show controller-0 --nowrap | fgrep rootfs
system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response
# Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to ‘cgts-vg’ local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
@@ -734,7 +715,7 @@ Configure controller-1

system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 openvswitch=enabled
system host-label-assign controller-1 |vswitch-label|
system host-label-assign controller-1 sriov=enabled

.. only:: partner

@@ -757,7 +738,7 @@ Configure controller-1

#. **For OpenStack only:** Configure the host settings for the vSwitch.

If using |OVS-DPDK| vSwitch, run the following commands:
If using |OVS-DPDK| vswitch, run the following commands:

Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch. This should have been automatically configured,

@@ -770,8 +751,9 @@ Configure controller-1

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M

@@ -784,7 +766,7 @@ Configure controller-1

# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 0

# assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 1

@@ -806,25 +788,9 @@ Configure controller-1

# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
system host-memory-modify -f application -1G 10 controller-1 1

#. **For OpenStack Only:** Optionally configure the number of host CPUs in
NOVA’s dedicated CPU Pool for this host. By default, all remaining host
CPUs, outside of platform and vswitch host CPUs, are assigned to NOVA’s
shared CPU Pool for this host. List the number of host cpus and function
assignments and configure the required dedicated host CPUs.

.. code-block:: bash

# list the number and function assignments for host’s CPUs
# ‘application’ function → in NOVA’s shared CPU Pool
# ‘application-isolated’ function → in NOVA’s dedicated CPU Pool
~(keystone)admin)$ system host-cpu-list controller-1

# Configure the required number of host CPUs in NOVA’s dedicated CPU Pool for each processor/socket
~(keystone)admin)$ system host-cpu-modify -f application-isolated -p0 10 controller-1
~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 10 controller-1

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -1012,8 +978,8 @@ machine.

.. only:: openstack

* **For OpenStack only:** Due to the additional openstack services’ containers
running on the controller host, the size of the docker filesystem needs to
be increased from the default size of 30G to 60G.
running on the controller host, the size of the docker filesystem needs to be
increased from the default size of 30G to 60G.

.. code-block:: bash

@@ -116,8 +116,9 @@ Bootstrap system on controller-0

.. only:: starlingx

In either of the above options, the bootstrap playbook’s default values
will pull all container images required for the |prod-p| from Docker hub.
In either of the above options, the bootstrap playbook’s default
values will pull all container images required for the |prod-p| from
Docker hub

If you have setup a private Docker registry to use for bootstrapping
then you will need to add the following lines in $HOME/localhost.yml:
@@ -235,11 +236,11 @@ The newly installed controller needs to be configured.

.. important::

**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. only:: starlingx

@@ -274,7 +275,7 @@ The newly installed controller needs to be configured.

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of stx-openstack
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -303,7 +304,7 @@ The newly installed controller needs to be configured.

system modify --vswitch_type |ovs-dpdk|

Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vswitch.
for |OVS-DPDK| vSwitch.

.. code-block:: bash

@@ -311,8 +312,9 @@ The newly installed controller needs to be configured.

system host-cpu-modify -f vswitch -p0 1 controller-0

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M

@@ -351,25 +353,8 @@ The newly installed controller needs to be configured.

After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.

#. **For OpenStack Only:** Optionally configure the number of host CPUs in
NOVA’s dedicated CPU Pool for this host. By default, all remaining host
CPUs, outside of platform and vswitch host CPUs, are assigned to NOVA’s
shared CPU Pool for this host. List the number of host cpus and function
assignments and configure the required dedicated host CPUs.

.. code-block:: bash

# list the number and function assignments for host’s CPUs
# ‘application’ function → in NOVA’s shared CPU Pool
# ‘application-isolated’ function → in NOVA’s dedicated CPU Pool
~(keystone)admin)$ system host-cpu-list controller-0

# Configure the required number of host CPUs in NOVA’s dedicated CPU Pool for each processor/socket
~(keystone)admin)$ system host-cpu-modify -f application-isolated -p0 10 controller-0
~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 10 controller-0

#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
group, which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -383,7 +368,7 @@ The newly installed controller needs to be configured.

# List host’s disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
# ‘system host-show ${NODE} | fgrep rootfs’ )

# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

@@ -705,4 +690,4 @@ machine.

.. only:: partner

.. include:: /_includes/72hr-to-license.rest
.. include:: /_includes/72hr-to-license.rest

@@ -175,9 +175,9 @@ Bootstrap system on controller-0

docker_no_proxy:
- 1.2.3.4

Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios.
Refer to :ref:`Ansible Bootstrap Configurations
<ansible_bootstrap_configs_r6>` for information on additional Ansible
bootstrap configurations for advanced Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:
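The literal block under “Run the Ansible bootstrap playbook” is outside this hunk; the usual invocation, assuming the standard StarlingX playbook location, is:

.. code-block:: bash

    cd ~
    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml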
@@ -281,7 +281,7 @@ Configure controller-0

(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

::

@@ -293,7 +293,7 @@ Configure controller-0

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of stx-openstack
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -312,7 +312,7 @@ Configure controller-0

system modify --vswitch_type none

This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of stx-openstack
the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.

To deploy |OVS-DPDK|, run the following command:

@@ -324,7 +324,7 @@ Configure controller-0

Once vswitch_type is set to |OVS-DPDK|, any subsequent |AIO|-controller
or worker nodes created will default to automatically assigning 1 vSwitch
core for |AIO| controllers and 2 vSwitch cores (both on numa-node 0;
physical NICs are typically on first numa-node) for compute-labeled
physical |NICs| are typically on first numa-node) for compute-labeled
worker nodes.

.. note::

@@ -505,7 +505,7 @@ Configure controller-1

(|prefix|-openstack) will be installed.

**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

::

@@ -622,13 +622,13 @@ Configure worker nodes

(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. code-block:: bash
.. parsed-literal::

for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled
done

@@ -695,30 +695,8 @@ Configure worker nodes

done

#. **For OpenStack Only:** Optionally configure the number of host CPUs in
NOVA’s dedicated CPU Pool for this host. By default, all remaining host
CPUs, outside of platform and vswitch host CPUs, are assigned to NOVA’s
shared CPU Pool for this host. List the number of host cpus and function
assignments and configure the required dedicated host CPUs.

.. code-block:: bash

# list the number and function assignments for host’s CPUs
# ‘application’ function → in NOVA’s shared CPU Pool
# ‘application-isolated’ function → in NOVA’s dedicated CPU Pool
~(keystone)admin)$ system host-cpu-list worker-0
~(keystone)admin)$ system host-cpu-list worker-1

# Configure the required number of host CPUs in NOVA’s dedicated CPU Pool for each processor/socket
for NODE in worker-0 worker-1; do

system host-cpu-modify -f application-isolated -p0 10 $NODE
system host-cpu-modify -f application-isolated -p1 10 $NODE

done

#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for stx-openstack nova ephemeral disks.
needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

@@ -732,7 +710,7 @@ Configure worker nodes

# List host’s disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
# ‘system host-show ${NODE} | fgrep rootfs’ )

# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of

@@ -811,7 +789,7 @@ Optionally Configure PCI-SRIOV Interfaces

* Configure the pci-sriov interfaces for worker nodes.

.. code-block::
.. code-block:: bash

# Execute the following lines with
export NODE=worker-0
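The rest of that block is truncated by the hunk. A sketch of the per-node steps, reusing the interface-datanetwork-assign pattern shown earlier in this change (interface names, UUIDs and the data network are placeholders):

.. code-block:: bash

    export NODE=worker-0

    # list ports/interfaces and note the UUID of the SR-IOV capable interface
    system host-port-list ${NODE}
    system host-if-list -a ${NODE}

    # set the interface class to pci-sriov and attach it to a data network
    system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
    system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> <datanet0>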
@@ -934,4 +912,4 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers

.. only:: partner

.. include:: /_includes/72hr-to-license.rest
.. include:: /_includes/72hr-to-license.rest
@@ -277,7 +277,7 @@ Configure worker nodes

(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. parsed-literal::

@@ -291,9 +291,10 @@ Configure worker nodes

If using |OVS-DPDK| vSwitch, run the following commands:

Default recommendation for worker node is to use two cores on numa-node 0
for |OVS-DPDK| vSwitch. This should have been automatically configured,
if not run the following command.
Default recommendation for worker node is to use node two cores on
numa-node 0 for |OVS-DPDK| vSwitch; physical NICs are typically on first
numa-node. This should have been automatically configured, if not run
the following command.

.. code-block:: bash

@@ -348,28 +349,6 @@ Configure worker nodes

done

#. **For OpenStack Only:** Optionally configure the number of host CPUs in
NOVA’s dedicated CPU Pool for this host. By default, all remaining host
CPUs, outside of platform and vswitch host CPUs, are assigned to NOVA’s
shared CPU Pool for this host. List the number of host cpus and function
assignments and configure the required dedicated host CPUs.

.. code-block:: bash

# list the number and function assignments for host’s CPUs
# ‘application’ function → in NOVA’s shared CPU Pool
# ‘application-isolated’ function → in NOVA’s dedicated CPU Pool
~(keystone)admin)$ system host-cpu-list worker-0
~(keystone)admin)$ system host-cpu-list worker-1

# Configure the required number of host CPUs in NOVA’s dedicated CPU Pool for each processor/socket
for NODE in worker-0 worker-1; do

system host-cpu-modify -f application-isolated -p0 10 $NODE
system host-cpu-modify -f application-isolated -p1 10 $NODE

done

#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for stx-openstack nova ephemeral disks.

@@ -385,7 +364,7 @@ Configure worker nodes

# List host’s disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
# ‘system host-show ${NODE} | fgrep rootfs’ )

# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of
@@ -33,10 +33,10 @@ installed. Download the latest release tarball for Cygwin from

tarball, extract it to any location and change the Windows <PATH> variable to
include its bin folder from the extracted winpty folder.

For access to remote CLI, it is required to set the DNS in the cluster using the
:command:`system service-parameter-add openstack helm endpoint_domain=domain_name`
command and reapply OpenStack using **system application-apply |prefix|-openstack**
command.
For access to remote CLI, it is required to set the DNS in the cluster using
the :command:`system service-parameter-add openstack helm
endpoint_domain=domain_name` command and reapply OpenStack using the system
application-apply |prefix|-openstack command.

The following procedure shows how to configure the Container-backed Remote
|CLIs| for OpenStack remote access.
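Written out, the two commands in that paragraph look roughly as follows; the domain name is a placeholder and |prefix| is the documentation substitution for the application name (typically stx):

.. code-block:: bash

    # set the OpenStack endpoint domain used for remote CLI access
    system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.example.com

    # reapply the application so the new domain takes effect
    system application-apply stx-openstack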
@@ -138,7 +138,7 @@ service‐parameter-add` command to configure and set the OpenStack domain name:

.. note::
If an error occurs, remove the following ingress parameters, **nova-cluster-fqdn**
and **nova-namespace-fqdn** and reapply OpenStack using :command:`system application-apply |prefix|-openstack`.
and **nova-namespace-fqdn** and reapply OpenStack using system application-apply |prefix|-openstack.

#. Apply the |prefix|-openstack application.
@@ -2,11 +2,11 @@

.. important::

**This step is required only if the StarlingX OpenStack application
(|prereq|-openstack) will be installed.**
This step is required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prereq|-openstack manifest and helm-charts later.
support of installing the |prefix|-openstack manifest and helm-charts later.

.. parsed-literal::
@@ -21,7 +21,7 @@

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of |prereq|-openstack
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

@@ -40,7 +40,7 @@

system modify --vswitch_type none

This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of |prereq|-openstack
the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.

To deploy |OVS-DPDK|, run the following command:

@@ -94,7 +94,7 @@

controllers) to apply the change.

#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for |prereq|-openstack nova ephemeral disks.
group, which is needed for |prefix|-openstack nova ephemeral disks.

::
@@ -195,7 +195,7 @@ commands to manage containerized applications provided as part of |prod|.

For example:

.. code-block:: none
.. parsed-literal::

~(keystone_admin)]$ system helm-override-show |prefix|-openstack glance openstack

@@ -329,7 +329,7 @@ commands to manage containerized applications provided as part of |prod|.

**mode**
An application-specific mode controlling how the manifest is
applied. This option is used to back-up and restore the
**|prefix|-openstack** application.
|prefix|-openstack application.

and the following is a positional argument:

@@ -369,7 +369,7 @@ commands to manage containerized applications provided as part of |prod|.

For example:

.. code-block:: none
.. parsed-literal::

~(keystone_admin)]$ system application-abort |prefix|-openstack
Application abort request has been accepted. If the previous operation has not