Install conditionalizations
OVS related deployment conditionalizations.

Patchset 1 review updates.
Updates based on additional inputs.
Patchset 3 review updates.
Fixed some unexpanded substitutions and formatting issues throughout.
Patchset 5 updates.

Reorg OpenStack installation for DS consumption.
This review replaces https://review.opendev.org/c/starlingx/docs/+/801130

Change-Id: Ib86bf0e13236a40f7a472d4448a9b2d3cc165deb
Change-Id: Iab9c8d56cff9c1bc57e7e09fb3ceef7cf626edad
Signed-off-by: Ron Stone <ronald.stone@windriver.com>
This commit is contained in: parent f6180faca7, commit d8d90b4d75

Changed files:
.vscode/settings.json (vendored): 3 changes
@@ -1,5 +1,6 @@
 {
   "restructuredtext.languageServer.disabled": true,
   "restructuredtext.preview.sphinx.disabled": true,
-  "restructuredtext.linter.disabled": true
+  "restructuredtext.linter.disabled": true,
+  "restructuredtext.confPath": ""
 }
doc/source/_includes/deploy-install-avs.rest (new, empty file)
@@ -69,4 +69,7 @@
 .. Misc
 ..

 .. |installer-image-name| replace:: bootimage
+
+.. |OVS-DPDK| replace:: |OVS|-|DPDK|
+.. |ovs-dpdk| replace:: ovs-dpdk
@@ -127,9 +127,15 @@ Configure worker nodes
       done


-   When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
-   each |NUMA| node where vswitch is running on this host, with the
-   following command:
+   When using |OVS|-|DPDK|, configure 1G of huge pages for vSwitch memory on
+   each |NUMA| node where vswitch is running on the host. It is recommended
+   to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
+   node where vswitch is running on host.
+
+   However, due to a limitation with Kubernetes, only a single huge page
+   size is supported on any one host. If your application VMs require 2M
+   huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
+   memory on each |NUMA| node where vswitch is running on host.

    .. code-block:: bash

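For illustration only (not part of this change): a minimal sketch of applying the recommended vSwitch huge-page settings on every |NUMA| node of a host, using the same ``system host-memory-modify`` command the hunk above documents. ``NODE`` and the NUMA-node list are placeholders for your environment:

.. code-block:: bash

   NODE=worker-0   # placeholder hostname
   # 1x 1G huge page per NUMA node for vSwitch memory (the recommended default)
   for NUMA_NODE in 0 1; do
      system host-memory-modify -f vswitch -1G 1 ${NODE} ${NUMA_NODE}
   done
   # Alternative when application VMs need 2M pages (only one huge page
   # size is supported per host, per the limitation noted above):
   # for NUMA_NODE in 0 1; do
   #    system host-memory-modify -f vswitch -2M 500 ${NODE} ${NUMA_NODE}
   # done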
@@ -150,8 +156,9 @@ Configure worker nodes
    huge pages to enable networking and must use a flavor with property:
    hw:mem_page_size=large

-   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
-   this host with the command:
+   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
+   this host, assuming 1G huge page size is being used on this host, with
+   the following commands:

    .. code-block:: bash

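As a hedged illustration of the flavor requirement named above (assuming the standard OpenStack client is available; the flavor name ``m1.vm-hugepage`` is hypothetical):

.. code-block:: bash

   # VMs in an OVS-DPDK environment must use huge-page-backed memory
   openstack flavor create --ram 4096 --vcpus 2 --disk 20 m1.vm-hugepage
   openstack flavor set m1.vm-hugepage --property hw:mem_page_size=large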
@@ -183,7 +190,14 @@ Configure worker nodes
      # ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )

      # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
-     PARTITION_SIZE=34 # Use default of 34G for this nova-local partition
+     # The size of the PARTITION needs to be large enough to hold the aggregate size of
+     # all nova ephemeral disks of all VMs that you want to be able to host on this host,
+     # but is limited by the size and space available on the physical disk you chose above.
+     # The following example uses a small PARTITION size such that you can fit it on the
+     # root disk, if that is what you chose above.
+     # Additional PARTITION(s) from additional disks can be added later if required.
+     PARTITION_SIZE=30
+
      system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to ‘nova-local’ local volume group
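A minimal sizing sketch for the comment added above, with hypothetical VM counts; it only illustrates the arithmetic, not a required procedure:

.. code-block:: bash

   # e.g. 5 VMs with 5G ephemeral disks each, plus ~20% headroom
   VM_COUNT=5
   EPHEMERAL_G=5
   PARTITION_SIZE=$(( VM_COUNT * EPHEMERAL_G * 120 / 100 ))   # = 30
   echo "nova-local partition: ${PARTITION_SIZE}G"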
@@ -272,7 +286,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

-     # Create Data Networks that the 'pci-sriov' interfaces will be connected to
+     # If not already created, create Data Networks that the 'pci-sriov'
+     # interfaces will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
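For readers wondering where the ``<sriov0-if-uuid>`` placeholders come from, a sketch under the assumption that the platform CLI provides a ``host-if-list`` subcommand for listing interfaces (hypothetical usage; the placeholders themselves stay unresolved):

.. code-block:: bash

   NODE=controller-0   # placeholder hostname
   # List interfaces to find the UUIDs used as <sriov0-if-uuid>/<sriov1-if-uuid>
   system host-if-list -a ${NODE}
   # Then bind an interface to a data network, as in the steps above
   system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> datanet0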
@@ -283,8 +298,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}


-  * To enable using |SRIOV| network attachments for the above interfaces in
-    Kubernetes hosted application containers:
+  * **For Kubernetes only:** To enable using |SRIOV| network attachments for
+    the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

@@ -39,7 +39,8 @@ Bootstrap system on controller-0
 --------------------------------

 #. Login using the username / password of "sysadmin" / "sysadmin".
-   When logging in for the first time, you will be forced to change the password.
+   When logging in for the first time, you will be forced to change the
+   password.

    ::

@@ -107,9 +108,9 @@ Bootstrap system on controller-0
 #. Create a minimal user configuration override file.

    To use this method, create your override file at ``$HOME/localhost.yml``
-   and provide the minimum required parameters for the deployment configuration
-   as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
-   applicable to your deployment environment.
+   and provide the minimum required parameters for the deployment
+   configuration as shown in the example below. Use the OAM IP SUBNET and IP
+   ADDRESSing applicable to your deployment environment.

    ::

@@ -148,7 +149,7 @@ Bootstrap system on controller-0
       :start-after: docker-reg-begin
       :end-before: docker-reg-end

-   .. code-block::
+   .. code-block:: yaml

       docker_registries:
         quay.io:
@@ -187,7 +188,7 @@ Bootstrap system on controller-0
       :start-after: firewall-begin
       :end-before: firewall-end

-   .. code-block::
+   .. code-block:: bash

       # Add these lines to configure Docker to use a proxy server
       docker_http_proxy: http://my.proxy.com:1080
@@ -222,44 +223,50 @@ Configure controller-0
 #. Configure the |OAM| interface of controller-0 and specify the
    attached network as "oam".

-   Use the |OAM| port name that is applicable to your deployment environment,
-   for example eth0:
+   The following example configures the |OAM| interface on a physical untagged
+   ethernet port. Use the |OAM| port name that is applicable to your
+   deployment environment, for example eth0:

-   ::
+   .. code-block:: bash

      OAM_IF=<OAM-PORT>
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.
+
 #. Configure the MGMT interface of controller-0 and specify the attached
    networks of both "mgmt" and "cluster-host".

-   Use the MGMT port name that is applicable to your deployment environment,
-   for example eth1:
+   The following example configures the MGMT interface on a physical untagged
+   ethernet port. Use the MGMT port name that is applicable to your deployment
+   environment, for example eth1:

-   .. code-block:: none
+   .. code-block:: bash

      MGMT_IF=<MGMT-PORT>

-     # De-provision loopback interface and
-     # remove mgmt and cluster-host networks from loopback interface
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
        system interface-network-remove ${UUID}
      done

-     # Configure management interface and assign mgmt and cluster-host networks to it
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.
+
 #. Configure |NTP| servers for network time synchronization:

    ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

+   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
+   <ptp-server-config-index>`.
+
 .. only:: openstack

    *************************************
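A note on the loopback de-provisioning loop kept as context above: the ``awk`` filter keeps rows whose sixth column (the interface name) is ``lo`` and prints the fourth column (the interface-network assignment UUID). A commented re-statement, equivalent to the one-liner in the change:

.. code-block:: bash

   # The table printed by 'system interface-network-list controller-0' is
   # parsed column-by-column; $6 is the interface name, $4 the assignment UUID.
   IFNET_UUIDS=$(system interface-network-list controller-0 \
      | awk '{ if ($6 == "lo") print $4; }')
   for UUID in $IFNET_UUIDS; do
      system interface-network-remove ${UUID}
   done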
@@ -281,71 +288,125 @@ Configure controller-0
      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled

+#. **For OpenStack only:** Due to the additional openstack services running
+   on the |AIO| controller platform cores, a minimum of 4 platform cores are
+   required, 6 platform cores are recommended.
+
+   Increase the number of platform cores with the following commands:
+
+   .. code-block::
+
+      # assign 6 cores on processor/numa-node 0 on controller-0 to platform
+      system host-cpu-modify -f platform -p0 6 controller-0
+
+#. Due to the additional openstack services’ containers running on the
+   controller host, the size of the docker filesystem needs to be
+   increased from the default size of 30G to 60G.
+
+   .. code-block:: bash
+
+      # check existing size of docker fs
+      system host-fs-list controller-0
+      # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
+      system host-lvg-list controller-0
+      # if existing docker fs size + cgts-vg available space is less than 60G,
+      # you will need to add a new disk partition to cgts-vg
+
+      # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
+      # ( if not use another unused disk )
+
+      # Get device path of ROOT DISK
+      system host-show controller-0 --nowrap | fgrep rootfs
+
+      # Get UUID of ROOT DISK by listing disks
+      system host-disk-list controller-0
+
+      # Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response
+      # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
+      PARTITION_SIZE=30
+      system hostdisk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
+
+      # Add new partition to ‘cgts-vg’ local volume group
+      system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
+      sleep 2 # wait for partition to be added
+
+      # Increase docker filesystem to 60G
+      system host-fs-modify controller-0 docker=60
+
 #. **For OpenStack only:** Configure the system setting for the vSwitch.

-   StarlingX has |OVS| (kernel-based) vSwitch configured as default:
+   .. only:: starlingx

-   * Runs in a container; defined within the helm charts of stx-openstack
-     manifest.
-   * Shares the core(s) assigned to the platform.
+      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

-   If you require better performance, |OVS|-|DPDK| (|OVS| with the Data
-   Plane Development Kit, which is supported only on bare metal hardware)
-   should be used:
+      * Runs in a container; defined within the helm charts of stx-openstack
+        manifest.
+      * Shares the core(s) assigned to the platform.

-   * Runs directly on the host (it is not containerized).
-   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.
+      If you require better performance, |OVS-DPDK| (|OVS| with the Data
+      Plane Development Kit, which is supported only on bare metal hardware)
+      should be used:

-   **To deploy the default containerized OVS:**
+      * Runs directly on the host (it is not containerized).
+        Requires that at least 1 core be assigned/dedicated to the vSwitch
+        function.

-   ::
+   To deploy the default containerized |OVS|:

-     system modify --vswitch_type none
+   ::

-   This does not run any vSwitch directly on the host, instead, it uses the
-   containerized |OVS| defined in the helm charts of stx-openstack
-   manifest.
+      system modify --vswitch_type none

-   **To deploy OVS-DPDK, run the following command:**
+   This does not run any vSwitch directly on the host, instead, it uses
+   the containerized |OVS| defined in the helm charts of stx-openstack
+   manifest.

-   ::
+   To deploy |OVS-DPDK|, run the following command:

-     system modify --vswitch_type ovs-dpdk
+   .. parsed-literal::

-   Default recommendation for an |AIO|-controller is to use a single core
-   for |OVS|-|DPDK| vswitch.
+      system modify --vswitch_type |ovs-dpdk|

-   ::
+   Default recommendation for an |AIO|-controller is to use a single
+   core |OVS-DPDK| vswitch.
+
+   .. code-block:: bash

      # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
      system host-cpu-modify -f vswitch -p0 1 controller-0

-   Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created
-   will default to automatically assigning 1 vSwitch core for |AIO|
-   controllers and 2 vSwitch cores for compute-labeled worker nodes.
+   Once vswitch_type is set to |OVS-DPDK|, any subsequent nodes created will
+   default to automatically assigning 1 vSwitch core for |AIO| controllers
+   and 2 vSwitch cores (1 on each numa-node) for compute-labeled worker
+   nodes.

-   When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
-   each |NUMA| node where vswitch is running on this host, with the
-   following command:
+   When using |OVS-DPDK|, configure 1G huge page for vSwitch memory on each
+   |NUMA| node where vswitch is running on this host. It is recommended to
+   configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA| node
+   where vswitch is running on host.
+
+   However, due to a limitation with Kubernetes, only a single huge page
+   size is supported on any one host. If your application |VMs| require 2M
+   huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
+   memory on each |NUMA| node where vswitch is running on host.

-   ::
+   .. code-block::

      # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
      system host-memory-modify -f vswitch -1G 1 controller-0 0

    .. important::

-      |VMs| created in an |OVS|-|DPDK| environment must be configured to use
+      |VMs| created in an |OVS-DPDK| environment must be configured to use
       huge pages to enable networking and must use a flavor with property:
-      hw:mem_page_size=large
+      ``hw:mem_page_size=large``

-   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on this host with
-   the commands:
+   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
+   this host, assuming 1G huge page size is being used on this host, with
+   the following commands:

-   ::
+   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
      system host-memory-modify -f application -1G 10 controller-0 0
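A sketch of the "is there room to grow docker to 60G" check described in the comments added above, with hypothetical values read off the two listing commands:

.. code-block:: bash

   DOCKER_FS_G=30       # from 'system host-fs-list controller-0'
   CGTS_VG_AVAIL_G=12   # from 'system host-lvg-list controller-0'
   if [ $(( DOCKER_FS_G + CGTS_VG_AVAIL_G )) -lt 60 ]; then
      echo "Add a new disk partition to cgts-vg before resizing docker fs"
   else
      system host-fs-modify controller-0 docker=60
   fi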
@@ -353,6 +414,7 @@ Configure controller-0
      # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
      system host-memory-modify -f application -1G 10 controller-0 1

+
    .. note::

       After controller-0 is unlocked, changing vswitch_type requires
@@ -363,6 +425,7 @@ Configure controller-0

      .. code-block:: bash

+        # Create ‘nova-local’ local volume group
         system host-lvg-add ${NODE} nova-local

         # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
@@ -375,7 +438,13 @@ Configure controller-0
         # ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )

         # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
-        PARTITION_SIZE=34 # Use default of 34G for this nova-local partition
+        # The size of the PARTITION needs to be large enough to hold the aggregate size of
+        # all nova ephemeral disks of all VMs that you want to be able to host on this host,
+        # but is limited by the size and space available on the physical disk you chose above.
+        # The following example uses a small PARTITION size such that you can fit it on the
+        # root disk, if that is what you chose above.
+        # Additional PARTITION(s) from additional disks can be added later if required.
+        PARTITION_SIZE=30
         system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

         # Add new partition to ‘nova-local’ local volume group
@@ -389,11 +458,12 @@ Configure controller-0

    .. important::

-      A compute-labeled All-in-one controller host **MUST** have at least one Data class interface.
+      A compute-labeled All-in-one controller host **MUST** have at least
+      one Data class interface.

    * Configure the data interfaces for controller-0.

-     ::
+     .. code-block:: bash

        export NODE=controller-0

@@ -439,7 +509,7 @@ Optionally Configure PCI-SRIOV Interfaces

   * Configure the pci-sriov interfaces for controller-0.

-    ::
+    .. code-block:: bash

       export NODE=controller-0

@@ -457,7 +527,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

-     # Create Data Networks that the 'pci-sriov' interfaces will be connected to
+     # If not already created, create Data Networks that the 'pci-sriov'
+     # interfaces will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@@ -468,8 +539,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}


-  * To enable using |SRIOV| network attachments for the above interfaces in
-    Kubernetes hosted application containers:
+  * **For Kubernetes Only:** To enable using |SRIOV| network attachments for
+    the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

@@ -481,7 +552,7 @@ Optionally Configure PCI-SRIOV Interfaces
     containers on this host, configure the number of 1G Huge pages required
     on both |NUMA| nodes.

-    ::
+    .. code-block:: bash

       # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
       system host-memory-modify -f application controller-0 0 -1G 10
@@ -611,8 +682,9 @@ Configure controller-1
 #. Configure the |OAM| interface of controller-1 and specify the
    attached network of "oam".

-   Use the |OAM| port name that is applicable to your deployment environment,
-   for example eth0:
+   The following example configures the |OAM| interface on a physical untagged
+   ethernet port. Use the |OAM| port name that is applicable to your
+   deployment environment, for example eth0:

    ::

@@ -620,6 +692,9 @@ Configure controller-1
    system host-if-modify controller-1 $OAM_IF -c platform
    system interface-network-assign controller-1 $OAM_IF oam

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.
+
 #. The MGMT interface is partially set up by the network install procedure;
    configuring the port used for network install as the MGMT port and
    specifying the attached network of "mgmt".
@@ -639,11 +714,11 @@ Configure controller-1

    .. important::

-      **These steps are required only if the StarlingX OpenStack application
-      (stx-openstack) will be installed.**
+      These steps are required only if the |prod-os| application
+      (|prefix|-openstack) will be installed.

 #. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
-   support of installing the stx-openstack manifest and helm-charts later.
+   support of installing the |prefix|-openstack manifest and helm-charts later.

    ::

@@ -652,12 +727,57 @@ Configure controller-1
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled

+#. **For OpenStack only:** Due to the additional openstack services running
+   on the |AIO| controller platform cores, a minimum of 4 platform cores are
+   required, 6 platform cores are recommended.
+
+   Increase the number of platform cores with the following commands:
+
+   .. code-block::
+
+      # assign 6 cores on processor/numa-node 0 on controller-1 to platform
+      system host-cpu-modify -f platform -p0 6 controller-1
+
+#. Due to the additional openstack services’ containers running on the
+   controller host, the size of the docker filesystem needs to be
+   increased from the default size of 30G to 60G.
+
+   .. code-block:: bash
+
+      # check existing size of docker fs
+      system host-fs-list controller-1
+      # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
+      system host-lvg-list controller-1
+      # if existing docker fs size + cgts-vg available space is less than 60G,
+      # you will need to add a new disk partition to cgts-vg
+
+      # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
+      # ( if not use another unused disk )
+
+      # Get device path of ROOT DISK
+      system host-show controller-1 --nowrap | fgrep rootfs
+
+      # Get UUID of ROOT DISK by listing disks
+      system host-disk-list controller-1
+
+      # Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response
+      # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
+      PARTITION_SIZE=30
+      system hostdisk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE}
+
+      # Add new partition to ‘cgts-vg’ local volume group
+      system host-pv-add controller-1 cgts-vg <NEW_PARTITION_UUID>
+      sleep 2 # wait for partition to be added
+
+      # Increase docker filesystem to 60G
+      system host-fs-modify controller-1 docker=60
+
 #. **For OpenStack only:** Configure the host settings for the vSwitch.

-   **If using OVS-DPDK vswitch, run the following commands:**
+   If using |OVS-DPDK| vswitch, run the following commands:

-   Default recommendation for an AIO-controller is to use a single core
-   for |OVS|-|DPDK| vswitch. This should have been automatically configured,
+   Default recommendation for an |AIO|-controller is to use a single core
+   for |OVS-DPDK| vswitch. This should have been automatically configured,
    if not run the following command.

    .. code-block:: bash
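Since the same labels are assigned on both controllers by this change, a compact equivalent for reference (a placeholder loop over the same commands shown above; any additional labels elided from these hunks would be added the same way):

.. code-block:: bash

   for NODE in controller-0 controller-1; do
      system host-label-assign ${NODE} openvswitch=enabled
      system host-label-assign ${NODE} sriov=enabled
   done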
@@ -666,9 +786,15 @@ Configure controller-1
      system host-cpu-modify -f vswitch -p0 1 controller-1


-   When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
-   each |NUMA| node where vswitch is running on this host, with the
-   following command:
+   When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
+   each |NUMA| node where vswitch is running on the host. It is recommended
+   to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
+   node where vswitch is running on host.
+
+   However, due to a limitation with Kubernetes, only a single huge page
+   size is supported on any one host. If your application VMs require 2M
+   huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
+   memory on each |NUMA| node where vswitch is running on host.

    .. code-block:: bash

@@ -678,12 +804,13 @@ Configure controller-1

    .. important::

-      |VMs| created in an |OVS|-|DPDK| environment must be configured to use
+      |VMs| created in an |OVS-DPDK| environment must be configured to use
       huge pages to enable networking and must use a flavor with property:
       hw:mem_page_size=large

-   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
-   this host with the command:
+   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
+   this host, assuming 1G huge page size is being used on this host, with
+   the following commands:

    .. code-block:: bash

|
|||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
|
# Create ‘nova-local’ local volume group
|
||||||
system host-lvg-add ${NODE} nova-local
|
system host-lvg-add ${NODE} nova-local
|
||||||
|
|
||||||
# Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
|
# Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
|
||||||
@@ -711,7 +839,14 @@ Configure controller-1
         # ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )

         # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
-        PARTITION_SIZE=34 # Use default of 34G for this nova-local partition
+        # The size of the PARTITION needs to be large enough to hold the aggregate size of
+        # all nova ephemeral disks of all VMs that you want to be able to host on this host,
+        # but is limited by the size and space available on the physical disk you chose above.
+        # The following example uses a small PARTITION size such that you can fit it on the
+        # root disk, if that is what you chose above.
+        # Additional PARTITION(s) from additional disks can be added later if required.
+        PARTITION_SIZE=30
+
         system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

         # Add new partition to ‘nova-local’ local volume group
@@ -793,7 +928,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

-     # Create Data Networks that the 'pci-sriov' interfaces will be connected to
+     # If not already created, create Data Networks that the 'pci-sriov' interfaces
+     # will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@@ -804,8 +940,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}


-  * To enable using |SRIOV| network attachments for the above interfaces in
-    Kubernetes hosted application containers:
+  * **For Kubernetes only:** To enable using |SRIOV| network attachments for
+    the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

@@ -222,8 +222,9 @@ The newly installed controller needs to be configured.
    source /etc/platform/openrc

 #. Configure the |OAM| interface of controller-0 and specify the attached
-   network as "oam". Use the |OAM| port name that is applicable to your
-   deployment environment, for example eth0:
+   network as "oam". The following example configures the |OAM| interface on a
+   physical untagged ethernet port. Use the |OAM| port name that is applicable
+   to your deployment environment, for example eth0:

    ::

@@ -231,12 +232,17 @@ The newly installed controller needs to be configured.
    system host-if-modify controller-0 $OAM_IF -c platform
    system interface-network-assign controller-0 $OAM_IF oam

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.
+
 #. Configure |NTP| servers for network time synchronization:

    ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

+   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
+   <ptp-server-config-index>`.

 .. only:: openstack

    *************************************
@@ -260,62 +266,120 @@ The newly installed controller needs to be configured.
    system host-label-assign controller-0 openvswitch=enabled
    system host-label-assign controller-0 sriov=enabled

-#. **For OpenStack only:** Configure the system setting for the vSwitch.
+#. **For OpenStack only:** Due to the additional openstack services running
+   on the |AIO| controller platform cores, a minimum of 4 platform cores are
+   required, 6 platform cores are recommended.

-   StarlingX has |OVS| (kernel-based) vSwitch configured as default:
+   Increase the number of platform cores with the following commands:

-   * Runs in a container; defined within the helm charts of stx-openstack
-     manifest.
-   * Shares the core(s) assigned to the platform.
+   .. code-block::

-   If you require better performance, |OVS|-|DPDK| (|OVS| with the Data Plane
-   Development Kit, which is supported only on bare metal hardware) should be
-   used:
+      # Assign 6 cores on processor/numa-node 0 on controller-0 to platform
+      system host-cpu-modify -f platform -p0 6 controller-0

-   * Runs directly on the host (it is not containerized).
-   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.
+#. Due to the additional openstack services’ containers running on the
+   controller host, the size of the docker filesystem needs to be
+   increased from the default size of 30G to 60G.

-   **To deploy the default containerized OVS:**
-
-   ::
-
-     system modify --vswitch_type none
-
-   This does not run any vSwitch directly on the host, instead, it uses the
-   containerized |OVS| defined in the helm charts of stx-openstack
-   manifest.
-
-   **To deploy OVS-DPDK, run the following command:**
-
-   ::
-
-     system modify --vswitch_type ovs-dpdk
-
-   Default recommendation for an |AIO|-controller is to use a single core
-   for |OVS|-|DPDK| vswitch.
-
-   ::
-
-     # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
-     system host-cpu-modify -f vswitch -p0 1 controller-0
-
-   When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
-   each |NUMA| node where vswitch is running on this host, with the
-   following command:
-
    .. code-block:: bash

-     # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
+      # check existing size of docker fs
+      system host-fs-list controller-0
+      # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
+      system host-lvg-list controller-0
+      # if existing docker fs size + cgts-vg available space is less than 60G,
+      # you will need to add a new disk partition to cgts-vg
+
+      # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
+      # ( if not use another unused disk )
+
+      # Get device path of ROOT DISK
+      system host-show controller-0 --nowrap | fgrep rootfs
+
+      # Get UUID of ROOT DISK by listing disks
+      system host-disk-list controller-0
+
+      # Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response
+      # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
+      PARTITION_SIZE=30
+      system hostdisk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
+
+      # Add new partition to ‘cgts-vg’ local volume group
+      system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
+      sleep 2 # wait for partition to be added
+
+      # Increase docker filesystem to 60G
+      system host-fs-modify controller-0 docker=60
+
+#. **For OpenStack only:** Configure the system setting for the vSwitch.
+
+   .. only:: starlingx
+
+      StarlingX has |OVS| (kernel-based) vSwitch configured as default:
+
+      * Runs in a container; defined within the helm charts of stx-openstack
+        manifest.
+      * Shares the core(s) assigned to the platform.
+
+      If you require better performance, |OVS-DPDK| (|OVS| with the Data
+      Plane Development Kit, which is supported only on bare metal hardware)
+      should be used:
+
+      * Runs directly on the host (it is not containerized).
+        Requires that at least 1 core be assigned/dedicated to the vSwitch
+        function.
+
+   To deploy the default containerized |OVS|:
+
+   ::
+
+      system modify --vswitch_type none
+
+   This does not run any vSwitch directly on the host, instead, it uses
+   the containerized |OVS| defined in the helm charts of
+   |prefix|-openstack manifest.
+
+   To deploy |OVS-DPDK|, run the following command:
+
+   .. parsed-literal::
+
+      system modify --vswitch_type |ovs-dpdk|
+
+   Default recommendation for an |AIO|-controller is to use a single core
+   for |OVS-DPDK| vswitch.
+
+   .. code-block:: bash
+
+      # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
+      system host-cpu-modify -f vswitch -p0 1 controller-0
+
+   When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
+   each |NUMA| node where vswitch is running on the host. It is recommended
+   to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
+   node where vswitch is running on host. However, due to a limitation with
+   Kubernetes, only a single huge page size is supported on any one host. If
+   your application |VMs| require 2M huge pages, then configure 500x 2M huge
+   pages (-2M 500) for vSwitch memory on each |NUMA| node where vswitch is
+   running on host.
+
+   .. code-block::
+
+      # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
       system host-memory-modify -f vswitch -1G 1 controller-0 0

    .. important::

-      |VMs| created in an |OVS|-|DPDK| environment must be configured to use
+      |VMs| created in an |OVS-DPDK| environment must be configured to use
       huge pages to enable networking and must use a flavor with property:
       hw:mem_page_size=large

-   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on
-   this host with the commands:
+   Configure the huge pages for VMs in an |OVS-DPDK| environment on this
+   host, assuming 1G huge page size is being used on this host, with the
+   following commands:

    .. code-block:: bash

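The 2M alternative mentioned in the added text, spelled out as a sketch (only one huge page size may be in use on a host, so this replaces rather than supplements the 1G setting):

.. code-block:: bash

   # 500x 2M huge pages for vSwitch memory on NUMA node 0, instead of 1x 1G
   system host-memory-modify -f vswitch -2M 500 controller-0 0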
@@ -335,6 +399,7 @@ The newly installed controller needs to be configured.

      .. code-block:: bash

+        # Create ‘nova-local’ local volume group
         system host-lvg-add ${NODE} nova-local

         # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
|
|||||||
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
|
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
|
||||||
|
|
||||||
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
||||||
PARTITION_SIZE=34 # Use default of 34G for this nova-local partition
|
# The size of the PARTITION needs to be large enough to hold the aggregate size of
|
||||||
|
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
|
||||||
|
# but is limited by the size and space available on the physical disk you chose above.
|
||||||
|
# The following example uses a small PARTITION size such that you can fit it on the
|
||||||
|
# root disk, if that is what you chose above.
|
||||||
|
# Additional PARTITION(s) from additional disks can be added later if required.
|
||||||
|
PARTITION_SIZE=30
|
||||||
|
|
||||||
system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
||||||
|
|
||||||
# Add new partition to ‘nova-local’ local volume group
|
# Add new partition to ‘nova-local’ local volume group
|
||||||
@ -362,7 +434,8 @@ The newly installed controller needs to be configured.
|
|||||||
|
|
||||||
.. important::
|
.. important::
|
||||||
|
|
||||||
A compute-labeled worker host **MUST** have at least one Data class interface.
|
A compute-labeled |AIO|-controller host **MUST** have at least one
|
||||||
|
Data class interface.
|
||||||
|
|
||||||
* Configure the data interfaces for controller-0.
|
* Configure the data interfaces for controller-0.
|
||||||
|
|
||||||
@@ -431,7 +504,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

-     # Create Data Networks that the 'pci-sriov' interfaces will be connected to
+     # If not already created, create Data Networks that the 'pci-sriov' interfaces will
+     # be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@@ -442,8 +516,8 @@ Optionally Configure PCI-SRIOV Interfaces
      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}


-  * To enable using |SRIOV| network attachments for the above interfaces in
-    Kubernetes hosted application containers:
+  * **For Kubernetes Only:** To enable using |SRIOV| network attachments for
+    the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

@@ -226,8 +226,9 @@ Configure controller-0
 #. Configure the |OAM| interface of controller-0 and specify the
    attached network as "oam".

-   Use the |OAM| port name that is applicable to your deployment environment,
-   for example eth0:
+   The following example configures the |OAM| interface on a physical untagged
+   ethernet port. Use the |OAM| port name that is applicable to your deployment
+   environment, for example eth0:

    .. code-block:: bash

@@ -235,11 +236,15 @@ Configure controller-0
    system host-if-modify controller-0 $OAM_IF -c platform
    system interface-network-assign controller-0 $OAM_IF oam

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.
+
 #. Configure the MGMT interface of controller-0 and specify the attached
    networks of both "mgmt" and "cluster-host".

-   Use the MGMT port name that is applicable to your deployment environment,
-   for example eth1:
+   The following example configures the MGMT interface on a physical untagged
+   ethernet port. Use the MGMT port name that is applicable to your deployment
+   environment, for example eth1:

    .. code-block:: bash

@@ -258,6 +263,8 @@ Configure controller-0
    system interface-network-assign controller-0 $MGMT_IF mgmt
    system interface-network-assign controller-0 $MGMT_IF cluster-host

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.

 #. Configure |NTP| servers for network time synchronization:

@@ -265,6 +272,9 @@ Configure controller-0

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

+   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
+   <ptp-server-config-index>`.
+
 #. If required, configure Ceph storage backend:

    A persistent storage backend is required if your application requires |PVCs|.
@@ -287,8 +297,8 @@ Configure controller-0

    .. important::

-      **These steps are required only if the StarlingX OpenStack application
-      (stx-openstack) will be installed.**
+      These steps are required only if the |prod-os| application
+      (|prefix|-openstack) will be installed.

 #. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
    support of installing the stx-openstack manifest and helm-charts later.
@@ -299,44 +309,47 @@ Configure controller-0

 #. **For OpenStack only:** Configure the system setting for the vSwitch.

-   StarlingX has |OVS| (kernel-based) vSwitch configured as default:
+   .. only:: starlingx

-   * Runs in a container; defined within the helm charts of stx-openstack
-     manifest.
-   * Shares the core(s) assigned to the platform.
+      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

-   If you require better performance, |OVS|-|DPDK| (OVS with the Data Plane
-   Development Kit, which is supported only on bare metal hardware) should
-   be used:
+      * Runs in a container; defined within the helm charts of stx-openstack
+        manifest.
+      * Shares the core(s) assigned to the platform.

-   * Runs directly on the host (it is not containerized).
-   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.
+      If you require better performance, |OVS-DPDK| (|OVS| with the Data
+      Plane Development Kit, which is supported only on bare metal hardware)
+      should be used:

-   **To deploy the default containerized OVS:**
+      * Runs directly on the host (it is not containerized).
+        Requires that at least 1 core be assigned/dedicated to the vSwitch
+        function.

-   ::
+   To deploy the default containerized |OVS|:

-     system modify --vswitch_type none
+   ::

-   This does not run any vSwitch directly on the host, instead, it uses the
-   containerized |OVS| defined in the helm charts of stx-openstack manifest.
+      system modify --vswitch_type none

-   **To deploy OVS-DPDK, run the following command:**
+   This does not run any vSwitch directly on the host, instead, it uses
+   the containerized |OVS| defined in the helm charts of stx-openstack
+   manifest.

-   ::
+   To deploy |OVS-DPDK|, run the following command:

-     system modify --vswitch_type ovs-dpdk
+   .. parsed-literal::

-   Once vswitch_type is set to OVS-|DPDK|, any subsequent |AIO|-controller
+      system modify --vswitch_type |ovs-dpdk|
+
+   Once vswitch_type is set to |OVS-DPDK|, any subsequent |AIO|-controller
    or worker nodes created will default to automatically assigning 1 vSwitch
-   core for |AIO| controllers and 2 vSwitch cores for compute-labeled worker
-   nodes.
+   core for |AIO| controllers and 2 vSwitch cores (1 on each numa-node)
+   for compute-labeled worker nodes.

    .. note::

       After controller-0 is unlocked, changing vswitch_type requires
-      locking and unlocking all compute-labeled worker nodes (and/or |AIO|
-      controllers) to apply the change.
+      locking and unlocking controller-0 to apply the change.

 .. incl-config-controller-0-storage-end:

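Per the note above, a vswitch_type change made after unlock only takes effect across a lock/unlock cycle. A hedged sketch of that cycle, assuming the platform's standard ``host-lock``/``host-unlock`` subcommands:

.. code-block:: bash

   system host-lock controller-0
   system modify --vswitch_type ovs-dpdk
   system host-unlock controller-0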
@@ -436,8 +449,9 @@ Configure controller-1
 #. Configure the |OAM| interface of controller-1 and specify the
    attached network of "oam".

-   Use the |OAM| port name that is applicable to your deployment environment,
-   for example eth0:
+   The following example configures the |OAM| interface on a physical untagged
+   ethernet port. Use the |OAM| port name that is applicable to your deployment
+   environment, for example eth0:

    .. code-block:: bash

@@ -445,6 +459,10 @@ Configure controller-1
    system host-if-modify controller-1 $OAM_IF -c platform
    system interface-network-assign controller-1 $OAM_IF oam

+   To configure a vlan or aggregated ethernet interface, see :ref:`Node
+   Interfaces <node-interfaces-index>`.
+

 #. The MGMT interface is partially set up by the network install procedure;
    configuring the port used for network install as the MGMT port and
    specifying the attached network of "mgmt".
@ -465,8 +483,8 @@ Configure controller-1

   .. important::

       This step is required only if the |prod-os| application
       (|prefix|-openstack) will be installed.

   **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.
@ -544,8 +562,8 @@ Configure worker nodes

   .. important::

       These steps are required only if the |prod-os| application
       (|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.
@ -560,10 +578,10 @@ Configure worker nodes

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   If using |OVS-DPDK| vswitch, run the following commands:

   The default recommendation for a worker node is to use a single core on each
   numa-node for |OVS-DPDK| vswitch. This should have been automatically
   configured; if not, run the following command:

   .. code-block:: bash
@ -579,9 +597,15 @@ Configure worker nodes

       done

   When using |OVS-DPDK|, configure 1G huge pages for vSwitch memory on
   each |NUMA| node where vswitch is running on the host. It is recommended
   to configure 1x 1G huge page (``-1G 1``) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host.

   However, due to a limitation with Kubernetes, only a single huge page
   size is supported on any one host. If your application |VMs| require 2M
   huge pages, then configure 500x 2M huge pages (``-2M 500``) for vSwitch
   memory on each |NUMA| node where vswitch is running on the host.

   .. code-block:: bash
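       # Sketch only -- the commands below reuse the host-memory-modify
       # invocation shown elsewhere in this guide; counts assume the 1G page
       # size recommended above and 2 NUMA nodes per worker. Adjust to your
       # hardware.
       for NODE in worker-0 worker-1; do
           system host-memory-modify -f vswitch -1G 1 $NODE 0
           system host-memory-modify -f vswitch -1G 1 $NODE 1
       done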
@ -598,12 +622,13 @@ Configure worker nodes

   .. important::

       |VMs| created in an |OVS-DPDK| environment must be configured to use
       huge pages to enable networking and must use a flavor with the
       property ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming a 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash
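       # Sketch only -- page counts are illustrative; size them for the VMs
       # you plan to host. The command form matches the application huge page
       # assignments shown elsewhere in this guide.
       for NODE in worker-0 worker-1; do
           system host-memory-modify -f application -1G 10 $NODE 0
           system host-memory-modify -f application -1G 10 $NODE 1
       done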
@ -635,7 +660,14 @@ Configure worker nodes

       #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

       # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
       # The size of the PARTITION needs to be large enough to hold the aggregate size of
       # all nova ephemeral disks of all VMs that you want to be able to host on this host,
       # but is limited by the size and space available on the physical disk you chose above.
       # The following example uses a small PARTITION size such that you can fit it on the
       # root disk, if that is what you chose above.
       # Additional PARTITION(s) from additional disks can be added later if required.
       PARTITION_SIZE=30

       system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

       # Add new partition to 'nova-local' local volume group
@ -645,12 +677,13 @@ Configure worker nodes

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

       A compute-labeled worker host **MUST** have at least one Data class
       interface.

   * Configure the data interfaces for worker nodes.
@ -697,7 +730,7 @@ Optionally Configure PCI-SRIOV Interfaces

.. only:: openstack

   This step is **optional** for OpenStack. Do this step if using |SRIOV|
   vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
   have the same Data Networks assigned to them as vswitch data interfaces.
@ -724,7 +757,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
       system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

       # If not already created, create Data Networks that the 'pci-sriov'
       # interfaces will be connected to
       DATANET0='datanet0'
       DATANET1='datanet1'
       system datanetwork-add ${DATANET0} vlan
@ -735,8 +769,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments for
     the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin, as sketched below.
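       For example, a sketch using the same label this guide applies in the
       worker node configuration steps:

       ::

           system host-label-assign ${NODE} sriovdp=enabled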
@ -273,8 +273,8 @@ Configure worker nodes

   .. important::

       These steps are required only if the |prod-os| application
       (|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.
@ -289,10 +289,10 @@ Configure worker nodes

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   If using |OVS-DPDK| vswitch, run the following commands:

   The default recommendation for a worker node is to use a single core on each
   numa-node for |OVS-DPDK| vswitch. This should have been automatically
   configured; if not, run the following command:

   .. code-block:: bash
@ -308,9 +308,15 @@ Configure worker nodes

       done

   When using |OVS-DPDK|, configure 1G huge pages for vSwitch memory on
   each |NUMA| node where vswitch is running on the host. It is recommended
   to configure 1x 1G huge page (``-1G 1``) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host.

   However, due to a limitation with Kubernetes, only a single huge page
   size is supported on any one host. If your application |VMs| require 2M
   huge pages, then configure 500x 2M huge pages (``-2M 500``) for vSwitch
   memory on each |NUMA| node where vswitch is running on the host.

   .. code-block:: bash
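       # Sketch only -- counts assume the 1G page size recommended above and
       # 2 NUMA nodes per worker; adjust to your hardware.
       for NODE in worker-0 worker-1; do
           system host-memory-modify -f vswitch -1G 1 $NODE 0
           system host-memory-modify -f vswitch -1G 1 $NODE 1
       done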
@ -327,12 +333,13 @@ Configure worker nodes

   .. important::

       |VMs| created in an |OVS-DPDK| environment must be configured to use
       huge pages to enable networking and must use a flavor with the
       property ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming a 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash
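       # Sketch only -- page counts are illustrative; size them for the VMs
       # you plan to host.
       for NODE in worker-0 worker-1; do
           system host-memory-modify -f application -1G 10 $NODE 0
           system host-memory-modify -f application -1G 10 $NODE 1
       done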
@ -364,7 +371,14 @@ Configure worker nodes

       #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

       # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
       # The size of the PARTITION needs to be large enough to hold the aggregate size of
       # all nova ephemeral disks of all VMs that you want to be able to host on this host,
       # but is limited by the size and space available on the physical disk you chose above.
       # The following example uses a small PARTITION size such that you can fit it on the
       # root disk, if that is what you chose above.
       # Additional PARTITION(s) from additional disks can be added later if required.
       PARTITION_SIZE=30

       system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

       # Add new partition to 'nova-local' local volume group
@ -374,12 +388,13 @@ Configure worker nodes

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

       A compute-labeled worker host **MUST** have at least one Data class
       interface.

   * Configure the data interfaces for worker nodes.
@ -426,7 +441,7 @@ Optionally Configure PCI-SRIOV Interfaces

.. only:: openstack

   This step is **optional** for OpenStack. Do this step if using |SRIOV|
   vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
   have the same Data Networks assigned to them as vswitch data interfaces.
@ -453,7 +468,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
       system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

       # If not already created, create Data Networks that the 'pci-sriov'
       # interfaces will be connected to
       DATANET0='datanet0'
       DATANET1='datanet1'
       system datanetwork-add ${DATANET0} vlan
@ -464,8 +480,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments for
     the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin, as sketched below.
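       For example, a sketch using the same label this guide applies in the
       worker node configuration steps:

       ::

           system host-label-assign ${NODE} sriovdp=enabled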
@ -17,58 +17,62 @@

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

      * Runs in a container; defined within the helm charts of stx-openstack
        manifest.
      * Shares the core(s) assigned to the platform.

   If you require better performance, |OVS-DPDK| (|OVS| with the Data
   Plane Development Kit, which is supported only on bare metal hardware)
   should be used:

   * Runs directly on the host (it is not containerized).
     Requires that at least 1 core be assigned/dedicated to the vSwitch
     function.

   To deploy the default containerized |OVS|:

   ::

       system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses
   the containerized |OVS| defined in the helm charts of the stx-openstack
   manifest.

   To deploy |OVS-DPDK|, run the following command:

   .. parsed-literal::

       system modify --vswitch_type |ovs-dpdk|

   The default recommendation for an |AIO|-controller is to use a single core
   for |OVS-DPDK| vswitch.

   .. code-block:: bash

       # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
       system host-cpu-modify -f vswitch -p0 1 controller-0

   When using |OVS-DPDK|, configure 1x 1G huge page for vSwitch memory on
   each |NUMA| node where vswitch is running on this host, with the
   following command:

   .. code-block:: bash

       # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
       system host-memory-modify -f vswitch -1G 1 controller-0 0

   |VMs| created in an |OVS-DPDK| environment must be configured to use
   huge pages to enable networking and must use a flavor with the property
   ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment with the
   command:

   ::
@ -78,7 +82,8 @@

   ::

       system host-memory-modify controller-0 0 -1G 10
       system host-memory-modify controller-0 1 -1G 10

   .. note::
@ -261,12 +261,11 @@ OpenStack-specific host configuration

       system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for |AIO| controllers and
   2 vSwitch cores for compute-labeled worker nodes.

   When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with the
   following command:

   ::
@ -333,7 +332,8 @@ Unlock controller-0 in order to bring it into service:

    system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

-------------------------------------------------
Install software on controller-1 and worker nodes
@ -377,17 +377,18 @@ Install software on controller-1 and worker nodes

       system host-update 3 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

       system host-update 4 personality=worker hostname=worker-1

   For rook storage, there is no storage personality. Some hosts with worker
   personality provide storage services; these worker hosts are still named
   storage-x here. Repeat for storage-0 and storage-1. Power on storage-0 and
   storage-1 and wait for the new hosts (hostname=None) to be discovered by
   checking 'system host-list':

   ::
@ -401,9 +402,9 @@ Install software on controller-1 and worker nodes

   A node with Edgeworker personality is also available. See
   :ref:`deploy-edgeworker-nodes` for details.

#. Wait for the software installation on controller-1, worker-0, and worker-1
   to complete, for all servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::
@ -426,9 +427,9 @@ Configure controller-1

.. incl-config-controller-1-start:

Configure the |OAM| and MGMT interfaces of controller-1 and specify the
attached networks. Use the |OAM| and MGMT port names, for example eth0, that
are applicable to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)
@ -516,12 +517,12 @@ Configure worker nodes

   This step is **required** for OpenStack.

   This step is optional for Kubernetes: Do this step if using |SRIOV|
   network attachments in hosted application containers.

   For Kubernetes |SRIOV| network attachments:

   * Configure the |SRIOV| device plugin:

     ::
@ -529,10 +530,10 @@ Configure worker nodes

           system host-label-assign ${NODE} sriovdp=enabled
       done

   * If planning on running |DPDK| in containers on this host, configure the
     number of 1G Huge pages required on both |NUMA| nodes:

     .. code-block:: bash

         for NODE in worker-0 worker-1; do
             system host-memory-modify ${NODE} 0 -1G 100
@ -541,7 +542,7 @@ Configure worker nodes

   For both Kubernetes and OpenStack:

   .. code-block:: bash

       DATA0IF=<DATA-0-PORT>
       DATA1IF=<DATA-1-PORT>
@ -587,7 +588,7 @@ OpenStack-specific host configuration

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   .. code-block:: bash

       for NODE in worker-0 worker-1; do
           system host-label-assign $NODE openstack-compute-node=enabled
@ -598,7 +599,7 @@ OpenStack-specific host configuration

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

       for NODE in worker-0 worker-1; do
           echo "Configuring Nova local for: $NODE"
@ -617,14 +618,15 @@ Unlock worker nodes

Unlock worker nodes in order to bring them into service:

.. code-block:: bash

    for NODE in worker-0 worker-1; do
        system host-unlock $NODE
    done

The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

-----------------------
Configure storage nodes
@ -632,9 +634,10 @@ Configure storage nodes

#. Assign the cluster-host network to the MGMT interface for the storage nodes.

   Note that the MGMT interfaces are partially set up by the network install
   procedure.

   .. code-block:: bash

       for NODE in storage-0 storage-1; do
           system interface-network-assign $NODE mgmt0 cluster-host
@ -653,7 +656,7 @@ Unlock storage nodes

Unlock storage nodes in order to bring them into service:

.. code-block:: bash

    for STORAGE in storage-0 storage-1; do
        system host-unlock $STORAGE
@ -715,7 +718,7 @@ On host storage-0 and storage-1:

       system application-apply rook-ceph-apps

#. Wait for the |OSDs| pods to be ready.

   ::
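       # Illustrative check only; pod naming and namespace depend on the
       # rook-ceph-apps deployment in your environment.
       kubectl get pods -n kube-system | grep osd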
@ -218,6 +218,10 @@ For host-based Ceph,

    system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
    system host-stor-list controller-0

See :ref:`configure-ceph-osds-on-a-host <configure-ceph-osds-on-a-host>` for
additional info on configuring the Ceph storage backend such as supporting
SSD-backed journals, multiple storage tiers, and so on.

For Rook container-based Ceph:

#. Initialize by adding the ceph-rook backend:
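   For example, a sketch of this step (verify the exact arguments with
   ``system help storage-backend-add``):

   ::

       system storage-backend-add ceph-rook --confirmed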
@ -209,6 +209,10 @@ For host-based Ceph,

    system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
    system host-stor-list controller-0

See :ref:`configure-ceph-osds-on-a-host <configure-ceph-osds-on-a-host>` for
additional info on configuring the Ceph storage backend such as supporting
SSD-backed journals, multiple storage tiers, and so on.

For Rook container-based Ceph:

#. Initialize by adding the ceph-rook backend:
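   For example, a sketch of this step (verify the exact arguments with
   ``system help storage-backend-add``):

   ::

       system storage-backend-add ceph-rook --confirmed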
@ -36,7 +36,7 @@ Install software on worker nodes

#. Using the host id, set the personality of this host to 'worker':

   .. code-block:: bash

       system host-update 3 personality=worker hostname=worker-0
       system host-update 4 personality=worker hostname=worker-1
@ -127,9 +127,15 @@ Configure worker nodes

       done

   When using |OVS|-|DPDK|, configure 1G huge pages for vSwitch memory on
   each |NUMA| node where vswitch is running on the host. It is recommended
   to configure 1x 1G huge page (``-1G 1``) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host.

   However, due to a limitation with Kubernetes, only a single huge page
   size is supported on any one host. If your application |VMs| require 2M
   huge pages, then configure 500x 2M huge pages (``-2M 500``) for vSwitch
   memory on each |NUMA| node where vswitch is running on the host.

   .. code-block:: bash
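       # Sketch only -- counts assume the 1G page size recommended above and
       # 2 NUMA nodes per worker; adjust to your hardware.
       for NODE in worker-0 worker-1; do
           system host-memory-modify -f vswitch -1G 1 $NODE 0
           system host-memory-modify -f vswitch -1G 1 $NODE 1
       done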
@ -150,10 +156,11 @@ Configure worker nodes

       huge pages to enable networking and must use a flavor with the
       property ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming a 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash

       for NODE in worker-0 worker-1; do
|
|||||||
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
|
# ‘system host-show ${NODE} --nowrap | fgrep rootfs’ )
|
||||||
|
|
||||||
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
||||||
PARTITION_SIZE=34 # Use default of 34G for this nova-local partition
|
# The size of the PARTITION needs to be large enough to hold the aggregate size of
|
||||||
|
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
|
||||||
|
# but is limited by the size and space available on the physical disk you chose above.
|
||||||
|
# The following example uses a small PARTITION size such that you can fit it on the
|
||||||
|
# root disk, if that is what you chose above.
|
||||||
|
# Additional PARTITION(s) from additional disks can be added later if required.
|
||||||
|
PARTITION_SIZE=30
|
||||||
|
|
||||||
system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
||||||
|
|
||||||
# Add new partition to ‘nova-local’ local volume group
|
# Add new partition to ‘nova-local’ local volume group
|
||||||
@ -272,7 +286,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
       system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

       # If not already created, create Data Networks that the 'pci-sriov'
       # interfaces will be connected to
       DATANET0='datanet0'
       DATANET1='datanet1'
       system datanetwork-add ${DATANET0} vlan
@ -283,8 +298,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments for
     the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin, as sketched below.
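       For example, a sketch using the same label this guide applies in the
       worker node configuration steps:

       ::

           system host-label-assign ${NODE} sriovdp=enabled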
@ -326,4 +341,3 @@ Unlock worker nodes in order to bring them into service:

The worker nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
@ -39,7 +39,8 @@ Bootstrap system on controller-0

--------------------------------

#. Login using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the
   password.

   ::
@ -107,9 +108,9 @@ Bootstrap system on controller-0

#. Create a minimal user configuration override file.

   To use this method, create your override file at ``$HOME/localhost.yml``
   and provide the minimum required parameters for the deployment
   configuration as shown in the example below. Use the OAM IP SUBNET and IP
   ADDRESSing applicable to your deployment environment.

   ::
@ -222,8 +223,9 @@ Configure controller-0

#. Configure the |OAM| interface of controller-0 and specify the
   attached network as "oam".

   The following example configures the |OAM| interface on a physical untagged
   ethernet port. Use the |OAM| port name that is applicable to your
   deployment environment, for example eth0:

   .. code-block:: bash
@ -231,11 +233,15 @@ Configure controller-0

       system host-if-modify controller-0 $OAM_IF -c platform
       system interface-network-assign controller-0 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure the MGMT interface of controller-0 and specify the attached
   networks of both "mgmt" and "cluster-host".

   The following example configures the MGMT interface on a physical untagged
   ethernet port. Use the MGMT port name that is applicable to your deployment
   environment, for example eth1:

   .. code-block:: bash
@ -249,12 +255,18 @@ Configure controller-0

       system interface-network-assign controller-0 $MGMT_IF mgmt
       system interface-network-assign controller-0 $MGMT_IF cluster-host

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure |NTP| servers for network time synchronization:

   ::

       system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
   <ptp-server-config-index>`.

.. only:: openstack

   *************************************
@ -276,56 +288,111 @@ Configure controller-0

       system host-label-assign controller-0 openvswitch=enabled
       system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Due to the additional openstack services running
   on the |AIO| controller platform cores, a minimum of 4 platform cores are
   required; 6 platform cores are recommended.

   Increase the number of platform cores with the following commands:

   .. code-block::

       # assign 6 cores on processor/numa-node 0 on controller-0 to platform
       system host-cpu-modify -f platform -p0 6 controller-0

#. Due to the additional openstack services' containers running on the
   controller host, the size of the docker filesystem needs to be
   increased from the default size of 30G to 60G.

   .. code-block:: bash

       # check existing size of docker fs
       system host-fs-list controller-0
       # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
       system host-lvg-list controller-0
       # if existing docker fs size + cgts-vg available space is less than 60G,
       # you will need to add a new disk partition to cgts-vg

       # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
       # ( if not, use another unused disk )

       # Get device path of ROOT DISK
       system host-show controller-0 --nowrap | fgrep rootfs

       # Get UUID of ROOT DISK by listing disks
       system host-disk-list controller-0

       # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
       # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
       PARTITION_SIZE=30
       system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}

       # Add new partition to 'cgts-vg' local volume group
       system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
       sleep 2  # wait for partition to be added

       # Increase docker filesystem to 60G
       system host-fs-modify controller-0 docker=60

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

      * Runs in a container; defined within the helm charts of stx-openstack
        manifest.
      * Shares the core(s) assigned to the platform.

   If you require better performance, |OVS-DPDK| (|OVS| with the Data
   Plane Development Kit, which is supported only on bare metal hardware)
   should be used:

   * Runs directly on the host (it is not containerized).
     Requires that at least 1 core be assigned/dedicated to the vSwitch
     function.

   To deploy the default containerized |OVS|:

   ::

       system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses
   the containerized |OVS| defined in the helm charts of the stx-openstack
   manifest.

   To deploy |OVS-DPDK|, run the following command:

   .. parsed-literal::

       system modify --vswitch_type |ovs-dpdk|

   The default recommendation for an |AIO|-controller is to use a single
   core for |OVS-DPDK| vswitch.

   .. code-block:: bash

       # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
       system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to |OVS-DPDK|, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for |AIO| controllers
   and 2 vSwitch cores (1 on each numa-node) for compute-labeled worker
   nodes.

   When using |OVS-DPDK|, configure 1G huge pages for vSwitch memory on each
   |NUMA| node where vswitch is running on this host. It is recommended to
   configure 1x 1G huge page (``-1G 1``) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host.

   However, due to a limitation with Kubernetes, only a single huge page
   size is supported on any one host. If your application |VMs| require 2M
   huge pages, then configure 500x 2M huge pages (``-2M 500``) for vSwitch
   memory on each |NUMA| node where vswitch is running on the host.

   .. code-block:: bash

       # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
       system host-memory-modify -f vswitch -1G 1 controller-0 0
@ -333,12 +400,13 @@ Configure controller-0

   .. important::

       |VMs| created in an |OVS-DPDK| environment must be configured to use
       huge pages to enable networking and must use a flavor with the
       property ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming a 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash
@ -349,6 +417,7 @@ Configure controller-0

       # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
       system host-memory-modify -f application -1G 10 controller-0 1

   .. note::

       After controller-0 is unlocked, changing vswitch_type requires
@ -372,7 +441,14 @@ Configure controller-0

       #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

       # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
       # The size of the PARTITION needs to be large enough to hold the aggregate size of
       # all nova ephemeral disks of all VMs that you want to be able to host on this host,
       # but is limited by the size and space available on the physical disk you chose above.
       # The following example uses a small PARTITION size such that you can fit it on the
       # root disk, if that is what you chose above.
       # Additional PARTITION(s) from additional disks can be added later if required.
       PARTITION_SIZE=30

       system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

       # Add new partition to 'nova-local' local volume group
@ -381,12 +457,13 @@ Configure controller-0

#. **For OpenStack only:** Configure data interfaces for controller-0.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

       A compute-labeled All-in-one controller host **MUST** have at least
       one Data class interface.

   * Configure the data interfaces for controller-0.
@ -454,7 +531,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
       system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

       # If not already created, create Data Networks that the 'pci-sriov'
       # interfaces will be connected to
       DATANET0='datanet0'
       DATANET1='datanet1'
       system datanetwork-add ${DATANET0} vlan
@ -465,8 +543,8 @@ Optionally Configure PCI-SRIOV Interfaces

       system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments for
     the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin, as sketched below.
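       For example, a sketch using the same label this guide applies in the
       worker node configuration steps:

       ::

           system host-label-assign ${NODE} sriovdp=enabled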
@ -608,8 +686,9 @@ Configure controller-1

#. Configure the |OAM| interface of controller-1 and specify the
   attached network of "oam".

   The following example configures the |OAM| interface on a physical untagged
   ethernet port. Use the |OAM| port name that is applicable to your
   deployment environment, for example eth0:

   ::
@ -617,6 +696,9 @@ Configure controller-1

       system host-if-modify controller-1 $OAM_IF -c platform
       system interface-network-assign controller-1 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. The MGMT interface is partially set up by the network install procedure;
   configuring the port used for network install as the MGMT port and
   specifying the attached network of "mgmt".
@ -636,11 +718,11 @@ Configure controller-1

   .. important::

       These steps are required only if the |prod-os| application
       (|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the |prefix|-openstack manifest and helm-charts later.

   ::
@ -649,12 +731,58 @@ Configure controller-1

       system host-label-assign controller-1 openvswitch=enabled
       system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Due to the additional openstack services running
   on the |AIO| controller platform cores, a minimum of 4 platform cores are
   required; 6 platform cores are recommended.

   Increase the number of platform cores with the following commands:

   .. code-block::

       # assign 6 cores on processor/numa-node 0 on controller-1 to platform
       system host-cpu-modify -f platform -p0 6 controller-1

#. Due to the additional openstack services' containers running on the
   controller host, the size of the docker filesystem needs to be
   increased from the default size of 30G to 60G.

   .. code-block:: bash

       # check existing size of docker fs
       system host-fs-list controller-1
       # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
       system host-lvg-list controller-1
       # if existing docker fs size + cgts-vg available space is less than 60G,
       # you will need to add a new disk partition to cgts-vg

       # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
       # ( if not, use another unused disk )

       # Get device path of ROOT DISK
       system host-show controller-1 --nowrap | fgrep rootfs

       # Get UUID of ROOT DISK by listing disks
       system host-disk-list controller-1

       # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
       # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
       PARTITION_SIZE=30
       system host-disk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE}

       # Add new partition to 'cgts-vg' local volume group
       system host-pv-add controller-1 cgts-vg <NEW_PARTITION_UUID>
       sleep 2  # wait for partition to be added

       # Increase docker filesystem to 60G
       system host-fs-modify controller-1 docker=60

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   If using |OVS-DPDK| vswitch, run the following commands:

   The default recommendation for an |AIO|-controller is to use a single core
   for |OVS-DPDK| vswitch. This should have been automatically configured;
   if not, run the following command:

   .. code-block:: bash
@ -663,9 +791,16 @@ Configure controller-1
|
|||||||
system host-cpu-modify -f vswitch -p0 1 controller-1
|
system host-cpu-modify -f vswitch -p0 1 controller-1
|
||||||
|
|
||||||
|
|
||||||
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
|
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
|
||||||
each |NUMA| node where vswitch is running on this host, with the
|
each |NUMA| node where vswitch is running on the host. It is recommended
|
||||||
following command:
|
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
|
||||||
|
node where vswitch is running on host.
|
||||||
|
|
||||||
|
However, due to a limitation with Kubernetes, only a single huge page
|
||||||
|
size is supported on any one host. If your application VMs require 2M
|
||||||
|
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
|
||||||
|
memory on each |NUMA| node where vswitch is running on host.
|
||||||
|
|
||||||
|
|
||||||
.. code-block:: bash
|
.. code-block:: bash
|
||||||
|
|
||||||
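      # The 1G example for controller-1 is elided by this hunk. If the 2M
      # alternative described above applies instead, a sketch per NUMA node:
      system host-memory-modify -f vswitch -2M 500 controller-1 0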
@ -675,14 +810,15 @@ Configure controller-1

   .. important::

      |VMs| created in an |OVS-DPDK| environment must be configured to use
      huge pages to enable networking and must use a flavor with property:
      hw:mem_page_size=large

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
      system host-memory-modify -f application -1G 10 controller-1 0
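   For reference, the flavor property is set with the OpenStack client once
   |prefix|-openstack is deployed; a minimal sketch, assuming a flavor named
   ``m1.small`` already exists:

   .. code-block:: bash

      openstack flavor set m1.small --property hw:mem_page_size=large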
@ -709,7 +845,14 @@ Configure controller-1

      # 'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      # The size of the PARTITION needs to be large enough to hold the aggregate size of
      # all nova ephemeral disks of all VMs that you want to be able to host on this host,
      # but is limited by the size and space available on the physical disk you chose above.
      # The following example uses a small PARTITION size such that you can fit it on the
      # root disk, if that is what you chose above.
      # Additional PARTITION(s) from additional disks can be added later if required.
      PARTITION_SIZE=30
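      # Illustrative sizing only (assumed VM counts and sizes, not defaults):
      #   e.g. 5 VMs x 5G of nova ephemeral disk = 25G -> PARTITION_SIZE=30
      #   leaves ~5G of headroom on the partition.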

      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
@ -791,7 +934,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

      # If not already created, create Data Networks that the 'pci-sriov' interfaces
      # will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@ -802,12 +946,12 @@ Optionally Configure PCI-SRIOV Interfaces

      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes only:** To enable using |SRIOV| network attachments for
  the above interfaces in Kubernetes hosted application containers:

  * Configure the Kubernetes |SRIOV| device plugin.

    .. code-block:: bash

       system host-label-assign controller-1 sriovdp=enabled
@ -855,7 +999,7 @@ For host-based Ceph:

#. Assign Rook host labels to controller-1 in support of installing the
   rook-ceph-apps manifest/helm-charts later:

   .. code-block:: bash

      system host-label-assign controller-1 ceph-mon-placement=enabled
      system host-label-assign controller-1 ceph-mgr-placement=enabled
@ -867,7 +1011,7 @@ Unlock controller-1

Unlock controller-1 in order to bring it into service:

.. code-block:: bash

   system host-unlock controller-1
@ -902,7 +1046,7 @@ machine.

#. Configure Rook to use /dev/sdb on controller-0 and controller-1 as a ceph
   |OSD|.

   .. code-block:: bash

      $ system host-disk-wipe -s --confirm controller-0 /dev/sdb
      $ system host-disk-wipe -s --confirm controller-1 /dev/sdb
|
@ -109,8 +109,8 @@ Bootstrap system on controller-0
|
|||||||
|
|
||||||
To use this method, create your override file at ``$HOME/localhost.yml``
|
To use this method, create your override file at ``$HOME/localhost.yml``
|
||||||
and provide the minimum required parameters for the deployment
|
and provide the minimum required parameters for the deployment
|
||||||
configuration as shown in the example below. Use the OAM IP SUBNET and IP
|
configuration as shown in the example below. Use the |OAM| IP SUBNET and
|
||||||
ADDRESSing applicable to your deployment environment.
|
IP ADDRESSing applicable to your deployment environment.
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
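      # A minimal sketch only: the keys below are standard Ansible bootstrap
      # overrides, but every value is a placeholder for your environment.
      cd ~
      cat <<EOF > localhost.yml
      system_mode: duplex
      external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
      external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
      external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>
      EOF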
@ -220,9 +220,10 @@ The newly installed controller needs to be configured.

      source /etc/platform/openrc

#. Configure the |OAM| interface of controller-0 and specify the attached
   network as "oam". The following example configures the |OAM| interface on a
   physical untagged ethernet port. Use the |OAM| port name that is applicable
   to your deployment environment, for example eth0:

   ::
@ -230,12 +231,18 @@ The newly installed controller needs to be configured.

      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure |NTP| servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
   <ptp-server-config-index>`.

.. only:: openstack

   *************************************
@ -259,63 +266,121 @@ The newly installed controller needs to be configured.

      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Due to the additional openstack services running
   on the |AIO| controller platform cores, a minimum of 4 platform cores are
   required; 6 platform cores are recommended.

   Increase the number of platform cores with the following commands:

   .. code-block::

      # Assign 6 cores on processor/numa-node 0 on controller-0 to platform
      system host-cpu-modify -f platform -p0 6 controller-0

#. Due to the additional openstack services' containers running on the
   controller host, the size of the docker filesystem needs to be
   increased from the default size of 30G to 60G.

   .. code-block:: bash

      # check existing size of docker fs
      system host-fs-list controller-0

      # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
      system host-lvg-list controller-0

      # if existing docker fs size + cgts-vg available space is less than 60G,
      # you will need to add a new disk partition to cgts-vg

      # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
      # ( if not, use another unused disk )

      # Get device path of ROOT DISK
      system host-show controller-0 --nowrap | fgrep rootfs

      # Get UUID of ROOT DISK by listing disks
      system host-disk-list controller-0

      # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
      # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
      PARTITION_SIZE=30
      system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'cgts-vg' local volume group
      system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
      sleep 2  # wait for partition to be added

      # Increase docker filesystem to 60G
      system host-fs-modify controller-0 docker=60

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

      * Runs in a container; defined within the helm charts of stx-openstack
        manifest.
      * Shares the core(s) assigned to the platform.

      If you require better performance, |OVS-DPDK| (|OVS| with the Data
      Plane Development Kit, which is supported only on bare metal hardware)
      should be used:

      * Runs directly on the host (it is not containerized).
        Requires that at least 1 core be assigned/dedicated to the vSwitch
        function.

   To deploy the default containerized |OVS|:

   ::

      system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses
   the containerized |OVS| defined in the helm charts of
   |prefix|-openstack manifest.

   To deploy |OVS-DPDK|, run the following command:

   .. parsed-literal::

      system modify --vswitch_type |ovs-dpdk|

   Default recommendation for an |AIO|-controller is to use a single core
   for |OVS-DPDK| vswitch.

   .. code-block:: bash

      # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
      system host-cpu-modify -f vswitch -p0 1 controller-0

   When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
   each |NUMA| node where vswitch is running on the host. It is recommended
   to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host. However, due to a limitation
   with Kubernetes, only a single huge page size is supported on any one
   host. If your application |VMs| require 2M huge pages, then configure
   500x 2M huge pages (-2M 500) for vSwitch memory on each |NUMA| node
   where vswitch is running on the host.

   .. code-block::

      # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
      system host-memory-modify -f vswitch -1G 1 controller-0 0

   .. important::

      |VMs| created in an |OVS-DPDK| environment must be configured to use
      huge pages to enable networking and must use a flavor with property:
      hw:mem_page_size=large

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on this
   host, assuming 1G huge page size is being used on this host, with the
   following commands:

   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
      system host-memory-modify -f application -1G 10 controller-0 0
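   Once the memory changes are applied, the per-|NUMA|-node huge page
   assignments can be reviewed; a quick check, assuming controller-0:

   .. code-block:: bash

      # Review vSwitch and application huge page counts per processor/numa-node
      system host-memory-list controller-0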
@ -346,7 +411,14 @@ The newly installed controller needs to be configured.

      # 'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      # The size of the PARTITION needs to be large enough to hold the aggregate size of
      # all nova ephemeral disks of all VMs that you want to be able to host on this host,
      # but is limited by the size and space available on the physical disk you chose above.
      # The following example uses a small PARTITION size such that you can fit it on the
      # root disk, if that is what you chose above.
      # Additional PARTITION(s) from additional disks can be added later if required.
      PARTITION_SIZE=30

      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
@ -361,7 +433,8 @@ The newly installed controller needs to be configured.

   .. important::

      A compute-labeled |AIO|-controller host **MUST** have at least one
      Data class interface.

   * Configure the data interfaces for controller-0.
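     As a sketch, data interfaces follow the same pattern as the pci-sriov
     commands later in this guide, with ``-c data`` as the interface class
     (interface names and uuids here are placeholders):

     .. code-block:: bash

        # List candidate interfaces/ports first
        system host-if-list -a controller-0

        # Assumed example names
        system host-if-modify -m 1500 -n data0 -c data controller-0 <data0-if-uuid>
        system host-if-modify -m 1500 -n data1 -c data controller-0 <data1-if-uuid>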
@ -430,7 +503,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

      # If not already created, create Data Networks that the 'pci-sriov' interfaces will
      # be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@ -441,8 +515,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes Only:** To enable using |SRIOV| network attachments for
  the above interfaces in Kubernetes hosted application containers:

  * Configure the Kubernetes |SRIOV| device plugin.
@ -62,7 +62,7 @@ Bootstrap system on controller-0

   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   .. code-block:: bash

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
@ -111,7 +111,7 @@ Bootstrap system on controller-0

   configuration as shown in the example below. Use the OAM IP SUBNET and IP
   ADDRESSing applicable to your deployment environment.

   .. code-block:: bash

      cd ~
      cat <<EOF > localhost.yml
@ -226,8 +226,9 @@ Configure controller-0

#. Configure the |OAM| interface of controller-0 and specify the
   attached network as "oam".

   The following example configures the |OAM| interface on a physical untagged
   ethernet port. Use the |OAM| port name that is applicable to your deployment
   environment, for example eth0:

   .. code-block:: bash
@ -235,30 +236,45 @@ Configure controller-0

      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure the MGMT interface of controller-0 and specify the attached
   networks of both "mgmt" and "cluster-host".

   The following example configures the MGMT interface on a physical untagged
   ethernet port. Use the MGMT port name that is applicable to your deployment
   environment, for example eth1:

   .. code-block:: bash

      MGMT_IF=<MGMT-PORT>

      # De-provision loopback interface and
      # remove mgmt and cluster-host networks from loopback interface
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
      done

      # Configure management interface and assign mgmt and cluster-host networks to it
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure |NTP| servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
   <ptp-server-config-index>`.

#. If required, configure Ceph storage backend:

   A persistent storage backend is required if your application requires |PVCs|.
@ -281,8 +297,8 @@ Configure controller-0

   .. important::

      These steps are required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.
@ -293,44 +309,47 @@ Configure controller-0

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

      * Runs in a container; defined within the helm charts of stx-openstack
        manifest.
      * Shares the core(s) assigned to the platform.

      If you require better performance, |OVS-DPDK| (|OVS| with the Data
      Plane Development Kit, which is supported only on bare metal hardware)
      should be used:

      * Runs directly on the host (it is not containerized).
        Requires that at least 1 core be assigned/dedicated to the vSwitch
        function.

   To deploy the default containerized |OVS|:

   ::

      system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses
   the containerized |OVS| defined in the helm charts of stx-openstack
   manifest.

   To deploy |OVS-DPDK|, run the following command:

   .. parsed-literal::

      system modify --vswitch_type |ovs-dpdk|

   Once vswitch_type is set to |OVS-DPDK|, any subsequent |AIO|-controller
   or worker nodes created will default to automatically assigning 1 vSwitch
   core for |AIO| controllers and 2 vSwitch cores (1 on each numa-node)
   for compute-labeled worker nodes.

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking controller-0 to apply the change.
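   To confirm what was auto-assigned, the per-core function assignments can
   be inspected after configuration; a quick check, assuming controller-0:

   .. code-block:: bash

      # Look for cores with the 'vSwitch' assigned function in the output
      system host-cpu-list controller-0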
.. incl-config-controller-0-storage-end:
@ -430,8 +449,9 @@ Configure controller-1

#. Configure the |OAM| interface of controller-1 and specify the
   attached network of "oam".

   The following example configures the |OAM| interface on a physical untagged
   ethernet port. Use the |OAM| port name that is applicable to your deployment
   environment, for example eth0:

   .. code-block:: bash
@ -439,6 +459,9 @@ Configure controller-1

      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. The MGMT interface is partially set up by the network install procedure;
   configuring the port used for network install as the MGMT port and
   specifying the attached network of "mgmt".
@ -459,8 +482,8 @@ Configure controller-1

   .. important::

      This step is required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

   **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.
@ -538,8 +561,8 @@ Configure worker nodes

   .. important::

      These steps are required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.
@ -554,10 +577,10 @@ Configure worker nodes

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   If using |OVS-DPDK| vswitch, run the following commands:

   Default recommendation for worker node is to use a single core on each
   numa-node for |OVS-DPDK| vswitch. This should have been automatically
   configured; if not, run the following command.

   .. code-block:: bash
@ -573,9 +596,15 @@ Configure worker nodes

      done

   When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
   each |NUMA| node where vswitch is running on the host. It is recommended
   to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host.

   However, due to a limitation with Kubernetes, only a single huge page
   size is supported on any one host. If your application |VMs| require 2M
   huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
   memory on each |NUMA| node where vswitch is running on the host.

   .. code-block:: bash
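      # The 1G example is elided by this hunk. If the 2M alternative described
      # above applies, a sketch using the worker loop from earlier steps:
      for NODE in worker-0 worker-1; do
         system host-memory-modify -f vswitch -2M 500 $NODE 0
      done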
@ -592,12 +621,13 @@ Configure worker nodes

   .. important::

      |VMs| created in an |OVS-DPDK| environment must be configured to use
      huge pages to enable networking and must use a flavor with the
      property ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash
@ -629,7 +659,14 @@ Configure worker nodes

      # 'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      # The size of the PARTITION needs to be large enough to hold the aggregate size of
      # all nova ephemeral disks of all VMs that you want to be able to host on this host,
      # but is limited by the size and space available on the physical disk you chose above.
      # The following example uses a small PARTITION size such that you can fit it on the
      # root disk, if that is what you chose above.
      # Additional PARTITION(s) from additional disks can be added later if required.
      PARTITION_SIZE=30

      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
@ -639,12 +676,13 @@ Configure worker nodes

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

      A compute-labeled worker host **MUST** have at least one Data class
      interface.

   * Configure the data interfaces for worker nodes.
@ -718,7 +756,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

      # If not already created, create Data Networks that the 'pci-sriov'
      # interfaces will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@ -729,8 +768,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes only:** To enable using |SRIOV| network attachments for
  the above interfaces in Kubernetes hosted application containers:

  * Configure the Kubernetes |SRIOV| device plugin.
@ -273,8 +273,8 @@ Configure worker nodes

   .. important::

      These steps are required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.
@ -289,10 +289,10 @@ Configure worker nodes

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   If using |OVS-DPDK| vswitch, run the following commands:

   Default recommendation for worker node is to use a single core on each
   numa-node for |OVS-DPDK| vswitch. This should have been automatically
   configured; if not, run the following command.

   .. code-block:: bash
@ -308,9 +308,15 @@ Configure worker nodes

      done

   When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
   each |NUMA| node where vswitch is running on the host. It is recommended
   to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
   node where vswitch is running on the host.

   However, due to a limitation with Kubernetes, only a single huge page
   size is supported on any one host. If your application |VMs| require 2M
   huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
   memory on each |NUMA| node where vswitch is running on the host.

   .. code-block:: bash
@ -327,12 +333,13 @@ Configure worker nodes

   .. important::

      |VMs| created in an |OVS-DPDK| environment must be configured to use
      huge pages to enable networking and must use a flavor with property:
      hw:mem_page_size=large

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash
@ -364,7 +371,14 @@ Configure worker nodes

      # 'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      # The size of the PARTITION needs to be large enough to hold the aggregate size of
      # all nova ephemeral disks of all VMs that you want to be able to host on this host,
      # but is limited by the size and space available on the physical disk you chose above.
      # The following example uses a small PARTITION size such that you can fit it on the
      # root disk, if that is what you chose above.
      # Additional PARTITION(s) from additional disks can be added later if required.
      PARTITION_SIZE=30

      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
@ -374,12 +388,13 @@ Configure worker nodes

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

      A compute-labeled worker host **MUST** have at least one Data class
      interface.

   * Configure the data interfaces for worker nodes.
@ -432,7 +447,7 @@ Optionally Configure PCI-SRIOV Interfaces

* Configure the pci-sriov interfaces for worker nodes.

  .. code-block:: bash

     # Execute the following lines with
     export NODE=worker-0
@ -453,7 +468,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

      # If not created already, create Data Networks that the 'pci-sriov'
      # interfaces will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
@ -464,8 +480,8 @@ Optionally Configure PCI-SRIOV Interfaces

      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes only:** To enable using |SRIOV| network attachments for
  the above interfaces in Kubernetes hosted application containers:

  * Configure the Kubernetes |SRIOV| device plugin.
@ -519,4 +535,4 @@ host machine.

.. only:: partner

   .. include:: /_includes/72hr-to-license.rest
@ -17,59 +17,70 @@

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

      * Runs in a container; defined within the helm charts of stx-openstack
        manifest.
      * Shares the core(s) assigned to the platform.

      If you require better performance, |OVS-DPDK| (|OVS| with the Data
      Plane Development Kit, which is supported only on bare metal hardware)
      should be used:

      * Runs directly on the host (it is not containerized).
        Requires that at least 1 core be assigned/dedicated to the vSwitch
        function.

   To deploy the default containerized |OVS|:

   ::

      system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses
   the containerized |OVS| defined in the helm charts of stx-openstack
   manifest.

   To deploy |OVS-DPDK|, run the following command:

   .. parsed-literal::

      system modify --vswitch_type |ovs-dpdk|

   Default recommendation for an |AIO|-controller is to use a single core for
   |OVS-DPDK| vswitch.

   .. code-block:: bash

      # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
      system host-cpu-modify -f vswitch -p0 1 controller-0

   When using |OVS-DPDK|, configure 1x 1G huge page for vSwitch memory on
   each |NUMA| node where vswitch is running on this host, with the
   following command:

   .. code-block:: bash

      # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
      system host-memory-modify -f vswitch -1G 1 controller-0 0

   .. important::

      |VMs| created in an |OVS-DPDK| environment must be configured to use
      huge pages to enable networking and must use a flavor with property:
      hw:mem_page_size=large

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment with
   the commands:

   ::

      system host-memory-modify -f application -1G 10 controller-0 0
      system host-memory-modify -f application -1G 10 controller-0 1

   .. note::
@ -118,7 +118,7 @@ Bootstrap system on controller-0

   as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
   applicable to your deployment environment.

   .. code-block:: bash

      cd ~
      cat <<EOF > localhost.yml
@ -180,7 +180,7 @@ Configure controller-0

   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment.

   .. code-block:: bash

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
@ -242,9 +242,10 @@ OpenStack-specific host configuration

   used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch
     function.

   To deploy the default containerized |OVS|:

   ::
@ -261,12 +262,11 @@ OpenStack-specific host configuration

      system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for |AIO| controllers and
   2 vSwitch cores for compute-labeled worker nodes.

   When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with the
   following command:

   ::
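      # A sketch of the elided command, consistent with the vSwitch memory
      # configuration shown in other sections of this guide:
      system host-memory-modify -f vswitch -1G 1 controller-0 0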
@ -282,7 +282,7 @@ OpenStack-specific host configuration

   pages to enable networking and must use a flavor with property:
   hw:mem_page_size=large

   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with the
   command:

   ::
@ -298,7 +298,7 @@ OpenStack-specific host configuration

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all compute-labeled worker nodes (and/or |AIO|
      controllers) to apply the change.

.. incl-config-controller-0-storage-end:
@ -346,8 +346,8 @@ Install software on controller-1 and worker nodes

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

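The listing itself is not shown here. Illustratively, ``system host-list``
reports the discovered node with no personality, after which the personality
is assigned (the host id ``2`` is an assumption):

.. code-block:: bash

   system host-list
   # +----+--------------+-------------+----------------+-------------+--------------+
   # | id | hostname     | personality | administrative | operational | availability |
   # +----+--------------+-------------+----------------+-------------+--------------+
   # | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   # | 2  | None         | None        | locked         | disabled    | offline      |
   # +----+--------------+-------------+----------------+-------------+--------------+

   system host-update 2 personality=controller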
@ -428,9 +428,9 @@ Configure controller-1

.. incl-config-controller-1-start:

Configure the |OAM| and MGMT interfaces of controller-1 and specify the
attached networks. Use the |OAM| and MGMT port names, for example eth0, that
are applicable to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)
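Because the MGMT interface is already partially provisioned, typically only
the |OAM| interface and the remaining network assignment need to be
configured. A sketch, using the command forms shown for controller-0:

.. code-block:: bash

   OAM_IF=<OAM-PORT>
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 mgmt0 cluster-host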
@ -518,12 +518,12 @@ Configure worker nodes

This step is **required** for OpenStack.

This step is optional for Kubernetes: Do this step if using |SRIOV|
network attachments in hosted application containers.

For Kubernetes |SRIOV| network attachments:

* Configure the |SRIOV| device plugin:

  ::

@ -531,8 +531,8 @@ Configure worker nodes

       system host-label-assign ${NODE} sriovdp=enabled
     done

* If planning on running |DPDK| in containers on this host, configure the
  number of 1G Huge pages required on both |NUMA| nodes:

  ::

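A sketch of the huge-page commands for container |DPDK|, with an illustrative
count of 100x 1G pages on each |NUMA| node:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
     # 100x 1G huge pages on NUMA nodes 0 and 1
     system host-memory-modify ${NODE} 0 -1G 100
     system host-memory-modify ${NODE} 1 -1G 100
   done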
@ -619,7 +619,7 @@ Unlock worker nodes

Unlock worker nodes in order to bring them into service:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
     system host-unlock $NODE
@ -638,7 +638,7 @@ Configure storage nodes

Note that the MGMT interfaces are partially set up by the network install
procedure.

.. code-block:: bash

   for NODE in storage-0 storage-1; do
     system interface-network-assign $NODE mgmt0 cluster-host
@ -657,7 +657,7 @@ Unlock storage nodes

Unlock storage nodes in order to bring them into service:

.. code-block:: bash

   for STORAGE in storage-0 storage-1; do
     system host-unlock $STORAGE

@ -218,6 +218,10 @@ For host-based Ceph,

   system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
   system host-stor-list controller-0

See :ref:`configure-ceph-osds-on-a-host <configure-ceph-osds-on-a-host>` for
additional info on configuring the Ceph storage backend, such as supporting
SSD-backed journals, multiple storage tiers, and so on.

For Rook container-based Ceph:

#. Initialize by adding the ceph-rook backend:
|
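A sketch of the initialization command, assuming the ``storage-backend-add``
form used for the host-based backend:

.. code-block:: bash

   system storage-backend-add ceph-rook --confirmed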
@ -209,6 +209,10 @@ For host-based Ceph,

   system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
   system host-stor-list controller-0

See :ref:`configure-ceph-osds-on-a-host <configure-ceph-osds-on-a-host>` for
additional info on configuring the Ceph storage backend, such as supporting
SSD-backed journals, multiple storage tiers, and so on.

For Rook container-based Ceph:

#. Initialize by adding the ceph-rook backend:
@ -301,11 +305,11 @@ OpenStack-specific host configuration
|
|||||||
|
|
||||||
#. **For OpenStack only:** A vSwitch is required.
|
#. **For OpenStack only:** A vSwitch is required.
|
||||||
|
|
||||||
The default vSwitch is containerized OVS that is packaged with the
|
The default vSwitch is containerized |OVS| that is packaged with the
|
||||||
stx-openstack manifest/helm-charts. StarlingX provides the option to use
|
stx-openstack manifest/helm-charts. StarlingX provides the option to use
|
||||||
OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT
|
|OVS-DPDK| on the host, however, in the virtual environment |OVS-DPDK| is
|
||||||
supported, only OVS is supported. Therefore, simply use the default OVS
|
NOT supported, only |OVS| is supported. Therefore, simply use the default
|
||||||
vSwitch here.
|
|OVS| vSwitch here.
|
||||||
|
|
||||||
#. **For OpenStack Only:** Set up disk partition for nova-local volume group,
|
#. **For OpenStack Only:** Set up disk partition for nova-local volume group,
|
||||||
which is needed for stx-openstack nova ephemeral disks.
|
which is needed for stx-openstack nova ephemeral disks.
|
||||||
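A sketch of the nova-local setup, following the volume-group, partition, and
physical-volume pattern shown earlier in this guide (the disk selection and
size are placeholders):

.. code-block:: bash

   export NODE=controller-0

   # Create the nova-local local volume group
   system host-lvg-add ${NODE} nova-local

   # Create a new partition (size in GiB) on a free disk;
   # note the partition uuid in the response
   system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> 10

   # Add the new partition to the nova-local volume group
   system host-pv-add ${NODE} nova-local <partition-uuid>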
@ -339,8 +343,8 @@ Unlock virtual controller-0 to bring it into service:

   system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into service.
This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------------------------------
Optionally, finish configuration of Ceph-based Persistent Storage Backend
@ -393,7 +397,7 @@ On **virtual** controller-0:

   system application-apply rook-ceph-apps

#. Wait for the |OSDs| pods to be ready.

   ::

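Readiness can be checked from the host. A sketch, assuming the Rook pods run
in the ``kube-system`` namespace (adjust the namespace for your deployment):

.. code-block:: bash

   kubectl get pods -n kube-system | grep rook-ceph-osd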

@ -28,9 +28,9 @@ A Standard System Controller with Controller Storage is not supported.

Complete the |prod| procedure for your deployment scenario with the
modifications noted above and below.

.. include:: ../_includes/installing-and-provisioning-the-central-cloud.rest
   :start-after: deployment-scenario-begin
   :end-before: deployment-scenario-end

You will also need to make the following modifications:

@ -103,6 +103,8 @@ Host memory provisioning

   host_memory_provisioning/allocating-host-memory-using-horizon
   host_memory_provisioning/allocating-host-memory-using-the-cli

.. _node-interfaces-index:

---------------
Node interfaces
---------------

@ -11,6 +11,12 @@ Install REST API and Horizon Certificate

This certificate must be valid for the domain configured for OpenStack; see
the sections on :ref:`Accessing the System <access-using-the-default-set-up>`.

.. rubric:: |prereq|

Before installing the OpenStack certificate and key, you must install the root
|CA| for the OpenStack certificate as a trusted |CA|; see :ref:`Install a
Trusted CA Certificate <install-a-trusted-ca-certificate>`.

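A sketch of that prerequisite step, assuming the platform's
``certificate-install`` convention with the ``ssl_ca`` mode:

.. code-block:: bash

   system certificate-install -m ssl_ca <root-ca-certificate>.pem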
.. rubric:: |proc|

#. Install the certificate for OpenStack as Helm chart overrides.
@ -23,7 +29,7 @@ sections on :ref:`Accessing the System <access-using-the-default-set-up>`.

   private key.

   .. note::
      The OpenStack certificate must be created with a wildcard |SAN|.

      For example, to create a certificate for the |FQDN|
      west2.us.example.com, the following entry must be included in the
      certificate:
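The required entry is presumably a wildcard |SAN| of the form
``DNS:*.west2.us.example.com``. As a hedged illustration, a certificate
request carrying such a |SAN| can be generated with OpenSSL:

.. code-block:: bash

   # Illustrative only: the file names and subject are assumptions
   openssl req -new -newkey rsa:2048 -nodes \
       -keyout openstack.key -out openstack.csr \
       -subj "/CN=west2.us.example.com" \
       -addext "subjectAltName = DNS:*.west2.us.example.com"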
@ -98,6 +98,8 @@ Storage Hosts

   storage-hosts-storage-on-storage-hosts
   replication-groups

.. _configure-ceph-osds-on-a-host:

-----------------------------
Configure Ceph OSDs on a Host
-----------------------------
@ -35,6 +35,8 @@ NTP Server Configuration

   configuring-ntp-servers-and-services-using-the-cli
   resynchronizing-a-host-to-the-ntp-server

.. _ptp-server-config-index:

------------------------
PTP Server Configuration
------------------------

@ -18,9 +18,10 @@ hosts are upgraded one at time while continuing to provide its hosting services

to its hosted applications. An upgrade can be performed manually or using
Upgrade Orchestration, which automates much of the upgrade procedure, leaving a
few manual steps to prevent operator oversight. For more information on manual
upgrades, see :ref:`Manual Platform Components Upgrade
<manual-upgrade-overview>`. For more information on upgrade orchestration, see
:ref:`Orchestrated Platform Component Upgrade
<orchestration-upgrade-overview>`.

.. warning::
   Do NOT use information in the |updates-doc| guide for |prod-dc|
@ -56,7 +57,7 @@ message formats from the old release of software. Before upgrading the second

controller, is the "point-of-no-return for an in-service abort" of the upgrades
process. The second controller is loaded with the new release of software and
becomes the new Standby controller. For more information on manual upgrades,
see :ref:`Manual Platform Components Upgrade <manual-upgrade-overview>`.

If present, storage nodes are locked, upgraded and unlocked one at a time in
order to respect the redundancy model of |prod| storage nodes. Storage nodes