Lots of corrections / clarifications to install guides.

(for Greg W) - replaced abbrev substitutions inside **bold** with plain text,
as this combination does not expand. Also in one additional file that came up
in search. Some related margin adjustments ...
Initial content submit

Signed-off-by: Greg Waines <greg.waines@windriver.com>
Change-Id: If8805359bf80b0f1359ef58c24493307310e7e28
Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Greg Waines 2021-05-03 11:01:51 -04:00 committed by Ron Stone
parent 53d86a6d49
commit 50bc21226b
7 changed files with 922 additions and 564 deletions

View File

@ -64,10 +64,12 @@ Install software on worker nodes
Configure worker nodes
----------------------

#. The MGMT interfaces are partially set up by the network install procedure,
   which configures the port used for network install as the MGMT port and
   specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of the worker nodes by specifying
   the attached network of "cluster-host".

   ::
@ -78,100 +80,175 @@ Configure worker nodes
#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.
This step is optional for Kubernetes: Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# Configure the datanetworks in sysinv, prior to referencing them
# in the ``system host-if-modify`` command.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
for NODE in worker-0 worker-1; do
echo "Configuring interface for: $NODE"
set -ex
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the |SRIOV| device plugin:
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
.. only:: starlingx
*************************************
OpenStack-specific host configuration
*************************************
.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     for NODE in worker-0 worker-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   **If using OVS-DPDK vswitch, run the following commands:**

   The default recommendation for a worker node is to use a single core on
   each numa-node for the |OVS|-|DPDK| vswitch. This should have been
   configured automatically; if not, run the following commands:

   ::

     for NODE in worker-0 worker-1; do
       # assign 1 core on processor/numa-node 0 on worker-node to vswitch
       system host-cpu-modify -f vswitch -p0 1 $NODE
       # assign 1 core on processor/numa-node 1 on worker-node to vswitch
       system host-cpu-modify -f vswitch -p1 1 $NODE
     done

   When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
   each |NUMA| node where vswitch is running on this host, with the
   following commands:

   ::

     for NODE in worker-0 worker-1; do
       # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
       system host-memory-modify -f vswitch -1G 1 $NODE 0
       # assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
       system host-memory-modify -f vswitch -1G 1 $NODE 1
     done

   .. important::

      |VMs| created in an |OVS|-|DPDK| environment must be configured to use
      huge pages to enable networking and must use a flavor with property:
      hw:mem_page_size=large

   Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
   this host with the commands:

   ::

     for NODE in worker-0 worker-1; do
       # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
       system host-memory-modify -f application -1G 10 $NODE 0
       # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
       system host-memory-modify -f application -1G 10 $NODE 1
     done

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   needed for stx-openstack nova ephemeral disks.

   ::

     for NODE in worker-0 worker-1; do
       echo "Configuring Nova local for: $NODE"
       ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${NODE} nova-local
       system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done

-------------------

View File

@ -171,21 +171,32 @@ Configure controller-0
  source /etc/platform/openrc

#. Configure the |OAM| interface of controller-0 and specify the
   attached network as "oam".

   Use the |OAM| port name that is applicable to your deployment environment,
   for example eth0:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. Configure the MGMT interface of controller-0 and specify the attached
   networks of both "mgmt" and "cluster-host".

   Use the MGMT port name that is applicable to your deployment environment,
   for example eth1:

   ::

     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
       system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host
@ -196,26 +207,83 @@ Configure controller-0
  system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   This step is optional for Kubernetes: Do this step if using |SRIOV| network
   attachments in hosted application containers.

   .. important::

      This step is **required** for OpenStack.

   * Configure the data interfaces
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
export NODE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application containers
  on this host, configure the number of 1G Huge pages required on both
  |NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires |PVCs|.
.. only:: starlingx
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
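As a preview of the host-based option (the detailed steps follow below), the
backend is added with the same storage-backend command used elsewhere in these
guides, and its state can then be verified; the list command is shown here as
an optional check:

::

  system storage-backend-add ceph --confirmed

  # optional check: verify the state of the ceph storage backend
  system storage-backend-list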
For host-based Ceph:
@ -251,74 +319,17 @@ For host-based Ceph:
  system host-label-assign controller-0 ceph-mon-placement=enabled
  system host-label-assign controller-0 ceph-mgr-placement=enabled

***********************************
If required, configure Docker Proxy
***********************************

StarlingX uses publicly available container runtime registries. If you are
behind a corporate firewall or proxy, you need to set docker proxy settings.

Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>` for
details about configuring Docker proxy settings.
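As a rough sketch of what that configuration can look like from the CLI (the
proxy URLs below are placeholders, and the parameter names should be confirmed
against the Docker Proxy Configuration guide):

::

  # list any docker proxy parameters already set (for example, from bootstrap overrides)
  system service-parameter-list platform docker

  # example placeholders only; adjust the proxy URLs for your environment
  system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
  system service-parameter-add platform docker https_proxy=http://my.proxy.com:1443
  system service-parameter-apply platform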
.. only:: starlingx
@ -326,7 +337,116 @@ For host-based Ceph:
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the stx-openstack manifest and helm-charts later.
::
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
system host-label-assign controller-0 sriov=enabled
#. **For OpenStack only:** Configure the system setting for the vSwitch.
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
* Runs in a container; defined within the helm charts of stx-openstack
manifest.
* Shares the core(s) assigned to the platform.
If you require better performance, |OVS|-|DPDK| (|OVS| with the Data
Plane Development Kit, which is supported only on bare metal hardware)
should be used:
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.
**To deploy the default containerized OVS:**
::
system modify --vswitch_type none
This does not run any vSwitch directly on the host; instead, it uses the
containerized |OVS| defined in the helm charts of the stx-openstack
manifest.
**To deploy OVS-DPDK, run the following command:**
::
system modify --vswitch_type ovs-dpdk
The default recommendation for an |AIO|-controller is to use a single core
for the |OVS|-|DPDK| vswitch.
::
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-0
Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created
will default to automatically assigning 1 vSwitch core for |AIO|
controllers and 2 vSwitch cores for compute-labeled worker nodes.
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:
::
# assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 0
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on this host with
the commands:
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 0
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 1
.. note::
After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
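For reference, a minimal sketch of that lock/unlock cycle (only needed if
vswitch_type is changed after controller-0 was initially unlocked):

::

  system host-lock controller-0
  system host-unlock controller-0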
#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
::
export NODE=controller-0
echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
-------------------
@ -366,8 +486,9 @@ Install software on controller-1 node
  system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, for
   controller-1 to reboot, and for controller-1 to show as
   locked/disabled/online in 'system host-list'.

   This can take 5-10 minutes, depending on the performance of the host machine.
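   A convenient way to monitor this from controller-0 is sketched below;
   running 'system host-list' manually at intervals works just as well:

   ::

     # re-check the host states until controller-1 reports locked/disabled/online
     watch -n 30 system host-list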
@ -385,82 +506,96 @@ Install software on controller-1 node
Configure controller-1
----------------------

#. Configure the |OAM| interface of controller-1 and specify the
   attached network of "oam".

   Use the |OAM| port name that is applicable to your deployment environment,
   for example eth0:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-1 $OAM_IF -c platform
     system interface-network-assign controller-1 $OAM_IF oam

#. The MGMT interface is partially set up by the network install procedure,
   which configures the port used for network install as the MGMT port and
   specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of controller-1 by specifying the
   attached network of "cluster-host".

   ::

     system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   This step is optional for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. important::

      This step is **required** for OpenStack.

   * Configure the data interfaces

     ::

       DATA0IF=<DATA-0-PORT>
       DATA1IF=<DATA-1-PORT>
       export NODE=controller-1
       PHYSNET0='physnet0'
       PHYSNET1='physnet1'
       SPL=/tmp/tmp-system-port-list
       SPIL=/tmp/tmp-system-host-if-list
       system host-port-list ${NODE} --nowrap > ${SPL}
       system host-if-list -a ${NODE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
       DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
       DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
       system datanetwork-add ${PHYSNET0} vlan
       system datanetwork-add ${PHYSNET1} vlan
       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

   * To enable using |SRIOV| network attachments for the above interfaces in
     Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin:

       ::

         system host-label-assign controller-1 sriovdp=enabled

     * If planning on running |DPDK| in Kubernetes hosted application
       containers on this host, configure the number of 1G Huge pages
       required on both |NUMA| nodes:

       ::

         # assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
         system host-memory-modify -f application controller-1 0 -1G 10

         # assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
         system host-memory-modify -f application controller-1 1 -1G 10

***************************************************************************************
If configuring a Ceph-based Persistent Storage Backend, configure host-specific details
***************************************************************************************

For host-based Ceph:
@ -503,6 +638,48 @@ For host-based Ceph:
system host-label-assign controller-1 openvswitch=enabled
system host-label-assign controller-1 sriov=enabled
#. **For OpenStack only:** Configure the host settings for the vSwitch.
**If using OVS-DPDK vswitch, run the following commands:**
The default recommendation for an AIO-controller is to use a single core
for the |OVS|-|DPDK| vswitch. This should have been configured
automatically; if not, run the following command.
::
# assign 1 core on processor/numa-node 0 on controller-1 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-1
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:
::
# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 0
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
system host-memory-modify -f application -1G 10 controller-1 0
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
system host-memory-modify -f application -1G 10 controller-1 1
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.
@ -539,15 +716,13 @@ machine.
.. only:: starlingx

-----------------------------------------------------------------------------------------------
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
-----------------------------------------------------------------------------------------------

For Rook container-based Ceph:

On active controller:

#. Wait for the ``rook-ceph-apps`` application to be uploaded
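   One way to watch for the upload to complete is sketched below (the
   application name comes from the step above; checking
   'system application-list' manually also works):

   ::

     # re-check every 30 seconds until rook-ceph-apps shows as uploaded
     watch -n 30 system application-list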

View File

@ -188,16 +188,90 @@ The newly installed controller needs to be configured.
  system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
export NODE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
::
system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
  containers on this host, configure the number of 1G Huge pages required
  on both |NUMA| nodes.
::
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires
|PVCs|.
.. only:: starlingx
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
There are two options for persistent storage backend: the host-based Ceph solution and the Rook container-based Ceph solution.
For host-based Ceph:

#. Add host-based ceph backend:

   ::
@ -215,7 +289,7 @@ For host-based Ceph:
For Rook container-based Ceph:

#. Add Rook container-based backend:

   ::
@ -229,87 +303,15 @@ For host-based Ceph:
  system host-label-assign controller-0 ceph-mon-placement=enabled
  system host-label-assign controller-0 ceph-mgr-placement=enabled

***********************************
If required, configure Docker Proxy
***********************************

StarlingX uses publicly available container runtime registries. If you are
behind a corporate firewall or proxy, you need to set docker proxy settings.

Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>` for
details about configuring Docker proxy settings.
.. only:: starlingx
@ -349,84 +351,81 @@ For host-based Ceph:
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.

**To deploy the default containerized OVS:**

::

  system modify --vswitch_type none

This does not run any vSwitch directly on the host; instead, it uses the
containerized |OVS| defined in the helm charts of the stx-openstack
manifest.

**To deploy OVS-DPDK, run the following command:**

::

  system modify --vswitch_type ovs-dpdk

The default recommendation for an AIO-controller is to use a single core
for the |OVS|-|DPDK| vswitch.

::

  # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
  system host-cpu-modify -f vswitch -p0 1 controller-0

When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:

::

  # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
  system host-memory-modify -f vswitch -1G 1 controller-0 0

.. important::

   |VMs| created in an |OVS|-|DPDK| environment must be configured to use
   huge pages to enable networking and must use a flavor with property:
   hw:mem_page_size=large

Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on this
host with the commands:

::

  # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
  system host-memory-modify -f application -1G 10 controller-0 0

  # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
  system host-memory-modify -f application -1G 10 controller-0 1

.. note::

   After controller-0 is unlocked, changing vswitch_type requires
   locking and unlocking controller-0 to apply the change.

#. **For OpenStack only:** Set up disk partition for nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     export NODE=controller-0

     echo ">>> Getting root disk info"
     ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
     ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

     echo ">>>> Configuring nova-local"
     NOVA_SIZE=34
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
     system host-lvg-add ${NODE} nova-local
     system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
@ -448,17 +447,13 @@ machine.
.. only:: starlingx

-----------------------------------------------------------------------------------------------
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
-----------------------------------------------------------------------------------------------

On controller-0:

#. Wait for application rook-ceph-apps to be uploaded

   ::
@ -498,7 +493,7 @@ machine.
     system application-apply rook-ceph-apps

#. Wait for |OSDs| pod to be ready.

   ::

View File

@ -176,26 +176,34 @@ Configure controller-0
  source /etc/platform/openrc

#. Configure the |OAM| interface of controller-0 and specify the
   attached network as "oam".

   Use the |OAM| port name that is applicable to your deployment environment,
   for example eth0:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. Configure the MGMT interface of controller-0 and specify the attached
   networks of both "mgmt" and "cluster-host".

   Use the MGMT port name that is applicable to your deployment environment,
   for example eth1:

   ::

     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
       system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure |NTP| servers for network time synchronization:

   ::
@ -203,33 +211,31 @@ Configure controller-0
#. Configure Ceph storage backend:

   This step is required only if your application requires persistent storage.

   .. only:: starlingx

      .. important::

         **If you want to install the StarlingX OpenStack application
         (stx-openstack), this step is mandatory.**

   ::

     system storage-backend-add ceph --confirmed

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   StarlingX uses publicly available container runtime registries. If you are
   behind a corporate firewall or proxy, you need to set docker proxy settings.

   Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>` for
   details about configuring Docker proxy settings.
.. only:: starlingx

*************************************
OpenStack-specific host configuration
*************************************
@ -248,71 +254,43 @@ Configure controller-0
#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has |OVS| (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, |OVS|-|DPDK| (OVS with the Data Plane
   Development Kit, which is supported only on bare metal hardware) should
   be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   **To deploy the default containerized OVS:**

   ::

     system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses the
   containerized |OVS| defined in the helm charts of the stx-openstack manifest.

   **To deploy OVS-DPDK, run the following command:**

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-|DPDK|, any subsequent AIO-controller or
   worker nodes created will default to automatically assigning 1 vSwitch
   core for |AIO| controllers and 2 vSwitch cores for compute-labeled worker
   nodes.

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all compute-labeled worker nodes (and/or |AIO|
      controllers) to apply the change.
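   As a sketch, applying a vswitch_type change across the compute-labeled
   worker nodes could look like the loop below; in practice, confirm each
   host reaches the locked state (for example with 'system host-list')
   before unlocking it:

   ::

     # only needed if vswitch_type is changed after the workers are unlocked
     for NODE in worker-0 worker-1; do
       system host-lock $NODE
       system host-unlock $NODE
     done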
.. incl-config-controller-0-storage-end:
@ -341,8 +319,8 @@ Install software on controller-1 and worker nodes
#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::
@ -373,16 +351,16 @@ Install software on controller-1 and worker nodes
     system host-update 3 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, worker-0, and worker-1
   to complete, for all servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::
@ -403,20 +381,27 @@ Configure controller-1
.. incl-config-controller-1-start:

#. Configure the |OAM| interface of controller-1 and specify the
   attached network of "oam".

   Use the |OAM| port name that is applicable to your deployment environment,
   for example eth0:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-1 $OAM_IF -c platform
     system interface-network-assign controller-1 $OAM_IF oam

#. The MGMT interface is partially set up by the network install procedure,
   which configures the port used for network install as the MGMT port and
   specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of controller-1 by specifying the
   attached network of "cluster-host".

   ::

     system interface-network-assign controller-1 mgmt0 cluster-host
.. only:: starlingx
@ -498,34 +483,16 @@ Configure worker nodes
#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   This step is optional for Kubernetes: Do this step if using |SRIOV| network
   attachments in hosted application containers.

   .. only:: starlingx

      .. important::

         This step is **required** for OpenStack.

   * Configure the data interfaces

     ::
@ -561,6 +528,32 @@ Configure worker nodes
  set +ex
done
* To enable using |SRIOV| network attachments for the above interfaces in Kubernetes hosted application containers:
* Configure the |SRIOV| device plugin:
::
for NODE in worker-0 worker-1; do
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both |NUMA| nodes:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
.. only:: starlingx

*************************************
@ -583,6 +576,64 @@ Configure worker nodes
  system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Configure the host settings for the vSwitch.
**If using OVS-DPDK vswitch, run the following commands:**
The default recommendation for a worker node is to use a single core on each
numa-node for the |OVS|-|DPDK| vswitch. This should have been configured
automatically; if not, run the following commands.
::
for NODE in worker-0 worker-1; do
# assign 1 core on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 1 $NODE
# assign 1 core on processor/numa-node 1 on worker-node to vswitch
system host-cpu-modify -f vswitch -p1 1 $NODE
done
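To check whether a vswitch core is already assigned on each |NUMA| node (and so
whether the commands above are needed), list the per-host CPU assignments. This
verification step assumes the 'system host-cpu-list' command is available:
::
# the assigned function of each core is shown per processor/numa-node
system host-cpu-list worker-0
system host-cpu-list worker-1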
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on each |NUMA| node
where vswitch is running on this host, with the following command:
::
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
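For reference, a flavor can be given this property with the OpenStack client once
the stx-openstack application is running; the flavor name and sizing below are
examples only:
::
openstack flavor create --ram 4096 --disk 20 --vcpus 2 m1.large.hugepage
openstack flavor set m1.large.hugepage --property hw:mem_page_size=large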
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for this host with
the command:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
@ -191,7 +191,7 @@ Configure storage nodes
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Add |OSDs| to storage-0. The following example adds |OSDs| to the `sdb` disk:
::
@ -206,7 +206,7 @@ Configure storage nodes
system host-stor-list $HOST
#. Add |OSDs| to storage-1. The following example adds |OSDs| to the `sdb` disk:
::
@ -255,34 +255,16 @@ Configure worker nodes
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
This step is optional for Kubernetes: Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
::
@ -318,41 +300,123 @@ Configure worker nodes
set +ex
done
* To enable using |SRIOV| network attachments for the above interfaces in Kubernetes hosted application containers:
* Configure |SRIOV| device plug-in:
::
for NODE in worker-0 worker-1; do
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
::
for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100
system host-memory-modify ${NODE} 1 -1G 100
done
.. only:: starlingx
*************************************
OpenStack-specific host configuration
*************************************
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Configure the host settings for the vSwitch.
**If using OVS-DPDK vswitch, run the following commands:**
The default recommendation for a worker node is to use a single core on each
|NUMA| node for the |OVS|-|DPDK| vswitch. This should have been configured
automatically; if it was not, run the following commands.
::
for NODE in worker-0 worker-1; do
# assign 1 core on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 1 $NODE
# assign 1 core on processor/numa-node 1 on worker-node to vswitch
system host-cpu-modify -f vswitch -p1 1 $NODE
done
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:
::
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
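Optionally, verify the result on each worker node before unlocking. This check
assumes the 'system host-lvg-list' and 'system host-pv-list' commands are
available; the nova-local volume group and its physical volume should be listed:
::
for NODE in worker-0 worker-1; do
# nova-local should appear in both listings
system host-lvg-list ${NODE}
system host-pv-list ${NODE}
done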
-------------------
Unlock worker nodes
@ -30,16 +30,16 @@
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.
**To deploy the default containerized OVS:**
::
system modify --vswitch_type none
This does not run any vSwitch directly on the host; instead, it uses the
containerized |OVS| defined in the helm charts of the stx-openstack manifest.
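If needed, the currently configured vSwitch type can be checked before
proceeding. This is a sketch that assumes the vswitch_type field is reported by
'system show':
::
system show | grep vswitch_type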
**To deploy OVS-DPDK, run the following command:**
::
@ -53,32 +53,23 @@
When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with
the following command:
::
system host-memory-modify -f vswitch -1G 1 worker-0 0
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with
the command:
::
system host-memory-modify -f application -1G 10 worker-0 0
system host-memory-modify -f application -1G 10 worker-1 1
.. note::
@ -86,24 +77,24 @@
locking and unlocking all compute-labeled worker nodes (and/or AIO
controllers) to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
::
export NODE=controller-0
echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
@ -244,7 +244,7 @@ OpenStack-specific host configuration
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.
To deploy the default containerized OVS:
::
@ -333,7 +333,8 @@ Unlock controller-0 in order to bring it into service:
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
-------------------------------------------------
Install software on controller-1 and worker nodes
@ -377,26 +378,27 @@ Install software on controller-1 and worker nodes
system host-update 3 personality=worker hostname=worker-0
Repeat for worker-1. Power on worker-1 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 4 personality=worker hostname=worker-1
For Rook storage, there is no storage personality. Some hosts with the worker
personality provide the storage service; these worker hosts are still named
storage-x here. Repeat for storage-0 and storage-1. Power on storage-0 and
storage-1 and wait for the new hosts (hostname=None) to be discovered by
checking 'system host-list':
::
system host-update 5 personality=worker hostname=storage-0
system host-update 6 personality=worker hostname=storage-1
#. Wait for the software installation on controller-1, worker-0, and worker-1
to complete, for all servers to reboot, and for all to show as
locked/disabled/online in 'system host-list'.
::
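# Illustrative check: wait until controller-1, worker-0, worker-1, storage-0
# and storage-1 all report locked/disabled/online in the output below.
system host-list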
@ -419,8 +421,8 @@ Configure controller-1
.. incl-config-controller-1-start:
Configure the |OAM| and MGMT interfaces of controller-1 and specify the attached
networks. Use the |OAM| and MGMT port names, for example eth0, that are
applicable to your deployment environment.
(Note that the MGMT interface is partially set up automatically by the network
@ -522,8 +524,8 @@ Configure worker nodes
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running |DPDK| in containers on this host, configure the number
of 1G Huge pages required on both |NUMA| nodes:
::
@ -616,8 +618,9 @@ Unlock worker nodes in order to bring them into service:
system host-unlock $NODE
done
The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
-----------------------
Configure storage nodes
@ -625,7 +628,8 @@ Configure storage nodes
#. Assign the cluster-host network to the MGMT interface for the storage nodes.
Note that the MGMT interfaces are partially set up by the network install
procedure.
::
@ -653,7 +657,8 @@ Unlock storage nodes in order to bring them into service:
done
The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
-------------------------------------------------
Install Rook application manifest and helm-charts