Update use of 'compute' node to 'worker' node
Per recent updates, update use of 'compute' node to be 'worker' node.
StarlingX uses worker nodes, and a 'compute' node is a specialization of a
worker node (OpenStack compute label applied).

- Update narrative text to use worker node
- Update shell scripts to use NODE instead of COMPUTE

Updated non-versioned content and R3 installation guides.

Change-Id: Ia3c5d354468f18efb79c823e5bfddf17e34998a9
Signed-off-by: Kristal Dale <kristal.dale@intel.com>
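A mechanical rename like this is usually scripted rather than typed by hand.
The commit message does not say how the edits were made, so the following is
only an illustrative sketch; the docs path and the sed patterns are
assumptions:

    #!/usr/bin/env bash
    # Hypothetical rename helper for the shell fragments in the guides.
    # The patterns are deliberately narrow: a blanket s/compute/worker/
    # would also clobber legitimate uses such as the
    # 'openstack-compute-node' host label. Narrative text ("compute
    # nodes" -> "worker nodes") still needs a manual editing pass.
    DOCS="doc/source"    # assumed location of the affected guides
    grep -rlE 'COMPUTE|compute-[01]' "$DOCS" | while read -r f; do
        sed -i \
            -e 's/\bcompute-\([01]\)\b/worker-\1/g' \
            -e 's/\bCOMPUTE\b/NODE/g' \
            "$f"
    done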
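Before reading the diff below, a quick way to check for stragglers after such
a rename (again a sketch, with an assumed path and assumed exclusions):

    # List remaining compute-node references, ignoring intentional
    # survivors such as the OpenStack host label and the new
    # 'compute-labeled worker node' wording this change introduces.
    grep -rnE 'compute-[01]|compute nodes?' doc/source \
        | grep -vE 'openstack-compute-node|compute-labeled'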
@@ -124,10 +124,11 @@ Host/controller file system configuration
 Host/controller file system configuration commands manage file systems on hosts.
 These commands primarily support the ability to resize the file systems.
 
-Use :command:`host-fs-*` commands to manage un-synchronized file systems on controller and
-compute nodes.
+Use :command:`host-fs-*` commands to manage un-synchronized file systems on
+controller and worker nodes.
 
-Use :command:`controllerfs-*` commands to manage drbd-synchronized file systems on controller
+Use :command:`controllerfs-*` commands to manage drbd-synchronized file systems
+on controller
 nodes.
 
 ``host-fs-list``
@@ -9,7 +9,7 @@ Overview
 .. include:: ../desc_aio_duplex.txt
 
 The bare metal AIO-DX deployment configuration may be extended with up to four
-worker/compute nodes (not shown in the diagram). Installation instructions for
+worker nodes (not shown in the diagram). Installation instructions for
 these additional nodes are described in :doc:`aio_duplex_extend`.
 
 .. include:: ../ipv6_note.txt
@@ -1,26 +1,25 @@
-================================================
-Extend Capacity with Worker and/or Compute Nodes
-================================================
+=================================
+Extend Capacity with Worker Nodes
+=================================
 
-This section describes the steps to extend capacity with worker and/or compute
-nodes on a **StarlingX R3.0 bare metal All-in-one Duplex** deployment
-configuration.
+This section describes the steps to extend capacity with worker nodes on a
+**StarlingX R3.0 bare metal All-in-one Duplex** deployment configuration.
 
 .. contents::
    :local:
    :depth: 1
 
----------------------------------
-Install software on compute nodes
----------------------------------
+--------------------------------
+Install software on worker nodes
+--------------------------------
 
-#. Power on the compute servers and force them to network boot with the
+#. Power on the worker node servers and force them to network boot with the
 appropriate BIOS boot options for your particular server.
 
-#. As the compute servers boot, a message appears on their console instructing
+#. As the worker nodes boot, a message appears on their console instructing
 you to configure the personality of the node.
 
-#. On the console of controller-0, list hosts to see newly discovered compute
+#. On the console of controller-0, list hosts to see newly discovered worker node
 hosts (hostname=None):
 
 ::
@@ -35,18 +34,19 @@ Install software on compute nodes
 | 4  | None         | None        | locked         | disabled    | offline      |
 +----+--------------+-------------+----------------+-------------+--------------+
 
-#. Using the host id, set the personality of this host to 'controller':
+#. Using the host id, set the personality of this host to 'worker':
 
 ::
 
-system host-update 3 personality=worker hostname=compute-0
-system host-update 4 personality=worker hostname=compute-1
+system host-update 3 personality=worker hostname=worker-0
+system host-update 4 personality=worker hostname=worker-1
 
-This initiates the install of software on compute nodes.
+This initiates the install of software on worker nodes.
 This can take 5-10 minutes, depending on the performance of the host machine.
 
-#. Wait for the install of software on the computes to complete, the computes to
-reboot and to both show as locked/disabled/online in 'system host-list'.
+#. Wait for the install of software on the worker nodes to complete, for the
+worker nodes to reboot, and for both to show as locked/disabled/online in
+'system host-list'.
 
 ::
 
@@ -56,26 +56,26 @@ Install software on compute nodes
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
-| 3  | compute-0    | compute     | locked         | disabled    | online       |
-| 4  | compute-1    | compute     | locked         | disabled    | online       |
+| 3  | worker-0     | worker      | locked         | disabled    | online       |
+| 4  | worker-1     | worker      | locked         | disabled    | online       |
 +----+--------------+-------------+----------------+-------------+--------------+
 
------------------------
-Configure compute nodes
------------------------
+----------------------
+Configure worker nodes
+----------------------
 
-#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+#. Assign the cluster-host network to the MGMT interface for the worker nodes:
 
 (Note that the MGMT interfaces are partially set up automatically by the
 network install procedure.)
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system interface-network-assign $COMPUTE mgmt0 cluster-host
+for NODE in worker-0 worker-1; do
+system interface-network-assign $NODE mgmt0 cluster-host
 done
 
-#. Configure data interfaces for compute nodes. Use the DATA port names, for
+#. Configure data interfaces for worker nodes. Use the DATA port names, for
 example eth0, that are applicable to your deployment environment.
 
 .. important::
@@ -117,11 +117,11 @@ Configure compute nodes
 system datanetwork-add ${PHYSNET0} vlan
 system datanetwork-add ${PHYSNET1} vlan
 
-for COMPUTE in compute-0 compute-1; do
-echo "Configuring interface for: $COMPUTE"
+for NODE in worker-0 worker-1; do
+echo "Configuring interface for: $NODE"
 set -ex
-system host-port-list ${COMPUTE} --nowrap > ${SPL}
-system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+system host-port-list ${NODE} --nowrap > ${SPL}
+system host-if-list -a ${NODE} --nowrap > ${SPIL}
 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -130,10 +130,10 @@ Configure compute nodes
 DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
-system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 set +ex
 done
 
@@ -146,12 +146,12 @@ OpenStack-specific host configuration
 **This step is required only if the StarlingX OpenStack application
 (stx-openstack) will be installed.**
 
-#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
+#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
 support of installing the stx-openstack manifest and helm-charts later.
 
 ::
 
-for NODE in compute-0 compute-1; do
+for NODE in worker-0 worker-1; do
 system host-label-assign $NODE openstack-compute-node=enabled
 system host-label-assign $NODE openvswitch=enabled
 system host-label-assign $NODE sriov=enabled
@@ -162,31 +162,31 @@ OpenStack-specific host configuration
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-echo "Configuring Nova local for: $COMPUTE"
-ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+for NODE in worker-0 worker-1; do
+echo "Configuring Nova local for: $NODE"
+ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
 PARTITION_SIZE=10
-NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
+NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
 NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-system host-lvg-add ${COMPUTE} nova-local
-system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+system host-lvg-add ${NODE} nova-local
+system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
 done
 
 
---------------------
-Unlock compute nodes
---------------------
+-------------------
+Unlock worker nodes
+-------------------
 
-Unlock compute nodes in order to bring them into service:
+Unlock worker nodes in order to bring them into service:
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-unlock $COMPUTE
+for NODE in worker-0 worker-1; do
+system host-unlock $NODE
 done
 
-The compute nodes will reboot to apply configuration changes and come into
+The worker nodes will reboot to apply configuration changes and come into
 service. This can take 5-10 minutes, depending on the performance of the host
 machine.
 
@@ -198,13 +198,13 @@ Configure controller-0
 
 DATA0IF=<DATA-0-PORT>
 DATA1IF=<DATA-1-PORT>
-export COMPUTE=controller-0
+export NODE=controller-0
 PHYSNET0='physnet0'
 PHYSNET1='physnet1'
 SPL=/tmp/tmp-system-port-list
 SPIL=/tmp/tmp-system-host-if-list
-system host-port-list ${COMPUTE} --nowrap > ${SPL}
-system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+system host-port-list ${NODE} --nowrap > ${SPL}
+system host-if-list -a ${NODE} --nowrap > ${SPIL}
 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -217,10 +217,10 @@ Configure controller-0
 system datanetwork-add ${PHYSNET0} vlan
 system datanetwork-add ${PHYSNET1} vlan
 
-system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 
 #. Add an OSD on controller-0 for Ceph. The following example adds an OSD
 to the `sdb` disk:
@@ -344,13 +344,13 @@ Configure controller-1
 
 DATA0IF=<DATA-0-PORT>
 DATA1IF=<DATA-1-PORT>
-export COMPUTE=controller-1
+export NODE=controller-1
 PHYSNET0='physnet0'
 PHYSNET1='physnet1'
 SPL=/tmp/tmp-system-port-list
 SPIL=/tmp/tmp-system-host-if-list
-system host-port-list ${COMPUTE} --nowrap > ${SPL}
-system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+system host-port-list ${NODE} --nowrap > ${SPL}
+system host-if-list -a ${NODE} --nowrap > ${SPIL}
 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -363,10 +363,10 @@ Configure controller-1
 system datanetwork-add ${PHYSNET0} vlan
 system datanetwork-add ${PHYSNET1} vlan
 
-system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 
 #. Add an OSD on controller-1 for Ceph:
 
@@ -401,19 +401,19 @@ OpenStack-specific host configuration
 
 ::
 
-export COMPUTE=controller-1
+export NODE=controller-1
 
 echo ">>> Getting root disk info"
-ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
 echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
 
 echo ">>>> Configuring nova-local"
 NOVA_SIZE=34
-NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
+NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
 NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-system host-lvg-add ${COMPUTE} nova-local
-system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+system host-lvg-add ${NODE} nova-local
+system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
 sleep 2
 
 -------------------
@@ -228,13 +228,13 @@ Configure controller-0
 
 DATA0IF=<DATA-0-PORT>
 DATA1IF=<DATA-1-PORT>
-export COMPUTE=controller-0
+export NODE=controller-0
 PHYSNET0='physnet0'
 PHYSNET1='physnet1'
 SPL=/tmp/tmp-system-port-list
 SPIL=/tmp/tmp-system-host-if-list
-system host-port-list ${COMPUTE} --nowrap > ${SPL}
-system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+system host-port-list ${NODE} --nowrap > ${SPL}
+system host-if-list -a ${NODE} --nowrap > ${SPIL}
 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -247,10 +247,10 @@ Configure controller-0
 system datanetwork-add ${PHYSNET0} vlan
 system datanetwork-add ${PHYSNET1} vlan
 
-system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 
 #. Add an OSD on controller-0 for Ceph. The following example adds an OSD
 to the `sdb` disk:
@@ -316,7 +316,7 @@ OpenStack-specific host configuration
 
 Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
 default to automatically assigning 1 vSwitch core for AIO controllers and 2
-vSwitch cores for computes.
+vSwitch cores for compute-labeled worker nodes.
 
 When using OVS-DPDK, configure vSwitch memory per NUMA node with the following
 command:
@@ -329,7 +329,7 @@ OpenStack-specific host configuration
 
 ::
 
-system host-memory-modify -f vswitch -1G 1 compute-0 0
+system host-memory-modify -f vswitch -1G 1 worker-0 0
 
 VMs created in an OVS-DPDK environment must be configured to use huge pages
 to enable networking and must use a flavor with property: hw:mem_page_size=large
@@ -344,32 +344,32 @@ OpenStack-specific host configuration
 
 ::
 
-system host-memory-modify compute-0 0 -1G 10
+system host-memory-modify worker-0 0 -1G 10
 
 .. note::
 
 After controller-0 is unlocked, changing vswitch_type requires
-locking and unlocking all computes (and/or AIO Controllers) to
-apply the change.
+locking and unlocking all compute-labeled worker nodes (and/or AIO
+controllers) to apply the change.
 
 #. **For OpenStack only:** Set up disk partition for nova-local volume group,
 which is needed for stx-openstack nova ephemeral disks.
 
 ::
 
-export COMPUTE=controller-0
+export NODE=controller-0
 
 echo ">>> Getting root disk info"
-ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
 echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
 
 echo ">>>> Configuring nova-local"
 NOVA_SIZE=34
-NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
+NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
 NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-system host-lvg-add ${COMPUTE} nova-local
-system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+system host-lvg-add ${NODE} nova-local
+system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
 sleep 2
 
 .. incl-config-controller-0-openstack-specific-aio-simplex-end:
@@ -18,7 +18,7 @@ The recommended minimum hardware requirements for bare metal servers for various
 host types are:
 
 +-------------------------+-----------------------------+-----------------------------+
-| Minimum Requirement     | Controller Node             | Compute Node                |
+| Minimum Requirement     | Controller Node             | Worker Node                 |
 +=========================+=============================+=============================+
 | Number of servers       | 2                           | 2-10                        |
 +-------------------------+-----------------------------+-----------------------------+
@@ -242,7 +242,7 @@ OpenStack-specific host configuration
 
 Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
 default to automatically assigning 1 vSwitch core for AIO controllers and 2
-vSwitch cores for computes.
+vSwitch cores for compute-labeled worker nodes.
 
 When using OVS-DPDK, configure vSwitch memory per NUMA node with the following
 command:
@@ -255,7 +255,7 @@ OpenStack-specific host configuration
 
 ::
 
-system host-memory-modify -f vswitch -1G 1 compute-0 0
+system host-memory-modify -f vswitch -1G 1 worker-0 0
 
 VMs created in an OVS-DPDK environment must be configured to use huge pages
 to enable networking and must use a flavor with property: hw:mem_page_size=large
@@ -270,13 +270,13 @@ OpenStack-specific host configuration
 
 ::
 
-system host-memory-modify compute-0 0 -1G 10
+system host-memory-modify worker-0 0 -1G 10
 
 .. note::
 
 After controller-0 is unlocked, changing vswitch_type requires
-locking and unlocking all computes (and/or AIO Controllers) to
-apply the change.
+locking and unlocking all compute-labeled worker nodes (and/or AIO
+controllers) to apply the change.
 
 .. incl-config-controller-0-storage-end:
 
@@ -293,9 +293,9 @@ Unlock controller-0 in order to bring it into service:
 Controller-0 will reboot in order to apply configuration changes and come into
 service. This can take 5-10 minutes, depending on the performance of the host machine.
 
---------------------------------------------------
-Install software on controller-1 and compute nodes
---------------------------------------------------
+-------------------------------------------------
+Install software on controller-1 and worker nodes
+-------------------------------------------------
 
 #. Power on the controller-1 server and force it to network boot with the
 appropriate BIOS boot options for your particular server.
@@ -325,25 +325,24 @@ Install software on controller-1 and compute nodes
 This initiates the install of software on controller-1.
 This can take 5-10 minutes, depending on the performance of the host machine.
 
-#. While waiting for the previous step to complete, power on the compute-0 and
-compute-1 servers. Set the personality to 'worker' and assign a unique
-hostname for each.
+#. While waiting for the previous step to complete, power on the worker nodes.
+Set the personality to 'worker' and assign a unique hostname for each.
 
-For example, power on compute-0 and wait for the new host (hostname=None) to
+For example, power on worker-0 and wait for the new host (hostname=None) to
 be discovered by checking 'system host-list':
 
 ::
 
-system host-update 3 personality=worker hostname=compute-0
+system host-update 3 personality=worker hostname=worker-0
 
-Repeat for compute-1. Power on compute-1 and wait for the new host (hostname=None) to
+Repeat for worker-1. Power on worker-1 and wait for the new host (hostname=None) to
 be discovered by checking 'system host-list':
 
 ::
 
-system host-update 4 personality=worker hostname=compute-1
+system host-update 4 personality=worker hostname=worker-1
 
-#. Wait for the software installation on controller-1, compute-0, and compute-1 to
+#. Wait for the software installation on controller-1, worker-0, and worker-1 to
 complete, for all servers to reboot, and for all to show as locked/disabled/online in
 'system host-list'.
 
@@ -356,8 +355,8 @@ Install software on controller-1 and compute nodes
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | locked         | disabled    | online       |
-| 3  | compute-0    | compute     | locked         | disabled    | online       |
-| 4  | compute-1    | compute     | locked         | disabled    | online       |
+| 3  | worker-0     | worker      | locked         | disabled    | online       |
+| 4  | worker-1     | worker      | locked         | disabled    | online       |
 +----+--------------+-------------+----------------+-------------+--------------+
 
 ----------------------
@@ -417,20 +416,20 @@ machine.
 
 .. incl-unlock-controller-1-end:
 
------------------------
-Configure compute nodes
------------------------
+----------------------
+Configure worker nodes
+----------------------
 
-#. Add the third Ceph monitor to compute-0:
+#. Add the third Ceph monitor to a worker node:
 
 (The first two Ceph monitors are automatically assigned to controller-0 and
 controller-1.)
 
 ::
 
-system ceph-mon-add compute-0
+system ceph-mon-add worker-0
 
-#. Wait for the compute node monitor to complete configuration:
+#. Wait for the worker node monitor to complete configuration:
 
 ::
 
@@ -442,21 +441,21 @@ Configure compute nodes
 +--------------------------------------+-------+--------------+------------+------+
 | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
 | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
-| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
 +--------------------------------------+-------+--------------+------------+------+
 
-#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+#. Assign the cluster-host network to the MGMT interface for the worker nodes:
 
 (Note that the MGMT interfaces are partially set up automatically by the
 network install procedure.)
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system interface-network-assign $COMPUTE mgmt0 cluster-host
+for NODE in worker-0 worker-1; do
+system interface-network-assign $NODE mgmt0 cluster-host
 done
 
-#. Configure data interfaces for compute nodes. Use the DATA port names, for
+#. Configure data interfaces for worker nodes. Use the DATA port names, for
 example eth0, that are applicable to your deployment environment.
 
 .. important::
@@ -472,8 +471,8 @@ Configure compute nodes
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-label-assign ${COMPUTE} sriovdp=enabled
+for NODE in worker-0 worker-1; do
+system host-label-assign ${NODE} sriovdp=enabled
 done
 
 * If planning on running DPDK in containers on this host, configure the number
@@ -481,9 +480,9 @@ Configure compute nodes
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-memory-modify ${COMPUTE} 0 -1G 100
-system host-memory-modify ${COMPUTE} 1 -1G 100
+for NODE in worker-0 worker-1; do
+system host-memory-modify ${NODE} 0 -1G 100
+system host-memory-modify ${NODE} 1 -1G 100
 done
 
 For both Kubernetes and OpenStack:
@@ -502,11 +501,11 @@ Configure compute nodes
 system datanetwork-add ${PHYSNET0} vlan
 system datanetwork-add ${PHYSNET1} vlan
 
-for COMPUTE in compute-0 compute-1; do
-echo "Configuring interface for: $COMPUTE"
+for NODE in worker-0 worker-1; do
+echo "Configuring interface for: $NODE"
 set -ex
-system host-port-list ${COMPUTE} --nowrap > ${SPL}
-system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+system host-port-list ${NODE} --nowrap > ${SPL}
+system host-if-list -a ${NODE} --nowrap > ${SPIL}
 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -515,10 +514,10 @@ Configure compute nodes
 DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
-system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 set +ex
 done
 
@@ -531,12 +530,12 @@ OpenStack-specific host configuration
 **This step is required only if the StarlingX OpenStack application
 (stx-openstack) will be installed.**
 
-#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
+#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
 support of installing the stx-openstack manifest and helm-charts later.
 
 ::
 
-for NODE in compute-0 compute-1; do
+for NODE in worker-0 worker-1; do
 system host-label-assign $NODE openstack-compute-node=enabled
 system host-label-assign $NODE openvswitch=enabled
 system host-label-assign $NODE sriov=enabled
@@ -547,30 +546,30 @@ OpenStack-specific host configuration
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-echo "Configuring Nova local for: $COMPUTE"
-ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+for NODE in worker-0 worker-1; do
+echo "Configuring Nova local for: $NODE"
+ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
 PARTITION_SIZE=10
-NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
+NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
 NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-system host-lvg-add ${COMPUTE} nova-local
-system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+system host-lvg-add ${NODE} nova-local
+system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
 done
 
 --------------------
-Unlock compute nodes
+Unlock worker nodes
 --------------------
 
-Unlock compute nodes in order to bring them into service:
+Unlock worker nodes in order to bring them into service:
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-unlock $COMPUTE
+for NODE in worker-0 worker-1; do
+system host-unlock $NODE
 done
 
-The compute nodes will reboot in order to apply configuration changes and come into
+The worker nodes will reboot in order to apply configuration changes and come into
 service. This can take 5-10 minutes, depending on the performance of the host machine.
 
 ----------------------------
@@ -18,7 +18,7 @@ The recommended minimum hardware requirements for bare metal servers for various
 host types are:
 
 +---------------------+---------------------------+-----------------------+-----------------------+
-| Minimum Requirement | Controller Node           | Storage Node          | Compute Node          |
+| Minimum Requirement | Controller Node           | Storage Node          | Worker Node           |
 +=====================+===========================+=======================+=======================+
 | Number of servers   | 2                         | 2-9                   | 2-100                 |
 +---------------------+---------------------------+-----------------------+-----------------------+
@@ -54,9 +54,9 @@ Unlock controller-0 in order to bring it into service:
 Controller-0 will reboot in order to apply configuration changes and come into
 service. This can take 5-10 minutes, depending on the performance of the host machine.
 
-------------------------------------------------------------------
-Install software on controller-1, storage nodes, and compute nodes
-------------------------------------------------------------------
+-----------------------------------------------------------------
+Install software on controller-1, storage nodes, and worker nodes
+-----------------------------------------------------------------
 
 #. Power on the controller-1 server and force it to network boot with the
 appropriate BIOS boot options for your particular server.
@@ -107,28 +107,27 @@ Install software on controller-1, storage nodes, and compute nodes
 This initiates the software installation on storage-0 and storage-1.
 This can take 5-10 minutes, depending on the performance of the host machine.
 
-#. While waiting for the previous step to complete, power on the compute-0 and
-compute-1 servers. Set the personality to 'worker' and assign a unique
-hostname for each.
+#. While waiting for the previous step to complete, power on the worker nodes.
+Set the personality to 'worker' and assign a unique hostname for each.
 
-For example, power on compute-0 and wait for the new host (hostname=None) to
+For example, power on worker-0 and wait for the new host (hostname=None) to
 be discovered by checking 'system host-list':
 
 ::
 
-system host-update 5 personality=worker hostname=compute-0
+system host-update 5 personality=worker hostname=worker-0
 
-Repeat for compute-1. Power on compute-1 and wait for the new host
+Repeat for worker-1. Power on worker-1 and wait for the new host
 (hostname=None) to be discovered by checking 'system host-list':
 
 ::
 
-system host-update 6 personality=worker hostname=compute-1
+system host-update 6 personality=worker hostname=worker-1
 
-This initiates the install of software on compute-0 and compute-1.
+This initiates the install of software on worker-0 and worker-1.
 
 #. Wait for the software installation on controller-1, storage-0, storage-1,
-compute-0, and compute-1 to complete, for all servers to reboot, and for all to
+worker-0, and worker-1 to complete, for all servers to reboot, and for all to
 show as locked/disabled/online in 'system host-list'.
 
 ::
@@ -141,8 +140,8 @@ Install software on controller-1, storage nodes, and compute nodes
 | 2  | controller-1 | controller  | locked         | disabled    | online       |
 | 3  | storage-0    | storage     | locked         | disabled    | online       |
 | 4  | storage-1    | storage     | locked         | disabled    | online       |
-| 5  | compute-0    | compute     | locked         | disabled    | online       |
-| 6  | compute-1    | compute     | locked         | disabled    | online       |
+| 5  | worker-0     | worker      | locked         | disabled    | online       |
+| 6  | worker-1     | worker      | locked         | disabled    | online       |
 +----+--------------+-------------+----------------+-------------+--------------+
 
 ----------------------
@@ -172,8 +171,8 @@ Configure storage nodes
 
 ::
 
-for COMPUTE in storage-0 storage-1; do
-system interface-network-assign $COMPUTE mgmt0 cluster-host
+for NODE in storage-0 storage-1; do
+system interface-network-assign $NODE mgmt0 cluster-host
 done
 
 #. Add OSDs to storage-0. The following example adds OSDs to the `sdb` disk:
@@ -222,22 +221,22 @@ The storage nodes will reboot in order to apply configuration changes and come
 into service. This can take 5-10 minutes, depending on the performance of the
 host machine.
 
------------------------
-Configure compute nodes
------------------------
+----------------------
+Configure worker nodes
+----------------------
 
-#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+#. Assign the cluster-host network to the MGMT interface for the worker nodes:
 
 (Note that the MGMT interfaces are partially set up automatically by the
 network install procedure.)
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system interface-network-assign $COMPUTE mgmt0 cluster-host
+for NODE in worker-0 worker-1; do
+system interface-network-assign $NODE mgmt0 cluster-host
 done
 
-#. Configure data interfaces for compute nodes. Use the DATA port names, for
+#. Configure data interfaces for worker nodes. Use the DATA port names, for
 example eth0, that are applicable to your deployment environment.
 
 .. important::
@@ -253,8 +252,8 @@ Configure compute nodes
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-label-assign ${COMPUTE} sriovdp=enabled
+for NODE in worker-0 worker-1; do
+system host-label-assign ${NODE} sriovdp=enabled
 done
 
 * If planning on running DPDK in containers on this host, configure the number
@@ -262,9 +261,9 @@ Configure compute nodes
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-memory-modify ${COMPUTE} 0 -1G 100
-system host-memory-modify ${COMPUTE} 1 -1G 100
+for NODE in worker-0 worker-1; do
+system host-memory-modify ${NODE} 0 -1G 100
+system host-memory-modify ${NODE} 1 -1G 100
 done
 
 For both Kubernetes and OpenStack:
@@ -283,11 +282,11 @@ Configure compute nodes
 system datanetwork-add ${PHYSNET0} vlan
 system datanetwork-add ${PHYSNET1} vlan
 
-for COMPUTE in compute-0 compute-1; do
-echo "Configuring interface for: $COMPUTE"
+for NODE in worker-0 worker-1; do
+echo "Configuring interface for: $NODE"
 set -ex
-system host-port-list ${COMPUTE} --nowrap > ${SPL}
-system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+system host-port-list ${NODE} --nowrap > ${SPL}
+system host-if-list -a ${NODE} --nowrap > ${SPIL}
 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -296,10 +295,10 @@ Configure compute nodes
 DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
-system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 set +ex
 done
 
@@ -312,12 +311,12 @@ OpenStack-specific host configuration
 **This step is required only if the StarlingX OpenStack application
 (stx-openstack) will be installed.**
 
-#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
+#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
 support of installing the stx-openstack manifest and helm-charts later.
 
 ::
 
-for NODE in compute-0 compute-1; do
+for NODE in worker-0 worker-1; do
 system host-label-assign $NODE openstack-compute-node=enabled
 system host-label-assign $NODE openvswitch=enabled
 system host-label-assign $NODE sriov=enabled
@@ -328,30 +327,30 @@ OpenStack-specific host configuration
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-echo "Configuring Nova local for: $COMPUTE"
-ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+for NODE in worker-0 worker-1; do
+echo "Configuring Nova local for: $NODE"
+ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
 PARTITION_SIZE=10
-NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
+NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
 NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-system host-lvg-add ${COMPUTE} nova-local
-system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+system host-lvg-add ${NODE} nova-local
+system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
 done
 
---------------------
-Unlock compute nodes
---------------------
+-------------------
+Unlock worker nodes
+-------------------
 
-Unlock compute nodes in order to bring them into service:
+Unlock worker nodes in order to bring them into service:
 
 ::
 
-for COMPUTE in compute-0 compute-1; do
-system host-unlock $COMPUTE
+for NODE in worker-0 worker-1; do
+system host-unlock $NODE
 done
 
-The compute nodes will reboot in order to apply configuration changes and come
+The worker nodes will reboot in order to apply configuration changes and come
 into service. This can take 5-10 minutes, depending on the performance of the
 host machine.
 
@ -56,8 +56,8 @@ standard configuration, either:
|
|||||||
|
|
||||||
|
|
||||||
This guide assumes that you have a standard deployment installed and configured
|
This guide assumes that you have a standard deployment installed and configured
|
||||||
with 2x controllers and at least 1x compute node, with the StarlingX OpenStack
|
with 2x controllers and at least 1x compute-labeled worker node, with the
|
||||||
application (stx-openstack) applied.
|
StarlingX OpenStack application (stx-openstack) applied.
|
||||||
|
|
||||||
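A quick way to confirm these prerequisites from the active controller, using
the standard StarlingX CLI:

::

  # Hosts should show unlocked/enabled/available
  system host-list

  # stx-openstack should show the 'applied' status
  system application-list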
.. toctree::
   :maxdepth: 1

@@ -23,15 +23,15 @@ For controller nodes:
* Additional NIC port on both controller nodes for connecting to the
  ironic-provisioning-net.

For worker nodes:

* If using a flat data network for the Ironic provisioning network, an
  additional NIC port on one of the worker nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network
  and simply add the new data network to an existing interface on the worker
  node (see the sketch after this list).

* Additional switch ports / configuration for new ports on controller, worker,
  and Ironic nodes, for connectivity to the Ironic provisioning network.
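The VLAN alternative reuses the same commands shown later in this guide; only
the data network type changes. A minimal sketch, assuming worker-0 already has
a data interface named data0 (the interface name is illustrative):

::

  # Create the Ironic provisioning data network as a VLAN data network
  system datanetwork-add ironic-data vlan

  # Attach it to the existing data interface on the worker node
  system interface-datanetwork-assign worker-0 data0 ironic-data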
-----------------------------------

@@ -113,14 +113,14 @@ Configure Ironic network on deployed standard StarlingX
   # Unlock the node to apply changes
   system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute-labeled
   worker nodes and assign it to the Ironic data network. This example uses
   the data network `ironic-data` that was named in a previous step.

   ::

     system interface-datanetwork-assign worker-0 eno1 ironic-data
     system host-if-modify -n ironicdata -c data worker-0 eno1

****************************
Generate user Helm overrides
****************************

@@ -1,16 +1,16 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers, with each server providing all three cloud functions
(controller, worker, and storage).

An AIO-DX configuration provides the following benefits:

* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple containers or virtual machines on a
  single pair of physical servers
* High availability (HA) services run on the controller function across two
  physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two servers
* Containers or virtual machines scheduled on both worker functions
* Protection against overall server hardware fault, where

  * All controller HA services go active on the remaining healthy server

@@ -1,10 +1,10 @@
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, worker, and storage) on a single server with the
following benefits:

* Requires only a small amount of cloud processing and storage power
* Application consolidation using multiple containers or virtual machines on a
  single physical server
* A storage backend solution using a single-node CEPH deployment

.. figure:: ../figures/starlingx-deployment-options-simplex.png

@@ -1,19 +1,19 @@
The Standard with Controller Storage deployment option provides two high
availability (HA) controller nodes and a pool of up to 10 worker nodes.

A Standard with Controller Storage configuration provides the following benefits:

* A pool of up to 10 worker nodes
* High availability (HA) services run across the controller nodes in either
  active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
  controller servers
* Protection against overall controller and worker node failure, where

  * On overall controller node failure, all controller HA services go active on
    the remaining healthy controller node
  * On overall worker node failure, virtual machines and containers are
    recovered on the remaining healthy worker nodes

.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
   :scale: 50%

@@ -1,9 +1,9 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, worker, and storage nodes.

A Standard with Dedicated Storage configuration provides the following benefits:

* A pool of up to 100 worker nodes
* A 2x node high availability (HA) controller cluster with HA services running
  across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a 2x- to 9x-node HA CEPH storage cluster

@@ -192,13 +192,13 @@ On virtual controller-0:
  DATA0IF=eth1000
  DATA1IF=eth1001
  export NODE=controller-0
  PHYSNET0='physnet0'
  PHYSNET1='physnet1'
  # Cache the host's port and interface listings to temp files
  SPL=/tmp/tmp-system-port-list
  SPIL=/tmp/tmp-system-host-if-list
  system host-port-list ${NODE} --nowrap > ${SPL}
  system host-if-list -a ${NODE} --nowrap > ${SPIL}
  # Resolve PCI addresses and port UUIDs for the two data interfaces
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')

@@ -211,10 +211,10 @@ On virtual controller-0:
|
system datanetwork-add ${PHYSNET0} vlan
|
||||||
system datanetwork-add ${PHYSNET1} vlan
|
system datanetwork-add ${PHYSNET1} vlan
|
||||||
|
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
|
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
|
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||||
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
|
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||||
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
|
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||||
|
|
||||||
#. Add an OSD on controller-0 for Ceph:
|
#. Add an OSD on controller-0 for Ceph:
|
||||||
|
|
||||||
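   The OSD commands themselves fall outside this excerpt; the usual pattern is
   sketched below, assuming the OSD goes on the host's second disk (the
   /dev/sdb selection is illustrative):

   ::

     # Pick the UUID of the disk to use, then register it as an OSD
     system host-disk-list controller-0
     OSD_DISK_UUID=$(system host-disk-list controller-0 --nowrap | grep /dev/sdb | awk '{print $2}')
     system host-stor-add controller-0 ${OSD_DISK_UUID}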
@@ -333,13 +333,13 @@ On virtual controller-0:

  DATA0IF=eth1000
  DATA1IF=eth1001
  export NODE=controller-1
  PHYSNET0='physnet0'
  PHYSNET1='physnet1'
  SPL=/tmp/tmp-system-port-list
  SPIL=/tmp/tmp-system-host-if-list
  system host-port-list ${NODE} --nowrap > ${SPL}
  system host-if-list -a ${NODE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')

@@ -352,10 +352,10 @@ On virtual controller-0:
  system datanetwork-add ${PHYSNET0} vlan
  system datanetwork-add ${PHYSNET1} vlan

  system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
  system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
  system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

@@ -390,19 +390,19 @@ OpenStack-specific host configuration
::

  export NODE=controller-1

  echo ">>> Getting root disk info"
  ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
  echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

  echo ">>>> Configuring nova-local"
  NOVA_SIZE=34
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${NODE} nova-local
  system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}

-------------------
Unlock controller-1
-------------------
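The unlock itself is the same command shown earlier in this guide:

::

  system host-unlock controller-1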
@@ -182,13 +182,13 @@ On virtual controller-0:
  DATA0IF=eth1000
  DATA1IF=eth1001
  export NODE=controller-0
  PHYSNET0='physnet0'
  PHYSNET1='physnet1'
  SPL=/tmp/tmp-system-port-list
  SPIL=/tmp/tmp-system-host-if-list
  system host-port-list ${NODE} --nowrap > ${SPL}
  system host-if-list -a ${NODE} --nowrap > ${SPIL}
  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')

@@ -201,10 +201,10 @@ On virtual controller-0:
  system datanetwork-add ${PHYSNET0} vlan
  system datanetwork-add ${PHYSNET1} vlan

  system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
  system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
  system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
  system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

@@ -248,19 +248,19 @@ OpenStack-specific host configuration
::

  export NODE=controller-0

  echo ">>> Getting root disk info"
  ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
  ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
  echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

  echo ">>>> Configuring nova-local"
  NOVA_SIZE=34
  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
  system host-lvg-add ${NODE} nova-local
  system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
  sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

@@ -223,9 +223,9 @@ Unlock virtual controller-0 in order to bring it into service:
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

-------------------------------------------------
Install software on controller-1 and worker nodes
-------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'controllerstorage-controller-1'. It will automatically attempt to network

@@ -282,7 +282,7 @@ Install software on controller-1 and worker nodes
   ::

     system host-update 3 personality=worker hostname=worker-0

   Repeat for 'controllerstorage-worker-1'. On the host:

@@ -295,9 +295,9 @@ Install software on controller-1 and worker nodes
   ::

     system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, worker-0, and worker-1 to
   complete, for all virtual servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

@@ -309,8 +309,8 @@ Install software on controller-1 and worker nodes
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | controller-1 | controller  | locked         | disabled    | online       |
   | 3  | worker-0     | worker      | locked         | disabled    | online       |
   | 4  | worker-1     | worker      | locked         | disabled    | online       |
   +----+--------------+-------------+----------------+-------------+--------------+

----------------------

@@ -365,22 +365,22 @@ service. This can take 5-10 minutes, depending on the performance of the host machine.
.. incl-unlock-controller-1-virt-controller-storage-end:

----------------------
Configure worker nodes
----------------------

On virtual controller-0:

#. Add the third Ceph monitor to a worker node:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

     system ceph-mon-add worker-0

#. Wait for the worker node monitor to complete configuration:

   ::

@@ -392,21 +392,21 @@ On virtual controller-0:
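     # Output of the ceph-mon listing; the command itself (likely
     # 'system ceph-mon-list') falls in the elided part of this excerpt.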
     +--------------------------------------+-------+--------------+------------+------+
     | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
     | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
     | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
     +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the worker nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

     for NODE in worker-0 worker-1; do
       system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Configure data interfaces for worker nodes.

   .. important::

@@ -433,11 +433,11 @@ On virtual controller-0:
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for NODE in worker-0 worker-1; do
       echo "Configuring interface for: $NODE"
       set -ex
       system host-port-list ${NODE} --nowrap > ${SPL}
       system host-if-list -a ${NODE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')

@@ -446,10 +446,10 @@ On virtual controller-0:
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
       set +ex
     done

@@ -462,12 +462,12 @@ OpenStack-specific host configuration
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     for NODE in worker-0 worker-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

@@ -478,32 +478,32 @@ OpenStack-specific host configuration
   ::

     for NODE in worker-0 worker-1; do
       echo "Configuring Nova local for: $NODE"
       ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${NODE} nova-local
       system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done

-------------------
Unlock worker nodes
-------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual worker nodes to bring them into service:

::

  for NODE in worker-0 worker-1; do
    system host-unlock $NODE
  done

The worker nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:

@@ -71,9 +71,9 @@ Unlock virtual controller-0 in order to bring it into service:
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

-----------------------------------------------------------------
Install software on controller-1, storage nodes, and worker nodes
-----------------------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'dedicatedstorage-controller-1'. It will automatically attempt to network

@@ -162,7 +162,7 @@ Install software on controller-1, storage nodes, and worker nodes
   ::

     system host-update 5 personality=worker hostname=worker-0

   Repeat for 'dedicatedstorage-worker-1'. On the host:

@@ -175,12 +175,12 @@ Install software on controller-1, storage nodes, and worker nodes
   ::

     system host-update 6 personality=worker hostname=worker-1

   This initiates software installation on worker-0 and worker-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   worker-0, and worker-1 to complete, for all virtual servers to reboot, and
   for all to show as locked/disabled/online in 'system host-list'.

   ::

@@ -193,8 +193,8 @@ Install software on controller-1, storage nodes, and worker nodes
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | storage-0    | storage     | locked         | disabled    | online       |
     | 4  | storage-1    | storage     | locked         | disabled    | online       |
     | 5  | worker-0     | worker      | locked         | disabled    | online       |
     | 6  | worker-1     | worker      | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------

@@ -225,8 +225,8 @@ On virtual controller-0:
   ::

     for NODE in storage-0 storage-1; do
       system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Add OSDs to storage-0:

@@ -272,24 +272,24 @@ Unlock virtual storage nodes in order to bring them into service:
The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure worker nodes
----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the worker nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

     for NODE in worker-0 worker-1; do
       system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Configure data interfaces for worker nodes.

   .. important::

@@ -319,11 +319,11 @@ On virtual controller-0:
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for NODE in worker-0 worker-1; do
       echo "Configuring interface for: $NODE"
       set -ex
       system host-port-list ${NODE} --nowrap > ${SPL}
       system host-if-list -a ${NODE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')

@@ -332,10 +332,10 @@ On virtual controller-0:
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
       set +ex
     done

@@ -348,12 +348,12 @@ OpenStack-specific host configuration
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     for NODE in worker-0 worker-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

@@ -364,20 +364,20 @@ OpenStack-specific host configuration
   ::

     for NODE in worker-0 worker-1; do
       echo "Configuring Nova local for: $NODE"
       ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${NODE} nova-local
       system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done

-------------------
Unlock worker nodes
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:

@@ -478,7 +478,7 @@ Build installer
---------------

To get your StarlingX ISO ready to use, you must create the initialization
files used to boot the ISO, additional controllers, and worker nodes.

**NOTE:** You only need this procedure during your first build and
every time you upgrade the kernel.

@@ -355,8 +355,8 @@ configuration.
   controller-0:/$ sudo sw-patch query-hosts
   Hostname     IP Address     Patch Current Reboot Required Release State
   ============ ============== ============= =============== ======= =====
   worker-0     192.178.204.7  Yes           No              19.09   idle
   worker-1     192.178.204.9  Yes           No              19.09   idle
   controller-0 192.178.204.3  Yes           No              19.09   idle
   controller-1 192.178.204.4  Yes           No              19.09   idle
   storage-0    192.178.204.12 Yes           No              19.09   idle

@@ -390,8 +390,8 @@ configuration.
   controller-0:~$ sudo sw-patch query-hosts
   Hostname     IP Address     Patch Current Reboot Required Release State
   ============ ============== ============= =============== ======= =====
   worker-0     192.178.204.7  No            No              19.09   idle
   worker-1     192.178.204.9  No            No              19.09   idle
   controller-0 192.178.204.3  No            No              19.09   idle
   controller-1 192.178.204.4  No            No              19.09   idle
   storage-0    192.178.204.12 No            No              19.09   idle

@@ -416,8 +416,8 @@ configuration.
   controller-0:~$ sudo sw-patch query-hosts
   Hostname     IP Address     Patch Current Reboot Required Release State
   ============ ============== ============= =============== ======= =====
   worker-0     192.178.204.7  No            No              19.09   idle
   worker-1     192.178.204.9  No            No              19.09   idle
   controller-0 192.178.204.3  Yes           No              19.09   idle
   controller-1 192.178.204.4  No            No              19.09   idle
   storage-0    192.178.204.12 No            No              19.09   idle

@@ -430,10 +430,10 @@ configuration.
   controller-0:~$ sudo sw-patch host-install controller-1
   ....
   Installation was successful.
   controller-0:~$ sudo sw-patch host-install worker-0
   ....
   Installation was successful.
   controller-0:~$ sudo sw-patch host-install worker-1
   ....
   Installation was successful.
   controller-0:~$ sudo sw-patch host-install storage-0

@@ -459,8 +459,8 @@ configuration.
   controller-0:~$ sudo sw-patch query-hosts
   Hostname     IP Address     Patch Current Reboot Required Release State
   ============ ============== ============= =============== ======= =====
   worker-0     192.178.204.7  Yes           No              19.09   idle
   worker-1     192.178.204.9  Yes           No              19.09   idle
   controller-0 192.178.204.3  Yes           No              19.09   idle
   controller-1 192.178.204.4  Yes           No              19.09   idle
   storage-0    192.178.204.12 Yes           No              19.09   idle

@@ -19,12 +19,12 @@ All-in-one Duplex (*Duplex* or *AIO-DX*)
Standard with Controller Storage
   This configuration allows for 1 or 2 controller nodes that also provide
   storage for the edge cloud. The configuration also allows for between 1 and
   99 worker nodes to run application workloads. This configuration works best
   for edge clouds with smaller storage needs.

Standard with Dedicated Storage
   This configuration has dedicated storage nodes in addition to the controller
   and worker nodes. This configuration is used for edge clouds that require
   larger amounts of storage.

Standard with Ironic

@@ -6,21 +6,23 @@ The following definitions describe key concepts and terminology that are
commonly used in the StarlingX community and in this documentation.

All-in-one Controller Node
   A single physical node that provides a controller function, worker function,
   and storage function.

Bare Metal
   A node running without hypervisors (for example, application workloads run
   directly on the operating system, which runs directly on the hardware).

Worker
   A node within a StarlingX edge cloud that is dedicated to running application
   workloads. There can be 0 to 99 worker nodes in a StarlingX edge cloud.

   - Runs virtual switch for realizing virtual networks.
   - Provides L3 routing and NET services.

   In a configuration running OpenStack, a worker node is labeled as 'compute'
   and may be referred to as a compute node.
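   For example, the compute label is applied with the same command used in the
   installation guides:

   ::

     system host-label-assign worker-0 openstack-compute-node=enabled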
Controller
   A node within a StarlingX edge cloud that runs the cloud management software
   (*control plane*). There can be either one or two controller nodes in a

@@ -36,7 +38,7 @@ Data Network(s)
   Networks on which the OpenStack / Neutron provider networks are realized and
   become the VM tenant networks.

   Only worker-type and all-in-one-type nodes are required to be connected to
   the data network(s). These node types require one or more interface(s) on the
   data network(s).

@@ -24,11 +24,11 @@ You can pre-install the ``intel-gpu-plugin`` daemonset as follows:
#. Assign the ``intelgpu`` label to each node that should have the Intel GPU
   plugin enabled. This will make any GPU devices on a given node available for
   scheduling to containers. The following example assigns the ``intelgpu``
   label to the worker-0 node.

   ::

     $ NODE=worker-0
     $ system host-lock $NODE
     $ system host-label-assign $NODE intelgpu=enabled
     $ system host-unlock $NODE
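   Once the node is unlocked, a hedged way to confirm the plugin is
   advertising GPU resources, assuming it registers the standard
   ``gpu.intel.com/i915`` resource name:

   ::

     $ kubectl describe node worker-0 | grep gpu.intel.com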
@@ -29,11 +29,11 @@ section describes the steps to enable the Intel QAT device plugin for

discovering and advertising QAT VF resources to Kubernetes host.

#. Verify QuickAssist SR-IOV virtual functions are configured on a specified
   node after StarlingX is installed. This example uses the ``worker-0`` node.

   ::

     $ ssh worker-0
     $ for i in 0442 0443 37c9 19e3; do lspci -d 8086:$i; done

   .. note::
@@ -41,11 +41,11 @@ discovering and advertising QAT VF resources to Kubernetes host.

      The Intel QAT device plugin only supports QAT VF resources in the current
      release.

#. Assign the ``intelqat`` label to the node (worker-0 in this example).

   ::

     $ NODE=worker-0
     $ system host-lock $NODE
     $ system host-label-assign $NODE intelqat=enabled
     $ system host-unlock $NODE
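   As with the GPU plugin, a hedged post-unlock check, assuming the QAT plugin
   advertises the ``qat.intel.com/generic`` resource name:

   ::

     $ kubectl describe node worker-0 | grep qat.intel.com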