Refresh R4 base install guides
Refresh the baseline install guide content for R4 with terminology updates
and misc updates/corrections from R3.

Change-Id: If25aecef711882d3f65d697fcae412d4a1337a1b
Signed-off-by: Kristal Dale <kristal.dale@intel.com>

parent 1d6fc2d7bd
commit a5541b8cd3
@@ -9,7 +9,7 @@ Overview

 .. include:: ../desc_aio_duplex.txt

 The bare metal AIO-DX deployment configuration may be extended with up to four
-worker/compute nodes (not shown in the diagram). Installation instructions for
+worker nodes (not shown in the diagram). Installation instructions for
 these additional nodes are described in :doc:`aio_duplex_extend`.

 .. include:: ../ipv6_note.txt
@@ -1,26 +1,25 @@
-================================================
-Extend Capacity with Worker and/or Compute Nodes
-================================================
+=================================
+Extend Capacity with Worker Nodes
+=================================

-This section describes the steps to extend capacity with worker and/or compute
-nodes on a **StarlingX R4.0 bare metal All-in-one Duplex** deployment
-configuration.
+This section describes the steps to extend capacity with worker nodes on a
+**StarlingX R4.0 bare metal All-in-one Duplex** deployment configuration.

 .. contents::
    :local:
    :depth: 1

----------------------------------
-Install software on compute nodes
----------------------------------
+--------------------------------
+Install software on worker nodes
+--------------------------------

-#. Power on the compute servers and force them to network boot with the
+#. Power on the worker node servers and force them to network boot with the
    appropriate BIOS boot options for your particular server.

-#. As the compute servers boot, a message appears on their console instructing
+#. As the worker nodes boot, a message appears on their console instructing
    you to configure the personality of the node.

-#. On the console of controller-0, list hosts to see newly discovered compute
+#. On the console of controller-0, list hosts to see newly discovered worker node
    hosts (hostname=None):

    ::
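The listing this step refers to is unchanged context shown at the top of the next hunk; it comes from the host inventory command the guide itself quotes as 'system host-list'. A minimal sketch of the step:

::

  system host-list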
@@ -35,18 +34,19 @@ Install software on compute nodes
 | 4  | None         | None        | locked         | disabled    | offline      |
 +----+--------------+-------------+----------------+-------------+--------------+

-#. Using the host id, set the personality of this host to 'controller':
+#. Using the host id, set the personality of this host to 'worker':

    ::

-     system host-update 3 personality=worker hostname=compute-0
-     system host-update 4 personality=worker hostname=compute-1
+     system host-update 3 personality=worker hostname=worker-0
+     system host-update 4 personality=worker hostname=worker-1

-   This initiates the install of software on compute nodes.
+   This initiates the install of software on worker nodes.
    This can take 5-10 minutes, depending on the performance of the host machine.

-#. Wait for the install of software on the computes to complete, the computes to
-   reboot and to both show as locked/disabled/online in 'system host-list'.
+#. Wait for the install of software on the worker nodes to complete, for the
+   worker nodes to reboot, and for both to show as locked/disabled/online in
+   'system host-list'.

    ::
@@ -56,26 +56,26 @@ Install software on compute nodes
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
-| 3  | compute-0    | compute     | locked         | disabled    | online       |
-| 4  | compute-1    | compute     | locked         | disabled    | online       |
+| 3  | worker-0     | worker      | locked         | disabled    | online       |
+| 4  | worker-1     | worker      | locked         | disabled    | online       |
 +----+--------------+-------------+----------------+-------------+--------------+

------------------------
-Configure compute nodes
------------------------
+----------------------
+Configure worker nodes
+----------------------

-#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+#. Assign the cluster-host network to the MGMT interface for the worker nodes:

    (Note that the MGMT interfaces are partially set up automatically by the
    network install procedure.)

    ::

-     for COMPUTE in compute-0 compute-1; do
-       system interface-network-assign $COMPUTE mgmt0 cluster-host
+     for NODE in worker-0 worker-1; do
+       system interface-network-assign $NODE mgmt0 cluster-host
      done

-#. Configure data interfaces for compute nodes. Use the DATA port names, for
+#. Configure data interfaces for worker nodes. Use the DATA port names, for
    example eth0, that are applicable to your deployment environment.

    .. important::
@@ -117,11 +117,11 @@ Configure compute nodes
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

-     for COMPUTE in compute-0 compute-1; do
-       echo "Configuring interface for: $COMPUTE"
+     for NODE in worker-0 worker-1; do
+       echo "Configuring interface for: $NODE"
        set -ex
-       system host-port-list ${COMPUTE} --nowrap > ${SPL}
-       system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+       system host-port-list ${NODE} --nowrap > ${SPL}
+       system host-if-list -a ${NODE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -130,10 +130,10 @@ Configure compute nodes
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
-       system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-       system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-       system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-       system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done
@@ -146,12 +146,12 @@ OpenStack-specific host configuration
 **This step is required only if the StarlingX OpenStack application
 (stx-openstack) will be installed.**

-#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
+#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
    support of installing the stx-openstack manifest and helm-charts later.

    ::

-     for NODE in compute-0 compute-1; do
+     for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
@@ -162,31 +162,31 @@ OpenStack-specific host configuration

    ::

-     for COMPUTE in compute-0 compute-1; do
-       echo "Configuring Nova local for: $COMPUTE"
-       ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-       ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+     for NODE in worker-0 worker-1; do
+       echo "Configuring Nova local for: $NODE"
+       ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+       ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
-       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
+       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-       system host-lvg-add ${COMPUTE} nova-local
-       system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+       system host-lvg-add ${NODE} nova-local
+       system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done


---------------------
-Unlock compute nodes
---------------------
+-------------------
+Unlock worker nodes
+-------------------

-Unlock compute nodes in order to bring them into service:
+Unlock worker nodes in order to bring them into service:

 ::

-  for COMPUTE in compute-0 compute-1; do
-    system host-unlock $COMPUTE
+  for NODE in worker-0 worker-1; do
+    system host-unlock $NODE
   done

-The compute nodes will reboot to apply configuration changes and come into
+The worker nodes will reboot to apply configuration changes and come into
 service. This can take 5-10 minutes, depending on the performance of the host
 machine.
@@ -212,13 +212,13 @@ Configure controller-0

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
-     export COMPUTE=controller-0
+     export NODE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
-     system host-port-list ${COMPUTE} --nowrap > ${SPL}
-     system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+     system host-port-list ${NODE} --nowrap > ${SPL}
+     system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -231,10 +231,10 @@ Configure controller-0
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

-     system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-     system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-     system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-     system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+     system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+     system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+     system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+     system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

 #. Add an OSD on controller-0 for Ceph. The following example adds an OSD
    to the `sdb` disk:
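The OSD commands themselves are unchanged context outside this hunk. A sketch of what this step typically runs, assuming the `sdb` disk named above (the UUID lookup mirrors the grep/awk pattern used elsewhere in these guides):

::

  # Look up the UUID of the sdb disk on controller-0
  system host-disk-list controller-0
  # Add a Ceph OSD backed by that disk (replace <disk-uuid> with the value listed)
  system host-stor-add controller-0 <disk-uuid>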
@@ -362,13 +362,13 @@ Configure controller-1

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
-     export COMPUTE=controller-1
+     export NODE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
-     system host-port-list ${COMPUTE} --nowrap > ${SPL}
-     system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+     system host-port-list ${NODE} --nowrap > ${SPL}
+     system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -381,10 +381,10 @@ Configure controller-1
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

-     system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-     system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-     system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-     system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+     system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+     system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+     system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+     system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

 #. Add an OSD on controller-1 for Ceph:
@@ -423,19 +423,19 @@ OpenStack-specific host configuration

    ::

-     export COMPUTE=controller-1
+     export NODE=controller-1

      echo ">>> Getting root disk info"
-     ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-     ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+     ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+     ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
-     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
+     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-     system host-lvg-add ${COMPUTE} nova-local
-     system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+     system host-lvg-add ${NODE} nova-local
+     system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

 -------------------
@@ -158,8 +158,8 @@ Configure controller-0
      source /etc/platform/openrc

 #. Configure the OAM interface of controller-0 and specify the attached network
-   as "oam". Use the OAM port name, for example eth0, that is applicable to your
-   deployment environment:
+   as "oam". Use the OAM port name that is applicable to your deployment
+   environment, for example eth0:

    ::
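The commands under this `::` block fall outside the hunk. For reference, this step in the R4 guides follows the pattern sketched below (<OAM-PORT> is a placeholder for your port name):

::

  OAM_IF=<OAM-PORT>
  system host-if-modify controller-0 $OAM_IF -c platform
  system interface-network-assign controller-0 $OAM_IF oam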
@@ -219,13 +219,13 @@ Configure controller-0

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
-     export COMPUTE=controller-0
+     export NODE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
-     system host-port-list ${COMPUTE} --nowrap > ${SPL}
-     system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+     system host-port-list ${NODE} --nowrap > ${SPL}
+     system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -238,10 +238,10 @@ Configure controller-0
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

-     system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-     system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-     system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-     system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+     system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+     system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+     system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+     system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

 #. Add an OSD on controller-0 for Ceph. The following example adds an OSD
    to the `sdb` disk:
@@ -286,7 +286,9 @@ OpenStack-specific host configuration
      manifest.
    * Shares the core(s) assigned to the platform.

-   If you require better performance, OVS-DPDK should be used:
+   If you require better performance, OVS-DPDK (OVS with the Data Plane
+   Development Kit, which is supported only on bare metal hardware) should be
+   used:

    * Runs directly on the host (it is not containerized).
    * Requires that at least 1 core be assigned/dedicated to the vSwitch function.
@@ -300,8 +302,7 @@ OpenStack-specific host configuration
    Do not run any vSwitch directly on the host, instead, use the containerized
    OVS defined in the helm charts of stx-openstack manifest.

-   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
-   supported only on bare metal hardware), run the following command:
+   To deploy OVS-DPDK, run the following command:

    ::
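The command referenced by "run the following command" sits just past the hunk boundary; in these guides it is the vswitch_type setting, sketched here for reference:

::

  system modify --vswitch_type ovs-dpdk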
@@ -310,35 +311,60 @@ OpenStack-specific host configuration

    Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
    default to automatically assigning 1 vSwitch core for AIO controllers and 2
-   vSwitch cores for computes.
+   vSwitch cores for compute-labeled worker nodes.

-   When using OVS-DPDK, virtual machines must be configured to use a flavor with
-   property: hw:mem_page_size=large
+   When using OVS-DPDK, configure vSwitch memory per NUMA node with the following
+   command:
+
+   ::
+
+     system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
+
+   For example:
+
+   ::
+
+     system host-memory-modify -f vswitch -1G 1 worker-0 0
+
+   VMs created in an OVS-DPDK environment must be configured to use huge pages
+   to enable networking and must use a flavor with property: hw:mem_page_size=large
+
+   Configure the huge pages for VMs in an OVS-DPDK environment with the command:
+
+   ::
+
+     system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>
+
+   For example:
+
+   ::
+
+     system host-memory-modify worker-0 0 -1G 10

    .. note::

       After controller-0 is unlocked, changing vswitch_type requires
-      locking and unlocking all computes (and/or AIO Controllers) to
-      apply the change.
+      locking and unlocking all compute-labeled worker nodes (and/or AIO
+      controllers) to apply the change.

 #. **For OpenStack only:** Set up disk partition for nova-local volume group,
    which is needed for stx-openstack nova ephemeral disks.

    ::

-     export COMPUTE=controller-0
+     export NODE=controller-0

      echo ">>> Getting root disk info"
-     ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
-     ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
+     ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
+     ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
-     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
+     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
-     system host-lvg-add ${COMPUTE} nova-local
-     system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
+     system host-lvg-add ${NODE} nova-local
+     system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

 .. incl-config-controller-0-openstack-specific-aio-simplex-end:
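The hw:mem_page_size=large property mentioned in the hunk above is set on an OpenStack flavor; a sketch using the standard OpenStack client (the flavor name my-flavor is illustrative only):

::

  openstack flavor set my-flavor --property hw:mem_page_size=large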
@@ -18,7 +18,7 @@ The recommended minimum hardware requirements for bare metal servers for various
 host types are:

 +-------------------------+-----------------------------+-----------------------------+
-| Minimum Requirement     | Controller Node             | Compute Node                |
+| Minimum Requirement     | Controller Node             | Worker Node                 |
 +=========================+=============================+=============================+
 | Number of servers       | 2                           | 2-10                        |
 +-------------------------+-----------------------------+-----------------------------+
@@ -231,7 +231,9 @@ OpenStack-specific host configuration
      manifest.
    * Shares the core(s) assigned to the platform.

-   If you require better performance, OVS-DPDK should be used:
+   If you require better performance, OVS-DPDK (OVS with the Data Plane
+   Development Kit, which is supported only on bare metal hardware) should be
+   used:

    * Runs directly on the host (it is not containerized).
    * Requires that at least 1 core be assigned/dedicated to the vSwitch function.
@@ -245,8 +247,7 @@ OpenStack-specific host configuration
    Do not run any vSwitch directly on the host, instead, use the containerized
    OVS defined in the helm charts of stx-openstack manifest.

-   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
-   supported only on bare metal hardware), run the following command:
+   To deploy OVS-DPDK, run the following command:

    ::
@@ -255,16 +256,41 @@ OpenStack-specific host configuration

    Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
    default to automatically assigning 1 vSwitch core for AIO controllers and 2
-   vSwitch cores for computes.
+   vSwitch cores for compute-labeled worker nodes.

-   When using OVS-DPDK, Virtual Machines must be configured to use a flavor with
-   property: hw:mem_page_size=large.
+   When using OVS-DPDK, configure vSwitch memory per NUMA node with the following
+   command:
+
+   ::
+
+     system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
+
+   For example:
+
+   ::
+
+     system host-memory-modify -f vswitch -1G 1 worker-0 0
+
+   VMs created in an OVS-DPDK environment must be configured to use huge pages
+   to enable networking and must use a flavor with property: hw:mem_page_size=large
+
+   Configure the huge pages for VMs in an OVS-DPDK environment with the command:
+
+   ::
+
+     system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>
+
+   For example:
+
+   ::
+
+     system host-memory-modify worker-0 0 -1G 10

    .. note::

       After controller-0 is unlocked, changing vswitch_type requires
-      locking and unlocking all computes (and/or AIO controllers) to
-      apply the change.
+      locking and unlocking all compute-labeled worker nodes (and/or AIO
+      controllers) to apply the change.

 .. incl-config-controller-0-storage-end:
|
|||||||
Controller-0 will reboot in order to apply configuration changes and come into
|
Controller-0 will reboot in order to apply configuration changes and come into
|
||||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||||
|
|
||||||
--------------------------------------------------
|
-------------------------------------------------
|
||||||
Install software on controller-1 and compute nodes
|
Install software on controller-1 and worker nodes
|
||||||
--------------------------------------------------
|
-------------------------------------------------
|
||||||
|
|
||||||
#. Power on the controller-1 server and force it to network boot with the
|
#. Power on the controller-1 server and force it to network boot with the
|
||||||
appropriate BIOS boot options for your particular server.
|
appropriate BIOS boot options for your particular server.
|
||||||
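The unlock command itself is unchanged context outside the hunk above; it follows the same host-unlock pattern this diff applies to worker nodes elsewhere:

::

  system host-unlock controller-0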
@@ -313,25 +339,24 @@ Install software on controller-1 and compute nodes
    This initiates the install of software on controller-1.
    This can take 5-10 minutes, depending on the performance of the host machine.

-#. While waiting for the previous step to complete, power on the compute-0 and
-   compute-1 servers. Set the personality to 'worker' and assign a unique
-   hostname for each.
+#. While waiting for the previous step to complete, power on the worker nodes.
+   Set the personality to 'worker' and assign a unique hostname for each.

-   For example, power on compute-0 and wait for the new host (hostname=None) to
+   For example, power on worker-0 and wait for the new host (hostname=None) to
    be discovered by checking 'system host-list':

    ::

-     system host-update 3 personality=worker hostname=compute-0
+     system host-update 3 personality=worker hostname=worker-0

-   Repeat for compute-1. Power on compute-1 and wait for the new host (hostname=None) to
+   Repeat for worker-1. Power on worker-1 and wait for the new host (hostname=None) to
    be discovered by checking 'system host-list':

    ::

-     system host-update 4 personality=worker hostname=compute-1
+     system host-update 4 personality=worker hostname=worker-1

-#. Wait for the software installation on controller-1, compute-0, and compute-1 to
+#. Wait for the software installation on controller-1, worker-0, and worker-1 to
    complete, for all servers to reboot, and for all to show as locked/disabled/online in
    'system host-list'.
@@ -344,8 +369,8 @@ Install software on controller-1 and compute nodes
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | locked         | disabled    | online       |
-| 3  | compute-0    | compute     | locked         | disabled    | online       |
-| 4  | compute-1    | compute     | locked         | disabled    | online       |
+| 3  | worker-0     | worker      | locked         | disabled    | online       |
+| 4  | worker-1     | worker      | locked         | disabled    | online       |
 +----+--------------+-------------+----------------+-------------+--------------+

 ----------------------
@@ -405,20 +430,20 @@ machine.

 .. incl-unlock-controller-1-end:

------------------------
-Configure compute nodes
------------------------
+----------------------
+Configure worker nodes
+----------------------

-#. Add the third Ceph monitor to compute-0:
+#. Add the third Ceph monitor to a worker node:

    (The first two Ceph monitors are automatically assigned to controller-0 and
    controller-1.)

    ::

-     system ceph-mon-add compute-0
+     system ceph-mon-add worker-0

-#. Wait for the compute node monitor to complete configuration:
+#. Wait for the worker node monitor to complete configuration:

    ::
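The monitor listing at the top of the next hunk is unchanged context; it appears to be the output of the Ceph monitor inventory command (the name is inferred from the system ceph-mon-add call above, so treat it as an assumption):

::

  system ceph-mon-list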
@@ -430,21 +455,21 @@ Configure compute nodes
 +--------------------------------------+-------+--------------+------------+------+
 | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
 | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
-| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
+| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
 +--------------------------------------+-------+--------------+------------+------+

-#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+#. Assign the cluster-host network to the MGMT interface for the worker nodes:

    (Note that the MGMT interfaces are partially set up automatically by the
    network install procedure.)

    ::

-     for COMPUTE in compute-0 compute-1; do
-       system interface-network-assign $COMPUTE mgmt0 cluster-host
+     for NODE in worker-0 worker-1; do
+       system interface-network-assign $NODE mgmt0 cluster-host
      done

-#. Configure data interfaces for compute nodes. Use the DATA port names, for
+#. Configure data interfaces for worker nodes. Use the DATA port names, for
    example eth0, that are applicable to your deployment environment.

    .. important::
|
|||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
for COMPUTE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
system host-label-assign ${COMPUTE} sriovdp=enabled
|
system host-label-assign ${NODE} sriovdp=enabled
|
||||||
done
|
done
|
||||||
|
|
||||||
* If planning on running DPDK in containers on this host, configure the number
|
* If planning on running DPDK in containers on this host, configure the number
|
||||||
@ -469,9 +494,9 @@ Configure compute nodes
|
|||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
for COMPUTE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
system host-memory-modify ${COMPUTE} 0 -1G 100
|
system host-memory-modify ${NODE} 0 -1G 100
|
||||||
system host-memory-modify ${COMPUTE} 1 -1G 100
|
system host-memory-modify ${NODE} 1 -1G 100
|
||||||
done
|
done
|
||||||
|
|
||||||
For both Kubernetes and OpenStack:
|
For both Kubernetes and OpenStack:
|
||||||
@ -490,11 +515,11 @@ Configure compute nodes
|
|||||||
system datanetwork-add ${PHYSNET0} vlan
|
system datanetwork-add ${PHYSNET0} vlan
|
||||||
system datanetwork-add ${PHYSNET1} vlan
|
system datanetwork-add ${PHYSNET1} vlan
|
||||||
|
|
||||||
for COMPUTE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
echo "Configuring interface for: $COMPUTE"
|
echo "Configuring interface for: $NODE"
|
||||||
set -ex
|
set -ex
|
||||||
system host-port-list ${COMPUTE} --nowrap > ${SPL}
|
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||||
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
|
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||||
@ -503,10 +528,10 @@ Configure compute nodes
|
|||||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||||
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
|
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||||
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
|
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||||
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
|
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||||
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
|
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||||
set +ex
|
set +ex
|
||||||
done
|
done
|
||||||
|
|
||||||
@ -519,12 +544,12 @@ OpenStack-specific host configuration
|
|||||||
**This step is required only if the StarlingX OpenStack application
|
**This step is required only if the StarlingX OpenStack application
|
||||||
(stx-openstack) will be installed.**
|
(stx-openstack) will be installed.**
|
||||||
|
|
||||||
#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
|
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||||
support of installing the stx-openstack manifest and helm-charts later.
|
support of installing the stx-openstack manifest and helm-charts later.
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
for NODE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
system host-label-assign $NODE openstack-compute-node=enabled
|
system host-label-assign $NODE openstack-compute-node=enabled
|
||||||
system host-label-assign $NODE openvswitch=enabled
|
system host-label-assign $NODE openvswitch=enabled
|
||||||
system host-label-assign $NODE sriov=enabled
|
system host-label-assign $NODE sriov=enabled
|
||||||
@ -535,30 +560,30 @@ OpenStack-specific host configuration
|
|||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
for COMPUTE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
echo "Configuring Nova local for: $COMPUTE"
|
echo "Configuring Nova local for: $NODE"
|
||||||
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
|
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||||
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||||
PARTITION_SIZE=10
|
PARTITION_SIZE=10
|
||||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
||||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||||
system host-lvg-add ${COMPUTE} nova-local
|
system host-lvg-add ${NODE} nova-local
|
||||||
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
|
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||||
done
|
done
|
||||||
|
|
||||||
--------------------
|
--------------------
|
||||||
Unlock compute nodes
|
Unlock worker nodes
|
||||||
--------------------
|
--------------------
|
||||||
|
|
||||||
Unlock compute nodes in order to bring them into service:
|
Unlock worker nodes in order to bring them into service:
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
for COMPUTE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
system host-unlock $COMPUTE
|
system host-unlock $NODE
|
||||||
done
|
done
|
||||||
|
|
||||||
The compute nodes will reboot in order to apply configuration changes and come into
|
The worker nodes will reboot in order to apply configuration changes and come into
|
||||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||||
|
|
||||||
----------------------------
|
----------------------------
|
||||||
|
@@ -18,7 +18,7 @@ The recommended minimum hardware requirements for bare metal servers for various
 host types are:

 +---------------------+---------------------------+-----------------------+-----------------------+
-| Minimum Requirement | Controller Node           | Storage Node          | Compute Node          |
+| Minimum Requirement | Controller Node           | Storage Node          | Worker Node           |
 +=====================+===========================+=======================+=======================+
 | Number of servers   | 2                         | 2-9                   | 2-100                 |
 +---------------------+---------------------------+-----------------------+-----------------------+
@@ -60,9 +60,9 @@ Unlock controller-0 in order to bring it into service:
 Controller-0 will reboot in order to apply configuration changes and come into
 service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------------------------
-Install software on controller-1, storage nodes, and compute nodes
--------------------------------------------------------------------
+-----------------------------------------------------------------
+Install software on controller-1, storage nodes, and worker nodes
+-----------------------------------------------------------------

 #. Power on the controller-1 server and force it to network boot with the
    appropriate BIOS boot options for your particular server.
@@ -113,28 +113,27 @@ Install software on controller-1, storage nodes, and compute nodes
    This initiates the software installation on storage-0 and storage-1.
    This can take 5-10 minutes, depending on the performance of the host machine.

-#. While waiting for the previous step to complete, power on the compute-0 and
-   compute-1 servers. Set the personality to 'worker' and assign a unique
-   hostname for each.
+#. While waiting for the previous step to complete, power on the worker nodes.
+   Set the personality to 'worker' and assign a unique hostname for each.

-   For example, power on compute-0 and wait for the new host (hostname=None) to
+   For example, power on worker-0 and wait for the new host (hostname=None) to
    be discovered by checking 'system host-list':

    ::

-     system host-update 5 personality=worker hostname=compute-0
+     system host-update 5 personality=worker hostname=worker-0

-   Repeat for compute-1. Power on compute-1 and wait for the new host
+   Repeat for worker-1. Power on worker-1 and wait for the new host
    (hostname=None) to be discovered by checking 'system host-list':

    ::

-     system host-update 6 personality=worker hostname=compute-1
+     system host-update 6 personality=worker hostname=worker-1

-   This initiates the install of software on compute-0 and compute-1.
+   This initiates the install of software on worker-0 and worker-1.

 #. Wait for the software installation on controller-1, storage-0, storage-1,
-   compute-0, and compute-1 to complete, for all servers to reboot, and for all to
+   worker-0, and worker-1 to complete, for all servers to reboot, and for all to
    show as locked/disabled/online in 'system host-list'.

    ::
@@ -147,8 +146,8 @@ Install software on controller-1, storage nodes, and compute nodes
 | 2  | controller-1 | controller  | locked         | disabled    | online       |
 | 3  | storage-0    | storage     | locked         | disabled    | online       |
 | 4  | storage-1    | storage     | locked         | disabled    | online       |
-| 5  | compute-0    | compute     | locked         | disabled    | online       |
-| 6  | compute-1    | compute     | locked         | disabled    | online       |
+| 5  | worker-0     | worker      | locked         | disabled    | online       |
+| 6  | worker-1     | worker      | locked         | disabled    | online       |
 +----+--------------+-------------+----------------+-------------+--------------+

 ----------------------
@@ -178,8 +177,8 @@ Configure storage nodes

    ::

-     for COMPUTE in storage-0 storage-1; do
-       system interface-network-assign $COMPUTE mgmt0 cluster-host
+     for NODE in storage-0 storage-1; do
+       system interface-network-assign $NODE mgmt0 cluster-host
      done

 #. Add OSDs to storage-0. The following example adds OSDs to the `sdb` disk:
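The OSD-add commands fall outside the hunk. A sketch of the step for both storage hosts, reusing the disk-UUID lookup pattern from the nova-local loops above (the /dev/sdb match is an assumption based on the example disk named in this step):

::

  for NODE in storage-0 storage-1; do
    # Find the UUID of the sdb disk, then create an OSD on it
    DISK_UUID=$(system host-disk-list $NODE --nowrap | awk '/\/dev\/sdb/{print $2}')
    system host-stor-add $NODE $DISK_UUID
  done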
@@ -228,22 +227,22 @@ The storage nodes will reboot in order to apply configuration changes and come
 into service. This can take 5-10 minutes, depending on the performance of the
 host machine.

------------------------
-Configure compute nodes
------------------------
+----------------------
+Configure worker nodes
+----------------------

-#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+#. Assign the cluster-host network to the MGMT interface for the worker nodes:

    (Note that the MGMT interfaces are partially set up automatically by the
    network install procedure.)

    ::

-     for COMPUTE in compute-0 compute-1; do
-       system interface-network-assign $COMPUTE mgmt0 cluster-host
+     for NODE in worker-0 worker-1; do
+       system interface-network-assign $NODE mgmt0 cluster-host
      done

-#. Configure data interfaces for compute nodes. Use the DATA port names, for
+#. Configure data interfaces for worker nodes. Use the DATA port names, for
    example eth0, that are applicable to your deployment environment.

    .. important::
@@ -259,8 +258,8 @@ Configure compute nodes

    ::

-     for COMPUTE in compute-0 compute-1; do
-       system host-label-assign ${COMPUTE} sriovdp=enabled
+     for NODE in worker-0 worker-1; do
+       system host-label-assign ${NODE} sriovdp=enabled
      done

    * If planning on running DPDK in containers on this host, configure the number
@@ -268,9 +267,9 @@ Configure compute nodes

    ::

-     for COMPUTE in compute-0 compute-1; do
-       system host-memory-modify ${COMPUTE} 0 -1G 100
-       system host-memory-modify ${COMPUTE} 1 -1G 100
+     for NODE in worker-0 worker-1; do
+       system host-memory-modify ${NODE} 0 -1G 100
+       system host-memory-modify ${NODE} 1 -1G 100
      done

    For both Kubernetes and OpenStack:
@@ -289,11 +288,11 @@ Configure compute nodes
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

-     for COMPUTE in compute-0 compute-1; do
-       echo "Configuring interface for: $COMPUTE"
+     for NODE in worker-0 worker-1; do
+       echo "Configuring interface for: $NODE"
        set -ex
-       system host-port-list ${COMPUTE} --nowrap > ${SPL}
-       system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+       system host-port-list ${NODE} --nowrap > ${SPL}
+       system host-if-list -a ${NODE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@@ -302,10 +301,10 @@ Configure compute nodes
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
-       system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
-       system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
-       system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
-       system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
+       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
+       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
+       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done
@@ -318,12 +317,12 @@ OpenStack-specific host configuration
 **This step is required only if the StarlingX OpenStack application
 (stx-openstack) will be installed.**

-#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
+#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
    support of installing the stx-openstack manifest and helm-charts later.

    ::

-     for NODE in compute-0 compute-1; do
+     for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
@ -334,30 +333,30 @@ OpenStack-specific host configuration
|
|||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
for COMPUTE in compute-0 compute-1; do
|
for NODE in worker-0 worker-1; do
|
||||||
echo "Configuring Nova local for: $COMPUTE"
|
echo "Configuring Nova local for: $NODE"
|
||||||
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
|
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||||
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||||
PARTITION_SIZE=10
|
PARTITION_SIZE=10
|
||||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
||||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||||
system host-lvg-add ${COMPUTE} nova-local
|
system host-lvg-add ${NODE} nova-local
|
||||||
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
|
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||||
done
|
done
|
||||||
|
|
||||||
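Before unlocking, the nova-local volume group can be confirmed on each node.
A hedged check (the listing commands are the companions of the add commands
used above; output columns may vary by release):

::

  system host-lvg-list worker-0
  system host-pv-list worker-0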
--------------------
-------------------
Unlock compute nodes
Unlock worker nodes
--------------------
-------------------

Unlock compute nodes in order to bring them into service:
Unlock worker nodes in order to bring them into service:

::

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
system host-unlock $COMPUTE
system host-unlock $NODE
done

The compute nodes will reboot in order to apply configuration changes and come
The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
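Progress can be followed from controller-0 while the nodes reboot; a minimal
sketch (the 30-second interval is an arbitrary choice):

::

  # Nodes move from locked/disabled to unlocked/enabled/available
  watch -n 30 system host-list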
@ -56,8 +56,8 @@ standard configuration, either:

This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute node, with the StarlingX OpenStack
application (stx-openstack) applied.
with 2x controllers and at least 1x compute-labeled worker node, with the
StarlingX OpenStack application (stx-openstack) applied.
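Whether an existing deployment meets that assumption can be checked quickly.
A hedged sketch (`worker-0` is a placeholder hostname):

::

  # A compute-labeled worker carries openstack-compute-node=enabled
  system host-label-list worker-0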
.. toctree::
   :maxdepth: 1
@ -23,15 +23,15 @@ For controller nodes:
* Additional NIC port on both controller nodes for connecting to the
ironic-provisioning-net.

For compute nodes:
For worker nodes:

* If using a flat data network for the Ironic provisioning network, an additional
NIC port on one of the compute nodes is required.
NIC port on one of the worker nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network and
simply add the new data network to an existing interface on the compute node.
simply add the new data network to an existing interface on the worker node.

* Additional switch ports / configuration for new ports on controller, compute,
* Additional switch ports / configuration for new ports on controller, worker,
and Ironic nodes, for connectivity to the Ironic provisioning network.
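For the VLAN alternative, the Ironic data network can be created up front.
A hedged sketch using `ironic-data`, the name this guide refers to later (the
VLAN type mirrors the data networks created elsewhere in these guides):

::

  system datanetwork-add ironic-data vlan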
-----------------------------------
@ -113,14 +113,14 @@ Configure Ironic network on deployed standard StarlingX
# Unlock the node to apply changes
system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute nodes and
assign it to the Ironic data network. This example uses the data network
`ironic-data` that was named in a previous step.
#. Configure the new interface (for Ironic) on one of the compute-labeled worker
nodes and assign it to the Ironic data network. This example uses the data
network `ironic-data` that was named in a previous step.

::

system interface-datanetwork-assign compute-0 eno1 ironic-data
system interface-datanetwork-assign worker-0 eno1 ironic-data
system host-if-modify -n ironicdata -c data compute-0 eno1
system host-if-modify -n ironicdata -c data worker-0 eno1
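The assignment can then be double-checked before unlocking the node; a hedged
sketch (assuming the `interface-datanetwork-list` companion to the assign
command above):

::

  system interface-datanetwork-list worker-0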
****************************
Generate user Helm overrides

@ -1,16 +1,16 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers with each server providing all three cloud functions
(controller, compute, and storage).
(controller, worker, and storage).

An AIO-DX configuration provides the following benefits:

* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple virtual machines on a single pair of
physical servers
* Application consolidation using multiple containers or virtual machines on a
single pair of physical servers
* High availability (HA) services run on the controller function across two
physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two servers
* Virtual machines scheduled on both compute functions
* Containers or virtual machines scheduled on both worker functions
* Protection against overall server hardware fault, where

* All controller HA services go active on the remaining healthy server
@ -1,10 +1,10 @@
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single server with the
functions (controller, worker, and storage) on a single server with the
following benefits:

* Requires only a small amount of cloud processing and storage power
* Application consolidation using multiple virtual machines on a single pair of
physical servers
* Application consolidation using multiple containers or virtual machines on a
single physical server
* A storage backend solution using a single-node CEPH deployment

.. figure:: ../figures/starlingx-deployment-options-simplex.png
@ -1,19 +1,19 @@
The Standard with Controller Storage deployment option provides two high
availability (HA) controller nodes and a pool of up to 10 compute nodes.
availability (HA) controller nodes and a pool of up to 10 worker nodes.

A Standard with Controller Storage configuration provides the following benefits:

* A pool of up to 10 compute nodes
* A pool of up to 10 worker nodes
* High availability (HA) services run across the controller nodes in either
active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
controller servers
* Protection against overall controller and compute node failure, where
* Protection against overall controller and worker node failure, where

* On overall controller node failure, all controller HA services go active on
the remaining healthy controller node
* On overall compute node failure, virtual machines and containers are
recovered on the remaining healthy compute nodes
* On overall worker node failure, virtual machines and containers are
recovered on the remaining healthy worker nodes

.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
   :scale: 50%
@ -1,9 +1,9 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, compute, and storage nodes.
with independent controller, worker, and storage nodes.

A Standard with Dedicated Storage configuration provides the following benefits:

* A pool of up to 100 compute nodes
* A pool of up to 100 worker nodes
* A 2x node high availability (HA) controller cluster with HA services running
across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a two-to-9x node HA CEPH storage cluster
@ -206,7 +206,7 @@ At the subcloud location:
At the System Controller:

1. Create a ``bootstrap-values.yml`` overrides file for the subcloud, for
1. Create a ``bootstrap-values.yml`` override file for the subcloud. For
example:

.. code:: yaml
@ -215,15 +215,21 @@ At the SystemController:
name: "subcloud1"
description: "Ottawa Site"
location: "YOW"

management_subnet: 192.168.101.0/24
management_start_address: 192.168.101.2
management_end_address: 192.168.101.50
management_gateway_address: 192.168.101.1

external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.12

systemcontroller_gateway_address: 192.168.204.101

.. important:: The `management_*` entries in the override file are required
   for all installation types, including AIO-Simplex.

2. Add the subcloud using the CLI command below:

.. code:: sh
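   # Sketch of the subcloud add step (the body of this block is not shown in
   # the diff; the flag names below are assumptions to be checked against the
   # R4.0 dcmanager CLI help):
   dcmanager subcloud add --bootstrap-address <subcloud-bootstrap-ip> \
     --bootstrap-values bootstrap-values.yml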
@ -273,7 +279,10 @@ At the SystemController:
- For Standard with dedicated storage nodes:
`Bare metal Standard with Dedicated Storage Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage.html>`_

5. Add routes from the subcloud to the controller management network:
On the active controller for each subcloud:

#. Add a route from the subcloud to the controller management network to enable
the subcloud to go online. For each host in the subcloud:

.. code:: sh
@ -286,4 +295,3 @@ At the SystemController:
   system host-route-add 1 enp0s8 192.168.204.0 24 192.168.101.1

Repeat this step for each host of the subcloud.
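As a concrete sketch of that repetition, a second subcloud host would receive
the same route keyed to its own host id (the id and interface name here are
placeholders):

.. code:: sh

   system host-route-add 2 enp0s8 192.168.204.0 24 192.168.101.1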
|
@ -15,21 +15,17 @@ StarlingX Kubernetes platform:
|
|||||||
Install application manifest and helm-charts
|
Install application manifest and helm-charts
|
||||||
--------------------------------------------
|
--------------------------------------------
|
||||||
|
|
||||||
#. Get the StarlingX OpenStack application (stx-openstack) manifest and helm-charts.
|
#. Get the latest StarlingX OpenStack application (stx-openstack) manifest and
|
||||||
This can be from a private StarlingX build or, as shown below, from the public
|
helm-charts. This can be from a private StarlingX build or from the public
|
||||||
Cengen StarlingX build off ``master`` branch:
|
`CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.
|
||||||
|
|
||||||
::
|
|
||||||
|
|
||||||
wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/helm-charts/stx-openstack-1.0-17-centos-stable-latest.tgz
|
|
||||||
|
|
||||||
#. Load the stx-openstack application's package into StarlingX. The tarball
|
#. Load the stx-openstack application's package into StarlingX. The tarball
|
||||||
package contains stx-openstack's Airship Armada manifest and stx-openstack's
|
package contains stx-openstack's Airship Armada manifest and stx-openstack's
|
||||||
set of helm charts:
|
set of helm charts. For example:
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
system application-upload stx-openstack-1.0-17-centos-stable-latest.tgz
|
system application-upload stx-openstack-<version>-centos-stable-latest.tgz
|
||||||
|
|
||||||
This will:
|
This will:
|
||||||
|
|
||||||
|
@ -19,10 +19,6 @@ Physical host requirements and setup
|
|||||||
Prepare virtual environment and servers
|
Prepare virtual environment and servers
|
||||||
---------------------------------------
|
---------------------------------------
|
||||||
|
|
||||||
The following steps explain how to prepare the virtual environment and servers
|
|
||||||
on a physical host for a StarlingX R4.0 virtual All-in-one Duplex deployment
|
|
||||||
configuration.
|
|
||||||
|
|
||||||
#. Prepare virtual environment.
|
#. Prepare virtual environment.
|
||||||
|
|
||||||
Set up the virtual platform networks for virtual deployment:
|
Set up the virtual platform networks for virtual deployment:
|
||||||
|
@ -206,13 +206,13 @@ On virtual controller-0:
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-0
export NODE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@ -225,10 +225,10 @@ On virtual controller-0:
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:
@ -351,13 +351,13 @@ On virtual controller-0:
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-1
export NODE=controller-1
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@ -370,10 +370,10 @@ On virtual controller-0:
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:
@ -412,19 +412,19 @@ OpenStack-specific host configuration
::

export COMPUTE=controller-1
export NODE=controller-1

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-lvg-add ${NODE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
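# Optional sanity check before unlocking (a hedged sketch; the listing
# commands pair with the add commands above):
system host-disk-partition-list ${NODE}
system host-lvg-list ${NODE}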
-------------------
Unlock controller-1
@ -19,10 +19,6 @@ Physical host requirements and setup
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual All-in-one Simplex deployment
configuration.

#. Prepare virtual environment.

Set up the virtual platform networks for virtual deployment:
@ -197,13 +197,13 @@ On virtual controller-0:
DATA0IF=eth1000
DATA1IF=eth1001
export COMPUTE=controller-0
export NODE=controller-0
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@ -216,10 +216,10 @@ On virtual controller-0:
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:
@ -267,19 +267,19 @@ OpenStack-specific host configuration
::

export COMPUTE=controller-0
export NODE=controller-0

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-lvg-add ${NODE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:
@ -20,10 +20,6 @@ Physical host requirements and setup
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual Standard with Controller Storage
deployment configuration.

#. Prepare virtual environment.

Set up virtual platform networks for virtual deployment:
@ -237,9 +237,9 @@ Unlock virtual controller-0 in order to bring it into service:
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------
-------------------------------------------------
Install software on controller-1 and compute nodes
Install software on controller-1 and worker nodes
--------------------------------------------------
-------------------------------------------------

#. On the host, power on the controller-1 virtual server,
'controllerstorage-controller-1'. It will automatically attempt to network
@ -296,7 +296,7 @@ Install software on controller-1 and compute nodes
::

system host-update 3 personality=worker hostname=compute-0
system host-update 3 personality=worker hostname=worker-0

Repeat for 'controllerstorage-worker-1'. On the host:
@ -309,9 +309,9 @@ Install software on controller-1 and compute nodes
::

system host-update 4 personality=worker hostname=compute-1
system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, compute-0, and compute-1 to
#. Wait for the software installation on controller-1, worker-0, and worker-1 to
complete, for all virtual servers to reboot, and for all to show as
locked/disabled/online in 'system host-list'.
@ -323,8 +323,8 @@ Install software on controller-1 and compute nodes
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | compute-0    | compute     | locked         | disabled    | online       |
| 3  | worker-0     | worker      | locked         | disabled    | online       |
| 4  | compute-1    | compute     | locked         | disabled    | online       |
| 4  | worker-1     | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+

----------------------
@ -379,22 +379,22 @@ service. This can take 5-10 minutes, depending on the performance of the host ma
.. incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
----------------------
Configure compute nodes
Configure worker nodes
-----------------------
----------------------

On virtual controller-0:

#. Add the third Ceph monitor to compute-0:
#. Add the third Ceph monitor to a worker node:

(The first two Ceph monitors are automatically assigned to controller-0 and
controller-1.)

::

system ceph-mon-add compute-0
system ceph-mon-add worker-0

#. Wait for the compute node monitor to complete configuration:
#. Wait for the worker node monitor to complete configuration:

::
@ -406,21 +406,21 @@ On virtual controller-0:
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
+--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the compute nodes.
#. Assign the cluster-host network to the MGMT interface for the worker nodes.

Note that the MGMT interfaces are partially set up automatically by the
network install procedure.

::

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
system interface-network-assign $COMPUTE mgmt0 cluster-host
system interface-network-assign $NODE mgmt0 cluster-host
done

#. Configure data interfaces for compute nodes.
#. Configure data interfaces for worker nodes.

.. important::
@ -447,11 +447,11 @@ On virtual controller-0:
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
echo "Configuring interface for: $COMPUTE"
echo "Configuring interface for: $NODE"
set -ex
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@ -460,10 +460,10 @@ On virtual controller-0:
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
@ -476,12 +476,12 @@ OpenStack-specific host configuration
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest/helm-charts later:

::

for NODE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
@ -492,32 +492,32 @@ OpenStack-specific host configuration
::

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $COMPUTE"
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-lvg-add ${NODE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done

--------------------
-------------------
Unlock compute nodes
Unlock worker nodes
--------------------
-------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual compute nodes to bring them into service:
Unlock virtual worker nodes to bring them into service:

::

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
system host-unlock $COMPUTE
system host-unlock $NODE
done

The compute nodes will reboot in order to apply configuration changes and come into
The worker nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:
@ -20,10 +20,6 @@ Physical host requirements and setup
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual Standard with Dedicated Storage
deployment configuration.

#. Prepare virtual environment.

Set up virtual platform networks for virtual deployment:
@ -77,9 +77,9 @@ Unlock virtual controller-0 in order to bring it into service:
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

------------------------------------------------------------------
-----------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
Install software on controller-1, storage nodes, and worker nodes
------------------------------------------------------------------
-----------------------------------------------------------------

#. On the host, power on the controller-1 virtual server,
'dedicatedstorage-controller-1'. It will automatically attempt to network
@ -168,7 +168,7 @@ Install software on controller-1, storage nodes, and compute nodes
::

system host-update 5 personality=worker hostname=compute-0
system host-update 5 personality=worker hostname=worker-0

Repeat for 'dedicatedstorage-worker-1'. On the host:
@ -181,12 +181,12 @@ Install software on controller-1, storage nodes, and compute nodes
::

ssystem host-update 6 personality=worker hostname=compute-1
system host-update 6 personality=worker hostname=worker-1

This initiates software installation on compute-0 and compute-1.
This initiates software installation on worker-0 and worker-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
compute-0, and compute-1 to complete, for all virtual servers to reboot, and for all
worker-0, and worker-1 to complete, for all virtual servers to reboot, and for all
to show as locked/disabled/online in 'system host-list'.

::
@ -199,8 +199,8 @@ Install software on controller-1, storage nodes, and compute nodes
| 2  | controller-1 | controller  | locked         | disabled    | online       |
| 3  | storage-0    | storage     | locked         | disabled    | online       |
| 4  | storage-1    | storage     | locked         | disabled    | online       |
| 5  | compute-0    | compute     | locked         | disabled    | online       |
| 5  | worker-0     | worker      | locked         | disabled    | online       |
| 6  | compute-1    | compute     | locked         | disabled    | online       |
| 6  | worker-1     | worker      | locked         | disabled    | online       |
+----+--------------+-------------+----------------+-------------+--------------+

----------------------
@ -231,8 +231,8 @@ On virtual controller-0:
::

for COMPUTE in storage-0 storage-1; do
for NODE in storage-0 storage-1; do
system interface-network-assign $COMPUTE mgmt0 cluster-host
system interface-network-assign $NODE mgmt0 cluster-host
done

#. Add OSDs to storage-0:
@ -278,24 +278,24 @@ Unlock virtual storage nodes in order to bring them into service:
The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the host machine.

-----------------------
----------------------
Configure compute nodes
Configure worker nodes
-----------------------
----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the compute nodes.
#. Assign the cluster-host network to the MGMT interface for the worker nodes.

Note that the MGMT interfaces are partially set up automatically by the
network install procedure.

::

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
system interface-network-assign $COMPUTE mgmt0 cluster-host
system interface-network-assign $NODE mgmt0 cluster-host
done

#. Configure data interfaces for compute nodes.
#. Configure data interfaces for worker nodes.

.. important::
@ -325,11 +325,11 @@ On virtual controller-0:
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
echo "Configuring interface for: $COMPUTE"
echo "Configuring interface for: $NODE"
set -ex
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
@ -338,10 +338,10 @@ On virtual controller-0:
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
@ -354,12 +354,12 @@ OpenStack-specific host configuration
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest/helm-charts later:

::

for NODE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
@ -370,20 +370,20 @@ OpenStack-specific host configuration
::

for COMPUTE in compute-0 compute-1; do
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $COMPUTE"
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-lvg-add ${NODE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done

--------------------
-------------------
Unlock compute nodes
Unlock worker nodes
--------------------
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
@ -67,9 +67,6 @@ Set up the host with the following steps:
apparmor-profile modules.

#. Get the StarlingX ISO. This can be from a private StarlingX build or from the public Cengn
StarlingX build off 'master' branch, as shown below:

::

wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso

#. Get the StarlingX ISO from the
`CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.
Alternately, you can use an ISO from a private StarlingX build.