OpenStack install updates
Updates to the setup of disk partitions for nova-local volume groups. Turned on
syntax highlighting in bash code blocks for readability (especially instructions
in comments).

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: I0b437d2a00e119e12a6c7bca0d1f3700ec54b81d
This commit is contained in:
parent a9aef3a5ff
commit dbc0a82c4a
@ -36,7 +36,7 @@ Install software on worker nodes
|
||||
|
||||
#. Using the host id, set the personality of this host to 'worker':
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
system host-update 3 personality=worker hostname=worker-0
|
||||
system host-update 4 personality=worker hostname=worker-1
|
||||
@ -78,7 +78,7 @@ Configure worker nodes
|
||||
Complete the MGMT interface configuration of the worker nodes by specifying
|
||||
the attached network of "cluster-host".
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
@ -98,7 +98,7 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the stx-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
@ -114,7 +114,7 @@ Configure worker nodes
|
||||
numa-node for |OVS|-|DPDK| vswitch. This should have been automatically
|
||||
configured, if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -131,7 +131,7 @@ Configure worker nodes
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -153,7 +153,7 @@ Configure worker nodes
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
|
||||
this host with the command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -168,22 +168,32 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        system host-lvg-add ${NODE} nova-local

        # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
        # CEPH OSD Disks can NOT be used
        # For best performance, do NOT use system/root disk, use a separate physical disk.

        # List host's disks and take note of UUID of disk to be used
        system host-disk-list ${NODE}
        # ( if using ROOT DISK, select disk with device_path of
        #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

        # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
        PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
        system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

        # Add new partition to 'nova-local' local volume group
        system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
        sleep 2
      done
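
   If you prefer to script this step instead of copying UUIDs by hand, the
   root-disk and partition UUIDs can be captured from the CLI output. A minimal
   sketch, assuming the partition is placed on the root disk (the comments above
   recommend a separate physical disk where one is available):

   .. code-block:: bash

      # Sketch only: derive the UUIDs instead of pasting them manually.
      for NODE in worker-0 worker-1; do
        ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=34
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${NODE} nova-local
        system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
        sleep 2
      done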
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for worker nodes.
|
||||
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
underlying assigned Data Network.
|
||||
|
||||
.. important::
|
||||
@ -192,7 +202,7 @@ Configure worker nodes
|
||||
|
||||
* Configure the data interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -241,7 +251,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -278,7 +288,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE sriovdp=enabled
|
||||
@ -288,7 +298,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -307,7 +317,7 @@ Unlock worker nodes
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
|
@ -347,7 +347,6 @@ Configure controller-0
|
||||
|
||||
::
|
||||
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application -1G 10 controller-0 0
|
||||
|
||||
@ -362,26 +361,30 @@ Configure controller-0
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      export NODE=controller-0
      system host-lvg-add ${NODE} nova-local

      # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
      # CEPH OSD Disks can NOT be used
      # For best performance, do NOT use system/root disk, use a separate physical disk.

      # List host's disks and take note of UUID of disk to be used
      system host-disk-list ${NODE}
      # ( if using ROOT DISK, select disk with device_path of
      #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
      system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
      sleep 2
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
underlying assigned Data Network.
|
||||
|
||||
.. important::
|
||||
@ -657,7 +660,7 @@ Configure controller-1
|
||||
for |OVS|-|DPDK| vswitch. This should have been automatically configured,
|
||||
if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1 core on processor/numa-node 0 on controller-1 to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 1 controller-1
|
||||
@ -667,7 +670,7 @@ Configure controller-1
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 controller-1 0
|
||||
@ -682,7 +685,7 @@ Configure controller-1
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
|
||||
this host with the command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
|
||||
system host-memory-modify -f application -1G 10 controller-1 0
|
||||
@ -694,21 +697,25 @@ Configure controller-1
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      export NODE=controller-1
      system host-lvg-add ${NODE} nova-local

      # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
      # CEPH OSD Disks can NOT be used
      # For best performance, do NOT use system/root disk, use a separate physical disk.

      # List host's disks and take note of UUID of disk to be used
      system host-disk-list ${NODE}
      # ( if using ROOT DISK, select disk with device_path of
      #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
      system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
      sleep 2
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-1.
|
||||
@ -722,7 +729,7 @@ Configure controller-1
|
||||
|
||||
* Configure the data interfaces for controller-1.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-1
|
||||
|
||||
@ -768,7 +775,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for controller-1.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-1
|
||||
|
||||
@ -802,7 +809,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
system host-label-assign controller-1 sriovdp=enabled
|
||||
|
||||
@ -810,7 +817,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
|
||||
system host-memory-modify -f application controller-1 0 -1G 10
|
||||
@ -827,7 +834,7 @@ For host-based Ceph:
|
||||
|
||||
#. Add an |OSD| on controller-1 for host-based Ceph:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||
@ -850,7 +857,7 @@ For host-based Ceph:
|
||||
#. Assign Rook host labels to controller-1 in support of installing the
|
||||
rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
system host-label-assign controller-1 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-1 ceph-mgr-placement=enabled
|
||||
@ -862,7 +869,7 @@ Unlock controller-1
|
||||
|
||||
Unlock controller-1 in order to bring it into service:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
system host-unlock controller-1
|
||||
|
||||
@ -897,13 +904,14 @@ machine.
|
||||
#. Configure Rook to use /dev/sdb on controller-0 and controller-1 as a ceph
|
||||
|OSD|.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
|
||||
$ system host-disk-wipe -s --confirm controller-1 /dev/sdb
|
||||
|
||||
values.yaml for rook-ceph-apps.
|
||||
::
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
|
@ -109,8 +109,8 @@ Bootstrap system on controller-0
|
||||
|
||||
To use this method, create your override file at ``$HOME/localhost.yml``
|
||||
and provide the minimum required parameters for the deployment
|
||||
configuration as shown in the example below. Use the OAM IP SUBNET and IP
|
||||
ADDRESSing applicable to your deployment environment.
|
||||
configuration as shown in the example below. Use the |OAM| IP SUBNET and
|
||||
IP ADDRESSing applicable to your deployment environment.
|
||||
|
||||
::
|
||||
|
||||
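A minimal sketch of such an override file, written with a heredoc in the same
style the later example uses; the key names below are typical StarlingX
bootstrap overrides and are assumptions here, so confirm them against the full
example referenced above before using them:

.. code-block:: bash

   cd ~
   cat <<EOF > localhost.yml
   system_mode: duplex                   # adjust for your deployment configuration
   external_oam_subnet: 10.10.10.0/24    # use the OAM IP SUBNET for your environment
   external_oam_gateway_address: 10.10.10.1
   external_oam_floating_address: 10.10.10.2
   external_oam_node_0_address: 10.10.10.3
   external_oam_node_1_address: 10.10.10.4
   admin_username: admin
   admin_password: <admin-password>
   ansible_become_pass: <sysadmin-password>
   EOF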
@ -134,8 +134,9 @@ Bootstrap system on controller-0
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
In either of the above options, the bootstrap playbook’s default values
|
||||
will pull all container images required for the |prod-p| from Docker hub.
|
||||
In either of the above options, the bootstrap playbook’s default
|
||||
values will pull all container images required for the |prod-p| from
|
||||
Docker hub.
|
||||
|
||||
If you have set up a private Docker registry to use for bootstrapping
|
||||
then you will need to add the following lines in $HOME/localhost.yml:
|
||||
@ -220,9 +221,9 @@ The newly installed controller needs to be configured.
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the |OAM| interface of controller-0 and specify the attached network
|
||||
as "oam". Use the |OAM| port name that is applicable to your deployment
|
||||
environment, for example eth0:
|
||||
#. Configure the |OAM| interface of controller-0 and specify the attached
|
||||
network as "oam". Use the |OAM| port name that is applicable to your
|
||||
deployment environment, for example eth0:
|
||||
|
||||
::
|
||||
|
||||
@ -290,7 +291,7 @@ The newly installed controller needs to be configured.
|
||||
|
||||
system modify --vswitch_type ovs-dpdk
|
||||
|
||||
Default recommendation for an AIO-controller is to use a single core
|
||||
Default recommendation for an |AIO|-controller is to use a single core
|
||||
for |OVS|-|DPDK| vswitch.
|
||||
|
||||
::
|
||||
@ -298,10 +299,11 @@ The newly installed controller needs to be configured.
|
||||
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 1 controller-0
|
||||
|
||||
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on each |NUMA| node
|
||||
where vswitch is running on this host, with the following command:
|
||||
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 controller-0 0
|
||||
@ -312,10 +314,10 @@ The newly installed controller needs to be configured.
|
||||
huge pages to enable networking and must use a flavor with property:
|
||||
hw:mem_page_size=large
|
||||
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on this host with
|
||||
the commands:
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on
|
||||
this host with the commands:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application -1G 10 controller-0 0
|
||||
@ -333,21 +335,26 @@ The newly installed controller needs to be configured.
|
||||
|
||||
.. code-block:: bash

   export NODE=controller-0
   system host-lvg-add ${NODE} nova-local

   # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
   # CEPH OSD Disks can NOT be used
   # For best performance, do NOT use system/root disk, use a separate physical disk.

   # List host's disks and take note of UUID of disk to be used
   system host-disk-list ${NODE}
   # ( if using ROOT DISK, select disk with device_path of
   #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

   # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
   PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
   system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

   # Add new partition to 'nova-local' local volume group
   system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
   sleep 2
|
||||
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
@ -359,7 +366,7 @@ The newly installed controller needs to be configured.
|
||||
|
||||
* Configure the data interfaces for controller-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
@ -406,7 +413,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for controller-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
@ -448,7 +455,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application controller-0 0 -1G 10
|
||||
@ -567,7 +574,8 @@ machine.
|
||||
system host-disk-wipe -s --confirm controller-0 /dev/sdb
|
||||
|
||||
values.yaml for rook-ceph-apps.
|
||||
::
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
|
@ -62,7 +62,7 @@ Bootstrap system on controller-0
|
||||
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
|
||||
deployment environment.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
|
||||
sudo ip link set up dev <PORT>
|
||||
@ -111,7 +111,7 @@ Bootstrap system on controller-0
|
||||
configuration as shown in the example below. Use the OAM IP SUBNET and IP
|
||||
ADDRESSing applicable to your deployment environment.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
@ -135,8 +135,9 @@ Bootstrap system on controller-0
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
In either of the above options, the bootstrap playbook’s default values
|
||||
will pull all container images required for the |prod-p| from Docker hub.
|
||||
In either of the above options, the bootstrap playbook’s default
|
||||
values will pull all container images required for the |prod-p| from
|
||||
Docker hub.
|
||||
|
||||
If you have set up a private Docker registry to use for bootstrapping
|
||||
then you will need to add the following lines in $HOME/localhost.yml:
|
||||
@ -147,7 +148,7 @@ Bootstrap system on controller-0
|
||||
:start-after: docker-reg-begin
|
||||
:end-before: docker-reg-end
|
||||
|
||||
.. code-block::
|
||||
.. code-block:: yaml
|
||||
|
||||
docker_registries:
|
||||
quay.io:
|
||||
@ -186,7 +187,7 @@ Bootstrap system on controller-0
|
||||
:start-after: firewall-begin
|
||||
:end-before: firewall-end
|
||||
|
||||
.. code-block::
|
||||
.. code-block:: bash
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
docker_http_proxy: http://my.proxy.com:1080
|
||||
@ -194,9 +195,9 @@ Bootstrap system on controller-0
|
||||
docker_no_proxy:
|
||||
- 1.2.3.4
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r5>`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios.
|
||||
Refer to :ref:`Ansible Bootstrap Configurations
|
||||
<ansible_bootstrap_configs_r5>` for information on additional Ansible
|
||||
bootstrap configurations for advanced Ansible bootstrap scenarios.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
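For reference, a typical invocation looks like the following; the playbook path
is the default location in a StarlingX installation and should be confirmed
against your release's guide:

.. code-block:: bash

   # Run the local bootstrap playbook; it applies the $HOME/localhost.yml overrides described above.
   ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml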
@ -228,7 +229,7 @@ Configure controller-0
|
||||
Use the |OAM| port name that is applicable to your deployment environment,
|
||||
for example eth0:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
@ -240,7 +241,7 @@ Configure controller-0
|
||||
Use the MGMT port name that is applicable to your deployment environment,
|
||||
for example eth1:
|
||||
|
||||
.. code-block:: none
|
||||
.. code-block:: bash
|
||||
|
||||
MGMT_IF=<MGMT-PORT>
|
||||
|
||||
@ -326,8 +327,8 @@ Configure controller-0
|
||||
|
||||
system modify --vswitch_type ovs-dpdk
|
||||
|
||||
Once vswitch_type is set to OVS-|DPDK|, any subsequent AIO-controller or
|
||||
worker nodes created will default to automatically assigning 1 vSwitch
|
||||
Once vswitch_type is set to OVS-|DPDK|, any subsequent |AIO|-controller
|
||||
or worker nodes created will default to automatically assigning 1 vSwitch
|
||||
core for |AIO| controllers and 2 vSwitch cores for compute-labeled worker
|
||||
nodes.
|
||||
|
||||
@ -438,7 +439,7 @@ Configure controller-1
|
||||
Use the |OAM| port name that is applicable to your deployment environment,
|
||||
for example eth0:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-1 $OAM_IF -c platform
|
||||
@ -529,7 +530,7 @@ Configure worker nodes
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
@ -549,7 +550,7 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the stx-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
@ -565,7 +566,7 @@ Configure worker nodes
|
||||
numa-node for |OVS|-|DPDK| vswitch. This should have been automatically
|
||||
configured, if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -582,7 +583,7 @@ Configure worker nodes
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -604,7 +605,7 @@ Configure worker nodes
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
|
||||
this host with the command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -619,17 +620,27 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        system host-lvg-add ${NODE} nova-local

        # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
        # CEPH OSD Disks can NOT be used
        # For best performance, do NOT use system/root disk, use a separate physical disk.

        # List host's disks and take note of UUID of disk to be used
        system host-disk-list ${NODE}
        # ( if using ROOT DISK, select disk with device_path of
        #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

        # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
        PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
        system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

        # Add new partition to 'nova-local' local volume group
        system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
        sleep 2
      done
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for worker nodes.
|
||||
@ -643,7 +654,7 @@ Configure worker nodes
|
||||
|
||||
* Configure the data interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -692,7 +703,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -729,7 +740,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE sriovdp=enabled
|
||||
@ -739,7 +750,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -758,7 +769,7 @@ Unlock worker nodes
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
@ -773,7 +784,7 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers
|
||||
|
||||
#. Add |OSDs| to controller-0. The following example adds |OSDs| to the `sdb` disk:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=controller-0
|
||||
|
||||
@ -789,7 +800,7 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers
|
||||
|
||||
#. Add |OSDs| to controller-1. The following example adds |OSDs| to the `sdb` disk:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=controller-1
|
||||
|
||||
|
@ -194,7 +194,7 @@ Configure storage nodes
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in storage-0 storage-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
@ -202,7 +202,7 @@ Configure storage nodes
|
||||
|
||||
#. Add |OSDs| to storage-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=storage-0
|
||||
|
||||
@ -218,7 +218,7 @@ Configure storage nodes
|
||||
|
||||
#. Add |OSDs| to storage-1.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=storage-1
|
||||
|
||||
@ -238,7 +238,7 @@ Unlock storage nodes
|
||||
|
||||
Unlock storage nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for STORAGE in storage-0 storage-1; do
|
||||
system host-unlock $STORAGE
|
||||
@ -259,7 +259,7 @@ Configure worker nodes
|
||||
Complete the MGMT interface configuration of the worker nodes by specifying
|
||||
the attached network of "cluster-host".
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
@ -279,7 +279,7 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the stx-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
@ -295,7 +295,7 @@ Configure worker nodes
|
||||
numa-node for |OVS|-|DPDK| vswitch. This should have been automatically
|
||||
configured, if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -312,7 +312,7 @@ Configure worker nodes
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -334,7 +334,7 @@ Configure worker nodes
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
|
||||
this host with the command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -349,17 +349,27 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        system host-lvg-add ${NODE} nova-local

        # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
        # CEPH OSD Disks can NOT be used
        # For best performance, do NOT use system/root disk, use a separate physical disk.

        # List host's disks and take note of UUID of disk to be used
        system host-disk-list ${NODE}
        # ( if using ROOT DISK, select disk with device_path of
        #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

        # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
        PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
        system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

        # Add new partition to 'nova-local' local volume group
        system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
        sleep 2
      done
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for worker nodes.
|
||||
@ -373,7 +383,7 @@ Configure worker nodes
|
||||
|
||||
* Configure the data interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -422,7 +432,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -459,7 +469,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE sriovdp=enabled
|
||||
@ -469,7 +479,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -488,7 +498,7 @@ Unlock worker nodes
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
|
@ -78,7 +78,7 @@ Configure worker nodes
|
||||
Complete the MGMT interface configuration of the worker nodes by specifying
|
||||
the attached network of "cluster-host".
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
@ -98,7 +98,7 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the stx-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
@ -114,7 +114,7 @@ Configure worker nodes
|
||||
numa-node for |OVS|-|DPDK| vswitch. This should have been automatically
|
||||
configured, if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -131,7 +131,7 @@ Configure worker nodes
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -153,7 +153,7 @@ Configure worker nodes
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
|
||||
this host with the command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -168,32 +168,42 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        system host-lvg-add ${NODE} nova-local

        # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
        # CEPH OSD Disks can NOT be used
        # For best performance, do NOT use system/root disk, use a separate physical disk.

        # List host's disks and take note of UUID of disk to be used
        system host-disk-list ${NODE}
        # ( if using ROOT DISK, select disk with device_path of
        #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

        # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
        PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
        system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

        # Add new partition to 'nova-local' local volume group
        system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
        sleep 2
      done
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for worker nodes.
|
||||
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
underlying assigned Data Network.
|
||||
|
||||
|
||||
.. important::
|
||||
|
||||
|
||||
A compute-labeled worker host **MUST** have at least one Data class interface.
|
||||
|
||||
|
||||
* Configure the data interfaces for worker nodes.
|
||||
|
||||
::
|
||||
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
# and then repeat with
|
||||
@ -234,14 +244,14 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||
vNICs in hosted application VMs. Note that pci-sriov interfaces can
|
||||
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||
|
||||
|
||||
* Configure the pci-sriov interfaces for worker nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
@ -278,7 +288,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE sriovdp=enabled
|
||||
@ -288,7 +298,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -307,7 +317,7 @@ Unlock worker nodes
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
|
@ -148,7 +148,7 @@ Bootstrap system on controller-0
|
||||
:start-after: docker-reg-begin
|
||||
:end-before: docker-reg-end
|
||||
|
||||
.. code-block::
|
||||
.. code-block:: yaml
|
||||
|
||||
docker_registries:
|
||||
quay.io:
|
||||
@ -187,7 +187,7 @@ Bootstrap system on controller-0
|
||||
:start-after: firewall-begin
|
||||
:end-before: firewall-end
|
||||
|
||||
.. code-block::
|
||||
.. code-block:: bash
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
docker_http_proxy: http://my.proxy.com:1080
|
||||
@ -225,7 +225,7 @@ Configure controller-0
|
||||
Use the |OAM| port name that is applicable to your deployment environment,
|
||||
for example eth0:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
@ -237,7 +237,7 @@ Configure controller-0
|
||||
Use the MGMT port name that is applicable to your deployment environment,
|
||||
for example eth1:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
MGMT_IF=<MGMT-PORT>
|
||||
system host-if-modify controller-0 lo -c none
|
||||
@ -289,7 +289,8 @@ Configure controller-0
|
||||
should be used:
|
||||
|
||||
* Runs directly on the host (it is not containerized).
|
||||
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.
|
||||
* Requires that at least 1 core be assigned/dedicated to the vSwitch
|
||||
function.
|
||||
|
||||
**To deploy the default containerized OVS:**
|
||||
|
||||
@ -310,8 +311,7 @@ Configure controller-0
|
||||
Default recommendation for an |AIO|-controller is to use a single core
|
||||
for |OVS|-|DPDK| vswitch.
|
||||
|
||||
::
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 1 controller-0
|
||||
@ -325,7 +325,7 @@ Configure controller-0
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 controller-0 0
|
||||
@ -340,7 +340,7 @@ Configure controller-0
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment on this host with
|
||||
the commands:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
@ -357,22 +357,27 @@ Configure controller-0
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      export NODE=controller-0
      # Create 'nova-local' local volume group
      system host-lvg-add ${NODE} nova-local

      # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
      # CEPH OSD Disks can NOT be used
      # For best performance, do NOT use system/root disk, use a separate physical disk.

      # List host's disks and take note of UUID of disk to be used
      system host-disk-list ${NODE}
      # ( if using ROOT DISK, select disk with device_path of
      #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
      system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
      sleep 2
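
   To double-check the result before moving on, the standard sysinv listing
   commands can be used (shown here as a sketch; output formats can vary by
   release):

   .. code-block:: bash

      system host-disk-partition-list ${NODE}   # new partition should be listed
      system host-pv-list ${NODE}               # partition should appear as a PV of nova-local
      system host-lvg-list ${NODE}              # nova-local volume group should be present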
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||
@ -385,7 +390,7 @@ Configure controller-0
|
||||
|
||||
* Configure the data interfaces for controller-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
@ -431,7 +436,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for controller-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
@ -473,7 +478,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application controller-0 0 -1G 10
|
||||
@ -521,8 +526,8 @@ For host-based Ceph:
|
||||
system host-stor-list controller-0
|
||||
|
||||
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add controller-0 osd <disk-uuid>
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add controller-0 osd <disk-uuid>
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
@ -652,7 +657,7 @@ Configure controller-1
|
||||
for |OVS|-|DPDK| vswitch. This should have been automatically configured,
|
||||
if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1 core on processor/numa-node 0 on controller-1 to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 1 controller-1
|
||||
@ -662,7 +667,7 @@ Configure controller-1
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 controller-1 0
|
||||
@ -689,21 +694,26 @@ Configure controller-1
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      export NODE=controller-1
      # Create 'nova-local' local volume group
      system host-lvg-add ${NODE} nova-local

      # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
      # CEPH OSD Disks can NOT be used
      # For best performance, do NOT use system/root disk, use a separate physical disk.

      # List host's disks and take note of UUID of disk to be used
      system host-disk-list ${NODE}
      # ( if using ROOT DISK, select disk with device_path of
      #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

      # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
      PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
      system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

      # Add new partition to 'nova-local' local volume group
      system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
      sleep 2
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-1.
|
||||
@ -717,7 +727,7 @@ Configure controller-1
|
||||
|
||||
* Configure the data interfaces for controller-1.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-1
|
||||
|
||||
@ -763,7 +773,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for controller-1.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-1
|
||||
|
||||
@ -805,7 +815,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
|
||||
system host-memory-modify -f application controller-1 0 -1G 10
|
||||
@ -822,7 +832,7 @@ For host-based Ceph:
|
||||
|
||||
#. Add an |OSD| on controller-1 for host-based Ceph:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||
@ -898,7 +908,8 @@ machine.
|
||||
$ system host-disk-wipe -s --confirm controller-1 /dev/sdb
|
||||
|
||||
values.yaml for rook-ceph-apps.
|
||||
::
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
|
@ -333,21 +333,27 @@ The newly installed controller needs to be configured.
|
||||
|
||||
.. code-block:: bash

   export NODE=controller-0
   # Create 'nova-local' local volume group
   system host-lvg-add ${NODE} nova-local

   # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
   # CEPH OSD Disks can NOT be used
   # For best performance, do NOT use system/root disk, use a separate physical disk.

   # List host's disks and take note of UUID of disk to be used
   system host-disk-list ${NODE}
   # ( if using ROOT DISK, select disk with device_path of
   #   'system host-show ${NODE} --nowrap | fgrep rootfs' )

   # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
   PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
   system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

   # Add new partition to 'nova-local' local volume group
   system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
   sleep 2
|
||||
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||
Data class interfaces are vswitch interfaces used by vswitch to provide
|
||||
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
@ -359,7 +365,7 @@ The newly installed controller needs to be configured.
|
||||
|
||||
* Configure the data interfaces for controller-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
@ -406,7 +412,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
|
||||
* Configure the pci-sriov interfaces for controller-0.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
@ -448,7 +454,7 @@ Optionally Configure PCI-SRIOV Interfaces
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application controller-0 0 -1G 10
|
||||
@ -567,14 +573,15 @@ machine.
|
||||
system host-disk-wipe -s --confirm controller-0 /dev/sdb
|
||||
|
||||
values.yaml for rook-ceph-apps.
|
||||
::
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
nodes:
|
||||
- name: controller-0
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
.. code-block:: yaml
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
nodes:
|
||||
- name: controller-0
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
|
||||
::
|
||||
|
||||
|
@ -135,8 +135,9 @@ Bootstrap system on controller-0
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
In either of the above options, the bootstrap playbook’s default values
|
||||
will pull all container images required for the |prod-p| from Docker hub.
|
||||
In either of the above options, the bootstrap playbook’s default
|
||||
values will pull all container images required for the |prod-p| from
|
||||
Docker hub.
|
||||
|
||||
If you have set up a private Docker registry to use for bootstrapping
|
||||
then you will need to add the following lines in $HOME/localhost.yml:
|
||||
@ -147,7 +148,7 @@ Bootstrap system on controller-0
|
||||
:start-after: docker-reg-begin
|
||||
:end-before: docker-reg-end
|
||||
|
||||
.. code-block::
|
||||
.. code-block:: yaml
|
||||
|
||||
docker_registries:
|
||||
quay.io:
|
||||
@ -186,7 +187,7 @@ Bootstrap system on controller-0
|
||||
:start-after: firewall-begin
|
||||
:end-before: firewall-end
|
||||
|
||||
.. code-block::
|
||||
.. code-block:: bash
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
docker_http_proxy: http://my.proxy.com:1080
|
||||
@ -228,7 +229,7 @@ Configure controller-0
|
||||
Use the |OAM| port name that is applicable to your deployment environment,
|
||||
for example eth0:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
@ -240,7 +241,7 @@ Configure controller-0
|
||||
Use the MGMT port name that is applicable to your deployment environment,
|
||||
for example eth1:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
MGMT_IF=<MGMT-PORT>
|
||||
system host-if-modify controller-0 lo -c none
|
||||
@ -432,7 +433,7 @@ Configure controller-1
|
||||
Use the |OAM| port name that is applicable to your deployment environment,
|
||||
for example eth0:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-1 $OAM_IF -c platform
|
||||
@ -523,7 +524,7 @@ Configure worker nodes
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
@ -543,7 +544,7 @@ Configure worker nodes
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the stx-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
@ -559,7 +560,7 @@ Configure worker nodes
|
||||
numa-node for |OVS|-|DPDK| vswitch. This should have been automatically
|
||||
configured, if not run the following command.
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -576,7 +577,7 @@ Configure worker nodes
|
||||
each |NUMA| node where vswitch is running on this host, with the
|
||||
following command:
|
||||
|
||||
::
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
@ -598,7 +599,7 @@ Configure worker nodes

Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:

::
.. code-block:: bash

for NODE in worker-0 worker-1; do

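A hedged sketch of the corresponding huge page allocation for |VMs|, assuming 10 x 1G pages per |NUMA| node; size this for your own workloads:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
     # Allocate 1G huge pages for VM (application) use on both NUMA nodes
     system host-memory-modify -f application -1G 10 $NODE 0
     system host-memory-modify -f application -1G 10 $NODE 1
   done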
@ -613,17 +614,27 @@ Configure worker nodes

#. **For OpenStack only:** Set up the disk partition for the nova-local volume group,
needed for stx-openstack nova ephemeral disks.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
system host-lvg-add ${NODE} nova-local

# Get the UUID of the DISK on which to create the PARTITION to be added to the 'nova-local' local volume group
# Ceph OSD disks can NOT be used
# For best performance, do NOT use the system/root disk; use a separate physical disk.

# List the host's disks and take note of the UUID of the disk to be used
system host-disk-list ${NODE}
# ( if using the ROOT DISK, select the disk whose device_path matches the rootfs shown by
#   'system host-show ${NODE} --nowrap | fgrep rootfs' )

# Create a new PARTITION on the selected disk, and take note of the new partition's 'uuid' in the response
PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

# Add the new partition to the 'nova-local' local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
done

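The new procedure above is intentionally manual: you choose the disk and copy UUIDs by hand. If you prefer to script it against a dedicated, non-root, non-OSD disk, a hedged sketch is shown below; the disk name and the awk parsing of the CLI table output are assumptions and should be checked against your release:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
     echo "Configuring nova-local for: $NODE"
     system host-lvg-add ${NODE} nova-local

     # Pick the disk to use (here /dev/sdb as a placeholder) and extract its UUID
     # from the table output; do NOT pick the root disk or a Ceph OSD disk.
     DISK_UUID=$(system host-disk-list ${NODE} --nowrap | awk '/\/dev\/sdb/ {print $2}')

     # Create a 34G partition for the LVM physical volume and capture its UUID
     PARTITION_SIZE=34
     PARTITION_UUID=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${DISK_UUID} ${PARTITION_SIZE} \
       | awk '/^\| uuid/ {print $4}')

     # Add the new partition to the nova-local volume group
     system host-pv-add ${NODE} nova-local ${PARTITION_UUID}
     sleep 2
   done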
#. **For OpenStack only:** Configure data interfaces for worker nodes.

@ -637,7 +648,7 @@ Configure worker nodes

* Configure the data interfaces for worker nodes.

::
.. code-block:: bash

# Execute the following lines with
export NODE=worker-0
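The data-interface commands are truncated by the diff; a hedged sketch of the usual flow for one node, where the data network name, MTU and interface identifiers are placeholders:

.. code-block:: bash

   # Run once: create the data network that the data interfaces will attach to
   system datanetwork-add physnet0 vlan

   # Execute the following lines with NODE set to each worker in turn
   export NODE=worker-0

   # Identify the port/interface to use as a data interface
   system host-port-list ${NODE} --nowrap
   system host-if-list -a ${NODE} --nowrap

   # Configure it as a data interface and attach it to the data network
   DATA0IFUUID=<data0-interface-uuid>   # placeholder from the listings above
   system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
   system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} physnet0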
@ -680,13 +691,13 @@ Optionally Configure PCI-SRIOV Interfaces

.. only:: openstack

This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.


* Configure the pci-sriov interfaces for worker nodes.

::
.. code-block::

# Execute the following lines with
export NODE=worker-0
@ -723,7 +734,7 @@ Optionally Configure PCI-SRIOV Interfaces

* Configure the Kubernetes |SRIOV| device plugin.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
@ -733,7 +744,7 @@ Optionally Configure PCI-SRIOV Interfaces

containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do

@ -752,7 +763,7 @@ Unlock worker nodes

Unlock worker nodes in order to bring them into service:

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
system host-unlock $NODE
@ -767,7 +778,7 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers

#. Add |OSDs| to controller-0. The following example adds |OSDs| to the `sdb` disk:

::
.. code-block:: bash

HOST=controller-0

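The rest of this snippet is cut off by the diff; it follows the same pattern used for the storage nodes later in this change, roughly:

.. code-block:: bash

   HOST=controller-0

   # List the host's disks, identify the one to use for the Ceph OSD (e.g. /dev/sdb)
   # and note its UUID; the system/root disk cannot be used.
   system host-disk-list ${HOST}

   # Add the disk as an OSD
   system host-stor-add ${HOST} osd <disk-uuid>

   # List OSD storage devices and wait for the newly added OSD to be configured
   system host-stor-list ${HOST}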
@ -783,7 +794,7 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers

#. Add |OSDs| to controller-1. The following example adds |OSDs| to the `sdb` disk:

::
.. code-block:: bash

HOST=controller-1

@ -194,7 +194,7 @@ Configure storage nodes

(Note that the MGMT interfaces are partially set up automatically by the
network install procedure.)

::
.. code-block:: bash

for NODE in storage-0 storage-1; do
system interface-network-assign $NODE mgmt0 cluster-host
@ -202,7 +202,7 @@ Configure storage nodes

#. Add |OSDs| to storage-0.

::
.. code-block:: bash

HOST=storage-0

@ -218,19 +218,19 @@ Configure storage nodes

#. Add |OSDs| to storage-1.

::
.. code-block:: bash

HOST=storage-1

# List the host's disks and identify the disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list ${HOST}

# Add the disk as an OSD
system host-stor-add ${HOST} osd <disk-uuid>

# List OSD storage devices and wait for configuration of the newly added OSD to complete.
system host-stor-list ${HOST}

--------------------
Unlock storage nodes
@ -238,7 +238,7 @@ Unlock storage nodes

Unlock storage nodes in order to bring them into service:

::
.. code-block:: bash

for STORAGE in storage-0 storage-1; do
system host-unlock $STORAGE
@ -259,7 +259,7 @@ Configure worker nodes

Complete the MGMT interface configuration of the worker nodes by specifying
the attached network of "cluster-host".

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
system interface-network-assign $NODE mgmt0 cluster-host
@ -279,7 +279,7 @@ Configure worker nodes

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
@ -295,7 +295,7 @@ Configure worker nodes

numa-node for |OVS|-|DPDK| vswitch. This should have been automatically
configured; if not, run the following command.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do

@ -312,7 +312,7 @@ Configure worker nodes

each |NUMA| node where vswitch is running on this host, with the
following command:

::
.. code-block:: bash

for NODE in worker-0 worker-1; do

@ -334,7 +334,7 @@ Configure worker nodes

Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:

::
.. code-block:: bash

for NODE in worker-0 worker-1; do

@ -349,17 +349,27 @@ Configure worker nodes

#. **For OpenStack only:** Set up the disk partition for the nova-local volume group,
needed for stx-openstack nova ephemeral disks.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
system host-lvg-add ${NODE} nova-local

# Get the UUID of the DISK on which to create the PARTITION to be added to the 'nova-local' local volume group
# Ceph OSD disks can NOT be used
# For best performance, do NOT use the system/root disk; use a separate physical disk.

# List the host's disks and take note of the UUID of the disk to be used
system host-disk-list ${NODE}
# ( if using the ROOT DISK, select the disk whose device_path matches the rootfs shown by
#   'system host-show ${NODE} --nowrap | fgrep rootfs' )

# Create a new PARTITION on the selected disk, and take note of the new partition's 'uuid' in the response
PARTITION_SIZE=34  # Use default of 34G for this nova-local partition
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

# Add the new partition to the 'nova-local' local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
done

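Before unlocking the workers, it can be worth confirming that the partition and physical volume were created as expected; a hedged sketch using the standard listing commands:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
     # Confirm the new partition was created and provisioned
     system host-disk-partition-list ${NODE}
     # Confirm the partition backs the nova-local volume group
     system host-pv-list ${NODE}
     system host-lvg-list ${NODE}
   done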
#. **For OpenStack only:** Configure data interfaces for worker nodes.

@ -373,7 +383,7 @@ Configure worker nodes

* Configure the data interfaces for worker nodes.

::
.. code-block:: bash

# Execute the following lines with
export NODE=worker-0
@ -416,13 +426,13 @@ Optionally Configure PCI-SRIOV Interfaces

.. only:: openstack

This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can
vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.


* Configure the pci-sriov interfaces for worker nodes.

::
.. code-block:: none

# Execute the following lines with
export NODE=worker-0
@ -459,7 +469,7 @@ Optionally Configure PCI-SRIOV Interfaces

* Configure the Kubernetes |SRIOV| device plugin.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
@ -469,7 +479,7 @@ Optionally Configure PCI-SRIOV Interfaces

containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.

::
.. code-block:: bash

for NODE in worker-0 worker-1; do

@ -488,7 +498,7 @@ Unlock worker nodes

Unlock worker nodes in order to bring them into service:

::
.. code-block:: bash

for NODE in worker-0 worker-1; do
system host-unlock $NODE