Merge Virtual and Bare Metal install docs
Incorporate Virtual content in BM AIO-DX install proc using synchronized
tabs.

- Make self-referential include paths relative.
- Move virtual includes to the conventional folder for shared content.
- Convert the link to the root of the Install docs to an external link.
  This is required because the link's source page is used in a context
  where an internal ref is not available.
- Address review comments from patchset 5.
- Integrate the change on AIO-SX.
- Integrate Standard with Storage.
- Integrate Dedicated Storage.
- Share includes to avoid indentation formatting errors (greybars) in
  DS builds.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Ie04c5f8a065b5e2bf87176515bb1131b75a4fcf3
parent 282c525955
commit 287cd4dc39
@@ -1,6 +1,3 @@

.. Greg updates required for -High Security Vulnerability Document Updates

.. _aio_simplex_install_kubernetes_r7:

=================================================
@@ -32,7 +29,86 @@ Overview

Minimum hardware requirements
-----------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
            :start-after: begin-min-hw-reqs-sx
            :end-before: end-min-hw-reqs-sx

      .. group-tab:: Virtual

         The following sections describe system requirements and host setup
         for a workstation hosting virtual machine(s) where StarlingX will be
         deployed; i.e., a |VM| for each StarlingX node (controller,
         AIO-controller, worker or storage node).

         .. rubric:: **Hardware requirements**

         The host system should have at least:

         * **Processor:** x86_64 is the only supported architecture, with
           hardware virtualization extensions enabled in the BIOS

         * **Cores:** 8

         * **Memory:** 32 GB RAM

         * **Hard Disk:** 500 GB HDD

         * **Network:** One network adapter with an active Internet connection

         .. rubric:: **Software requirements**

         The host system should have at least:

         * A workstation computer with Ubuntu 16.04 LTS 64-bit

         All other required packages will be installed by scripts in the
         StarlingX tools repository.

         .. rubric:: **Host setup**

         Set up the host with the following steps:

         #. Update the OS:

            ::

               apt-get update

         #. Clone the StarlingX tools repository:

            ::

               apt-get install -y git
               cd $HOME
               git clone https://opendev.org/starlingx/virtual-deployment.git

         #. Install the required packages:

            ::

               cd $HOME/virtual-deployment/libvirt
               bash install_packages.sh
               apt install -y apparmor-profiles
               apt-get install -y ufw
               ufw disable
               ufw status

            .. note::

               On Ubuntu 16.04, if the apparmor-profiles modules were
               installed as shown in the example above, you must reboot the
               server to fully install them.
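
         As a quick sanity check (not part of the setup scripts above), you
         can confirm that the host CPU exposes hardware virtualization
         extensions, for example with the standard ``cpu-checker`` package:

         ::

            apt-get install -y cpu-checker
            # Expect a message similar to "KVM acceleration can be used"
            kvm-ok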
.. only:: partner

   .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
      :start-after: begin-min-hw-reqs-sx
      :end-before: end-min-hw-reqs-sx
@@ -40,7 +116,32 @@ Minimum hardware requirements

Installation Prerequisites
--------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/installation-prereqs.rest
            :start-after: begin-install-prereqs
            :end-before: end-install-prereqs

      .. group-tab:: Virtual

         Several prerequisites must be completed prior to starting the |prod|
         installation.

         Before attempting to install |prod|, ensure that you have the
         |prod| host installer ISO image file.

         Get the latest |prod| ISO from the `StarlingX mirror
         <https://mirror.starlingx.cengn.ca/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/>`__.
         Alternately, you can get an older release ISO from `here
         <https://mirror.starlingx.cengn.ca/mirror/starlingx/release/>`__.
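
         For example, assuming ``<iso-file-name>`` is the file name listed on
         the mirror page (the exact name varies by release), you can download
         the image and save it under the ``bootimage.iso`` name used by the
         setup scripts later in this procedure:

         ::

            wget https://mirror.starlingx.cengn.ca/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/<iso-file-name>
            mv <iso-file-name> bootimage.iso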
.. only:: partner

   .. include:: /shared/_includes/installation-prereqs.rest
      :start-after: begin-install-prereqs
      :end-before: end-install-prereqs
@@ -48,7 +149,59 @@ Installation Prerequisites

Prepare Servers for Installation
--------------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
            :start-after: start-prepare-servers-common
            :end-before: end-prepare-servers-common

      .. group-tab:: Virtual

         .. note::

            The following commands for host setup, virtual environment setup,
            and host power-on use KVM / virsh as the virtual machine and |VM|
            management technology. For an alternative virtualization
            environment, see: :ref:`Install StarlingX in VirtualBox
            <install_virtualbox>`.

         #. Prepare the virtual environment.

            Set up the virtual platform networks for virtual deployment:

            ::

               bash setup_network.sh

         #. Prepare the virtual servers.

            Create the XML definitions for the virtual servers required by
            this configuration option. This creates the XML virtual server
            definition for:

            * simplex-controller-0

            The following command will start, or virtually power on:

            * The 'simplex-controller-0' virtual server
            * The X-based graphical virt-manager application

            ::

               bash setup_configuration.sh -c simplex -i ./bootimage.iso

            If there is no X-server present, errors will occur and the
            X-based GUI for the virt-manager application will not start. The
            virt-manager GUI is not absolutely required, so you can safely
            ignore these errors and continue.
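
         You can confirm that the virtual server was created and powered on
         using a standard virsh query, for example:

         ::

            virsh list --all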
.. only:: partner

   .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
      :start-after: start-prepare-servers-common
      :end-before: end-prepare-servers-common
@@ -56,7 +209,48 @@ Prepare Servers for Installation

Install Software on Controller-0
--------------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/inc-install-software-on-controller.rest
            :start-after: incl-install-software-controller-0-aio-start
            :end-before: incl-install-software-controller-0-aio-end

      .. group-tab:: Virtual

         In the last step of :ref:`aio_simplex_environ`, the controller-0
         virtual server 'simplex-controller-0' was started by the
         :command:`setup_configuration.sh` command.

         On the host, attach to the console of virtual controller-0 and
         select the appropriate installer menu options to start the
         non-interactive install of StarlingX software on controller-0.

         .. note::

            When entering the console, it is very easy to miss the first
            installer menu selection. Use ESC to navigate to previous menus
            and ensure you are at the first installer menu.

         ::

            virsh console simplex-controller-0

         Make the following menu selections in the installer:

         #. First menu: Select 'All-in-one Controller Configuration'
         #. Second menu: Select 'Serial Console'

         Wait for the non-interactive install of software to complete and for
         the server to reboot. This can take 5-10 minutes, depending on the
         performance of the host machine.
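
         While waiting, you can check the state of the virtual server from a
         separate shell on the host, for example:

         ::

            virsh domstate simplex-controller-0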
.. only:: partner

   .. include:: /shared/_includes/inc-install-software-on-controller.rest
      :start-after: incl-install-software-controller-0-aio-start
      :end-before: incl-install-software-controller-0-aio-end
@@ -79,22 +273,25 @@ Bootstrap system on controller-0

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook.
   The StarlingX boot image will |DHCP| out all interfaces, so the server may
   have obtained an IP address and have external IP connectivity if a |DHCP|
   server is present in your environment. Verify this using the
   :command:`ip addr` and :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment:

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-aio-sx-install-verify-ip-connectivity
               :end-before: end-aio-sx-install-verify-ip-connectivity

         .. group-tab:: Virtual

            Not applicable.

   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-aio-sx-install-verify-ip-connectivity
         :end-before: end-aio-sx-install-verify-ip-connectivity

#. Specify user configuration overrides for the Ansible bootstrap playbook.
@@ -115,6 +312,7 @@ Bootstrap system on controller-0

   configuration override files for hosts. For example:
   ``$HOME/<hostname>.yml``.

   .. only:: starlingx

      .. include:: /shared/_includes/ansible_install_time_only.txt
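
   As an illustrative sketch only, a minimal ``$HOME/localhost.yml`` override
   file for a simplex system might look like the following; the keys shown
   are common bootstrap overrides, and all values are placeholders for your
   environment:

   .. code-block:: yaml

      system_mode: simplex
      dns_servers:
        - 8.8.8.8
      external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
      external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
      external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>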
@@ -152,7 +350,7 @@ Bootstrap system on controller-0

   .. only:: starlingx

      In either of the above options, the bootstrap playbook's default
      values will pull all container images required for the |prod-p| from
      Docker hub.
@@ -220,9 +418,9 @@ Bootstrap system on controller-0

         - 1.2.3.4

   Refer to :ref:`Ansible Bootstrap Configurations
   <ansible_bootstrap_configs_r7>` for information on additional Ansible
   bootstrap configurations for advanced Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:
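
   For example, to run the default bootstrap playbook locally on
   controller-0 (the playbook path used throughout the StarlingX
   documentation):

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml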
@@ -248,27 +446,71 @@ The newly installed controller needs to be configured.

   source /etc/platform/openrc

#. Configure the |OAM| interface of controller-0 and specify the attached
   network as "oam".

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-config-controller-0-oam-interface-sx
               :end-before: end-config-controller-0-oam-interface-sx

         .. group-tab:: Virtual

            ::

               OAM_IF=enp7s1
               system host-if-modify controller-0 $OAM_IF -c platform
               system interface-network-assign controller-0 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-config-controller-0-oam-interface-sx
         :end-before: end-config-controller-0-oam-interface-sx
#. Configure |NTP| servers for network time synchronization:

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-config-controller-0-ntp-interface-sx
               :end-before: end-config-controller-0-ntp-interface-sx

            To configure |PTP| instead of |NTP|, see :ref:`PTP Server
            Configuration <ptp-server-config-index>`.

         .. group-tab:: Virtual

            .. note::

               In a virtual environment, this can sometimes cause Ceph clock
               skew alarms. Also, the virtual instance's clock is synchronized
               with the host clock, so it is not absolutely required to
               configure |NTP| in this step.

            .. code-block:: none

               ~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-config-controller-0-ntp-interface-sx
         :end-before: end-config-controller-0-ntp-interface-sx

.. only:: openstack
@@ -310,15 +552,32 @@ The newly installed controller needs to be configured.

      :end-before: ref1-end

#. **For OpenStack only:** Due to the additional OpenStack services running
   on the |AIO| controller platform cores, additional platform cores may be
   required.

   Increase the number of platform cores with the following commands:

   .. code-block:: bash

      # Assign 6 cores on processor/numa-node 0 on controller-0 to platform
      ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-config-controller-0-OS-add-cores-sx
               :end-before: end-config-controller-0-OS-add-cores-sx

         .. group-tab:: Virtual

            The |VMs| being used for hosts only have 4 cores; 2 for platform
            and 2 for |VMs|. There are no additional cores available for
            platform in this scenario.
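
            As a quick check, you can list how the cores on the virtual
            controller are currently assigned, for example:

            .. code-block:: none

               ~(keystone_admin)$ system host-cpu-list controller-0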
   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-config-controller-0-OS-add-cores-sx
         :end-before: end-config-controller-0-OS-add-cores-sx
#. Due to the additional OpenStack services' containers running on the
   controller host, the size of the Docker filesystem needs to be

@@ -354,18 +613,23 @@ The newly installed controller needs to be configured.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            StarlingX has |OVS| (kernel-based) vSwitch configured as default:

            * Runs in a container; defined within the helm charts of the
              |prefix|-openstack manifest.

            * Shares the core(s) assigned to the platform.

            If you require better performance, |OVS-DPDK| (|OVS| with the
            Data Plane Development Kit, which is supported only on bare
            metal hardware) should be used:

            * Runs directly on the host (it is not containerized). Requires
              that at least 1 core be assigned/dedicated to the vSwitch
              function.

            To deploy the default containerized |OVS|:
@@ -374,104 +638,62 @@ The newly installed controller needs to be configured.

            ::

               ~(keystone_admin)$ system modify --vswitch_type none

            This does not run any vSwitch directly on the host; instead, it
            uses the containerized |OVS| defined in the helm charts of the
            |prefix|-openstack manifest.

            To deploy |OVS-DPDK|, run the following command:

            .. parsed-literal::

               ~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|

            The default recommendation for an |AIO|-controller is to use a
            single core for the |OVS-DPDK| vSwitch:

            .. code-block:: bash

               # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
               ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0

            When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
            memory on each |NUMA| node on the host. It is recommended to
            configure 1x 1G huge page (-1G 1) for vSwitch memory on each
            |NUMA| node on the host.

            However, due to a limitation with Kubernetes, only a single huge
            page size is supported on any one host. If your application |VMs|
            require 2M huge pages, then configure 500x 2M huge pages (-2M 500)
            for vSwitch memory on each |NUMA| node on the host.

            .. code-block::

               # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
               ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0

               # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
               ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1

            .. important::

               |VMs| created in an |OVS-DPDK| environment must be configured
               to use huge pages to enable networking, and must use a flavor
               with the property: hw:mem_page_size=large

               To configure huge pages for |VMs| in an |OVS-DPDK| environment
               on this host, the following commands are an example that
               assumes that a 1G huge page size is being used:

               .. code-block:: bash

                  # assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
                  ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0

                  # assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
                  ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1

            .. note::

               After controller-0 is unlocked, changing vswitch_type requires
               locking and unlocking controller-0 to apply the change.

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-config-controller-0-OS-vswitch-sx
               :end-before: end-config-controller-0-OS-vswitch-sx

         .. group-tab:: Virtual

            The default vSwitch is the containerized |OVS| that is packaged
            with the ``stx-openstack`` manifest/helm-charts. |prod| provides
            the option to use |OVS-DPDK| on the host; however, in the virtual
            environment |OVS-DPDK| is not supported, only |OVS| is supported.
            Therefore, simply use the default |OVS| vSwitch here.

   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-config-controller-0-OS-vswitch-sx
         :end-before: end-config-controller-0-OS-vswitch-sx

#. **For OpenStack only:** Add an instances filesystem or set up a disk-based
   nova-local volume group, which is needed for |prefix|-openstack nova
   ephemeral disks.
   .. note::

      Both cannot exist at the same time.

   Add an 'instances' filesystem:

   .. code-block:: bash

      export NODE=controller-0

      # Create 'instances' filesystem
      system host-fs-add ${NODE} instances=<size>

   OR add a 'nova-local' volume group:

   .. code-block:: bash

      export NODE=controller-0

      # Create 'nova-local' local volume group
      system host-lvg-add ${NODE} nova-local

      # Get UUID of an unused DISK to be added to the 'nova-local' volume
      # group. CEPH OSD Disks can NOT be used.

      # List host's disks and take note of UUID of disk to be used
      system host-disk-list ${NODE}

      # Add the unused disk to the 'nova-local' volume group
      system host-pv-add ${NODE} nova-local <DISK_UUID>

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-config-controller-0-OS-add-fs-sx
               :end-before: end-config-controller-0-OS-add-fs-sx

         .. group-tab:: Virtual

            Set up an 'instances' filesystem, which is needed for
            stx-openstack nova ephemeral disks:

            .. code-block:: bash

               ~(keystone_admin)$ export NODE=controller-0
               ~(keystone_admin)$ system host-fs-add ${NODE} instances=34

   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-config-controller-0-OS-add-fs-sx
         :end-before: end-config-controller-0-OS-add-fs-sx
#. **For OpenStack only:** Configure data interfaces for controller-0.
   Data class interfaces are vSwitch interfaces used by vSwitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::
@@ -481,34 +703,52 @@ The newly installed controller needs to be configured.

   * Configure the data interfaces for controller-0.

     .. only:: starlingx

        .. tabs::

           .. group-tab:: Bare Metal

              .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
                 :start-after: begin-config-controller-0-OS-data-interface-sx
                 :end-before: end-config-controller-0-OS-data-interface-sx

           .. group-tab:: Virtual

              .. code-block:: bash

                 ~(keystone_admin)$ NODE=controller-0
                 ~(keystone_admin)$ DATA0IF=eth1000
                 ~(keystone_admin)$ DATA1IF=eth1001
                 ~(keystone_admin)$ PHYSNET0='physnet0'
                 ~(keystone_admin)$ PHYSNET1='physnet1'
                 ~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
                 ~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
                 ~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
                 ~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
                 ~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
                 ~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
                 ~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
                 ~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
                 ~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
                 ~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
                 ~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
                 ~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
                 ~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
                 ~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan
                 ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
                 ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
                 ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
                 ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

     .. only:: partner

        .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
           :start-after: begin-config-controller-0-OS-data-interface-sx
           :end-before: end-config-controller-0-OS-data-interface-sx
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************

@@ -526,7 +766,7 @@ Optionally Configure PCI-SRIOV Interfaces

   have the same Data Networks assigned to them as vswitch data interfaces.


#. Configure the |PCI|-SRIOV interfaces for controller-0.

   .. code-block:: bash
@@ -558,26 +798,32 @@ Optionally Configure PCI-SRIOV Interfaces

      ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}


#. **For Kubernetes Only:** To enable using |SRIOV| network attachments for
   the above interfaces in Kubernetes hosted application containers:

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
               :start-after: begin-config-controller-0-OS-k8s-sriov-sx
               :end-before: end-config-controller-0-OS-k8s-sriov-sx

         .. group-tab:: Virtual

            Configure the Kubernetes |SRIOV| device plugin:

            .. code-block:: none

               ~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled

   * If you are planning on running |DPDK| in Kubernetes hosted application
     containers on this host, configure the number of 1G huge pages required
     on both |NUMA| nodes:

     .. code-block:: bash

        # assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
        ~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10

        # assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
        ~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10

   .. only:: partner

      .. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
         :start-after: begin-config-controller-0-OS-k8s-sriov-sx
         :end-before: end-config-controller-0-OS-k8s-sriov-sx


***************************************************************
@@ -27,7 +27,23 @@ Overview

Minimum hardware requirements
-----------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
            :start-after: begin-min-hw-reqs-ded
            :end-before: end-min-hw-reqs-ded

      .. group-tab:: Virtual

         .. include:: /shared/_includes/physical_host_req.txt

.. only:: partner

   .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
      :start-after: begin-min-hw-reqs-ded
      :end-before: end-min-hw-reqs-ded
@@ -37,16 +53,101 @@ Minimum hardware requirements

Installation Prerequisites
--------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/installation-prereqs.rest
            :start-after: begin-install-prereqs
            :end-before: end-install-prereqs

      .. group-tab:: Virtual

         Several prerequisites must be completed prior to starting the |prod|
         installation.

         Before attempting to install |prod|, ensure that you have the
         |prod| host installer ISO image file.

         Get the latest |prod| ISO from the `StarlingX mirror
         <https://mirror.starlingx.cengn.ca/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/>`__.
         Alternately, you can get an older release ISO from `here
         <https://mirror.starlingx.cengn.ca/mirror/starlingx/release/>`__.
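
         If the mirror publishes a checksum alongside the ISO, you can verify
         the downloaded image before using it, for example:

         ::

            sha256sum bootimage.iso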
.. only:: partner

   .. include:: /shared/_includes/installation-prereqs.rest
      :start-after: begin-install-prereqs
      :end-before: end-install-prereqs

.. _dedicated-install-prep-servers:
--------------------------------
Prepare Servers for Installation
--------------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
            :start-after: start-prepare-servers-common
            :end-before: end-prepare-servers-common

      .. group-tab:: Virtual

         .. note::

            The following commands for host setup, virtual environment setup,
            and host power-on use KVM / virsh as the virtual machine and |VM|
            management technology. For an alternative virtualization
            environment, see: :ref:`Install StarlingX in VirtualBox
            <install_virtualbox>`.

         #. Prepare the virtual environment.

            Set up virtual platform networks for virtual deployment:

            ::

               bash setup_network.sh

         #. Prepare the virtual servers.

            Create the XML definitions for the virtual servers required by
            this configuration option. This creates the XML virtual server
            definitions for:

            * dedicatedstorage-controller-0
            * dedicatedstorage-controller-1
            * dedicatedstorage-storage-0
            * dedicatedstorage-storage-1
            * dedicatedstorage-worker-0
            * dedicatedstorage-worker-1

            The following command will start, or virtually power on:

            * The 'dedicatedstorage-controller-0' virtual server
            * The X-based graphical virt-manager application

            ::

               bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso

            If there is no X-server present, errors will occur and the
            X-based GUI for the virt-manager application will not start. The
            virt-manager GUI is not absolutely required, so you can safely
            ignore these errors and continue.
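
         You can confirm that the virtual platform networks created by
         :command:`setup_network.sh` are present, for example:

         ::

            virsh net-list --all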
.. only:: partner

   .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
      :start-after: start-prepare-servers-common
      :end-before: end-prepare-servers-common
@@ -54,7 +155,48 @@ Prepare Servers for Installation

Install Software on Controller-0
--------------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/inc-install-software-on-controller.rest
            :start-after: incl-install-software-controller-0-standard-start
            :end-before: incl-install-software-controller-0-standard-end

      .. group-tab:: Virtual

         In the last step of :ref:`dedicated-install-prep-servers`, the
         controller-0 virtual server 'dedicatedstorage-controller-0' was
         started by the :command:`setup_configuration.sh` command.

         On the host, attach to the console of virtual controller-0 and
         select the appropriate installer menu options to start the
         non-interactive install of StarlingX software on controller-0.

         .. note::

            When entering the console, it is very easy to miss the first
            installer menu selection. Use ESC to navigate to previous menus
            and ensure you are at the first installer menu.

         ::

            virsh console dedicatedstorage-controller-0

         Make the following menu selections in the installer:

         #. First menu: Select 'Standard Controller Configuration'
         #. Second menu: Select 'Serial Console'

         Wait for the non-interactive install of software to complete and for
         the server to reboot. This can take 5-10 minutes, depending on the
         performance of the host machine.

.. only:: partner

   .. include:: /shared/_includes/inc-install-software-on-controller.rest
      :start-after: incl-install-software-controller-0-standard-start
      :end-before: incl-install-software-controller-0-standard-end
@@ -62,17 +204,54 @@ Install Software on Controller-0

Bootstrap system on controller-0
--------------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/incl-bootstrap-sys-controller-0-standard.rest
            :start-after: incl-bootstrap-sys-controller-0-standard-start
            :end-before: incl-bootstrap-sys-controller-0-standard-end

      .. group-tab:: Virtual

         .. include:: /shared/_includes/incl-bootstrap-controller-0-virt-controller-storage-start.rest
            :start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
            :end-before: incl-bootstrap-controller-0-virt-controller-storage-end:

.. only:: partner

   .. include:: /shared/_includes/incl-bootstrap-sys-controller-0-standard.rest
      :start-after: incl-bootstrap-sys-controller-0-standard-start
      :end-before: incl-bootstrap-sys-controller-0-standard-end
----------------------
Configure controller-0
----------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/incl-config-controller-0-storage.rest
            :start-after: incl-config-controller-0-storage-start
            :end-before: incl-config-controller-0-storage-end

      .. group-tab:: Virtual

         .. include:: /shared/_includes/incl-config-controller-0-virt-controller-storage.rest
            :start-after: incl-config-controller-0-virt-controller-storage-start:
            :end-before: incl-config-controller-0-virt-controller-storage-end:

.. only:: partner

   .. include:: /shared/_includes/incl-config-controller-0-storage.rest
      :start-after: incl-config-controller-0-storage-start
      :end-before: incl-config-controller-0-storage-end
-------------------
Unlock controller-0

@@ -98,14 +277,37 @@ machine.

Install software on controller-1, storage nodes, and worker nodes
-----------------------------------------------------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
            :start-after: begin-install-sw-cont-1-stor-and-wkr-nodes
            :end-before: end-install-sw-cont-1-stor-and-wkr-nodes

      .. group-tab:: Virtual

         #. On the host, power on the controller-1 virtual server,
            'dedicatedstorage-controller-1'. It will automatically attempt to
            network boot over the management network:

            ::

               virsh start dedicatedstorage-controller-1

         #. Attach to the console of virtual controller-1:

            ::

               virsh console dedicatedstorage-controller-1

         #. As the controller-1 |VM| boots, a message appears on its console
            instructing you to configure the personality of the node.

         #. On the console of controller-0, list hosts to see the newly
            discovered controller-1 host (hostname=None):

            ::
@@ -117,54 +319,82 @@ Install software on controller-1, storage nodes, and worker nodes

               | 2  | None         | None        | locked         | disabled    | offline      |
               +----+--------------+-------------+----------------+-------------+--------------+

         #. Using the host id, set the personality of this host to
            'controller':

            ::

               system host-update 2 personality=controller

            This initiates software installation on controller-1. This can
            take 5-10 minutes, depending on the performance of the host
            machine.

         #. While waiting on the previous step to complete, start up and set
            the personality for 'dedicatedstorage-storage-0' and
            'dedicatedstorage-storage-1'. Set the personality to 'storage'
            and assign a unique hostname for each.

            For example, start 'dedicatedstorage-storage-0' from the host:

            ::

               virsh start dedicatedstorage-storage-0

            Wait for the new host (hostname=None) to be discovered by
            checking ``system host-list`` on virtual controller-0, then set
            its personality:

            ::

               system host-update 3 personality=storage

            Repeat for 'dedicatedstorage-storage-1'. On the host:

            ::

               virsh start dedicatedstorage-storage-1

            And wait for the new host (hostname=None) to be discovered by
            checking ``system host-list`` on virtual controller-0:

            ::

               system host-update 4 personality=storage

            This initiates software installation on storage-0 and storage-1.
            This can take 5-10 minutes, depending on the performance of the
            host machine.

         #. While waiting on the previous step to complete, start up and set
            the personality for 'dedicatedstorage-worker-0' and
            'dedicatedstorage-worker-1'. Set the personality to 'worker' and
            assign a unique hostname for each.

            For example, start 'dedicatedstorage-worker-0' from the host:

            ::

               virsh start dedicatedstorage-worker-0

            Wait for the new host (hostname=None) to be discovered by
            checking ``system host-list`` on virtual controller-0, then set
            its personality and hostname:

            ::

               system host-update 5 personality=worker hostname=worker-0

            Repeat for 'dedicatedstorage-worker-1'. On the host:

            ::

               virsh start dedicatedstorage-worker-1

            And wait for the new host (hostname=None) to be discovered by
            checking ``system host-list`` on virtual controller-0:

            ::

               system host-update 6 personality=worker hostname=worker-1

            This initiates software installation on worker-0 and worker-1.

         .. only:: starlingx

@@ -173,9 +403,10 @@ Install software on controller-1, storage nodes, and worker nodes

            A node with Edgeworker personality is also available. See
            :ref:`deploy-edgeworker-nodes` for details.

         #. Wait for the software installation on controller-1, storage-0,
            storage-1, worker-0, and worker-1 to complete, for all virtual
            servers to reboot, and for all to show as locked/disabled/online
            in 'system host-list':

            ::

@@ -191,11 +422,35 @@ Install software on controller-1, storage nodes, and worker nodes

               | 6  | worker-1     | worker      | locked         | disabled    | online       |
               +----+--------------+-------------+----------------+-------------+--------------+
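
         While the installs proceed, you can monitor progress from virtual
         controller-0, for example:

         ::

            watch -n 5 system host-list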
.. only:: partner

   .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
      :start-after: begin-install-sw-cont-1-stor-and-wkr-nodes
      :end-before: end-install-sw-cont-1-stor-and-wkr-nodes
----------------------
Configure controller-1
----------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/incl-config-controller-1.rest
            :start-after: incl-config-controller-1-start:
            :end-before: incl-config-controller-1-end:

      .. group-tab:: Virtual

         .. include:: /shared/_includes/incl-config-controller-1-virt-controller-storage.rest
            :start-after: incl-config-controller-1-virt-controller-storage-start
            :end-before: incl-config-controller-1-virt-controller-storage-end

.. only:: partner

   .. include:: /shared/_includes/incl-config-controller-1.rest
      :start-after: incl-config-controller-1-start:
      :end-before: incl-config-controller-1-end:
@@ -203,7 +458,25 @@ Configure controller-1

Unlock controller-1
-------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: controller_storage_install_kubernetes.rst
            :start-after: incl-unlock-controller-1-start:
            :end-before: incl-unlock-controller-1-end:

      .. group-tab:: Virtual

         .. include:: controller_storage_install_kubernetes.rst
            :start-after: incl-unlock-controller-1-virt-controller-storage-start:
            :end-before: incl-unlock-controller-1-virt-controller-storage-end:

.. only:: partner

   .. include:: controller_storage_install_kubernetes.rst
      :start-after: incl-unlock-controller-1-start:
      :end-before: incl-unlock-controller-1-end:
@@ -213,48 +486,66 @@ Unlock controller-1

Configure storage nodes
-----------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
            :start-after: begin-dedicated-config-storage-nodes
            :end-before: end-dedicated-config-storage-nodes

      .. group-tab:: Virtual

         On virtual controller-0:

         #. Assign the cluster-host network to the MGMT interface for the
            storage nodes.

            Note that the MGMT interfaces are partially set up by the network
            install procedure.

            ::

               for NODE in storage-0 storage-1; do
                  system interface-network-assign $NODE mgmt0 cluster-host
               done

         #. Add |OSDs| to storage-0:

            ::

               HOST=storage-0
               DISKS=$(system host-disk-list ${HOST})
               TIERS=$(system storage-tier-list ceph_cluster)
               OSDs="/dev/sdb"
               for OSD in $OSDs; do
                  system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
               done

         #. Add |OSDs| to storage-1:

            ::

               HOST=storage-1
               DISKS=$(system host-disk-list ${HOST})
               TIERS=$(system storage-tier-list ceph_cluster)
               OSDs="/dev/sdb"
               for OSD in $OSDs; do
                  system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
               done

.. only:: partner

   .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
      :start-after: begin-dedicated-config-storage-nodes
      :end-before: end-dedicated-config-storage-nodes
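
You can optionally confirm that the newly added |OSDs| have been configured
and that the Ceph cluster reports them, for example by running the standard
Ceph status query on controller-0:

::

   ceph -s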
--------------------
Unlock storage nodes

@@ -276,32 +567,92 @@ host machine.

Configure worker nodes
----------------------

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
            :start-after: begin-dedicated-stor-config-workers
            :end-before: end-dedicated-stor-config-workers

      .. group-tab:: Virtual

         On virtual controller-0:

         #. Assign the cluster-host network to the MGMT interface for the
            worker nodes.

            Note that the MGMT interfaces are partially set up automatically
            by the network install procedure.

            ::

               for NODE in worker-0 worker-1; do
                  system interface-network-assign $NODE mgmt0 cluster-host
               done

         #. Configure data interfaces for worker nodes. Data class interfaces
            are vSwitch interfaces used by vSwitch to provide |VM| virtio
            vNIC connectivity to OpenStack Neutron Tenant Networks on the
            underlying assigned Data Network.

            .. important::

               This step is required only if the StarlingX OpenStack
               application (|prefix|-openstack) will be installed.

               1G Huge Pages are not supported in the virtual environment and
               there is no virtual NIC supporting |SRIOV|. For that reason,
               data interfaces are not applicable in the virtual environment
               for the Kubernetes-only scenario.

               A compute-labeled worker host **MUST** have at least one Data
               class interface.

            For OpenStack only:

            ::

               DATA0IF=eth1000
               DATA1IF=eth1001
               PHYSNET0='physnet0'
               PHYSNET1='physnet1'
               SPL=/tmp/tmp-system-port-list
               SPIL=/tmp/tmp-system-host-if-list

            Configure the datanetworks in sysinv, prior to referencing them
            in the :command:`system host-if-modify` command.

            ::

               system datanetwork-add ${PHYSNET0} vlan
               system datanetwork-add ${PHYSNET1} vlan

               for NODE in worker-0 worker-1; do
                 echo "Configuring interface for: $NODE"
                 set -ex
                 system host-port-list ${NODE} --nowrap > ${SPL}
                 system host-if-list -a ${NODE} --nowrap > ${SPIL}
                 DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
                 DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
                 DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
                 DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
                 DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
                 DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
                 DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
                 DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
                 system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
                 system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
                 system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
                 system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
                 set +ex
               done

         .. rubric:: OpenStack-specific host configuration

         .. important::

            This step is required only if the StarlingX OpenStack application
            (|prefix|-openstack) will be installed.

         #. **For OpenStack only:** Assign OpenStack host labels to the
            worker nodes in support of installing the |prefix|-openstack
            manifest/helm-charts later:

            .. parsed-literal::

@@ -312,235 +663,72 @@ Configure worker nodes

                  system host-label-assign $NODE sriov=enabled
               done

#. **For OpenStack only:** Set up an 'instances' filesystem, which is
   needed for |prefix|-openstack nova ephemeral disks.

   ::

      for NODE in worker-0 worker-1; do
         echo "Configuring 'instances' for Nova ephemeral storage: $NODE"
         system host-fs-add ${NODE} instances=10
      done

#. **For OpenStack only:** Add an 'instances' filesystem OR set up a
   disk-based 'nova-local' volume group, which is needed for
   |prefix|-openstack nova ephemeral disks. Note: both cannot exist at
   the same time.

   Add an 'instances' filesystem:

   .. code-block:: bash

      # Create ‘instances’ filesystem
      for NODE in worker-0 worker-1; do
         system host-fs-add ${NODE} instances=<size>
      done

   OR add a 'nova-local' volume group:

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         # Create ‘nova-local’ local volume group
         system host-lvg-add ${NODE} nova-local

         # Get UUID of an unused DISK to be added to the ‘nova-local’ volume
         # group. CEPH OSD Disks can NOT be used. Assume /dev/sdb is unused
         # on all workers
         DISK_UUID=$(system host-disk-list ${NODE} | awk '/sdb/{print $2}')

         # Add the unused disk to the ‘nova-local’ volume group
         system host-pv-add ${NODE} nova-local ${DISK_UUID}
      done

#. **For OpenStack only:** Configure data interfaces for worker nodes.
   Data class interfaces are vswitch interfaces used by vswitch to provide
   |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
   underlying assigned Data Network.

   .. important::

      A compute-labeled worker host **MUST** have at least one Data class
      interface.

   * Configure the data interfaces for worker nodes.

     .. code-block:: bash

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
        system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
        system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

        # Create Data Networks that vswitch 'data' interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to Data Interfaces
        system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

   .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
      :start-after: begin-dedicated-stor-config-workers
      :end-before: end-dedicated-stor-config-workers

*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************

#. **Optionally**, configure pci-sriov interfaces for worker nodes.

   This step is **optional** for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: openstack

      This step is **optional** for OpenStack. Do this step if using |SRIOV|
      vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
      have the same Data Networks assigned to them as vswitch data interfaces.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
               :start-after: begin-dedicated-conf-pci-sriov-interfaces
               :end-before: end-dedicated-conf-pci-sriov-interfaces

         .. group-tab:: Virtual

            Not applicable.

   .. code-block:: bash

      # Execute the following lines with
      export NODE=worker-0
      # and then repeat with
      export NODE=worker-1

      # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
      # based on displayed linux port name, pci address and device type.
      system host-port-list ${NODE}

      # List host’s auto-configured ‘ethernet’ interfaces,
      # find the interfaces corresponding to the ports identified in previous step, and
      # take note of their UUID
      system host-if-list -a ${NODE}

      # Modify configuration for these interfaces
      # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
      system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
      system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>

      # If not created already, create Data Networks that the 'pci-sriov'
      # interfaces will be connected to
      DATANET0='datanet0'
      DATANET1='datanet1'
      system datanetwork-add ${DATANET0} vlan
      system datanetwork-add ${DATANET1} vlan

      # Assign Data Networks to PCI-SRIOV Interfaces
      system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
      system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes only:** To enable using |SRIOV| network attachments for
  the above interfaces in Kubernetes hosted application containers:

  * Configure the Kubernetes |SRIOV| device plugin.

    .. code-block:: bash

       for NODE in worker-0 worker-1; do
          system host-label-assign $NODE sriovdp=enabled
       done

  * If planning on running |DPDK| in Kubernetes hosted application
    containers on this host, configure the number of 1G Huge pages required
    on both |NUMA| nodes.

    .. code-block:: bash

       for NODE in worker-0 worker-1; do
          # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
          system host-memory-modify -f application $NODE 0 -1G 10

          # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
          system host-memory-modify -f application $NODE 1 -1G 10
       done

.. only:: partner

   .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
      :start-after: begin-dedicated-conf-pci-sriov-interfaces
      :end-before: end-dedicated-conf-pci-sriov-interfaces

-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      system host-unlock $NODE
   done

.. only:: starlingx

   .. tabs::

      .. group-tab:: Bare Metal

         The worker nodes will reboot in order to apply configuration changes
         and come into service. This can take 5-10 minutes, depending on the
         performance of the host machine.
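
         While waiting, the node status can be polled from an authenticated
         shell. A minimal sketch (assumes the keystone_admin credentials are
         sourced; repeat, or wrap with ``watch``, until both workers report
         unlocked/enabled/available):

         .. code-block:: none

            ~(keystone_admin)$ watch -n 30 system host-list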

         .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
            :start-after: begin-dedicated-unlock-workers
            :end-before: end-dedicated-unlock-workers

      .. group-tab:: Virtual

         .. include:: controller_storage_install_kubernetes.rst
            :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
            :end-before: incl-unlock-compute-nodes-virt-controller-storage-end:

.. only:: partner

   .. include:: /shared/_includes/dedicated_storage_install_kubernetes.rest
      :start-after: begin-dedicated-unlock-workers
      :end-before: end-dedicated-unlock-workers

.. only:: starlingx

@ -1,3 +1,5 @@

.. _aio_duplex_environ:

============================
Prepare Host and Environment
============================

@ -13,7 +15,7 @@ for a StarlingX |this-ver| virtual All-in-one Duplex deployment configuration.

Physical host requirements and setup
------------------------------------

.. include:: /shared/_includes/physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers

@ -1,3 +1,5 @@

.. _aio_simplex_environ:

============================
Prepare Host and Environment
============================

@ -13,7 +15,7 @@ for a StarlingX |this-ver| virtual All-in-one Simplex deployment configuration.

Physical host requirements and setup
------------------------------------

.. include:: /shared/_includes/physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers

@ -19,7 +19,7 @@ This section describes how to prepare the physical host and virtual
environment for an automated StarlingX |this-ver| virtual deployment in
VirtualBox.

.. include:: /shared/_includes/automated_setup.txt

---------------------------
Installation Configurations

@ -14,7 +14,7 @@ configuration.

Physical host requirements and setup
------------------------------------

.. include:: /shared/_includes/physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers

@ -14,7 +14,7 @@ configuration.

Physical host requirements and setup
------------------------------------

.. include:: /shared/_includes/physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers

@ -1,3 +1,5 @@

.. _install_virtualbox:

===============================
Install StarlingX in VirtualBox
===============================

@ -278,15 +280,18 @@ Console usage:

For details on how to specify installation parameters such as rootfs device
and console port, see :ref:`config_install_parms_r7`.

.. Link to root of install guides here *must* be external

Follow the `StarlingX Installation and Deployment Guides
<https://docs.starlingx.io/deploy_install_guides/index-install-e083ca818006.html>`_
to continue.

* Ensure that boot priority on all |VMs| is changed using the commands in the
  "Set the boot priority" step above.
* In an AIO-DX and standard configuration, additional
  hosts must be booted using controller-0 (rather than the ``bootimage.iso`` file).
* On Virtual Box, press F12 immediately when the |VM| starts to select a
  different boot option. Select the ``lan`` option to force a network boot.

.. _config_install_parms_r7:


@ -1,4 +1,5 @@

.. Greg updates required for -High Security Vulnerability Document Updates

.. pja1558616715987

@ -295,9 +296,9 @@ subcloud, the subcloud installation process has two phases:

   ~(keystone_admin)$ system certificate-install -m docker_registry path_to_cert

.. pre-include:: /_includes/installing-a-subcloud-without-redfish-platform-management-service.rest
   :start-after: begin-prepare-files-to-copy-deployment-config
   :end-before: end-prepare-files-to-copy-deployment-config

#. At the Central Cloud / system controller, monitor the progress of the
   subcloud bootstrapping and deployment by using the deploy status field of
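
   For example, the current deploy status can be checked from the system
   controller. A hedged sketch, assuming the distributed-cloud ``dcmanager``
   CLI is available there and ``<subcloud-name>`` is the name given at
   bootstrap:

   .. code-block:: none

      ~(keystone_admin)$ dcmanager subcloud list
      ~(keystone_admin)$ dcmanager subcloud show <subcloud-name>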

@ -15,8 +15,43 @@ This section describes the steps to extend capacity with worker nodes on a

Install software on worker nodes
--------------------------------

#. Power on the worker node servers.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
               :start-after: begin-install-sw-on-workers-power-on-dx
               :end-before: end-install-sw-on-workers-power-on-dx

         .. group-tab:: Virtual

            #. On the host, power on the worker-0 and worker-1 virtual servers.

               They will automatically attempt to network boot over the
               management network:

               .. code-block:: none

                  $ virsh start duplex-worker-0
                  $ virsh start duplex-worker-1

            #. Attach to the consoles of worker-0 and worker-1.

               .. code-block:: none

                  $ virsh console duplex-worker-0
                  $ virsh console duplex-worker-1

   .. only:: partner

      .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
         :start-after: begin-install-sw-on-workers-power-on-dx
         :end-before: end-install-sw-on-workers-power-on-dx

#. As the worker nodes boot, a message appears on their console instructing
   you to configure the personality of the node.

@ -26,7 +61,7 @@ Install software on worker nodes

   ::

      ~(keystone_admin)$ system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+

@ -40,8 +75,8 @@ Install software on worker nodes

   .. code-block:: bash

      ~(keystone_admin)$ system host-update 3 personality=worker hostname=worker-0
      ~(keystone_admin)$ system host-update 4 personality=worker hostname=worker-1

   This initiates the install of software on worker nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

@ -59,7 +94,7 @@ Install software on worker nodes

   ::

      ~(keystone_admin)$ system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+

@ -97,79 +132,53 @@ Configure worker nodes

**These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes
   in support of installing the |prefix|-openstack manifest and helm-charts
   later.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
               :start-after: begin-os-specific-host-config-labels-dx
               :end-before: end-os-specific-host-config-labels-dx

         .. group-tab:: Virtual

            No additional steps are required.

   .. only:: partner

      .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
         :start-after: begin-os-specific-host-config-labels-dx
         :end-before: end-os-specific-host-config-labels-dx

#. **For OpenStack only:** Configure the host settings for the vSwitch.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
               :start-after: begin-os-specific-host-config-vswitch-dx
               :end-before: end-os-specific-host-config-vswitch-dx

         .. group-tab:: Virtual

            No additional configuration is required for the OVS vswitch in
            a virtual environment.

   .. only:: partner

      .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
         :start-after: begin-os-specific-host-config-vswitch-dx
         :end-before: end-os-specific-host-config-vswitch-dx

#. **For OpenStack only:** Setup disk partition for nova-local volume group,
   needed for |prefix|-openstack nova ephemeral disks.

@ -177,16 +186,16 @@ Configure worker nodes

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

         # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
         # CEPH OSD Disks can NOT be used
         # For best performance, do NOT use system/root disk, use a separate physical disk.

         # List host’s disks and take note of UUID of disk to be used
         ~(keystone_admin)$ system host-disk-list ${NODE}
         # ( if using ROOT DISK, select disk with device_path of
         # 'system host-show ${NODE} | fgrep rootfs' )

         # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
         # The size of the PARTITION needs to be large enough to hold the aggregate size of

@ -197,10 +206,10 @@ Configure worker nodes

         # Additional PARTITION(s) from additional disks can be added later if required.
         PARTITION_SIZE=30

         ~(keystone_admin)$ system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

         # Add new partition to ‘nova-local’ local volume group
         ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
         sleep 2
      done

@ -211,40 +220,64 @@ Configure worker nodes

   .. important::

      A compute-labeled worker host **MUST** have at least one Data class
      interface.

   * Configure the data interfaces for worker nodes.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
               :start-after: begin-os-specific-host-config-data-dx
               :end-before: end-os-specific-host-config-data-dx

         .. group-tab:: Virtual

            .. code-block:: none

               # Execute the following lines with
               ~(keystone_admin)$ export NODE=worker-0
               # and then repeat with
               ~(keystone_admin)$ export NODE=worker-1

               ~(keystone_admin)$ DATA0IF=eth1000
               ~(keystone_admin)$ DATA1IF=eth1001
               ~(keystone_admin)$ PHYSNET0='physnet0'
               ~(keystone_admin)$ PHYSNET1='physnet1'
               ~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
               ~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
               ~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
               ~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
               ~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
               ~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
               ~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
               ~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
               ~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
               ~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
               ~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
               ~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

               ~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
               ~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan

               ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
               ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
               ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
               ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

   .. only:: partner

      .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
         :start-after: begin-os-specific-host-config-data-dx
         :end-before: end-os-specific-host-config-data-dx

*****************************************
Optionally Configure PCI-SRIOV Interfaces

@ -267,40 +300,52 @@ Optionally Configure PCI-SRIOV Interfaces

   .. code-block:: bash

      # Execute the following lines with
      ~(keystone_admin)$ export NODE=worker-0
      # and then repeat with
      ~(keystone_admin)$ export NODE=worker-1

      # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
      # based on displayed linux port name, pci address and device type.
      ~(keystone_admin)$ system host-port-list ${NODE}

      # List host’s auto-configured ‘ethernet’ interfaces,
      # find the interfaces corresponding to the ports identified in previous step, and
      # take note of their UUID
      ~(keystone_admin)$ system host-if-list -a ${NODE}

      # Modify configuration for these interfaces
      # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
      ~(keystone_admin)$ system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
      ~(keystone_admin)$ system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>

      # If not already created, create Data Networks that the 'pci-sriov'
      # interfaces will be connected to
      ~(keystone_admin)$ DATANET0='datanet0'
      ~(keystone_admin)$ DATANET1='datanet1'
      ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
      ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

      # Assign Data Networks to PCI-SRIOV Interfaces
      ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
      ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes only:** To enable using |SRIOV| network attachments for
  the above interfaces in Kubernetes hosted application containers:

  .. only:: starlingx

     .. tabs::

        .. group-tab:: Bare Metal

           .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
              :start-after: begin-os-specific-host-config-sriov-dx
              :end-before: end-os-specific-host-config-sriov-dx

        .. group-tab:: Virtual

           Configure the Kubernetes |SRIOV| device plugin.

           .. code-block:: bash

@ -308,21 +353,12 @@ Optionally Configure PCI-SRIOV Interfaces

              system host-label-assign $NODE sriovdp=enabled
              done

  .. only:: partner

     .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
        :start-after: begin-os-specific-host-config-sriov-dx
        :end-before: end-os-specific-host-config-sriov-dx

-------------------

@ -342,3 +378,4 @@ service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. end-aio-duplex-extend


555  doc/source/shared/_includes/aio_duplex_install_kubernetes.rest  Normal file
@ -0,0 +1,555 @@

.. begin-aio-dx-install-verify-ip-connectivity

External connectivity is required to run the Ansible bootstrap
playbook. The StarlingX boot image will |DHCP| out all interfaces
so the server may have obtained an IP address and have external IP
connectivity if a |DHCP| server is present in your environment.
Verify this using the :command:`ip addr` and :command:`ping
8.8.8.8` commands.

Otherwise, manually configure an IP address and default IP route.
Use the ``PORT``, ``IP-ADDRESS``/``SUBNET-LENGTH`` and
``GATEWAY-IP-ADDRESS`` applicable to your deployment environment.

.. code-block:: bash

   sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
   sudo ip link set up dev <PORT>
   sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
   ping 8.8.8.8

.. end-aio-dx-install-verify-ip-connectivity


.. begin-config-controller-0-oam-interface-dx

The following example configures the |OAM| interface on a physical
untagged ethernet port; use the |OAM| port name that is applicable to
your deployment environment, for example eth0:

.. code-block:: none

   ~(keystone_admin)$ OAM_IF=<OAM-PORT>
   ~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
   ~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam

.. end-config-controller-0-oam-interface-dx


.. begin-config-controller-0-ntp-interface-dx

.. code-block:: none

   ~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

To configure |PTP| instead of |NTP|, see :ref:`PTP Server
Configuration <ptp-server-config-index>`.

.. end-config-controller-0-ntp-interface-dx


.. begin-config-controller-0-OS-k8s-sriov-dx

* Configure the Kubernetes |SRIOV| device plugin.

  .. code-block:: none

     ~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled

* If you are planning on running |DPDK| in Kubernetes hosted application
  containers on this host, configure the number of 1G Huge pages required on
  both |NUMA| nodes.

  .. code-block:: bash

     # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10

     # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10

.. end-config-controller-0-OS-k8s-sriov-dx


.. begin-power-on-controller-1-server-dx

Power on the controller-1 server and force it to network boot with
the appropriate BIOS boot options for your particular server.

.. end-power-on-controller-1-server-dx


.. begin-config-controller-1-server-oam-dx

The following example configures the |OAM| interface on a physical untagged
ethernet port; use the |OAM| port name that is applicable to your
deployment environment, for example eth0:

.. code-block:: none

   ~(keystone_admin)$ OAM_IF=<OAM-PORT>
   ~(keystone_admin)$ system host-if-modify controller-1 $OAM_IF -c platform
   ~(keystone_admin)$ system interface-network-assign controller-1 $OAM_IF oam

.. end-config-controller-1-server-oam-dx


.. begin-config-k8s-sriov-controller-1-dx

* Configure the Kubernetes |SRIOV| device plugin.

  .. code-block:: bash

     ~(keystone_admin)$ system host-label-assign controller-1 sriovdp=enabled

* If planning on running |DPDK| in Kubernetes hosted application
  containers on this host, configure the number of 1G Huge pages required
  on both |NUMA| nodes.

  .. code-block:: bash

     # assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-1 0 -1G 10

     # assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
     ~(keystone_admin)$ system host-memory-modify -f application controller-1 1 -1G 10

.. end-config-k8s-sriov-controller-1-dx


.. begin-install-sw-on-workers-power-on-dx

Power on the worker node servers and force them to network boot with the
appropriate BIOS boot options for your particular server.

.. end-install-sw-on-workers-power-on-dx


.. begin-os-specific-host-config-sriov-dx

* Configure the Kubernetes |SRIOV| device plugin.

  .. code-block:: bash

     for NODE in worker-0 worker-1; do
        system host-label-assign $NODE sriovdp=enabled
     done

* If planning on running |DPDK| in Kubernetes hosted application containers on
  this host, configure the number of 1G Huge pages required on both |NUMA|
  nodes.

  .. code-block:: bash

     for NODE in worker-0 worker-1; do
        # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
        ~(keystone_admin)$ system host-memory-modify -f application $NODE 0 -1G 10

        # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
        ~(keystone_admin)$ system host-memory-modify -f application $NODE 1 -1G 10
     done

.. end-os-specific-host-config-sriov-dx


.. begin-config-controller-0-OS-add-cores-dx

A minimum of 4 platform cores are required; 6 platform cores are
recommended.

Increase the number of platform cores with the following
commands. This example assigns 6 cores on processor/numa-node 0
on controller-0 to platform.

.. code-block:: bash

   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0

.. end-config-controller-0-OS-add-cores-dx


.. begin-config-controller-0-OS-vswitch-dx

To deploy |OVS-DPDK|, run the following command:

.. parsed-literal::

   ~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|

Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch.

.. code-block:: bash

   # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0

Once vswitch_type is set to |OVS-DPDK|, any subsequent nodes
created will default to automatically assigning 1 vSwitch core
for AIO controllers and 2 vSwitch cores (both on numa-node 0;
physical NICs are typically on first numa-node) for
compute-labeled worker nodes.
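
The per-host assignment can be verified after configuration. A quick,
hedged check (assumes an authenticated keystone_admin shell; the
assigned-function column of the output shows which cores are reserved
for vSwitch):

.. code-block:: none

   ~(keystone_admin)$ system host-cpu-list controller-0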

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block:: bash

   # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0

   # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured to use
   huge pages to enable networking and must use a flavor with property:
   ``hw:mem_page_size=large``

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host; the following commands are an example that assumes that 1G
   huge page size is being used on this host:

   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0

      # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1

.. note::

   After controller-0 is unlocked, changing vswitch_type requires
   locking and unlocking controller-0 to apply the change.
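
A hedged sketch of that lock/modify/unlock sequence (``<new-vswitch-type>``
is a placeholder for the desired vSwitch type):

.. code-block:: none

   ~(keystone_admin)$ system host-lock controller-0
   ~(keystone_admin)$ system modify --vswitch_type <new-vswitch-type>
   ~(keystone_admin)$ system host-unlock controller-0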
.. end-config-controller-0-OS-vswitch-dx


.. begin-config-controller-0-OS-add-fs-dx

.. note::

   Both cannot exist at the same time.

Add an 'instances' filesystem:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create ‘instances’ filesystem
   ~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>

Or add a 'nova-local' volume group:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create ‘nova-local’ local volume group
   ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

   # Get UUID of an unused DISK to be added to the ‘nova-local’ volume
   # group. CEPH OSD Disks can NOT be used
   # List host’s disks and take note of UUID of disk to be used
   ~(keystone_admin)$ system host-disk-list ${NODE}

   # Add the unused disk to the ‘nova-local’ volume group
   ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>

.. end-config-controller-0-OS-add-fs-dx


.. begin-config-controller-0-OS-data-interface-dx

.. code-block:: bash

   ~(keystone_admin)$ NODE=controller-0

   # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
   # based on displayed linux port name, pci address and device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create Data Networks that vswitch 'data' interfaces will be connected to
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-config-controller-0-OS-data-interface-dx


.. begin-increase-cores-controller-1-dx

Increase the number of platform cores with the following commands:

.. code-block::

   # assign 6 cores on processor/numa-node 0 on controller-1 to platform
   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-1

.. end-increase-cores-controller-1-dx


.. begin-config-vswitch-controller-1-dx

If using |OVS-DPDK| vswitch, run the following commands:

Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch. This should have been automatically configured;
if not, run the following command.

.. code-block:: bash

   # assign 1 core on processor/numa-node 0 on controller-1 to vswitch
   ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-1

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block:: bash

   # assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-1 0

   # assign 1x 1G huge page on processor/numa-node 1 on controller-1 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-1 1

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured to use
   huge pages to enable networking and must use a flavor with property:
   ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, assuming 1G huge page size is being used on this host, with
   the following commands:

   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-1 0

      # assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-1 1

.. end-config-vswitch-controller-1-dx


.. begin-config-fs-controller-1-dx

.. note::

   Both cannot exist at the same time.

* Add an 'instances' filesystem:

  .. code-block:: bash

     ~(keystone_admin)$ export NODE=controller-1

     # Create ‘instances’ filesystem
     ~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>

**Or**

* Add a 'nova-local' volume group:

  .. code-block:: bash

     ~(keystone_admin)$ export NODE=controller-1

     # Create ‘nova-local’ local volume group
     ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

     # Get UUID of an unused DISK to be added to the ‘nova-local’ volume
     # group. CEPH OSD Disks can NOT be used
     # List host’s disks and take note of UUID of disk to be used
     ~(keystone_admin)$ system host-disk-list ${NODE}

     # Add the unused disk to the ‘nova-local’ volume group
     ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>

.. end-config-fs-controller-1-dx


.. begin-config-data-interfaces-controller-1-dx

.. code-block:: bash

   export NODE=controller-1

   # List inventoried host's ports and identify ports to be used as 'data' interfaces,
   # based on displayed linux port name, pci address and device type.
   system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as 'data' class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create Data Networks that vswitch 'data' interfaces will be connected to
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-config-data-interfaces-controller-1-dx


.. begin-os-specific-host-config-data-dx

.. code-block:: bash

   # Execute the following lines with
   ~(keystone_admin)$ export NODE=worker-0

   # and then repeat with
   ~(keystone_admin)$ export NODE=worker-1

   # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
   # based on displayed linux port name, pci address and device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create Data Networks that vswitch 'data' interfaces will be connected to
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'
   ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
   ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-os-specific-host-config-data-dx


.. begin-os-specific-host-config-labels-dx

.. parsed-literal::

   for NODE in worker-0 worker-1; do
      system host-label-assign $NODE openstack-compute-node=enabled
      kubectl taint nodes $NODE openstack-compute-node:NoSchedule
      system host-label-assign $NODE |vswitch-label|
      system host-label-assign $NODE sriov=enabled
   done

.. end-os-specific-host-config-labels-dx


.. begin-os-specific-host-config-vswitch-dx

If using |OVS-DPDK| vswitch, run the following commands:

Default recommendation for worker node is to use two cores on
numa-node 0 for |OVS-DPDK| vSwitch; physical |NICs| are
typically on first numa-node. This should have been
automatically configured; if not, run the following command.

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      # assign 2 cores on processor/numa-node 0 on worker-node to vswitch
      ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 2 $NODE
   done

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
memory on each |NUMA| node on the host. It is recommended to
configure 1x 1G huge page (-1G 1) for vSwitch memory on each
|NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge
page size is supported on any one host. If your application |VMs|
require 2M huge pages, then configure 500x 2M huge pages (-2M
500) for vSwitch memory on each |NUMA| node on the host.

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
      ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 $NODE 0
      # assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
      ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 $NODE 1
   done

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured
   to use huge pages to enable networking and must use a flavor
   with property: ``hw:mem_page_size=large``.

   Configure the huge pages for |VMs| in an |OVS-DPDK|
   environment on this host, assuming 1G huge page size is being
   used on this host, with the following commands:

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
         ~(keystone_admin)$ system host-memory-modify -f application -1G 10 $NODE 0
         # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
         ~(keystone_admin)$ system host-memory-modify -f application -1G 10 $NODE 1
      done

.. end-os-specific-host-config-vswitch-dx

225  doc/source/shared/_includes/aio_simplex_install_kubernetes.rest  Normal file
@ -0,0 +1,225 @@

.. begin-aio-sx-install-verify-ip-connectivity

External connectivity is required to run the Ansible bootstrap
playbook. The StarlingX boot image will |DHCP| out all interfaces
so the server may have obtained an IP address and have external IP
connectivity if a |DHCP| server is present in your environment.
Verify this using the :command:`ip addr` and :command:`ping
8.8.8.8` commands.

Otherwise, manually configure an IP address and default IP route.
Use the PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS
applicable to your deployment environment.

::

   sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
   sudo ip link set up dev <PORT>
   sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
   ping 8.8.8.8

.. end-aio-sx-install-verify-ip-connectivity


.. begin-config-controller-0-oam-interface-sx

The following example configures the |OAM| interface on a physical
untagged ethernet port; use the |OAM| port name that is applicable to
your deployment environment, for example eth0:

::

   ~(keystone_admin)$ OAM_IF=<OAM-PORT>
   ~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
   ~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam

To configure a vlan or aggregated ethernet interface, see
:ref:`Node Interfaces <node-interfaces-index>`.

.. end-config-controller-0-oam-interface-sx


.. begin-config-controller-0-ntp-interface-sx

.. code-block:: none

   ~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

To configure |PTP| instead of |NTP|, see :ref:`PTP Server
Configuration <ptp-server-config-index>`.

.. end-config-controller-0-ntp-interface-sx


.. begin-config-controller-0-OS-k8s-sriov-sx

#. Configure the Kubernetes |SRIOV| device plugin.

   ::

      ~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled

#. |optional| If you are planning on running |DPDK| in
   Kubernetes hosted application containers on this host,
   configure the number of 1G Huge pages required on both |NUMA|
   nodes.

   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10

      # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10

.. end-config-controller-0-OS-k8s-sriov-sx


.. begin-config-controller-0-OS-add-cores-sx

A minimum of 4 platform cores are required; 6 platform cores are
recommended.

Increase the number of platform cores with the following
commands. This example assigns 6 cores on processor/numa-node 0
on controller-0 to platform.

.. code-block:: bash

   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0

.. end-config-controller-0-OS-add-cores-sx
|
||||
|
||||
|
||||
|
||||
.. begin-config-controller-0-OS-vswitch-sx

To deploy |OVS-DPDK|, run the following command:

.. parsed-literal::

   ~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|

The default recommendation for an |AIO|-controller is to use a single
core for the |OVS-DPDK| vSwitch.

.. code-block:: bash

   # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block:: bash

   # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0

   # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured to use
   huge pages to enable networking, and must use a flavor with the
   property ``hw:mem_page_size=large``.

   To configure huge pages for |VMs| in an |OVS-DPDK| environment on
   this host, use commands such as the following example, which assumes
   that a 1G huge page size is being used on this host:

   .. code-block:: bash

      # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0

      # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1

.. note::

   After controller-0 is unlocked, changing vswitch_type requires
   locking and unlocking controller-0 to apply the change.
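
   A minimal sketch of applying such a change (this example assumes you
   are reverting to the default containerized |OVS|, i.e.
   ``--vswitch_type none``):

   .. code-block:: bash

      # Lock the host, change the vSwitch type, then unlock to apply it
      ~(keystone_admin)$ system host-lock controller-0
      ~(keystone_admin)$ system modify --vswitch_type none
      ~(keystone_admin)$ system host-unlock controller-0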

.. end-config-controller-0-OS-vswitch-sx

.. begin-config-controller-0-OS-add-fs-sx

.. note::

   An 'instances' filesystem and a 'nova-local' volume group cannot
   exist at the same time.

Add an 'instances' filesystem:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create ‘instances’ filesystem
   ~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>

Or add a 'nova-local' volume group:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create ‘nova-local’ local volume group
   ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

   # Get the UUID of an unused disk to be added to the ‘nova-local’
   # volume group. CEPH OSD Disks can NOT be used.
   # List host’s disks and take note of UUID of disk to be used
   ~(keystone_admin)$ system host-disk-list ${NODE}

   # Add the unused disk to the ‘nova-local’ volume group
   ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>

.. end-config-controller-0-OS-add-fs-sx

.. begin-config-controller-0-OS-data-interface-sx

.. code-block:: bash

   ~(keystone_admin)$ NODE=controller-0

   # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
   # based on displayed linux port name, pci address and device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List host’s auto-configured ‘ethernet’ interfaces,
   # find the interfaces corresponding to the ports identified in previous step, and
   # take note of their UUID
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify configuration for these interfaces
   # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create Data Networks that vswitch 'data' interfaces will be connected to
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'
   ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
   ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

   # Assign Data Networks to Data Interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-config-controller-0-OS-data-interface-sx

@ -0,0 +1,477 @@

.. start-install-sw-on-controller-0-and-workers-standard-with-storage

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list

      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1. This can take
   5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the worker
   nodes. Set the personality to 'worker' and assign a unique hostname
   for each.

   For example, power on worker-0 and wait for the new host
   (hostname=None) to be discovered by checking ``system host-list``:

   ::

      system host-update 3 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking ``system host-list``:

   ::

      system host-update 4 personality=worker hostname=worker-1

   .. only:: starlingx

      .. Note::

         A node with Edgeworker personality is also available. See
         :ref:`deploy-edgeworker-nodes` for details.

#. Wait for the software installation on controller-1, worker-0, and
   worker-1 to complete, for all servers to reboot, and for all to show
   as locked/disabled/online in ``system host-list``.

   ::

      system host-list

      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | worker-0     | worker      | locked         | disabled    | online       |
      | 4  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

.. end-install-sw-on-controller-0-and-workers-standard-with-storage

.. start-config-worker-nodes-std-with-storage-bare-metal

.. start-config-worker-nodes-std-with-storage-bm-and-virt

#. Add the third Ceph monitor to a worker node:

   (The first two Ceph monitors are automatically assigned to
   controller-0 and controller-1.)

   ::

      system ceph-mon-add worker-0

#. Wait for the worker node monitor to complete configuration:

   ::

      system ceph-mon-list

      +--------------------------------------+-------+--------------+------------+------+
      | uuid                                 | ceph_ | hostname     | state      | task |
      |                                      | mon_g |              |            |      |
      |                                      | ib    |              |            |      |
      +--------------------------------------+-------+--------------+------------+------+
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
      +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the worker
   nodes:

   (Note that the MGMT interfaces are partially set up automatically by
   the network install procedure.)

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done

.. end-config-worker-nodes-std-with-storage-bm-and-virt

.. only:: openstack

   *************************************
   OpenStack-specific host configuration
   *************************************

   .. important::

      These steps are required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

   #. **For OpenStack only:** Assign OpenStack host labels to the worker
      nodes in support of installing the |prefix|-openstack manifest and
      helm-charts later.

      .. parsed-literal::

         for NODE in worker-0 worker-1; do
           system host-label-assign $NODE openstack-compute-node=enabled
           kubectl taint nodes $NODE openstack-compute-node:NoSchedule
           system host-label-assign $NODE |vswitch-label|
         done

      .. note::

         If you have a |NIC| that supports |SRIOV|, then you can enable
         it by using the following:

         .. code-block:: none

            system host-label-assign controller-0 sriov=enabled

   #. **For OpenStack only:** Configure the host settings for the
      vSwitch.

      If using the |OVS-DPDK| vSwitch, run the following commands:

      The default recommendation for a worker node is to use two cores on
      numa-node 0 for the |OVS-DPDK| vSwitch; physical |NICs| are
      typically on the first numa-node. This should have been
      automatically configured; if not, run the following commands.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do

           # assign 2 cores on processor/numa-node 0 on worker-node to vswitch
           system host-cpu-modify -f vswitch -p0 2 $NODE

         done

      When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
      memory on each |NUMA| node on the host. It is recommended to
      configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
      node on the host.

      However, due to a limitation with Kubernetes, only a single huge
      page size is supported on any one host. If your application |VMs|
      require 2M huge pages, then configure 500x 2M huge pages (-2M 500)
      for vSwitch memory on each |NUMA| node on the host.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do

           # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
           system host-memory-modify -f vswitch -1G 1 $NODE 0

           # assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
           system host-memory-modify -f vswitch -1G 1 $NODE 1

         done

      .. important::

         |VMs| created in an |OVS-DPDK| environment must be configured to
         use huge pages to enable networking and must use a flavor with
         the property ``hw:mem_page_size=large``.

         To configure huge pages for |VMs| in an |OVS-DPDK| environment
         on this host, use commands such as the following example, which
         assumes that a 1G huge page size is being used on this host:

         .. code-block:: bash

            for NODE in worker-0 worker-1; do

              # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
              system host-memory-modify -f application -1G 10 $NODE 0

              # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
              system host-memory-modify -f application -1G 10 $NODE 1

            done

   #. **For OpenStack only:** Add an 'instances' filesystem OR set up a
      disk-based 'nova-local' volume group, which is needed for
      |prefix|-openstack nova ephemeral disks.

      .. note::

         Both cannot exist at the same time.

      Add an 'instances' filesystem:

      .. code-block:: bash

         # Create ‘instances’ filesystem
         for NODE in worker-0 worker-1; do
           system host-fs-add ${NODE} instances=<size>
         done

      OR add a 'nova-local' volume group:

      .. code-block:: bash

         for NODE in worker-0 worker-1; do
           # Create ‘nova-local’ local volume group
           system host-lvg-add ${NODE} nova-local

           # Get the UUID of an unused disk to be added to the ‘nova-local’
           # volume group. CEPH OSD Disks can NOT be used. Assume /dev/sdb is
           # unused on all workers
           DISK_UUID=$(system host-disk-list ${NODE} | awk '/sdb/{print $2}')

           # Add the unused disk to the ‘nova-local’ volume group
           system host-pv-add ${NODE} nova-local ${DISK_UUID}
         done

   #. **For OpenStack only:** Configure data interfaces for worker nodes.
      Data class interfaces are vswitch interfaces used by vswitch to
      provide |VM| virtio vNIC connectivity to OpenStack Neutron Tenant
      Networks on the underlying assigned Data Network.

      .. important::

         A compute-labeled worker host **MUST** have at least one Data
         class interface.

      * Configure the data interfaces for worker nodes.

        .. code-block:: bash

           # Execute the following lines with
           export NODE=worker-0
           # and then repeat with
           export NODE=worker-1

           # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
           # based on displayed linux port name, pci address and device type.
           system host-port-list ${NODE}

           # List host’s auto-configured ‘ethernet’ interfaces,
           # find the interfaces corresponding to the ports identified in previous step, and
           # take note of their UUID
           system host-if-list -a ${NODE}

           # Modify configuration for these interfaces
           # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
           system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
           system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

           # Create Data Networks that vswitch 'data' interfaces will be connected to
           DATANET0='datanet0'
           DATANET1='datanet1'
           system datanetwork-add ${DATANET0} vlan
           system datanetwork-add ${DATANET1} vlan

           # Assign Data Networks to Data Interfaces
           system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
           system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-config-worker-nodes-std-with-storage-bare-metal

.. start-config-pci-sriov-interfaces-standard-storage

#. **Optionally**, configure pci-sriov interfaces for worker nodes.

   This step is **optional** for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: openstack

      This step is **optional** for OpenStack. Do this step if using
      |SRIOV| vNICs in hosted application |VMs|. Note that pci-sriov
      interfaces can have the same Data Networks assigned to them as
      vswitch data interfaces.

   * Configure the pci-sriov interfaces for worker nodes.

     .. code-block:: bash

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host’s auto-configured ‘ethernet’ interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
        system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
        system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>

        # If not already created, create Data Networks that the 'pci-sriov'
        # interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to PCI-SRIOV Interfaces
        system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments
     for the above interfaces in Kubernetes hosted application
     containers:

     * Configure the Kubernetes |SRIOV| device plugin.

       .. code-block:: bash

          for NODE in worker-0 worker-1; do
            system host-label-assign $NODE sriovdp=enabled
          done

     * If planning on running |DPDK| in Kubernetes hosted application
       containers on this host, configure the number of 1G Huge pages
       required on both |NUMA| nodes.

       .. code-block:: bash

          for NODE in worker-0 worker-1; do

            # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
            system host-memory-modify -f application $NODE 0 -1G 10

            # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
            system host-memory-modify -f application $NODE 1 -1G 10

          done

.. end-config-pci-sriov-interfaces-standard-storage

.. start-add-ceph-osds-to-controllers-std-storage

#. Add |OSDs| to controller-0. The following example adds |OSDs| to the
   `sdb` disk:

   .. code-block:: bash

      HOST=controller-0

      # List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and can not be used for OSD.
      system host-disk-list ${HOST}

      # Add disk as an OSD storage
      system host-stor-add ${HOST} osd <disk-uuid>

      # List OSD storage devices and wait for configuration of newly added OSD to complete.
      system host-stor-list ${HOST}

#. Add |OSDs| to controller-1. The following example adds |OSDs| to the
   `sdb` disk:

   .. code-block:: bash

      HOST=controller-1

      # List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and can not be used for OSD.
      system host-disk-list ${HOST}

      # Add disk as an OSD storage
      system host-stor-add ${HOST} osd <disk-uuid>

      # List OSD storage devices and wait for configuration of newly added OSD to complete.
      system host-stor-list ${HOST}

.. end-add-ceph-osds-to-controllers-std-storage

.. begin-openstack-specific-host-configs-bare-metal

#. **For OpenStack only:** Assign OpenStack host labels to
   controller-0 in support of installing the |prefix|-openstack
   manifest and helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** Configure the system setting for the
   vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as
      default:

      * Runs in a container; defined within the helm charts of
        the |prefix|-openstack manifest.

      * Shares the core(s) assigned to the platform.

      If you require better performance, |OVS-DPDK| (|OVS| with the
      Data Plane Development Kit, which is supported only on bare
      metal hardware) should be used:

      * Runs directly on the host (it is not containerized).

      * Requires that at least 1 core be assigned/dedicated to the
        vSwitch function.

      To deploy the default containerized |OVS|:

      ::

         system modify --vswitch_type none

      This does not run any vSwitch directly on the host; instead,
      it uses the containerized |OVS| defined in the helm charts of
      the |prefix|-openstack manifest.

   To deploy |OVS-DPDK|, run the following command:

   .. parsed-literal::

      system modify --vswitch_type |ovs-dpdk|

   Once vswitch_type is set to |OVS-DPDK|, any subsequent
   |AIO|-controller or worker nodes created will default to
   automatically assigning 1 vSwitch core for |AIO| controllers and
   2 vSwitch cores (both on numa-node 0; physical |NICs| are
   typically on the first numa-node) for compute-labeled worker nodes.
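
   To verify the resulting core assignments on a host, one option (a
   sketch using the standard CLI; vSwitch cores appear with a vSwitch
   assigned function in the output) is:

   .. code-block:: bash

      # List logical cores and their assigned functions on controller-0
      system host-cpu-list controller-0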

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking controller-0 to apply the change.

.. end-openstack-specific-host-configs-bare-metal

@ -0,0 +1,420 @@

.. begin-install-sw-cont-1-stor-and-wkr-nodes

#. Power on the controller-1 server and force it to network boot with
   the appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list

      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1. This can
   take 5-10 minutes, depending on the performance of the host
   machine.

#. While waiting for the previous step to complete, power on the
   storage-0 and storage-1 servers. Set the personality to 'storage'
   and assign a unique hostname for each.

   For example, power on storage-0 and wait for the new host
   (hostname=None) to be discovered by checking ``system host-list``:

   ::

      system host-update 3 personality=storage

   Repeat for storage-1. Power on storage-1 and wait for the new host
   (hostname=None) to be discovered by checking ``system host-list``:

   ::

      system host-update 4 personality=storage

   This initiates the software installation on storage-0 and
   storage-1. This can take 5-10 minutes, depending on the performance
   of the host machine.

#. While waiting for the previous step to complete, power on the
   worker nodes. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on worker-0 and wait for the new host
   (hostname=None) to be discovered by checking ``system host-list``:

   ::

      system host-update 5 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking ``system host-list``:

   ::

      system host-update 6 personality=worker hostname=worker-1

   This initiates the install of software on worker-0 and worker-1.

   .. only:: starlingx

      .. Note::

         A node with Edgeworker personality is also available. See
         :ref:`deploy-edgeworker-nodes` for details.

#. Wait for the software installation on controller-1, storage-0,
   storage-1, worker-0, and worker-1 to complete, for all servers to
   reboot, and for all to show as locked/disabled/online in
   ``system host-list``.

   ::

      system host-list

      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | storage-0    | storage     | locked         | disabled    | online       |
      | 4  | storage-1    | storage     | locked         | disabled    | online       |
      | 5  | worker-0     | worker      | locked         | disabled    | online       |
      | 6  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

.. end-install-sw-cont-1-stor-and-wkr-nodes

.. begin-dedicated-config-storage-nodes

#. Assign the cluster-host network to the MGMT interface for the storage nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   .. code-block:: bash

      for NODE in storage-0 storage-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done

#. Add |OSDs| to storage-0.

   .. code-block:: bash

      HOST=storage-0

      # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and can not be used for OSD.
      system host-disk-list ${HOST}

      # Add disk as an OSD storage
      system host-stor-add ${HOST} osd <disk-uuid>

      # List OSD storage devices and wait for configuration of newly added OSD to complete.
      system host-stor-list ${HOST}

#. Add |OSDs| to storage-1.

   .. code-block:: bash

      HOST=storage-1

      # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and can not be used for OSD.
      system host-disk-list ${HOST}

      # Add disk as an OSD storage
      system host-stor-add ${HOST} osd <disk-uuid>

      # List OSD storage devices and wait for configuration of newly added OSD to complete.
      system host-stor-list ${HOST}

.. end-dedicated-config-storage-nodes

.. begin-dedicated-stor-config-workers

#. The MGMT interfaces are partially set up by the network install
   procedure, which configures the port used for network install as the
   MGMT port and specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of the worker nodes by
   specifying the attached network of "cluster-host".

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done

.. only:: openstack

   *************************************
   OpenStack-specific host configuration
   *************************************

   .. important::

      These steps are required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

   #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
      support of installing the |prefix|-openstack manifest and helm-charts later.

      .. parsed-literal::

         for NODE in worker-0 worker-1; do
           system host-label-assign $NODE openstack-compute-node=enabled
           kubectl taint nodes $NODE openstack-compute-node:NoSchedule
           system host-label-assign $NODE |vswitch-label|
           system host-label-assign $NODE sriov=enabled
         done

   #. **For OpenStack only:** Configure the host settings for the vSwitch.

      If using the |OVS-DPDK| vSwitch, run the following commands:

      The default recommendation for a worker node is to use two cores on
      numa-node 0 for the |OVS-DPDK| vSwitch; physical |NICs| are typically
      on the first numa-node. This should have been automatically
      configured; if not, run the following commands.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do

           # assign 2 cores on processor/numa-node 0 on worker-node to vswitch
           system host-cpu-modify -f vswitch -p0 2 $NODE

         done

      When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
      each |NUMA| node on the host. It is recommended to configure 1x 1G huge
      page (-1G 1) for vSwitch memory on each |NUMA| node on the host.

      However, due to a limitation with Kubernetes, only a single huge page
      size is supported on any one host. If your application |VMs| require 2M
      huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
      memory on each |NUMA| node on the host.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do

           # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
           system host-memory-modify -f vswitch -1G 1 $NODE 0

           # assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
           system host-memory-modify -f vswitch -1G 1 $NODE 1

         done

      .. important::

         |VMs| created in an |OVS-DPDK| environment must be configured to use
         huge pages to enable networking and must use a flavor with the
         property ``hw:mem_page_size=large``.

         To configure huge pages for |VMs| in an |OVS-DPDK| environment on
         this host, use commands such as the following example, which assumes
         that a 1G huge page size is being used on this host:

         .. code-block:: bash

            for NODE in worker-0 worker-1; do

              # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
              system host-memory-modify -f application -1G 10 $NODE 0

              # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
              system host-memory-modify -f application -1G 10 $NODE 1

            done

   #. **For OpenStack only:** Add an 'instances' filesystem OR set up a
      disk-based 'nova-local' volume group, which is needed for
      |prefix|-openstack nova ephemeral disks.

      .. note::

         Both cannot exist at the same time.

      Add an 'instances' filesystem:

      .. code-block:: bash

         # Create ‘instances’ filesystem
         for NODE in worker-0 worker-1; do
           system host-fs-add ${NODE} instances=<size>
         done

      OR add a 'nova-local' volume group:

      .. code-block:: bash

         for NODE in worker-0 worker-1; do
           # Create ‘nova-local’ local volume group
           system host-lvg-add ${NODE} nova-local

           # Get the UUID of an unused disk to be added to the ‘nova-local’
           # volume group. CEPH OSD Disks can NOT be used. Assume /dev/sdb is
           # unused on all workers
           DISK_UUID=$(system host-disk-list ${NODE} | awk '/sdb/{print $2}')

           # Add the unused disk to the ‘nova-local’ volume group
           system host-pv-add ${NODE} nova-local ${DISK_UUID}
         done

   #. **For OpenStack only:** Configure data interfaces for worker nodes.
      Data class interfaces are vswitch interfaces used by vswitch to provide
      |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
      underlying assigned Data Network.

      .. important::

         A compute-labeled worker host **MUST** have at least one Data class
         interface.

      * Configure the data interfaces for worker nodes.

        .. code-block:: bash

           # Execute the following lines with
           export NODE=worker-0
           # and then repeat with
           export NODE=worker-1

           # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
           # based on displayed linux port name, pci address and device type.
           system host-port-list ${NODE}

           # List host’s auto-configured ‘ethernet’ interfaces,
           # find the interfaces corresponding to the ports identified in previous step, and
           # take note of their UUID
           system host-if-list -a ${NODE}

           # Modify configuration for these interfaces
           # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
           system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
           system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

           # Create Data Networks that vswitch 'data' interfaces will be connected to
           DATANET0='datanet0'
           DATANET1='datanet1'
           system datanetwork-add ${DATANET0} vlan
           system datanetwork-add ${DATANET1} vlan

           # Assign Data Networks to Data Interfaces
           system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
           system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-dedicated-stor-config-workers

.. begin-dedicated-conf-pci-sriov-interfaces

**Optionally**, configure pci-sriov interfaces for worker nodes.
This step is **optional** for Kubernetes. Do this step if using
|SRIOV| network attachments in hosted application containers.

.. only:: openstack

   This step is **optional** for OpenStack. Do this step if using
   |SRIOV| vNICs in hosted application |VMs|. Note that pci-sriov
   interfaces can have the same Data Networks assigned to them as
   vswitch data interfaces.

* Configure the pci-sriov interfaces for worker nodes.

  .. code-block:: bash

     # Execute the following lines with
     export NODE=worker-0
     # and then repeat with
     export NODE=worker-1

     # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
     # based on displayed linux port name, pci address and device type.
     system host-port-list ${NODE}

     # List host’s auto-configured ‘ethernet’ interfaces,
     # find the interfaces corresponding to the ports identified in previous step, and
     # take note of their UUID
     system host-if-list -a ${NODE}

     # Modify configuration for these interfaces
     # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
     system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
     system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>

     # If not already created, create Data Networks that the 'pci-sriov'
     # interfaces will be connected to
     DATANET0='datanet0'
     DATANET1='datanet1'
     system datanetwork-add ${DATANET0} vlan
     system datanetwork-add ${DATANET1} vlan

     # Assign Data Networks to PCI-SRIOV Interfaces
     system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
     system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes only:** To enable using |SRIOV| network attachments
  for the above interfaces in Kubernetes hosted application
  containers:

  * Configure the Kubernetes |SRIOV| device plugin.

    .. code-block:: bash

       for NODE in worker-0 worker-1; do
         system host-label-assign $NODE sriovdp=enabled
       done

  * If planning on running |DPDK| in Kubernetes hosted application
    containers on this host, configure the number of 1G Huge pages
    required on both |NUMA| nodes.

    .. code-block:: bash

       for NODE in worker-0 worker-1; do
         # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
         system host-memory-modify -f application $NODE 0 -1G 10
         # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
         system host-memory-modify -f application $NODE 1 -1G 10
       done

.. end-dedicated-conf-pci-sriov-interfaces


.. begin-dedicated-unlock-workers

Unlock worker nodes in order to bring them into service:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      system host-unlock $NODE
   done

The worker nodes will reboot in order to apply configuration changes
and come into service. This can take 5-10 minutes, depending on the
performance of the host machine.

.. end-dedicated-unlock-workers

@ -1,8 +1,5 @@

.. incl-install-software-controller-0-aio-start

Installing software on controller-0 is the second step in the |prod|
installation procedure.

.. note::

   The disks and disk partitions need to be wiped before the install. Installing

@ -33,6 +30,8 @@ installation procedure.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   |prod| Installer Menus.

.. begin-install-software-controller-0-aio-virtual

#. Wait for the Install menus, and when prompted, make the following menu
   selections in the installer:

@ -73,6 +72,8 @@ installation procedure.

   When using the low latency kernel, you must use the serial console
   instead of the graphics console, as the graphics console causes RT
   performance issues.

.. end-install-software-controller-0-aio-virtual

.. include:: /_includes/install-patch-ctl-0.rest

.. incl-install-software-controller-0-aio-end

@ -0,0 +1,108 @@

.. incl-bootstrap-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change
   the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap
   playbook:

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap
   playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files
   for Ansible configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host:
      localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example:
      ``$HOME/<hostname>.yml``.

   .. include:: /shared/_includes/ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible
   bootstrap playbook using one of the following methods:

   * Copy the ``default.yml`` file listed above to
     ``$HOME/localhost.yml`` and edit the configurable values as
     desired (use the commented instructions in the file).

     or

   * Create the minimal user configuration override file as shown in
     the example below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :ref:`Ansible Bootstrap Configurations
   <ansible_bootstrap_configs_r7>` for information on additional
   Ansible bootstrap configurations for advanced Ansible bootstrap
   scenarios, such as Docker proxies when deploying behind a firewall,
   etc. Refer to :ref:`Docker Proxy Configuration
   <docker_proxy_config>` for details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete. This can take 5-10
   minutes, depending on the performance of the host machine.

.. incl-bootstrap-controller-0-virt-controller-storage-end:

@ -0,0 +1,170 @@

.. incl-bootstrap-sys-controller-0-standard-start

#. Log in using the username / password of "sysadmin" / "sysadmin".

   When logging in for the first time, you will be forced to change the
   password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap
   playbook. The StarlingX boot image will |DHCP| out all interfaces, so
   the server may have obtained an IP address and have external IP
   connectivity if a |DHCP| server is present in your environment. Verify
   this using the :command:`ip addr` and :command:`ping 8.8.8.8`
   commands.

   Otherwise, manually configure an IP address and default IP route. Use
   the PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable
   to your deployment environment.

   .. code-block:: bash

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap
   playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for
   Ansible configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host:
      localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example:
      ``$HOME/<hostname>.yml``.

   .. only:: starlingx

      .. include:: /shared/_includes/ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   .. note::

      This Ansible Overrides file for the Bootstrap Playbook
      ($HOME/localhost.yml) contains security sensitive information; use
      the :command:`ansible-vault create $HOME/localhost.yml` command to
      create it. You will be prompted for a password to protect/encrypt
      the file. Use the :command:`ansible-vault edit $HOME/localhost.yml`
      command if the file needs to be edited after it is created.
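
      A minimal console sketch of this flow, using only the two commands
      named above:

      .. code-block:: none

         # Create the encrypted overrides file (you are prompted for a vault password)
         ansible-vault create $HOME/localhost.yml

         # Edit the encrypted file later, if needed
         ansible-vault edit $HOME/localhost.yml

         # Note: when bootstrapping with a vault-encrypted overrides file,
         # run the bootstrap playbook with --ask-vault-pass so Ansible can
         # decrypt it.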

   #. Use a copy of the ``default.yml`` file listed above to provide your
      overrides.

      The ``default.yml`` file lists all available parameters for
      bootstrap configuration with a brief description for each parameter
      in the file comments.

      To use this method, run the :command:`ansible-vault create
      $HOME/localhost.yml` command, copy the contents of the
      ``default.yml`` file into the ansible-vault editor, and edit the
      configurable values as required.

   #. Create a minimal user configuration override file.

      To use this method, create your override file with the
      :command:`ansible-vault create $HOME/localhost.yml` command and
      provide the minimum required parameters for the deployment
      configuration as shown in the example below. Use the OAM IP SUBNET
      and IP ADDRESSing applicable to your deployment environment.

      .. include:: /_includes/min-bootstrap-overrides-non-simplex.rest

      .. only:: starlingx

         In either of the above options, the bootstrap playbook’s default
         values will pull all container images required for the |prod-p|
         from Docker hub.

         If you have set up a private Docker registry to use for
         bootstrapping, then you will need to add the following lines in
         $HOME/localhost.yml:

      .. only:: partner

         .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
            :start-after: docker-reg-begin
            :end-before: docker-reg-end

      .. code-block:: yaml

         docker_registries:
           quay.io:
             url: myprivateregistry.abc.com:9001/quay.io
           docker.elastic.co:
             url: myprivateregistry.abc.com:9001/docker.elastic.co
           gcr.io:
             url: myprivateregistry.abc.com:9001/gcr.io
           ghcr.io:
             url: myprivateregistry.abc.com:9001/ghcr.io
           k8s.gcr.io:
             url: myprivateregistry.abc.com:9001/k8s.gcr.io
           docker.io:
             url: myprivateregistry.abc.com:9001/docker.io
           registry.k8s.io:
             url: myprivateregistry.abc.com:9001/registry.k8s.io
           icr.io:
             url: myprivateregistry.abc.com:9001/icr.io
           defaults:
             type: docker
             username: <your_myprivateregistry.abc.com_username>
             password: <your_myprivateregistry.abc.com_password>

         # Add the CA Certificate that signed myprivateregistry.abc.com’s
         # certificate as a Trusted CA
         ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem

      See :ref:`Use a Private Docker Registry <use-private-docker-registry-r7>`
      for more information.

      .. only:: starlingx

         If a firewall is blocking access to Docker hub or your private
         registry from your StarlingX deployment, you will need to add
         the following lines in $HOME/localhost.yml (see :ref:`Docker
         Proxy Configuration <docker_proxy_config>` for more details
         about Docker proxy settings):

      .. only:: partner

         .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
            :start-after: firewall-begin
            :end-before: firewall-end

      .. code-block:: bash

         # Add these lines to configure Docker to use a proxy server
         docker_http_proxy: http://my.proxy.com:1080
         docker_https_proxy: https://my.proxy.com:1443
         docker_no_proxy:
           - 1.2.3.4

      Refer to :ref:`Ansible Bootstrap Configurations
      <ansible_bootstrap_configs_r7>` for information on additional
      Ansible bootstrap configurations for advanced Ansible bootstrap
      scenarios.

.. incl-bootstrap-sys-controller-0-standard-end

@ -0,0 +1,75 @@

.. incl-config-controller-0-storage-start

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the |OAM| interface of controller-0 and specify the
   attached network as "oam".

   The following example configures the |OAM| interface on a physical untagged
   ethernet port. Use the |OAM| port name that is applicable to your deployment
   environment, for example eth0:

   .. code-block:: bash

      OAM_IF=<OAM-PORT>
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure the MGMT interface of controller-0 and specify the attached
   networks of both "mgmt" and "cluster-host".

   The following example configures the MGMT interface on a physical untagged
   ethernet port. Use the MGMT port name that is applicable to your deployment
   environment, for example eth1:

   .. code-block:: bash

      MGMT_IF=<MGMT-PORT>

      # De-provision loopback interface and
      # remove mgmt and cluster-host networks from loopback interface
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done

      # Configure management interface and assign mgmt and cluster-host networks to it
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

   To configure a vlan or aggregated ethernet interface, see :ref:`Node
   Interfaces <node-interfaces-index>`.

#. Configure |NTP| servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

   To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
   <ptp-server-config-index>`.

#. If required, configure the Ceph storage backend:

   A persistent storage backend is required if your application requires
   |PVCs|.

   .. only:: openstack

      .. important::

         The StarlingX OpenStack application **requires** |PVCs|.

   ::

      system storage-backend-add ceph --confirmed
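
   To confirm the backend, a quick check (a sketch; expect the state to
   move from ``configuring`` to ``configured``):

   ::

      system storage-backend-list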

.. incl-config-controller-0-storage-end:

@ -0,0 +1,68 @@

.. incl-config-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the |OAM| and MGMT interfaces of controller-0 and specify
   the attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure |NTP| servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock
      skew alarms. Also, the virtual instance clock is synchronized
      with the host clock, so it is not absolutely required to
      configure |NTP| here.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure the Ceph storage backend:

   .. important::

      This step is required only if your application requires
      persistent storage.

      If you want to install the StarlingX OpenStack application
      (|prefix|-openstack), this step is mandatory.

   ::

      system storage-backend-add ceph --confirmed

#. If required, and not already done as part of bootstrap, configure
   Docker to use a proxy server.

   #. List Docker proxy parameters:

      ::

         system service-parameter-list platform docker

   #. Refer to :ref:`docker_proxy_config` for
      details about Docker proxy settings.
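
   #. For example, to add proxy parameters after bootstrap, a minimal
      sketch (the proxy URLs are placeholders; confirm the parameter
      names against :ref:`docker_proxy_config`):

      ::

         system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
         system service-parameter-add platform docker https_proxy=https://my.proxy.com:1443
         system service-parameter-apply platform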

.. incl-config-controller-0-virt-controller-storage-end:

@ -0,0 +1,28 @@

.. incl-config-controller-1-virt-controller-storage-start:

Configure the |OAM| and MGMT interfaces of virtual controller-1 and
specify the attached networks. Note that the MGMT interface is partially
set up by the network install procedure.

::

   OAM_IF=enp7s1
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 mgmt0 cluster-host

.. rubric:: OpenStack-specific host configuration

.. important::

   This step is required only if the StarlingX OpenStack application
   (|prefix|-openstack) will be installed.

**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the |prefix|-openstack manifest/helm-charts later:

::

   system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-virt-controller-storage-end:

50
doc/source/shared/_includes/incl-config-controller-1.rest
Normal file
@ -0,0 +1,50 @@

.. incl-config-controller-1-start:

#. Configure the |OAM| interface of controller-1 and specify the
   attached network of "oam".

   The following example configures the |OAM| interface on a physical
   untagged ethernet port. Use the |OAM| port name that is applicable
   to your deployment environment, for example eth0:

   .. code-block:: bash

      OAM_IF=<OAM-PORT>
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam

   To configure a vlan or aggregated ethernet interface, see
   :ref:`Node Interfaces <node-interfaces-index>`.
#. The MGMT interface is partially set up by the network install
   procedure, which configures the port used for network install as the
   MGMT port and specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of controller-1 by
   specifying the attached network of "cluster-host":

   ::

      system interface-network-assign controller-1 mgmt0 cluster-host
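   Optionally, review the resulting interface configuration. This is a
   hedged sketch, assuming the ``system host-if-list`` command and its
   ``-a`` flag are available in your release:

   ::

      # List all interfaces on controller-1, including unconfigured
      # ones (-a); assumed command and flag
      system host-if-list -a controller-1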
.. only:: openstack

   *************************************
   OpenStack-specific host configuration
   *************************************

   .. important::

      This step is required only if the |prod-os| application
      (|prefix|-openstack) will be installed.

   **For OpenStack only:** Assign OpenStack host labels to controller-1
   in support of installing the |prefix|-openstack manifest and
   helm-charts later:

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-end:
@ -12,8 +12,6 @@ installation.

Before attempting to install |prod|, ensure that you have the following:

.. _installation-pre-requisites-ul-uzl-rny-q3b:

- The |prod-long| host installer ISO image file.
@ -1,9 +1,8 @@

The following sections describe system requirements and host setup for a
workstation hosting virtual machine(s) where StarlingX will be deployed.

*********************
Hardware requirements
*********************

.. rubric:: Hardware requirements

The host system should have at least:

@ -18,9 +17,8 @@ The host system should have at least:

* **Network:** One network adapter with active Internet connection

*********************
Software requirements
*********************

.. rubric:: Software requirements

The host system should have at least:

@ -28,9 +26,8 @@ The host system should have at least:

All other required packages will be installed by scripts in the StarlingX tools repository.

**********
Host setup
**********

.. rubric:: Host setup

Set up the host with the following steps:
@ -68,5 +65,7 @@ Set up the host with the following steps:

#. Get the latest StarlingX ISO from the
   `StarlingX mirror <https://mirror.starlingx.windriver.com/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/>`_.
   Alternately, you can get an older release ISO from `here <https://mirror.starlingx.windriver.com/mirror/starlingx/release/>`_.
   `StarlingX mirror
   <https://mirror.starlingx.windriver.com/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/>`_.
   Alternately, you can get an older release ISO from `here
   <https://mirror.starlingx.windriver.com/mirror/starlingx/release/>`_.
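#. Optionally, verify the integrity of the downloaded image. This is a
   hedged sketch: the filename is hypothetical, and a published checksum
   may or may not be available for your chosen release:

   ::

      # Compute the SHA-256 digest and compare it against the value
      # published on the mirror, if one is provided
      sha256sum starlingx-intel-x86-64-cd.iso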