docs/doc/source/shared/_includes/aio_simplex_install_kubernetes.rest
.. begin-aio-sx-install-verify-ip-connectivity

External connectivity is required to run the Ansible bootstrap playbook. The
StarlingX boot image sends |DHCP| requests on all interfaces, so if a |DHCP|
server is present in your environment, the server may already have obtained an
IP address and have external IP connectivity. Verify this using the
:command:`ip addr` and :command:`ping 8.8.8.8` commands.

Otherwise, manually configure an IP address and default IP route. Use the
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS values applicable to
your deployment environment.

::

   sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
   sudo ip link set up dev <PORT>
   sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
   ping 8.8.8.8

.. end-aio-sx-install-verify-ip-connectivity
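As an illustration of the placeholder substitution above, the following sketch
builds the manual-configuration commands from hypothetical example values
(``enp0s3``, ``10.10.10.2/24``, gateway ``10.10.10.1`` — not values from any
real deployment) and prints them for review before you run them:

```shell
# Hypothetical example values only; replace with values from your environment.
PORT=enp0s3
IPADDR=10.10.10.2
SUBNET_LENGTH=24
GATEWAY=10.10.10.1

# Print the fully substituted commands for review before running them with sudo:
echo "sudo ip address add ${IPADDR}/${SUBNET_LENGTH} dev ${PORT}"
echo "sudo ip link set up dev ${PORT}"
echo "sudo ip route add default via ${GATEWAY} dev ${PORT}"
```

Reviewing the substituted commands first helps catch a wrong port name or
prefix length before the interface is (mis)configured.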
.. begin-config-controller-0-oam-interface-sx

The following example configures the |OAM| interface on a physical untagged
ethernet port. Use the |OAM| port name that is applicable to your deployment
environment, for example eth0:

::

   ~(keystone_admin)$ OAM_IF=<OAM-PORT>
   ~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
   ~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam

To configure a vlan or aggregated ethernet interface, see
|node-interfaces-index|.

.. end-config-controller-0-oam-interface-sx
.. begin-config-controller-0-ntp-interface-sx

.. code-block:: none

   ~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

To configure |PTP| instead of |NTP|, see |ptp-server-config-index|.

.. end-config-controller-0-ntp-interface-sx
.. begin-config-controller-0-OS-k8s-sriov-sx

#. Configure the Kubernetes |SRIOV| device plugin.

   ::

      ~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled

#. |optional| If you are planning on running |DPDK| in Kubernetes hosted
   application containers on this host, configure the number of 1G huge
   pages required on both |NUMA| nodes.

   .. code-block:: bash

      # assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10

      # assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
      ~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10

.. end-config-controller-0-OS-k8s-sriov-sx
.. begin-config-controller-0-OS-add-cores-sx

A minimum of 4 platform cores are required; 6 platform cores are recommended.

Increase the number of platform cores with the following command. This
example assigns 6 cores on processor/numa-node 0 on controller-0 to
platform.

.. code-block:: bash

   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0

.. end-config-controller-0-OS-add-cores-sx
.. begin-config-controller-0-OS-vswitch-sx

To deploy |OVS-DPDK|, run the following command:

.. parsed-literal::

   ~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|

The default recommendation for an |AIO|-controller is to use a single core
for the |OVS-DPDK| vSwitch.

.. code-block:: bash

   # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on each
|NUMA| node on the host. It is recommended to configure 1x 1G huge page
(-1G 1) for vSwitch memory on each |NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge page size is
supported on any one host. If your application |VMs| require 2M huge pages,
then configure 500x 2M huge pages (-2M 500) for vSwitch memory on each |NUMA|
node on the host.

.. code-block:: bash

   # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0

   # assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1

.. important::

   |VMs| created in an |OVS-DPDK| environment must be configured to use huge
   pages to enable networking, and must use a flavor with the property
   hw:mem_page_size=large.

Configure the huge pages for |VMs| in an |OVS-DPDK| environment on this host.
The following commands are an example that assumes a 1G huge page size is
being used on this host:

.. code-block:: bash

   # assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
   ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0

   # assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
   ~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1

.. note::

   After controller-0 is unlocked, changing vswitch_type requires locking and
   unlocking controller-0 to apply the change.

.. end-config-controller-0-OS-vswitch-sx
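To see why 500x 2M pages is the suggested substitute for 1x 1G page, compare
the per-|NUMA|-node vSwitch memory each option reserves (plain-shell
arithmetic, not a StarlingX command):

```shell
# per-NUMA-node vSwitch memory, in MiB
ONE_1G_PAGE_MIB=$(( 1 * 1024 ))            # -1G 1
FIVE_HUNDRED_2M_PAGES_MIB=$(( 500 * 2 ))   # -2M 500
echo "1x 1G page:    ${ONE_1G_PAGE_MIB} MiB"
echo "500x 2M pages: ${FIVE_HUNDRED_2M_PAGES_MIB} MiB"
```

Both options reserve roughly 1 GiB per |NUMA| node for vSwitch, so switching
to 2M pages keeps the vSwitch memory budget essentially unchanged while
satisfying the single-huge-page-size limitation.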
.. begin-config-controller-0-OS-add-fs-sx

.. note::

   An 'instances' filesystem and a 'nova-local' volume group cannot exist at
   the same time.

Add an 'instances' filesystem:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create instances filesystem
   ~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>

Or add a 'nova-local' volume group:

.. code-block:: bash

   ~(keystone_admin)$ export NODE=controller-0

   # Create nova-local local volume group
   ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

   # Get the UUID of an unused disk to be added to the nova-local volume
   # group. CEPH OSD disks can NOT be used.
   # List the host's disks and take note of the UUID of the disk to be used.
   ~(keystone_admin)$ system host-disk-list ${NODE}

   # Add the unused disk to the nova-local volume group
   ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>

.. end-config-controller-0-OS-add-fs-sx
.. begin-config-controller-0-OS-data-interface-sx

.. code-block:: bash

   ~(keystone_admin)$ NODE=controller-0

   # List the host's inventoried ports and identify the ports to be used as
   # data interfaces, based on the displayed linux port name, pci address and
   # device type.
   ~(keystone_admin)$ system host-port-list ${NODE}

   # List the host's auto-configured ethernet interfaces, find the interfaces
   # corresponding to the ports identified in the previous step, and take
   # note of their UUIDs.
   ~(keystone_admin)$ system host-if-list -a ${NODE}

   # Modify the configuration of these interfaces, configuring them as data
   # class interfaces with an MTU of 1500, named data#
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
   ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

   # Create the data networks that the vswitch 'data' interfaces will be
   # connected to
   ~(keystone_admin)$ DATANET0='datanet0'
   ~(keystone_admin)$ DATANET1='datanet1'
   ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
   ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

   # Assign the data networks to the data interfaces
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
   ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

.. end-config-controller-0-OS-data-interface-sx