Prep docs for r7 release

Cloned install r6 dir to r7
Dropped install r5 dir
Corresponding path and link updates

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: If5752e207d99d0fc53940e4d66c7d5266675beb7
This commit is contained in:
Ron Stone 2022-08-29 14:11:18 -04:00
parent 126131ce63
commit 46d38b65c9
77 changed files with 855 additions and 502 deletions

View File

@ -1,7 +1,7 @@
Your Kubernetes cluster is now up and running. Your Kubernetes cluster is now up and running.
For instructions on how to access StarlingX Kubernetes see For instructions on how to access StarlingX Kubernetes see
:ref:`kubernetes_access`. :ref:`kubernetes_access_r7`.
For instructions on how to install and access StarlingX OpenStack see For instructions on how to install and access StarlingX OpenStack see
:ref:`index-install-r6-os-adc44604968c`. :ref:`index-install-r7-os-adc44604968c`.

View File

@ -61,7 +61,7 @@ one for servers with Legacy BIOS support and another for servers with |UEFI|
firmware. firmware.
During |PXE| boot configuration setup, as described in During |PXE| boot configuration setup, as described in
:ref:`configuring-a-pxe-boot-server-r6`, additional steps are required to :ref:`configuring-a-pxe-boot-server-r7`, additional steps are required to
collect configuration information and create a grub menu to install |prod| collect configuration information and create a grub menu to install |prod|
|deb-eval-release| AIO controller-0 function on the target server. |deb-eval-release| AIO controller-0 function on the target server.

View File

@ -12,26 +12,34 @@ Each guide provides instruction on a specific StarlingX configuration
Upcoming release (latest) Upcoming release (latest)
------------------------- -------------------------
StarlingX R6.0 is under development. StarlingX R8.0 is under development.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
r6_release/index-install-r6-8966076f0e81 r7_release/index-install-r7-8966076f0e81
----------------- -----------------
Supported release Supported release
----------------- -----------------
StarlingX R5.0.1 is the most recent supported release of StarlingX. StarlingX R7.0 is the most recent supported release of StarlingX.
Use the R5.0 Installation Guides to install R5.0.1. Use the R7.0 Installation Guides to install R7.0.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
r5_release/index-install-r5-ca4053cb3ab9 r7_release/index-install-r7-8966076f0e81
Use the R6.0 Installation Guides to install R6.0.
.. toctree::
:maxdepth: 1
r6_release/index-install-r6-8966076f0e81
----------------- -----------------

View File

@ -1,3 +1,5 @@
.. _kubernetes_access_r6:
================================ ================================
Access StarlingX Kubernetes R6.0 Access StarlingX Kubernetes R6.0
================================ ================================

View File

@ -0,0 +1,3 @@
{
"esbonio.sphinx.confDir": ""
}

View File

@ -1,4 +1,5 @@
.. _ansible_bootstrap_configs_r5:
.. _ansible_bootstrap_configs_r7:
================================ ================================
Ansible Bootstrap Configurations Ansible Bootstrap Configurations
@ -11,7 +12,7 @@ This section describes Ansible bootstrap configuration options.
:depth: 1 :depth: 1
.. _install-time-only-params-r5: .. _install-time-only-params-r7:
---------------------------- ----------------------------
Install-time-only parameters Install-time-only parameters
@ -83,6 +84,13 @@ Install-time-only parameters
* ``password`` * ``password``
* ``secure`` * ``secure``
* ``ghcr.io``
* ``url``
* ``username``
* ``password``
* ``secure``
* ``quay.io`` * ``quay.io``
* ``url`` * ``url``
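Taken together, a registry entry in the bootstrap overrides file combines these install-time-only parameters; a minimal sketch, with an illustrative registry URL and credentials:

```yaml
docker_registries:
  quay.io:
    url: my.quayregistry.io
    username: myuser
    password: mypassword
    secure: True
```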
@ -289,6 +297,10 @@ additionally specifies an alternate CA certificate.
url: my.k8sregistry.io url: my.k8sregistry.io
gcr.io: gcr.io:
url: my.gcrregistry.io url: my.gcrregistry.io
ghcr.io:
url: my.ghcrregistry.io
docker.elastic.co:
url: my.dockerregistry.io
quay.io: quay.io:
url: my.quayregistry.io url: my.quayregistry.io
docker.io: docker.io:
@ -337,7 +349,7 @@ docker_no_proxy
- 1.2.3.4 - 1.2.3.4
- 5.6.7.8 - 5.6.7.8
.. _k8s-root-ca-cert-key-r5: .. _k8s-root-ca-cert-key-r7:
-------------------------------------- --------------------------------------
Kubernetes root CA certificate and key Kubernetes root CA certificate and key
@ -369,7 +381,7 @@ k8s_root_ca_key
CA certificate has an expiry of at least 5-10 years. CA certificate has an expiry of at least 5-10 years.
The administrator can also provide values to add to the Kubernetes API server The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the 'apiserver_cert_sans` certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter. override parameter.
apiserver_cert_sans apiserver_cert_sans
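A hedged sketch of these overrides together in the bootstrap values file (the file paths and SAN entries are illustrative only):

```yaml
k8s_root_ca_cert: /home/sysadmin/k8s-root-ca-cert.pem
k8s_root_ca_key: /home/sysadmin/k8s-root-ca-key.pem
apiserver_cert_sans:
  - k8s-api.example.com
  - 10.10.10.10
```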

View File

@ -1,6 +1,6 @@
.. jow1442253584837 .. jow1442253584837
.. _accessing-pxe-boot-server-files-for-a-custom-configuration: .. _accessing-pxe-boot-server-files-for-a-custom-configuration-r7:
======================================================= =======================================================
Access PXE Boot Server Files for a Custom Configuration Access PXE Boot Server Files for a Custom Configuration
@ -17,11 +17,11 @@ use the contents of the working directory to construct a |PXE| boot environment
according to your own requirements or preferences. according to your own requirements or preferences.
For more information about using a |PXE| boot server, see :ref:`Configure a For more information about using a |PXE| boot server, see :ref:`Configure a
PXE Boot Server <configuring-a-pxe-boot-server-r5>`. PXE Boot Server <configuring-a-pxe-boot-server-r7>`.
.. rubric:: |proc| .. rubric:: |proc|
.. _accessing-pxe-boot-server-files-for-a-custom-configuration-steps-www-gcz-3t: .. _accessing-pxe-boot-server-files-for-a-custom-configuration-steps-www-gcz-3t-r7:
#. Copy the ISO image from the source \(product DVD, USB device, or #. Copy the ISO image from the source \(product DVD, USB device, or
|dnload-loc|\) to a temporary location on the |PXE| boot server. |dnload-loc|\) to a temporary location on the |PXE| boot server.

View File

@ -1,6 +1,6 @@
.. ulc1552927930507 .. ulc1552927930507
.. _adding-hosts-in-bulk: .. _adding-hosts-in-bulk-r7:
================= =================
Add Hosts in Bulk Add Hosts in Bulk
@ -13,7 +13,7 @@ You can add an arbitrary number of hosts using a single CLI command.
#. Prepare an XML file that describes the hosts to be added. #. Prepare an XML file that describes the hosts to be added.
For more information, see :ref:`Bulk Host XML File Format For more information, see :ref:`Bulk Host XML File Format
<bulk-host-xml-file-format>`. <bulk-host-xml-file-format-r7>`.
You can also create the XML configuration file from an existing, running You can also create the XML configuration file from an existing, running
configuration using the :command:`system host-bulk-export` command. configuration using the :command:`system host-bulk-export` command.
@ -58,4 +58,4 @@ the personality.
.. seealso:: .. seealso::
:ref:`Bulk Host XML File Format <bulk-host-xml-file-format>` :ref:`Bulk Host XML File Format <bulk-host-xml-file-format-r7>`
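As an illustrative sketch only (the element names here are assumptions; the Bulk Host XML File Format reference above is authoritative), a minimal bulk-host file could look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<hosts>
  <host>
    <hostname>worker-0</hostname>
    <personality>worker</personality>
    <!-- MAC address of the management interface, used to match the booting host -->
    <mgmt_mac>08:00:27:1a:2b:3c</mgmt_mac>
  </host>
</hosts>
```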

View File

@ -1,19 +1,19 @@
.. pyp1552927946441 .. pyp1552927946441
.. _adding-hosts-using-the-host-add-command: .. _adding-hosts-using-the-host-add-command-r7:
==================================== ================================
Add Hosts Using the host-add Command Add Hosts Using the Command Line
==================================== ================================
You can add hosts to the system inventory using the command line. You can add hosts to the system inventory using the :command:`host-add` command.
.. rubric:: |context| .. rubric:: |context|
There are several ways to add hosts to |prod|; for an overview, see the There are several ways to add hosts to |prod|; for an overview, see the
StarlingX Installation Guides, StarlingX Installation Guides,
`https://docs.starlingx.io/deploy_install_guides/index.html `https://docs.starlingx.io/deploy_install_guides/index.html
<https://docs.starlingx.io/deploy_install_guides/index.html>`__ for your <https://docs.starlingx.io/deploy_install_guides/index.html>`_ for your
system. Instead of powering up each host and then defining its personality and system. Instead of powering up each host and then defining its personality and
other characteristics interactively, you can use the :command:`system host-add` other characteristics interactively, you can use the :command:`system host-add`
command to define hosts before you power them up. This can be useful for command to define hosts before you power them up. This can be useful for
@ -150,10 +150,23 @@ scripting an initial setup.
~(keystone_admin)]$ system host-add -n compute-0 -p worker -I 10.10.10.100 ~(keystone_admin)]$ system host-add -n compute-0 -p worker -I 10.10.10.100
#. Verify that the host has been added successfully.
Use the :command:`fm alarm-list` command to check whether any major or
critical alarms have been raised. You can also use :command:`fm event-list`
to see a log of events. For more information on alarms, see :ref:`Fault
Management Overview <fault-management-overview>`.
#. With **controller-0** running, start the host. #. With **controller-0** running, start the host.
The host is booted and configured with a personality. The host is booted and configured with a personality.
#. Verify that the host has started successfully.
The command :command:`system host-list` shows a list of hosts. The
added host should be available, enabled, and unlocked. You can also
check alarms and events again.
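The verification steps above can be sketched as a short CLI session, using only commands already referenced in this procedure:

```
# List hosts; the added host should show as unlocked/enabled/available
system host-list

# Check for major or critical alarms raised during startup
fm alarm-list

# Review the historical event log
fm event-list
```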
.. rubric:: |postreq| .. rubric:: |postreq|
After adding the host, you must provision it according to the requirements of After adding the host, you must provision it according to the requirements of

View File

@ -1,18 +1,18 @@
============================================== ==============================================
Bare metal All-in-one Duplex Installation R5.0 Bare metal All-in-one Duplex Installation R7.0
============================================== ==============================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_aio_duplex.txt .. include:: /shared/_includes/desc_aio_duplex.txt
The bare metal AIO-DX deployment configuration may be extended with up to four The bare metal AIO-DX deployment configuration may be extended with up to four
worker nodes (not shown in the diagram). Installation instructions for worker nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`. these additional nodes are described in :doc:`aio_duplex_extend`.
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------
Installation Installation

View File

@ -92,8 +92,8 @@ Configure worker nodes
.. important:: .. important::
These steps are required only if the StarlingX OpenStack application **These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed. (|prefix|-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prefix|-openstack manifest and helm-charts later. support of installing the |prefix|-openstack manifest and helm-charts later.
@ -102,6 +102,7 @@ Configure worker nodes
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label| system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled system host-label-assign $NODE sriov=enabled
done done

View File

@ -3,7 +3,7 @@ Hardware Requirements
===================== =====================
This section describes the hardware requirements and server preparation for a This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal All-in-one Duplex** deployment configuration. **StarlingX R7.0 bare metal All-in-one Duplex** deployment configuration.
.. contents:: .. contents::
:local: :local:
@ -56,3 +56,14 @@ Prepare bare metal servers
-------------------------- --------------------------
.. include:: prep_servers.txt .. include:: prep_servers.txt
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in the following diagram.
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-duplex.png
:scale: 50%
:alt: All-in-one Duplex deployment configuration
*All-in-one Duplex deployment configuration*

View File

@ -1,5 +1,5 @@
.. _aio_duplex_install_kubernetes_r5: .. _aio_duplex_install_kubernetes_r7:
================================================ ================================================
Install Kubernetes Platform on All-in-one Duplex Install Kubernetes Platform on All-in-one Duplex
@ -12,7 +12,7 @@ Install Kubernetes Platform on All-in-one Duplex
.. only:: starlingx .. only:: starlingx
This section describes the steps to install the StarlingX Kubernetes This section describes the steps to install the StarlingX Kubernetes
platform on a **StarlingX R5.0 All-in-one Duplex** deployment platform on a **StarlingX R7.0 All-in-one Duplex** deployment
configuration. configuration.
.. contents:: .. contents::
@ -30,7 +30,7 @@ Install Kubernetes Platform on All-in-one Duplex
Install software on controller-0 Install software on controller-0
-------------------------------- --------------------------------
.. include:: /shared/_includes/r5_inc-install-software-on-controller.rest .. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start :start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end :end-before: incl-install-software-controller-0-aio-end
@ -91,7 +91,7 @@ Bootstrap system on controller-0
.. only:: starlingx .. only:: starlingx
.. include:: /shared/_includes/r5_ansible_install_time_only.txt .. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods: playbook using one of the following methods:
@ -138,6 +138,8 @@ Bootstrap system on controller-0
url: myprivateregistry.abc.com:9001/docker.elastic.co url: myprivateregistry.abc.com:9001/docker.elastic.co
gcr.io: gcr.io:
url: myprivateregistry.abc.com:9001/gcr.io url: myprivateregistry.abc.com:9001/gcr.io
ghcr.io:
url: myprivateregistry.abc.com:9001/ghcr.io
k8s.gcr.io: k8s.gcr.io:
url: myprivateregistry.abc.com:9001/k8s.gcr.io url: myprivateregistry.abc.com:9001/k8s.gcr.io
docker.io: docker.io:
@ -151,7 +153,7 @@ Bootstrap system on controller-0
# certificate as a Trusted CA # certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See :ref:`Use a Private Docker Registry <use-private-docker-registry-r5>` See :ref:`Use a Private Docker Registry <use-private-docker-registry-r7>`
for more information. for more information.
.. only:: starlingx .. only:: starlingx
@ -177,7 +179,7 @@ Bootstrap system on controller-0
- 1.2.3.4 - 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r5>` Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r7>`
for information on additional Ansible bootstrap configurations for advanced for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios. Ansible bootstrap scenarios.
@ -215,7 +217,8 @@ Configure controller-0
system host-if-modify controller-0 $OAM_IF -c platform system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam system interface-network-assign controller-0 $OAM_IF oam
To configure a vlan or aggregated ethernet interface, see :ref:`Node Interfaces <node-interfaces-index>`. To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.
#. Configure the MGMT interface of controller-0 and specify the attached #. Configure the MGMT interface of controller-0 and specify the attached
networks of both "mgmt" and "cluster-host". networks of both "mgmt" and "cluster-host".
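This step follows the same pattern as the OAM interface step above; a hedged sketch, where <MGMT-PORT> is a placeholder for the actual management port name:

```
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host
```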
@ -296,6 +299,42 @@ Configure controller-0
# assign 6 cores on processor/numa-node 0 on controller-0 to platform # assign 6 cores on processor/numa-node 0 on controller-0 to platform
system host-cpu-modify -f platform -p0 6 controller-0 system host-cpu-modify -f platform -p0 6 controller-0
#. Due to the additional OpenStack services' containers running on the
controller host, the size of the Docker filesystem needs to be
increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
system host-fs-list controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
# 80G, you will need to add a new disk partition to cgts-vg.
# There must be at least 20GB of available space after the docker
# filesystem is increased.
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk )
# Get device path of ROOT DISK
system host-show controller-0 --nowrap | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
# Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to 'cgts-vg' local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
# Increase docker filesystem to 60G
system host-fs-modify controller-0 docker=60
#. **For OpenStack only:** Configure the system setting for the vSwitch. #. **For OpenStack only:** Configure the system setting for the vSwitch.
@ -322,8 +361,8 @@ Configure controller-0
system modify --vswitch_type none system modify --vswitch_type none
This does not run any vSwitch directly on the host, instead, it uses This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of |prefix|-openstack the containerized |OVS| defined in the helm charts of
manifest. |prefix|-openstack manifest.
To deploy |OVS-DPDK|, run the following command: To deploy |OVS-DPDK|, run the following command:
@ -332,7 +371,7 @@ Configure controller-0
system modify --vswitch_type |ovs-dpdk| system modify --vswitch_type |ovs-dpdk|
Default recommendation for an |AIO|-controller is to use a single Default recommendation for an |AIO|-controller is to use a single
core |OVS-DPDK| vSwitch. core for |OVS-DPDK| vSwitch.
.. code-block:: bash .. code-block:: bash
@ -363,6 +402,7 @@ Configure controller-0
system host-memory-modify -f vswitch -1G 1 controller-0 1 system host-memory-modify -f vswitch -1G 1 controller-0 1
.. important:: .. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use |VMs| created in an |OVS-DPDK| environment must be configured to use
@ -375,6 +415,7 @@ Configure controller-0
.. code-block:: bash .. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 0 system host-memory-modify -f application -1G 10 controller-0 0
@ -387,6 +428,7 @@ Configure controller-0
After controller-0 is unlocked, changing vswitch_type requires After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change. locking and unlocking controller-0 to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume #. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for |prefix|-openstack nova ephemeral disks. group, which is needed for |prefix|-openstack nova ephemeral disks.
@ -414,6 +456,7 @@ Configure controller-0
# root disk, if that is what you chose above. # root disk, if that is what you chose above.
# Additional PARTITION(s) from additional disks can be added later if required. # Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30 PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group # Add new partition to nova-local local volume group
@ -421,7 +464,7 @@ Configure controller-0
sleep 2 sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-0. #. **For OpenStack only:** Configure data interfaces for controller-0.
Data class interfaces are vSwitch interfaces used by vSwitch to provide Data class interfaces are vswitch interfaces used by vswitch to provide
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network. underlying assigned Data Network.
@ -471,10 +514,10 @@ Optionally Configure PCI-SRIOV Interfaces
This step is **optional** for OpenStack. Do this step if using |SRIOV| This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can
have the same Data Networks assigned to them as vSwitch data interfaces. have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the |PCI|-|SRIOV| interfaces for controller-0. * Configure the |PCI|-|SRIOV| interfaces for controller-0.
.. code-block:: bash .. code-block:: bash
@ -566,7 +609,7 @@ For host-based Ceph:
# List OSD storage devices # List OSD storage devices
system host-stor-list controller-0 system host-stor-list controller-0
.. only:: starlingx .. only:: starlingx
For Rook container-based Ceph: For Rook container-based Ceph:
@ -595,7 +638,7 @@ Unlock controller-0
.. only:: openstack .. only:: openstack
* **For OpenStack only:** Due to the additional Openstack services * **For OpenStack only:** Due to the additional OpenStack services
containers running on the controller host, the size of the Docker containers running on the controller host, the size of the Docker
filesystem needs to be increased from the default size of 30G to 60G. filesystem needs to be increased from the default size of 30G to 60G.
@ -608,7 +651,9 @@ Unlock controller-0
system host-lvg-list controller-0 system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than # if existing docker fs size + cgts-vg available space is less than
# 60G, you will need to add a new disk partition to cgts-vg. # 80G, you will need to add a new disk partition to cgts-vg.
# There must be at least 20GB of available space after the docker
# filesystem is increased.
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk ) # ( if not use another unused disk )
@ -624,7 +669,7 @@ Unlock controller-0
PARTITION_SIZE=30 PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group # Add new partition to 'cgts-vg' local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID> system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added sleep 2 # wait for partition to be added
@ -723,7 +768,7 @@ Configure controller-1
.. only:: starlingx .. only:: starlingx
:: .. parsed-literal::
system host-label-assign controller-1 openstack-control-plane=enabled system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled system host-label-assign controller-1 openstack-compute-node=enabled
@ -755,8 +800,8 @@ Configure controller-1
# assign 6 cores on processor/numa-node 0 on controller-1 to platform # assign 6 cores on processor/numa-node 0 on controller-1 to platform
system host-cpu-modify -f platform -p0 6 controller-1 system host-cpu-modify -f platform -p0 6 controller-1
#. Due to the additional Openstack services containers running on the #. Due to the additional OpenStack services' containers running on the
controller host, the size of the Docker filesystem needs to be controller host, the size of the Docker filesystem needs to be
increased from the default size of 30G to 60G. increased from the default size of 30G to 60G.
.. code-block:: bash .. code-block:: bash
@ -779,12 +824,12 @@ Configure controller-1
# Get UUID of ROOT DISK by listing disks # Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0 system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partitions uuid in response # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
# Use a partition size such that youll be able to increase docker fs size from 30G to 60G # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30 PARTITION_SIZE=30
system system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group # Add new partition to 'cgts-vg' local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID> system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added sleep 2 # wait for partition to be added
@ -815,6 +860,7 @@ Configure controller-1
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host. memory on each |NUMA| node on the host.
.. code-block:: bash .. code-block:: bash
# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch # assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
@ -850,10 +896,10 @@ Configure controller-1
export NODE=controller-1 export NODE=controller-1
# Create nova-local local volume group # Create 'nova-local' local volume group
system host-lvg-add ${NODE} nova-local system host-lvg-add ${NODE} nova-local
# Get UUID of DISK to create PARTITION to be added to nova-local local volume group # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
# CEPH OSD Disks can NOT be used # CEPH OSD Disks can NOT be used
# For best performance, do NOT use system/root disk, use a separate physical disk. # For best performance, do NOT use system/root disk, use a separate physical disk.
@ -873,7 +919,7 @@ Configure controller-1
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group # Add new partition to 'nova-local' local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID> system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2 sleep 2
@ -892,7 +938,7 @@ Configure controller-1
export NODE=controller-1 export NODE=controller-1
# List inventoried hosts ports and identify ports to be used as data interfaces, # List inventoried host's ports and identify ports to be used as 'data' interfaces,
# based on displayed linux port name, pci address and device type. # based on displayed linux port name, pci address and device type.
system host-port-list ${NODE} system host-port-list ${NODE}
@ -902,7 +948,7 @@ Configure controller-1
system host-if-list -a ${NODE} system host-if-list -a ${NODE}
# Modify configuration for these interfaces # Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data# # Configuring them as 'data' class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid> system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid> system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
@ -918,7 +964,7 @@ Configure controller-1
Optionally Configure PCI-SRIOV Interfaces Optionally Configure PCI-SRIOV Interfaces
***************************************** *****************************************
#. **Optionally**, configure pci-sriov interfaces for controller-1. #. **Optionally**, configure |PCI|-|SRIOV| interfaces for controller-1.
This step is **optional** for Kubernetes. Do this step if using |SRIOV| This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers. network attachments in hosted application containers.
@ -926,11 +972,11 @@ Optionally Configure PCI-SRIOV Interfaces
.. only:: openstack .. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV| This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that PCI-SRIOV interfaces can vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can
have the same Data Networks assigned to them as vSwitch data interfaces. have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for controller-1. * Configure the |PCI|-|SRIOV| interfaces for controller-1.
.. code-block:: bash .. code-block:: bash
@ -940,7 +986,7 @@ Optionally Configure PCI-SRIOV Interfaces
# based on displayed linux port name, pci address and device type. # based on displayed linux port name, pci address and device type.
system host-port-list ${NODE} system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces, # List host's auto-configured 'ethernet' interfaces,
# find the interfaces corresponding to the ports identified in previous step, and # find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID # take note of their UUID
system host-if-list -a ${NODE} system host-if-list -a ${NODE}
@ -959,6 +1005,7 @@ Optionally Configure PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0} system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1} system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* **For Kubernetes only:** To enable using |SRIOV| network attachments for * **For Kubernetes only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers: the above interfaces in Kubernetes hosted application containers:
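A hedged sketch of the typical enablement commands (the sriovdp label name is an assumption based on StarlingX conventions; the huge page command follows the syntax used earlier in this guide):

```
# Label controller-1 so the SR-IOV device plugin is scheduled on it
system host-label-assign controller-1 sriovdp=enabled

# If hosted application containers will use huge-page-backed memory on
# SR-IOV interfaces, allocate huge pages, e.g. 10x 1G pages on
# processor/numa-node 0
system host-memory-modify -f application -1G 10 controller-1 0
```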
@ -1001,7 +1048,7 @@ For host-based Ceph:
# List OSD storage devices # List OSD storage devices
system host-stor-list controller-1 system host-stor-list controller-1
.. only:: starlingx .. only:: starlingx
For Rook container-based Ceph: For Rook container-based Ceph:
@ -1114,10 +1161,9 @@ machine.
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt
.. only:: partner .. only:: partner
.. include:: /_includes/72hr-to-license.rest .. include:: /_includes/72hr-to-license.rest
@ -1,14 +1,14 @@
=============================================== ===============================================
Bare metal All-in-one Simplex Installation R5.0 Bare metal All-in-one Simplex Installation R7.0
=============================================== ===============================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_aio_simplex.txt .. include:: /shared/_includes/desc_aio_simplex.txt
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------
Installation Installation
@ -1,9 +1,11 @@
.. _aio_simplex_hardware_r7:
===================== =====================
Hardware Requirements Hardware Requirements
===================== =====================
This section describes the hardware requirements and server preparation for a This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal All-in-one Simplex** deployment configuration. **StarlingX R7.0 bare metal All-in-one Simplex** deployment configuration.
.. contents:: .. contents::
:local: :local:
@ -16,11 +18,11 @@ Minimum hardware requirements
The recommended minimum hardware requirements for bare metal servers for various The recommended minimum hardware requirements for bare metal servers for various
host types are: host types are:
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| Minimum Requirement | All-in-one Controller Node | | Minimum Requirement | All-in-one Controller Node |
+=========================+====================================================================+ +=========================+===========================================================+
| Number of servers | 1 | | Number of servers | 1 |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | | Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
| | 8 cores/socket | | | 8 cores/socket |
| | | | | |
@ -28,31 +30,42 @@ host types are:
| | | | | |
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | | | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
| | (low-power/low-cost option) | | | (low-power/low-cost option) |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| Minimum memory | 64 GB | | Minimum memory | 64 GB |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| Primary disk | 500 GB SSD or NVMe (see :ref:`nvme_config`) | | Primary disk | 500 GB SSD or NVMe (see :ref:`nvme_config`) |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD | | Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
| | - Recommended, but not required: 1 or more SSDs or NVMe | | | - Recommended, but not required: 1 or more SSDs or NVMe |
| | drives for Ceph journals (min. 1024 MiB per OSD | | | drives for Ceph journals (min. 1024 MiB per OSD |
| | journal) | | | journal) |
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K | | | - For OpenStack, recommend 1 or more 500 GB (min. 10K |
| | RPM) for VM local ephemeral storage | | | RPM) for VM local ephemeral storage |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| Minimum network ports | - OAM: 1x1GE | | Minimum network ports | - OAM: 1x1GE |
| | - Data: 1 or more x 10GE | | | - Data: 1 or more x 10GE |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
| BIOS settings | - Hyper-Threading technology enabled | | BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled | | | - Virtualization technology enabled |
| | - VT for directed I/O enabled | | | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance | | | - CPU power and performance policy set to performance |
| | - CPU C state control disabled | | | - CPU C state control disabled |
| | - Plug & play BMC detection disabled | | | - Plug & play BMC detection disabled |
+-------------------------+--------------------------------------------------------------------+ +-------------------------+-----------------------------------------------------------+
-------------------------- --------------------------
Prepare bare metal servers Prepare bare metal servers
-------------------------- --------------------------
.. include:: prep_servers.txt .. include:: prep_servers.txt
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in the following diagram.
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-simplex.png
:scale: 50%
:alt: All-in-one Simplex deployment configuration
*All-in-one Simplex deployment configuration*
@ -1,5 +1,5 @@
.. _aio_simplex_install_kubernetes_r5: .. _aio_simplex_install_kubernetes_r7:
================================================= =================================================
Install Kubernetes Platform on All-in-one Simplex Install Kubernetes Platform on All-in-one Simplex
@ -12,7 +12,7 @@ Install Kubernetes Platform on All-in-one Simplex
.. only:: starlingx .. only:: starlingx
This section describes the steps to install the StarlingX Kubernetes This section describes the steps to install the StarlingX Kubernetes
platform on a **StarlingX R5.0 All-in-one Simplex** deployment platform on a **StarlingX R7.0 All-in-one Simplex** deployment
configuration. configuration.
.. contents:: .. contents::
@ -30,7 +30,7 @@ Install Kubernetes Platform on All-in-one Simplex
Install software on controller-0 Install software on controller-0
-------------------------------- --------------------------------
.. include:: /shared/_includes/r5_inc-install-software-on-controller.rest .. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start :start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end :end-before: incl-install-software-controller-0-aio-end
@ -91,7 +91,7 @@ Bootstrap system on controller-0
.. only:: starlingx .. only:: starlingx
.. include:: /shared/_includes/r5_ansible_install_time_only.txt .. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods: playbook using one of the following methods:
@ -118,7 +118,7 @@ Bootstrap system on controller-0
In either of the above options, the bootstrap playbooks default In either of the above options, the bootstrap playbooks default
values will pull all container images required for the |prod-p| from values will pull all container images required for the |prod-p| from
Docker hub. Docker hub.
If you have set up a private Docker registry to use for bootstrapping If you have set up a private Docker registry to use for bootstrapping
then you will need to add the following lines in $HOME/localhost.yml: then you will need to add the following lines in $HOME/localhost.yml:
@ -138,6 +138,8 @@ Bootstrap system on controller-0
url: myprivateregistry.abc.com:9001/docker.elastic.co url: myprivateregistry.abc.com:9001/docker.elastic.co
gcr.io: gcr.io:
url: myprivateregistry.abc.com:9001/gcr.io url: myprivateregistry.abc.com:9001/gcr.io
ghcr.io:
url: myprivateregistry.abc.com:9001/ghcr.io
k8s.gcr.io: k8s.gcr.io:
url: myprivateregistry.abc.com:9001/k8s.gcr.io url: myprivateregistry.abc.com:9001/k8s.gcr.io
docker.io: docker.io:
@ -151,7 +153,7 @@ Bootstrap system on controller-0
# certificate as a Trusted CA # certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See :ref:`Use a Private Docker Registry <use-private-docker-registry-r5>` See :ref:`Use a Private Docker Registry <use-private-docker-registry-r7>`
for more information. for more information.
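Before running the bootstrap playbook it can be useful to confirm that every mirrored registry in ``$HOME/localhost.yml`` points at a reachable host. A minimal sketch, assuming the ``url:`` lines follow the format shown above (the temp file here stands in for the real override file):

```bash
# Write a sample override file mirroring the example above (illustrative only).
cat > /tmp/localhost.yml <<'EOF'
docker_registries:
  gcr.io:
    url: myprivateregistry.abc.com:9001/gcr.io
  ghcr.io:
    url: myprivateregistry.abc.com:9001/ghcr.io
  docker.io:
    url: myprivateregistry.abc.com:9001/docker.io
EOF

# Extract the unique registry host:port values so each can be checked,
# e.g. with: curl -ks "https://${host}/v2/" >/dev/null
hosts=$(awk '/url:/ {print $2}' /tmp/localhost.yml | cut -d/ -f1 | sort -u)
echo "$hosts"   # -> myprivateregistry.abc.com:9001
```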
@ -178,7 +180,7 @@ Bootstrap system on controller-0
- 1.2.3.4 - 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r5>` Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r7>`
for information on additional Ansible bootstrap configurations for advanced for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios. Ansible bootstrap scenarios.
@ -244,7 +246,6 @@ The newly installed controller needs to be configured.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in #. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prefix|-openstack manifest and helm-charts later. support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx .. only:: starlingx
.. parsed-literal:: .. parsed-literal::
@ -298,24 +299,23 @@ The newly installed controller needs to be configured.
# ( if not use another unused disk ) # ( if not use another unused disk )
# Get device path of ROOT DISK # Get device path of ROOT DISK
system host-show controller-0 --nowrap | fgrep rootfs system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks # Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0 system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
# Use a partition size such that you'll be able to increase docker fs size from 30G to 60G # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30 PARTITION_SIZE=30
system hostdisk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to 'cgts-vg' local volume group # Add new partition to cgts-vg local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID> system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added sleep 2 # wait for partition to be added
# Increase docker filesystem to 60G # Increase docker filesystem to 60G
system host-fs-modify controller-0 docker=60 system host-fs-modify controller-0 docker=60
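The partition size chosen above is tied to the filesystem growth: growing the docker filesystem from 30G to 60G needs 30G of new space in ``cgts-vg``. A quick arithmetic check (illustrative only, not part of the procedure):

```bash
# Verify the new partition covers the planned docker filesystem growth.
current_fs=30        # current docker fs size, GiB
target_fs=60         # target docker fs size, GiB
PARTITION_SIZE=30    # size of the new cgts-vg partition, GiB

needed=$((target_fs - current_fs))
if [ "$PARTITION_SIZE" -ge "$needed" ]; then
    echo "OK: ${PARTITION_SIZE}G partition covers the ${needed}G growth"
else
    echo "Too small: need at least ${needed}G"
fi
```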
#. **For OpenStack only:** Configure the system setting for the vSwitch. #. **For OpenStack only:** Configure the system setting for the vSwitch.
.. only:: starlingx .. only:: starlingx
@ -397,7 +397,7 @@ The newly installed controller needs to be configured.
.. note:: .. note::
After controller-0 is unlocked, changing vSwitch_type requires After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change. locking and unlocking controller-0 to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume #. **For OpenStack only:** Set up disk partition for nova-local volume
@ -480,7 +480,7 @@ The newly installed controller needs to be configured.
Optionally Configure PCI-SRIOV Interfaces Optionally Configure PCI-SRIOV Interfaces
***************************************** *****************************************
#. **Optionally**, configure pci-sriov interfaces for controller-0. #. **Optionally**, configure |PCI|-|SRIOV| interfaces for controller-0.
This step is **optional** for Kubernetes. Do this step if using |SRIOV| This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers. network attachments in hosted application containers.
@ -488,11 +488,11 @@ Optionally Configure PCI-SRIOV Interfaces
.. only:: openstack .. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV| This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that pci-sriov interfaces can vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can
have the same Data Networks assigned to them as vswitch data interfaces. have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for controller-0. * Configure the |PCI|-|SRIOV| interfaces for controller-0.
.. code-block:: bash .. code-block:: bash
@ -696,7 +696,7 @@ machine.
Next steps Next steps
---------- ----------
.. include::/_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt
.. only:: partner .. only:: partner
@ -1,6 +1,6 @@
.. vqr1569420650576 .. vqr1569420650576
.. _bootstrapping-from-a-private-docker-registry: .. _bootstrapping-from-a-private-docker-registry-r7:
============================================ ============================================
Bootstrapping from a Private Docker Registry Bootstrapping from a Private Docker Registry
@ -22,6 +22,8 @@ your server is isolated from the public Internet.
url: <my-registry.io>/k8s.gcr.io url: <my-registry.io>/k8s.gcr.io
gcr.io: gcr.io:
url: <my-registry.io>/gcr.io url: <my-registry.io>/gcr.io
ghcr.io:
url: <my-registry.io>/ghcr.io
quay.io: quay.io:
url: <my-registry.io>/quay.io url: <my-registry.io>/quay.io
docker.io: docker.io:
@ -1,6 +1,6 @@
.. hzf1552927866550 .. hzf1552927866550
.. _bulk-host-xml-file-format: .. _bulk-host-xml-file-format-r7:
========================= =========================
Bulk Host XML File Format Bulk Host XML File Format
@ -1,6 +1,9 @@
.. jow1440534908675 .. jow1440534908675
.. _configuring-a-pxe-boot-server-r5:
.. _configuring-a-pxe-boot-server-r7:
=========================== ===========================
Configure a PXE Boot Server Configure a PXE Boot Server
@ -14,7 +17,7 @@ initialization.
|prod| includes a setup script to simplify configuring a |PXE| boot server. If |prod| includes a setup script to simplify configuring a |PXE| boot server. If
you prefer, you can manually apply a custom configuration; for more you prefer, you can manually apply a custom configuration; for more
information, see :ref:`Access PXE Boot Server Files for a Custom Configuration information, see :ref:`Access PXE Boot Server Files for a Custom Configuration
<accessing-pxe-boot-server-files-for-a-custom-configuration>`. <accessing-pxe-boot-server-files-for-a-custom-configuration-r7>`.
The |prod| setup script accepts a path to the root TFTP directory as a The |prod| setup script accepts a path to the root TFTP directory as a
parameter, and copies all required files for BIOS and |UEFI| clients into this parameter, and copies all required files for BIOS and |UEFI| clients into this
@ -33,7 +36,7 @@ entry, supporting the use of separate boot files for different clients.
The file names and locations depend on the BIOS or |UEFI| implementation. The file names and locations depend on the BIOS or |UEFI| implementation.
.. _configuring-a-pxe-boot-server-table-mgq-xlh-2cb: .. _configuring-a-pxe-boot-server-table-mgq-xlh-2cb-r7:
.. table:: Table 1. |PXE| boot server file locations for BIOS and |UEFI| implementations .. table:: Table 1. |PXE| boot server file locations for BIOS and |UEFI| implementations
:widths: auto :widths: auto
@ -57,7 +60,7 @@ The file names and locations depend on the BIOS or |UEFI| implementation.
Use a Linux workstation as the |PXE| Boot server. Use a Linux workstation as the |PXE| Boot server.
.. _configuring-a-pxe-boot-server-ul-mrz-jlj-dt: .. _configuring-a-pxe-boot-server-ul-mrz-jlj-dt-r7:
- On the workstation, install the packages required to support |DHCP|, TFTP, - On the workstation, install the packages required to support |DHCP|, TFTP,
and Apache. and Apache.
@ -93,7 +96,7 @@ Use a Linux workstation as the |PXE| Boot server.
.. rubric:: |proc| .. rubric:: |proc|
.. _configuring-a-pxe-boot-server-steps-qfb-kyh-2cb: .. _configuring-a-pxe-boot-server-steps-qfb-kyh-2cb-r7:
#. Copy the ISO image from the source \(product DVD, USB device, or #. Copy the ISO image from the source \(product DVD, USB device, or
|dnload-loc| to a temporary location on the |PXE| boot server. |dnload-loc| to a temporary location on the |PXE| boot server.
@ -110,6 +113,12 @@ Use a Linux workstation as the |PXE| Boot server.
#. Set up the |PXE| boot configuration. #. Set up the |PXE| boot configuration.
.. important::
|PXE| configuration steps differ for |prod| |deb-eval-release|
evaluation on the Debian distribution. See the :ref:`Debian Technology
Preview <deb-grub-deltas>` |PXE| configuration procedure for details.
The ISO image includes a setup script, which you can run to complete the The ISO image includes a setup script, which you can run to complete the
configuration. configuration.
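As a rough illustration of the TFTP layout the configuration produces, with separate BIOS and |UEFI| boot-loader locations as described in Table 1 (the file names below are assumptions for illustration, not verified script output — use the setup script on the ISO for the real layout):

```bash
# Build a throwaway TFTP root mimicking the BIOS/UEFI split described above.
TFTP_ROOT=$(mktemp -d)
mkdir -p "$TFTP_ROOT/EFI"           # UEFI boot files live in their own subtree
touch "$TFTP_ROOT/pxelinux.0"       # BIOS boot loader (assumed name)
touch "$TFTP_ROOT/EFI/grubx64.efi"  # UEFI boot loader (assumed name)
find "$TFTP_ROOT" -mindepth 1 | sort
```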
@ -1,14 +1,14 @@
============================================================= =============================================================
Bare metal Standard with Controller Storage Installation R5.0 Bare metal Standard with Controller Storage Installation R7.0
============================================================= =============================================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_controller_storage.txt .. include:: /shared/_includes/desc_controller_storage.txt
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------
@ -3,7 +3,7 @@ Hardware Requirements
===================== =====================
This section describes the hardware requirements and server preparation for a This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Standard with Controller Storage** deployment **StarlingX R7.0 bare metal Standard with Controller Storage** deployment
configuration. configuration.
.. contents:: .. contents::
@ -54,3 +54,14 @@ Prepare bare metal servers
-------------------------- --------------------------
.. include:: prep_servers.txt .. include:: prep_servers.txt
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in the following diagram.
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-controller-storage.png
:scale: 50%
:alt: Controller storage deployment configuration
*Controller storage deployment configuration*
@ -1,5 +1,5 @@
.. _controller_storage_install_kubernetes_r5: .. _controller_storage_install_kubernetes_r7:
=============================================================== ===============================================================
Install Kubernetes Platform on Standard with Controller Storage Install Kubernetes Platform on Standard with Controller Storage
@ -12,7 +12,7 @@ Install Kubernetes Platform on Standard with Controller Storage
.. only:: starlingx .. only:: starlingx
This section describes the steps to install the StarlingX Kubernetes This section describes the steps to install the StarlingX Kubernetes
platform on a **StarlingX R5.0 Standard with Controller Storage** platform on a **StarlingX R7.0 Standard with Controller Storage**
deployment configuration. deployment configuration.
------------------- -------------------
@ -26,7 +26,7 @@ Install Kubernetes Platform on Standard with Controller Storage
Install software on controller-0 Install software on controller-0
-------------------------------- --------------------------------
.. include:: /shared/_includes/r5_inc-install-software-on-controller.rest .. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-standard-start :start-after: incl-install-software-controller-0-standard-start
:end-before: incl-install-software-controller-0-standard-end :end-before: incl-install-software-controller-0-standard-end
@ -90,7 +90,7 @@ Bootstrap system on controller-0
.. only:: starlingx .. only:: starlingx
.. include:: /shared/_includes/r5_ansible_install_time_only.txt .. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods: playbook using one of the following methods:
@ -137,8 +137,10 @@ Bootstrap system on controller-0
url: myprivateregistry.abc.com:9001/docker.elastic.co url: myprivateregistry.abc.com:9001/docker.elastic.co
gcr.io: gcr.io:
url: myprivateregistry.abc.com:9001/gcr.io url: myprivateregistry.abc.com:9001/gcr.io
ghcr.io:
url: myprivateregistry.abc.com:9001/ghcr.io
k8s.gcr.io: k8s.gcr.io:
url: myprivateregistry.abc.com:9001/k8s.gcr.io url: myprivateregistry.abc.com:9001/k8s.gcr.io
docker.io: docker.io:
url: myprivateregistry.abc.com:9001/docker.io url: myprivateregistry.abc.com:9001/docker.io
defaults: defaults:
@ -150,7 +152,7 @@ Bootstrap system on controller-0
# certificate as a Trusted CA # certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See :ref:`Use a Private Docker Registry <use-private-docker-registry-r5>` See :ref:`Use a Private Docker Registry <use-private-docker-registry-r7>`
for more information. for more information.
.. only:: starlingx .. only:: starlingx
@ -176,7 +178,7 @@ Bootstrap system on controller-0
- 1.2.3.4 - 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations Refer to :ref:`Ansible Bootstrap Configurations
<ansible_bootstrap_configs_r5>` for information on additional Ansible <ansible_bootstrap_configs_r7>` for information on additional Ansible
bootstrap configurations for advanced Ansible bootstrap scenarios. bootstrap configurations for advanced Ansible bootstrap scenarios.
#. Run the Ansible bootstrap playbook: #. Run the Ansible bootstrap playbook:
@ -371,7 +373,7 @@ machine.
# ( if not use another unused disk ) # ( if not use another unused disk )
# Get device path of ROOT DISK # Get device path of ROOT DISK
system host-show controller-0 --nowrap | fgrep rootfs system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks # Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0 system host-disk-list controller-0
@ -533,9 +535,9 @@ machine.
.. only:: openstack .. only:: openstack
#. **For OpenStack only:** Due to the additional openstack services * **For OpenStack only:** Due to the additional openstack services containers
containers running on the controller host, the size of the docker running on the controller host, the size of the docker filesystem needs to be
filesystem needs to be increased from the default size of 30G to 60G. increased from the default size of 30G to 60G.
.. code-block:: bash .. code-block:: bash
@ -552,7 +554,7 @@ machine.
# ( if not use another unused disk ) # ( if not use another unused disk )
# Get device path of ROOT DISK # Get device path of ROOT DISK
system host-show controller-1 --nowrap | fgrep rootfs system host-show controller-1 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks # Get UUID of ROOT DISK by listing disks
system host-disk-list controller-1 system host-disk-list controller-1
@ -560,7 +562,7 @@ machine.
# Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
# Use a partition size such that you'll be able to increase docker fs size from 30G to 60G # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30 PARTITION_SIZE=30
system hostdisk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group # Add new partition to cgts-vg local volume group
system host-pv-add controller-1 cgts-vg <NEW_PARTITION_UUID> system host-pv-add controller-1 cgts-vg <NEW_PARTITION_UUID>
@ -630,6 +632,7 @@ Configure worker nodes
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label| system host-label-assign $NODE |vswitch-label|
done done
@ -731,7 +734,7 @@ Configure worker nodes
# Additional PARTITION(s) from additional disks can be added later if required. # Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30 PARTITION_SIZE=30
system hostdisk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE} system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group # Add new partition to nova-local local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID> system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
@ -917,7 +920,7 @@ If configuring Ceph Storage Backend, Add Ceph OSDs to controllers
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt
.. only:: partner .. only:: partner
@ -1,15 +1,15 @@
 
============================================================ ============================================================
Bare metal Standard with Dedicated Storage Installation R5.0 Bare metal Standard with Dedicated Storage Installation R7.0
============================================================ ============================================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_dedicated_storage.txt .. include:: /shared/_includes/desc_dedicated_storage.txt
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------
Installation Installation
@ -3,7 +3,7 @@ Hardware Requirements
===================== =====================
This section describes the hardware requirements and server preparation for a This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Standard with Dedicated Storage** deployment **StarlingX R7.0 bare metal Standard with Dedicated Storage** deployment
configuration. configuration.
.. contents:: .. contents::
@ -59,3 +59,14 @@ Prepare bare metal servers
-------------------------- --------------------------
.. include:: prep_servers.txt .. include:: prep_servers.txt
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in the following diagram.
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-dedicated-storage.png
:scale: 50%
:alt: Standard with dedicated storage
*Standard with dedicated storage*
@ -1,6 +1,6 @@
.. _dedicated_storage_install_kubernetes_r5: .. _dedicated_storage_install_kubernetes_r7:
.. only:: partner .. only:: partner
@ -30,7 +30,7 @@ This section describes the steps to install the |prod| Kubernetes platform on a
Install software on controller-0 Install software on controller-0
-------------------------------- --------------------------------
.. include:: /shared/_includes/r5_inc-install-software-on-controller.rest .. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-standard-start :start-after: incl-install-software-controller-0-standard-start
:end-before: incl-install-software-controller-0-standard-end :end-before: incl-install-software-controller-0-standard-end
@ -283,6 +283,7 @@ Configure worker nodes
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label| system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled system host-label-assign $NODE sriov=enabled
done done
@ -526,7 +527,7 @@ host machine.
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt
.. only:: partner .. only:: partner
@ -0,0 +1,33 @@
.. _delete-hosts-using-the-host-delete-command-1729d2e3153b-r7:
===================================
Delete Hosts Using the Command Line
===================================
You can delete hosts from the system inventory using the :command:`host-delete` command.
.. rubric:: |proc|
#. Check for alarms related to the host.
Use the :command:`fm alarm-list` command to check for any alarms (major
or critical events). You can also type :command:`fm event-list` to see a log
of events. For more information on alarms, see :ref:`Fault Management
Overview <fault-management-overview>`.
#. Lock the host that will be deleted.
Use the :command:`system host-lock` command. Only locked hosts can be deleted.
#. Delete the host from the system inventory.
Use the command :command:`system host-delete`. This command accepts one
parameter: the hostname or ID. Make sure that the remaining hosts have
sufficient capacity to take over the workload of the deleted host.
#. Verify that the host has been deleted successfully.
Use the :command:`fm alarm-list` command to check for any alarms (major
or critical events). You can also type :command:`fm event-list` to see a log
of events. For more information on alarms, see :ref:`Fault Management
Overview <fault-management-overview>`.
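The steps above condense to a short command sequence. Sketched here with stub commands so the flow is visible outside a live controller (the stubs and the example hostname are illustrative only):

```bash
# Stubs for illustration; on a controller the real CLIs are used instead.
system() { echo "system $*"; }
fm() { echo "fm $*"; }

NODE=worker-1
fm alarm-list                 # 1. check for existing alarms first
system host-lock "$NODE"      # 2. only locked hosts can be deleted
system host-delete "$NODE"    # 3. remove the host from the system inventory
fm alarm-list                 # 4. verify no new alarms were raised
```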
@ -1,6 +1,6 @@
.. fdm1552927801987 .. fdm1552927801987
.. _exporting-host-configurations: .. _exporting-host-configurations-r7:
========================== ==========================
Export Host Configurations Export Host Configurations
@ -29,7 +29,7 @@ To perform this procedure, you must be logged in as the **admin** user.
.. rubric:: |proc| .. rubric:: |proc|
.. _exporting-host-configurations-steps-unordered-ntw-nw1-c2b: .. _exporting-host-configurations-steps-unordered-ntw-nw1-c2b-r7:
- Run the :command:`system host-bulk-export` command to create the host - Run the :command:`system host-bulk-export` command to create the host
configuration file. configuration file.
@ -47,7 +47,7 @@ To perform this procedure, you must be logged in as the **admin** user.
To use the host configuration file, see :ref:`Reinstall a System Using an To use the host configuration file, see :ref:`Reinstall a System Using an
Exported Host Configuration File Exported Host Configuration File
<reinstalling-a-system-using-an-exported-host-configuration-file>`. <reinstalling-a-system-using-an-exported-host-configuration-file-r7>`.
For details on the structure and elements of the file, see :ref:`Bulk Host XML For details on the structure and elements of the file, see :ref:`Bulk Host XML
File Format <bulk-host-xml-file-format>`. File Format <bulk-host-xml-file-format-r7>`.

View File

@ -1,5 +1,5 @@
==================================== ====================================
Bare metal Standard with Ironic R5.0 Bare metal Standard with Ironic R7.0
==================================== ====================================
-------- --------
@ -20,7 +20,7 @@ more bare metal servers.
settings. Refer to :ref:`docker_proxy_config` for settings. Refer to :ref:`docker_proxy_config` for
details. details.
.. figure:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-ironic.png .. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-ironic.png
:scale: 50% :scale: 50%
:alt: Standard with Ironic deployment configuration :alt: Standard with Ironic deployment configuration

View File

@ -3,7 +3,7 @@ Hardware Requirements
===================== =====================
This section describes the hardware requirements and server preparation for a This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Ironic** deployment configuration. **StarlingX R7.0 bare metal Ironic** deployment configuration.
.. contents:: .. contents::
:local: :local:

View File

@ -1,14 +1,14 @@
================================ ================================
Install Ironic on StarlingX R5.0 Install Ironic on StarlingX R7.0
================================ ================================
This section describes the steps to install Ironic on a standard configuration, This section describes the steps to install Ironic on a standard configuration,
either: either:
* **StarlingX R5.0 bare metal Standard with Controller Storage** deployment * **StarlingX R7.0 bare metal Standard with Controller Storage** deployment
configuration configuration
* **StarlingX R5.0 bare metal Standard with Dedicated Storage** deployment * **StarlingX R7.0 bare metal Standard with Dedicated Storage** deployment
configuration configuration
.. contents:: .. contents::

View File

@ -5,11 +5,6 @@ the following condition:
* Cabled for power * Cabled for power
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in Figure 1.
* All disks wiped * All disks wiped
* Ensures that servers will boot from either the network or USB storage (if present) * Ensures that servers will boot from either the network or USB storage (if present)

View File

@ -1,6 +1,6 @@
.. deo1552927844327 .. deo1552927844327
.. _reinstalling-a-system-or-a-host: .. _reinstalling-a-system-or-a-host-r7:
============================ ============================
Reinstall a System or a Host Reinstall a System or a Host
@ -23,7 +23,7 @@ type \(for example, Standard or All-in-one\).
To simplify system reinstallation, you can export and reuse an existing To simplify system reinstallation, you can export and reuse an existing
system configuration. For more information, see :ref:`Reinstalling a System system configuration. For more information, see :ref:`Reinstalling a System
Using an Exported Host Configuration File Using an Exported Host Configuration File
<reinstalling-a-system-using-an-exported-host-configuration-file>`. <reinstalling-a-system-using-an-exported-host-configuration-file-r7>`.
To reinstall the software on a host using the Host Inventory controls, see To reinstall the software on a host using the Host Inventory controls, see
|node-doc|: :ref:`Host Inventory <hosts-tab>`. In some cases, you must delete |node-doc|: :ref:`Host Inventory <hosts-tab>`. In some cases, you must delete
@ -33,7 +33,7 @@ complete the configuration change \(for example, if the |MAC| address of the
management interface has changed\). management interface has changed\).
- :ref:`Reinstalling a System Using an Exported Host Configuration File - :ref:`Reinstalling a System Using an Exported Host Configuration File
<reinstalling-a-system-using-an-exported-host-configuration-file>` <reinstalling-a-system-using-an-exported-host-configuration-file-r7>`
- :ref:`Exporting Host Configurations <exporting-host-configurations>` - :ref:`Exporting Host Configurations <exporting-host-configurations-r7>`

View File

@ -1,6 +1,6 @@
.. wuh1552927822054 .. wuh1552927822054
.. _reinstalling-a-system-using-an-exported-host-configuration-file: .. _reinstalling-a-system-using-an-exported-host-configuration-file-r7:
============================================================ ============================================================
Reinstall a System Using an Exported Host Configuration File Reinstall a System Using an Exported Host Configuration File
@ -17,7 +17,7 @@ For the following procedure, **controller-0** must be the active controller.
#. Create a host configuration file using the :command:`system #. Create a host configuration file using the :command:`system
host-bulk-export` command, as described in :ref:`Exporting Host host-bulk-export` command, as described in :ref:`Exporting Host
Configurations <exporting-host-configurations>`. Configurations <exporting-host-configurations-r7>`.
#. Copy the host configuration file to a USB drive or somewhere off the #. Copy the host configuration file to a USB drive or somewhere off the
controller hard disk. controller hard disk.
@ -33,7 +33,7 @@ For the following procedure, **controller-0** must be the active controller.
#. Run :command:`Ansible Bootstrap playbook`. #. Run :command:`Ansible Bootstrap playbook`.
#. Follow the instructions for using the :command:`system host-bulk-add` #. Follow the instructions for using the :command:`system host-bulk-add`
command, as detailed in :ref:`Adding Hosts in Bulk <adding-hosts-in-bulk>`. command, as detailed in :ref:`Adding Hosts in Bulk <adding-hosts-in-bulk-r7>`.
.. rubric:: |postreq| .. rubric:: |postreq|

View File

@ -1,14 +1,14 @@
======================================================= =======================================================
Bare metal Standard with Rook Storage Installation R5.0 Bare metal Standard with Rook Storage Installation R7.0
======================================================= =======================================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_rook_storage.txt .. include:: /shared/_includes/desc_rook_storage.txt
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------

View File

@ -3,7 +3,7 @@ Hardware Requirements
===================== =====================
This section describes the hardware requirements and server preparation for a This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Standard with Rook Storage** deployment **StarlingX R7.0 bare metal Standard with Rook Storage** deployment
configuration. configuration.
.. contents:: .. contents::
@ -60,3 +60,14 @@ Prepare bare metal servers
-------------------------- --------------------------
.. include:: prep_servers.txt .. include:: prep_servers.txt
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in the following diagram.
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-controller-storage.png
:scale: 50%
:alt: Standard with Rook Storage deployment configuration
*Standard with Rook Storage deployment configuration*

View File

@ -1,11 +1,9 @@
.. _rook_storage_install_kubernetes:
===================================================================== =====================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Rook Storage Install StarlingX Kubernetes on Bare Metal Standard with Rook Storage
===================================================================== =====================================================================
This section describes the steps to install the StarlingX Kubernetes platform This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 bare metal Standard with Rook Storage** deployment on a **StarlingX R7.0 bare metal Standard with Rook Storage** deployment
configuration. configuration.
.. contents:: .. contents::
@ -16,8 +14,8 @@ configuration.
Create bootable USB Create bootable USB
------------------- -------------------
Refer to :ref:`bootable_usb` for instructions on how to create a bootable USB Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
with the StarlingX ISO on your system. create a bootable USB with the StarlingX ISO on your system.
-------------------------------- --------------------------------
Install software on controller-0 Install software on controller-0
@ -100,7 +98,7 @@ Bootstrap system on controller-0
The default location where Ansible looks for and imports user The default location where Ansible looks for and imports user
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``. configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
.. include:: /shared/_includes/r5_ansible_install_time_only.txt .. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods: playbook using one of the following methods:
@ -148,11 +146,11 @@ Bootstrap system on controller-0
EOF EOF
Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs` Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r7>`
for information on additional Ansible bootstrap configurations for advanced for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall, etc. Refer to :ref:`docker_proxy_config` for firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
details about Docker proxy settings. for details about Docker proxy settings.
#. Run the Ansible bootstrap playbook: #. Run the Ansible bootstrap playbook:
@ -182,7 +180,7 @@ Configure controller-0
attached networks. Use the OAM and MGMT port names, for example eth0, that are attached networks. Use the OAM and MGMT port names, for example eth0, that are
applicable to your deployment environment. applicable to your deployment environment.
:: .. code-block:: bash
OAM_IF=<OAM-PORT> OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT> MGMT_IF=<MGMT-PORT>
@ -244,7 +242,8 @@ OpenStack-specific host configuration
used: used:
* Runs directly on the host (it is not containerized). * Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function. * Requires that at least 1 core be assigned/dedicated to the vSwitch
function.
To deploy the default containerized |OVS|: To deploy the default containerized |OVS|:
@ -283,7 +282,7 @@ OpenStack-specific host configuration
pages to enable networking and must use a flavor with property: pages to enable networking and must use a flavor with property:
hw:mem_page_size=large hw:mem_page_size=large
Configure the huge pages for VMs in an |OVS|-|DPDK| environment with the Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with the
command: command:
:: ::
@ -299,7 +298,7 @@ OpenStack-specific host configuration
.. note:: .. note::
After controller-0 is unlocked, changing vswitch_type requires After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking all compute-labeled worker nodes (and/or AIO locking and unlocking all compute-labeled worker nodes (and/or |AIO|
controllers) to apply the change. controllers) to apply the change.
.. incl-config-controller-0-storage-end: .. incl-config-controller-0-storage-end:
@ -347,8 +346,8 @@ Install software on controller-1 and worker nodes
#. As controller-1 boots, a message appears on its console instructing you to #. As controller-1 boots, a message appears on its console instructing you to
configure the personality of the node. configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered controller-1 #. On the console of controller-0, list hosts to see newly discovered
host (hostname=None): controller-1 host (hostname=None):
:: ::
@ -535,7 +534,7 @@ Configure worker nodes
* If planning on running |DPDK| in containers on this host, configure the * If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes: number of 1G Huge pages required on both |NUMA| nodes:
.. code-block:: bash ::
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100 system host-memory-modify ${NODE} 0 -1G 100
@ -544,7 +543,7 @@ Configure worker nodes
For both Kubernetes and OpenStack: For both Kubernetes and OpenStack:
.. code-block:: bash ::
DATA0IF=<DATA-0-PORT> DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT> DATA1IF=<DATA-1-PORT>
@ -590,14 +589,14 @@ OpenStack-specific host configuration
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prefix|-openstack manifest and helm-charts later. support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx .. only:: starlingx
.. code-block:: bash .. parsed-literal::
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled system host-label-assign $NODE sriov=enabled
done done
@ -610,7 +609,7 @@ OpenStack-specific host configuration
#. **For OpenStack only:** Set up disk partition for nova-local volume group, #. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for |prefix|-openstack nova ephemeral disks. which is needed for |prefix|-openstack nova ephemeral disks.
.. code-block:: bash ::
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE" echo "Configuring Nova local for: $NODE"
@ -674,7 +673,8 @@ Unlock storage nodes in order to bring them into service:
done done
The storage nodes will reboot in order to apply configuration changes and come The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the host machine. into service. This can take 5-10 minutes, depending on the performance of the
host machine.
------------------------------------------------- -------------------------------------------------
Install Rook application manifest and helm-charts Install Rook application manifest and helm-charts
@ -729,7 +729,7 @@ On host storage-0 and storage-1:
system application-apply rook-ceph-apps system application-apply rook-ceph-apps
#. Wait for |OSDs| pod ready. #. Wait for OSDs pod ready.
:: ::
@ -749,4 +749,4 @@ On host storage-0 and storage-1:
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt

View File

@ -1,7 +1,7 @@
.. _index-install-r5-distcloud-8164d5952ac5: .. _index-install-r7-distcloud-46f4880ec78b:
=================================== ===================================
Distributed Cloud Installation R5.0 Distributed Cloud Installation R7.0
=================================== ===================================
This section describes how to install and configure the StarlingX distributed This section describes how to install and configure the StarlingX distributed
@ -31,8 +31,7 @@ are localized for maximum responsiveness. The architecture features:
- Centralized orchestration of edge cloud control planes. - Centralized orchestration of edge cloud control planes.
- Full synchronized control planes at edge clouds (that is, Kubernetes cluster - Full synchronized control planes at edge clouds (that is, Kubernetes cluster
control-plane and worker nodes), with greater benefits for local master and nodes), with greater benefits for local services, such as:
services, such as:
- Reduced network latency. - Reduced network latency.
- Operational availability, even if northbound connectivity - Operational availability, even if northbound connectivity
@ -68,13 +67,13 @@ networks, as shown in Figure 1.
In the Horizon GUI, SystemController is the name of the access mode, or In the Horizon GUI, SystemController is the name of the access mode, or
region, used to manage the subclouds. region, used to manage the subclouds.
You can use the SystemController to add subclouds, synchronize select You can use the System Controller to add subclouds, synchronize select
configuration data across all subclouds and monitor subcloud operations configuration data across all subclouds and monitor subcloud operations
and alarms. System software updates for the subclouds are also centrally and alarms. System software updates for the subclouds are also centrally
managed and applied from the SystemController. managed and applied from the System Controller.
DNS, NTP, and other select configuration settings are centrally managed DNS, NTP, and other select configuration settings are centrally managed
at the SystemController and pushed to the subclouds in parallel to at the System Controller and pushed to the subclouds in parallel to
maintain synchronization across the distributed cloud. maintain synchronization across the distributed cloud.
- **Subclouds** - **Subclouds**
@ -82,12 +81,14 @@ networks, as shown in Figure 1.
The subclouds are StarlingX Kubernetes edge systems/clusters used to host The subclouds are StarlingX Kubernetes edge systems/clusters used to host
containerized applications. Any type of StarlingX Kubernetes configuration, containerized applications. Any type of StarlingX Kubernetes configuration,
(including simplex, duplex, or standard with or without storage nodes), can (including simplex, duplex, or standard with or without storage nodes), can
be used for a subcloud. The two edge clouds shown in Figure 1 are subclouds. be used for a subcloud.
Alarms raised at the subclouds are sent to the SystemController for The two edge clouds shown in Figure 1 are subclouds.
Alarms raised at the subclouds are sent to the System Controller for
central reporting. central reporting.
.. figure:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-distributed-cloud.png .. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-distributed-cloud.png
:scale: 45% :scale: 45%
:alt: Distributed cloud deployment configuration :alt: Distributed cloud deployment configuration
@ -98,21 +99,21 @@ networks, as shown in Figure 1.
Network requirements Network requirements
-------------------- --------------------
Subclouds are connected to the SystemController through both the OAM and the Subclouds are connected to the System Controller through both the OAM and the
Management interfaces. Because each subcloud is on a separate L3 subnet, the Management interfaces. Because each subcloud is on a separate L3 subnet, the
OAM, Management and PXE boot L2 networks are local to the subclouds. They are OAM, Management and PXE boot L2 networks are local to the subclouds. They are
not connected via L2 to the central cloud, they are only connected via L3 not connected via L2 to the central cloud, they are only connected via L3
routing. The settings required to connect a subcloud to the SystemController routing. The settings required to connect a subcloud to the System Controller
are specified when a subcloud is defined. A gateway router is required to are specified when a subcloud is defined. A gateway router is required to
complete the L3 connections, which will provide IP routing between the complete the L3 connections, which will provide IP routing between the
subcloud Management and OAM IP subnet and the SystemController Management and subcloud Management and OAM IP subnet and the System Controller Management and
OAM IP subnet, respectively. The SystemController bootstraps the subclouds via OAM IP subnet, respectively. The System Controller bootstraps the subclouds via
the OAM network, and manages them via the management network. For more the OAM network, and manages them via the management network. For more
information, see the `Install a Subcloud`_ section later in this guide. information, see the `Install a Subcloud`_ section later in this guide.
.. note:: .. note::
All messaging between SystemControllers and Subclouds uses the ``admin`` All messaging between System Controllers and Subclouds uses the ``admin``
REST API service endpoints which, in this distributed cloud environment, REST API service endpoints which, in this distributed cloud environment,
are all configured for secure HTTPS. Certificates for these HTTPS are all configured for secure HTTPS. Certificates for these HTTPS
connections are managed internally by StarlingX. connections are managed internally by StarlingX.
@ -135,13 +136,13 @@ gateway routers providing routing to the subclouds' management subnets.
Procedure: Procedure:
- Follow the StarlingX R5.0 installation procedures with the extra step noted below: - Follow the StarlingX R7.0 installation procedures with the extra step noted below:
- AIO-duplex: - AIO-duplex:
`Bare metal All-in-one Duplex Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_duplex.html>`_ `Bare metal All-in-one Duplex Installation R7.0 <https://docs.starlingx.io/deploy_install_guides/r7_release/bare_metal/aio_duplex.html>`_
- Standard with dedicated storage nodes: - Standard with dedicated storage nodes:
`Bare metal Standard with Dedicated Storage Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage.html>`_ `Bare metal Standard with Dedicated Storage Installation R7.0 <https://docs.starlingx.io/deploy_install_guides/r7_release/bare_metal/dedicated_storage.html>`_
- For the step "Bootstrap system on controller-0", add the following - For the step "Bootstrap system on controller-0", add the following
parameters to the Ansible bootstrap override file. parameters to the Ansible bootstrap override file.
@ -164,14 +165,14 @@ At the subcloud location:
required networks. required networks.
#. Physically install the gateway routers which will provide IP routing #. Physically install the gateway routers which will provide IP routing
between the subcloud |OAM| and Management subnets and the SystemController between the subcloud OAM and Management subnets and the System Controller
|OAM| and management subnets. OAM and management subnets.
#. On the server designated for controller-0, install the StarlingX #. On the server designated for controller-0, install the StarlingX
Kubernetes software from USB or a PXE Boot server. Kubernetes software from USB or a PXE Boot server.
#. Establish an L3 connection to the SystemController by enabling the |OAM| #. Establish an L3 connection to the System Controller by enabling the OAM
interface (with |OAM| IP/subnet) on the subcloud controller using the interface (with OAM IP/subnet) on the subcloud controller using the
``config_management`` script. This step is for subcloud ansible bootstrap ``config_management`` script. This step is for subcloud ansible bootstrap
preparation. preparation.
@ -181,15 +182,15 @@ At the subcloud location:
Be prepared to provide the following information: Be prepared to provide the following information:
- Subcloud |OAM| interface name (for example, enp0s3). - Subcloud OAM interface name (for example, enp0s3).
- Subcloud |OAM| interface address, in CIDR format (for example, 10.10.10.12/24). - Subcloud OAM interface address, in CIDR format (for example, 10.10.10.12/24).
.. note:: This must match the *external_oam_floating_address* supplied in .. note:: This must match the *external_oam_floating_address* supplied in
the subcloud's ansible bootstrap override file. the subcloud's ansible bootstrap override file.
- Subcloud gateway address on the |OAM| network - Subcloud gateway address on the OAM network
(for example, 10.10.10.1). A default value is shown. (for example, 10.10.10.1). A default value is shown.
- System Controller |OAM| subnet (for example, 10.10.10.0/24). A default value is shown. - System Controller OAM subnet (for example, 10.10.10.0/24). A default value is shown.
.. note:: To exit without completing the script, use ``CTRL+C``. Allow a few minutes for .. note:: To exit without completing the script, use ``CTRL+C``. Allow a few minutes for
the script to finish. the script to finish.
@ -270,7 +271,7 @@ At the System Controller:
tail -f /var/log/dcmanager/<subcloud name>_bootstrap_<time stamp>.log
#. Confirm that the subcloud was deployed successfully: 3. Confirm that the subcloud was deployed successfully:
.. code:: sh .. code:: sh
@ -283,20 +284,20 @@ At the System Controller:
+----+-----------+------------+--------------+---------------+---------+ +----+-----------+------------+--------------+---------------+---------+
#. Continue provisioning the subcloud system as required using the StarlingX #. Continue provisioning the subcloud system as required using the StarlingX
R5.0 Installation procedures and starting from the 'Configure controller-0' R7.0 Installation procedures and starting from the 'Configure controller-0'
step. step.
- For AIO-Simplex: - For AIO-Simplex:
`Bare metal All-in-one Simplex Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_simplex.html>`_ `Bare metal All-in-one Simplex Installation R7.0 <https://docs.starlingx.io/deploy_install_guides/r7_release/bare_metal/aio_simplex.html>`_
- For AIO-Duplex: - For AIO-Duplex:
`Bare metal All-in-one Duplex Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_duplex.html>`_ `Bare metal All-in-one Duplex Installation R7.0 <https://docs.starlingx.io/deploy_install_guides/r7_release/bare_metal/aio_duplex.html>`_
- For Standard with controller storage: - For Standard with controller storage:
`Bare metal Standard with Controller Storage Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/controller_storage.html>`_ `Bare metal Standard with Controller Storage Installation R7.0 <https://docs.starlingx.io/deploy_install_guides/r7_release/bare_metal/controller_storage.html>`_
- For Standard with dedicated storage nodes: - For Standard with dedicated storage nodes:
`Bare metal Standard with Dedicated Storage Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage.html>`_ `Bare metal Standard with Dedicated Storage Installation R7.0 <https://docs.starlingx.io/deploy_install_guides/r7_release/bare_metal/dedicated_storage.html>`_
On the active controller for each subcloud: On the active controller for each subcloud:

View File

@ -1,12 +1,12 @@
.. _index-install-r5-ca4053cb3ab9: .. _index-install-r7-8966076f0e81:
=========================== ===========================
StarlingX R5.0 Installation StarlingX R7.0 Installation
=========================== ===========================
StarlingX provides a pre-defined set of standard StarlingX provides a pre-defined set of standard :doc:`deployment
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment options may configurations </introduction/deploy_options>`. Most deployment options may be
be installed in a virtual environment or on bare metal. installed in a virtual environment or on bare metal.
----------------------------------------------------- -----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment Install StarlingX Kubernetes in a virtual environment
@ -46,7 +46,7 @@ Appendixes
********** **********
.. _use-private-docker-registry-r5: .. _use-private-docker-registry-r7:
Use a private Docker registry Use a private Docker registry
***************************** *****************************
@ -64,7 +64,6 @@ Set up a Simple DNS Server in Lab
setup-simple-dns-server-in-lab setup-simple-dns-server-in-lab
Install controller-0 from a PXE boot server Install controller-0 from a PXE boot server
******************************************* *******************************************
@ -82,6 +81,7 @@ Add and reinstall a host
:maxdepth: 1 :maxdepth: 1
bare_metal/adding-hosts-using-the-host-add-command bare_metal/adding-hosts-using-the-host-add-command
bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b
Add hosts in bulk Add hosts in bulk
@ -116,7 +116,7 @@ Install StarlingX Distributed Cloud on bare metal
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
distributed_cloud/index-install-r5-distcloud-8164d5952ac5 distributed_cloud/index-install-r7-distcloud-46f4880ec78b
----------------- -----------------
Access Kubernetes Access Kubernetes
@ -134,5 +134,5 @@ Access StarlingX OpenStack
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
openstack/index-install-r5-os-bf0f49699241 openstack/index-install-r7-os-adc44604968c

View File

@ -1,7 +1,7 @@
.. _kubernetes_access: .. _kubernetes_access_r7:
================================ ================================
Access StarlingX Kubernetes R5.0 Access StarlingX Kubernetes R7.0
================================ ================================
This section describes how to use local/remote CLIs, GUIs, and/or REST APIs to This section describes how to use local/remote CLIs, GUIs, and/or REST APIs to

View File

@ -2,7 +2,7 @@
Access StarlingX OpenStack Access StarlingX OpenStack
========================== ==========================
Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX Use local/remote CLIs, GUIs and/or REST APIs to access and manage |prod|
OpenStack and hosted virtualized applications. OpenStack and hosted virtualized applications.
.. contents:: .. contents::
@ -24,7 +24,7 @@ You can use this method on either controller, active or standby.
**Do not** use ``source /etc/platform/openrc``. **Do not** use ``source /etc/platform/openrc``.
#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up #. Set the CLI context to the |prod-os| Cloud Application and set up
OpenStack admin credentials: OpenStack admin credentials:
:: ::
@ -51,14 +51,14 @@ You can use this method on either controller, active or standby.
**Method 2**
-Use this method to access StarlingX Kubernetes commands and StarlingX OpenStack
+Use this method to access |prod| Kubernetes commands and |prod-os|
commands in the same shell. You can only use this method on the active
controller.
#. Log in to the active controller via the console or SSH with a
sysadmin/<sysadmin-password>.
-#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up
+#. Set the CLI context to the |prod-os| Cloud Application and set up
OpenStack admin credentials:
::
@ -68,16 +68,16 @@ controller.
.. note::
-To switch between StarlingX Kubernetes/Platform credentials and StarlingX
-OpenStack credentials, use ``source /etc/platform/openrc`` or
-``source ./openrc.os`` respectively.
+To switch between |prod| Kubernetes/Platform credentials and |prod-os|
+credentials, use ``source /etc/platform/openrc`` or ``source
+./openrc.os`` respectively.
**********************
OpenStack CLI commands
**********************
-Access OpenStack CLI commands for the StarlingX OpenStack cloud application
+Access OpenStack CLI commands for the |prod-os| cloud application
using the :command:`openstack` command. For example:
::
@ -98,36 +98,36 @@ using the :command:`openstack` command. For example:
The image below shows a typical successful run.
-.. figure:: /deploy_install_guides/r5_release/figures/starlingx-access-openstack-flavorlist.png
+.. figure:: /deploy_install_guides/r7_release/figures/starlingx-access-openstack-flavorlist.png
:alt: starlingx-access-openstack-flavorlist
:scale: 50%
-*Figure 1: StarlingX OpenStack Flavorlist*
+Figure 1: |prod-os| Flavorlist
-.. figure:: /deploy_install_guides/r5_release/figures/starlingx-access-openstack-command.png
+.. figure:: /deploy_install_guides/r7_release/figures/starlingx-access-openstack-command.png
:alt: starlingx-access-openstack-command
:scale: 50%
-*Figure 2: StarlingX OpenStack Commands*
+Figure 2: |prod-os| Commands
------------------------------
Configure Helm endpoint domain
------------------------------
-Containerized OpenStack services in StarlingX are deployed behind an ingress
+Containerized OpenStack services in |prod| are deployed behind an ingress
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
The ingress controller routes packets to the specific OpenStack service, such as
-the Cinder service, or the Neutron service, by parsing the FQDN in the packet.
-For example, `neutron.openstack.svc.cluster.local` is for the Neutron service,
-`cinderapi.openstack.svc.cluster.local` is for the Cinder service.
+the Cinder service, or the Neutron service, by parsing the |FQDN| in the packet.
+For example, ``neutron.openstack.svc.cluster.local`` is for the Neutron service,
+``cinderapi.openstack.svc.cluster.local`` is for the Cinder service.
-This routing requires that access to OpenStack REST APIs must be via a FQDN
+This routing requires that access to OpenStack REST APIs must be via a |FQDN|
or by using a remote OpenStack CLI that uses the REST APIs. You cannot access
OpenStack REST APIs using an IP address.
-FQDNs (such as `cinderapi.openstack.svc.cluster.local`) must be in a DNS server
-that is publicly accessible.
+FQDNs (such as ``cinderapi.openstack.svc.cluster.local``) must be in a DNS
+server that is publicly accessible.
.. note::
@ -136,9 +136,9 @@ that is publicly accessible.
time an OpenStack service is added. Check your particular DNS server for
details on how to wild-card a set of FQDNs.
-In a “real” deployment, that is, not a lab scenario, you can not use the default
-`openstack.svc.cluster.local` domain name externally. You must set a unique
-domain name for your StarlingX system. StarlingX provides the
+In a “real” deployment, that is, not a lab scenario, you cannot use the default
+``openstack.svc.cluster.local`` domain name externally. You must set a unique
+domain name for your |prod| system. |prod| provides the
:command:`system serviceparameter-add` command to configure and set the
OpenStack domain name:
@ -146,8 +146,8 @@ OpenStack domain name:
system service-parameter-add openstack helm endpoint_domain=<domain_name>
-`<domain_name>` should be a fully qualified domain name that you own, such that
-you can configure the DNS Server that owns `<domain_name>` with the OpenStack
+``<domain_name>`` should be a fully qualified domain name that you own, such that
+you can configure the DNS Server that owns ``<domain_name>`` with the OpenStack
service names underneath the domain.
For example:
@ -157,19 +157,19 @@ For example:
system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
system application-apply |prefix|-openstack
-This command updates the helm charts of all OpenStack services and restarts them.
-For example it would change `cinderapi.openstack.svc.cluster.local` to
-`cinderapi.my-starlingx-domain.my-company.com`, and so on for all OpenStack
+This command updates the Helm charts of all OpenStack services and restarts them.
+For example it would change ``cinderapi.openstack.svc.cluster.local`` to
+``cinderapi.my-starlingx-domain.my-company.com``, and so on for all OpenStack
services.
.. note::
This command also changes the containerized OpenStack Horizon to listen on
-`horizon.my-starlingx-domain.my-company.com:80` instead of the initial
-`<oamfloatingip>:31000`.
+``horizon.my-starlingx-domain.my-company.com:80`` instead of the initial
+``<oamfloatingip>:31000``.
-You must configure `{ *.my-starlingx-domain.my-company.com: --> oamfloatingipaddress }`
-in the external DNS server that owns `my-company.com`.
+You must configure { ``*.my-starlingx-domain.my-company.com: --> oamfloatingipaddress }``
+in the external DNS server that owns ``my-company.com``.
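In a lab, the wildcard DNS entry described above can be a single rule if the external DNS server happens to be dnsmasq; a minimal sketch, where the domain and OAM floating IP are hypothetical placeholders, not values taken from this document:

```shell
# Hypothetical example: emit a dnsmasq rule that resolves every name under
# the custom OpenStack domain to the OAM floating IP address.
DOMAIN="my-starlingx-domain.my-company.com"   # placeholder domain
OAM_FLOATING_IP="10.10.10.2"                  # placeholder OAM floating IP

# dnsmasq's address=/<domain>/<ip> option wildcards all names under <domain>
echo "address=/.${DOMAIN}/${OAM_FLOATING_IP}"
```

The emitted line would go into the dnsmasq configuration on the lab DNS server.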
---------------------------
Configure HTTPS Certificate
@ -198,7 +198,6 @@ This certificate must be valid for the domain configured for |prod-os|.
#. Open port 443 in |prod| firewall, see :ref:`Modify Firewall Options
<security-firewall-options>`.
----------
Remote CLI
----------
@ -209,7 +208,7 @@ Documentation coming soon.
GUI
---
-Access the StarlingX containerized OpenStack Horizon GUI in your browser at the
+Access the |prod| containerized OpenStack Horizon GUI in your browser at the
following address:
::
@ -0,0 +1,105 @@
.. _convert-worker-nodes-0007b1532308-r7:
====================
Convert Worker Nodes
====================
.. rubric:: |context|
In a hybrid (Kubernetes and OpenStack) cluster scenario you may need to convert
worker nodes to/from ``openstack-compute-nodes``.
.. rubric:: |proc|
#. Convert a k8s-only worker into an OpenStack compute:
#. Lock the worker host:
.. code-block:: none
system host-lock <host>
#. Add the ``openstack-compute-node`` taint.
.. code-block:: none
kubectl taint nodes <kubernetes-node-name> openstack-compute-node:NoSchedule
#. Assign OpenStack labels:
.. code-block:: none
system host-label-assign <host> --overwrite openstack-compute-node=enabled avs=enabled sriov=enabled
#. Allocate vswitch huge pages:
.. code-block:: none
system host-memory-modify -1G 1 -f vswitch <host> 0
system host-memory-modify -1G 1 -f vswitch <host> 1
#. Change the class of the data network interface:
.. code-block:: none
system host-if-modify -c data <host> <if_name_or_uuid>
.. note::
If the data network interface does not exist yet, refer to the |prod-os|
documentation on creating it.
#. Change Kubernetes CPU Manager Policy to allow |VMs| to use application
cores:
.. code-block:: none
system host-label-remove <host> kube-cpu-mgr-policy
#. Unlock the worker host:
.. code-block:: none
system host-unlock <host>
#. Convert an OpenStack compute into a k8s-only worker:
#. Lock the worker host:
.. code-block:: none
system host-lock <host>
#. Remove OpenStack labels:
.. code-block:: none
system host-label-remove <host> openstack-compute-node avs sriov
.. note::
The labels must be removed entirely, not merely have their values changed.
#. Deallocate vswitch huge pages:
.. code-block:: none
system host-memory-modify -1G 0 -f vswitch <host> 0
system host-memory-modify -1G 0 -f vswitch <host> 1
#. Change the class of the data network interface:
.. code-block:: none
system host-if-modify -c none <host> <if_name_or_uuid>
.. note::
This change is needed to avoid raising a permanent alarm for the
interface without the need to delete it.
#. Unlock the worker host:
.. code-block:: none
system host-unlock <host>
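The first conversion above can be condensed into a dry-run helper that only prints the commands in order, useful for review before touching a host; a sketch, where the host name and data interface passed in are hypothetical:

```shell
# Dry-run sketch of the k8s-only-worker -> OpenStack-compute conversion.
# The commands are echoed, not executed; run them manually once reviewed.
convert_to_openstack_compute() {
  local host="$1" iface="$2"
  echo "system host-lock ${host}"
  echo "kubectl taint nodes ${host} openstack-compute-node:NoSchedule"
  echo "system host-label-assign ${host} --overwrite openstack-compute-node=enabled avs=enabled sriov=enabled"
  # one vswitch huge page per NUMA node (0 and 1)
  echo "system host-memory-modify -1G 1 -f vswitch ${host} 0"
  echo "system host-memory-modify -1G 1 -f vswitch ${host} 1"
  echo "system host-if-modify -c data ${host} ${iface}"
  echo "system host-label-remove ${host} kube-cpu-mgr-policy"
  echo "system host-unlock ${host}"
}

# Hypothetical host and interface names:
convert_to_openstack_compute worker-1 data0
```

The reverse conversion can be sketched the same way with the second procedure's commands.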
@ -0,0 +1,49 @@
.. _hybrid-cluster-c7a3134b6f2a-r7:
==============
Hybrid Cluster
==============
A Hybrid Cluster occurs when the hosts with a worker function (|AIO|
controllers and worker nodes) are split between two groups, one running
|prod-os| for hosting |VM| payloads and the other for hosting containerized
payloads.
The host labels are used to define each worker function on the Hybrid Cluster
setup. For example, a standard configuration (2 controllers and 2 computes) can
be split into (2 controllers, 1 openstack-compute and 1 kubernetes-worker).
.. only:: partner
.. include:: /_includes/hybrid-cluster.rest
:start-after: begin-prepare-cloud-platform
:end-before: end-prepare-cloud-platform
-----------
Limitations
-----------
- The worker function on |AIO| controllers MUST be the same on both, either
Kubernetes or OpenStack.
- Hybrid Cluster does not apply to |AIO-SX| or |AIO-DX| setups.
- A worker must have only one function: it is either an OpenStack compute or
a k8s-only worker, never both at the same time.
- The ``sriov`` and ``sriovdp`` labels cannot coexist on the same host,
in order to prevent the |SRIOV| device plugin from conflicting with the
OpenStack |SRIOV| driver.
- No host will assign |VMs| and application containers to application cores
at the same time.
- Standard Controllers cannot have the ``openstack-compute-node`` label;
only |AIO| Controllers can have it.
- Taints must be added to OpenStack compute hosts (i.e. worker nodes or
|AIO|-Controller nodes with the ``openstack-compute-node`` label) to
prevent end users' hosted containerized workloads/pods from being scheduled on
OpenStack compute hosts.
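The last limitation above can be sketched as a small loop over the OpenStack compute hosts; the host names are hypothetical, and the taint command is only echoed here rather than executed:

```shell
# Sketch: apply the NoSchedule taint to each OpenStack compute host so that
# generic containerized workloads are not scheduled there.
OPENSTACK_COMPUTES="compute-0 compute-1"   # hypothetical host list

for NODE in ${OPENSTACK_COMPUTES}; do
  # taint key matches the openstack-compute-node host label name
  echo kubectl taint nodes "${NODE}" openstack-compute-node:NoSchedule
done
```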
@ -1,4 +1,4 @@
.. _index-install-r5-os-bf0f49699241: .. _index-install-r7-os-adc44604968c:
=================== ===================
StarlingX OpenStack StarlingX OpenStack
@ -7,8 +7,8 @@ StarlingX OpenStack
This section describes the steps to install and access StarlingX OpenStack. This section describes the steps to install and access StarlingX OpenStack.
Other than the OpenStack-specific configurations required in the underlying Other than the OpenStack-specific configurations required in the underlying
StarlingX Kubernetes infrastructure (described in the installation steps for StarlingX Kubernetes infrastructure (described in the installation steps for
StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX StarlingX Kubernetes), the installation of containerized OpenStack for
is independent of deployment configuration. StarlingX is independent of deployment configuration.
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2
@ -16,3 +16,15 @@ is independent of deployment configuration.
install install
access access
uninstall_delete uninstall_delete
--------------
Hybrid Cluster
--------------
.. toctree::
:maxdepth: 1
hybrid-cluster-c7a3134b6f2a
convert-worker-nodes-0007b1532308
@ -38,8 +38,8 @@ Install application manifest and helm-charts
system host-fs-modify controller-0 docker=60
-#. Get the latest StarlingX OpenStack application (|prefix|-openstack) manifest
-and helm charts. Use one of the following options:
+#. Get the latest StarlingX OpenStack application (|prefix|-openstack) manifest and
+helm charts. Use one of the following options:
* Private StarlingX build. See :ref:`Build-stx-openstack-app` for details.
* Public download from
@ -48,7 +48,7 @@ Install application manifest and helm-charts
After you select a release, helm charts are located in ``centos/outputs/helm-charts``.
#. Load the |prefix|-openstack application's package into StarlingX. The tarball
-package contains |prefix|-openstack's Armada manifest and
+package contains |prefix|-openstack's FluxCD manifest and
|prefix|-openstack's set of helm charts. For example:
.. parsed-literal::
@ -57,16 +57,15 @@ Install application manifest and helm-charts
This will:
-* Load the Armada manifest and helm charts.
+* Load the FluxCD manifest and helm charts.
* Internally manage helm chart override values for each chart.
* Automatically generate system helm chart overrides for each chart based on
the current state of the underlying StarlingX Kubernetes platform and the
recommended StarlingX configuration of OpenStack services.
-#. Apply the |prefix|-openstack application in order to bring StarlingX
-OpenStack into service. If your environment is preconfigured with a proxy
-server, then make sure HTTPS proxy is set before applying
-|prefix|-openstack.
+#. Apply the |prefix|-openstack application in order to bring StarlingX OpenStack into
+service. If your environment is preconfigured with a proxy server, then
+make sure HTTPS proxy is set before applying |prefix|-openstack.
.. parsed-literal::
@ -75,7 +74,7 @@ Install application manifest and helm-charts
.. note::
To set the HTTPS proxy at bootstrap time, refer to
-`Ansible Bootstrap Configurations <https://docs.starlingx.io/deploy_install_guides/r5_release/ansible_bootstrap_configs.html#docker-proxy>`_.
+`Ansible Bootstrap Configurations <https://docs.starlingx.io/deploy_install_guides/r7_release/ansible_bootstrap_configs.html#docker-proxy>`_.
To set the HTTPS proxy after installation, refer to
`Docker Proxy Configuration <https://docs.starlingx.io/configuration/docker_proxy_config.html>`_.
@ -1,12 +1,12 @@
-.. _uninstall_delete-r5:
+.. _uninstall_delete-r7:
===================
Uninstall OpenStack
===================
This section provides commands for uninstalling and deleting the
-|prod-os| application.
+|prod| OpenStack application.
.. warning::
@ -23,14 +23,12 @@ includes:
- Terminating/Deleting all servers/instances/|VMs|
- Removing all volumes, volume backups, volume snapshots
- Removing all Glance images
-- Removing all network trunks, floating IP addresses, manual ports,
-application ports, tenant routers, tenant networks, and shared networks.
-----------------------------
Bring down OpenStack services
-----------------------------
-Use the system |CLI| to uninstall the OpenStack application:
+Use the system CLI to uninstall the OpenStack application:
.. parsed-literal::
@ -41,9 +39,10 @@ Use the system |CLI| to uninstall the OpenStack application:
Delete OpenStack application definition
---------------------------------------
-Use the system |CLI| to delete the OpenStack application definition:
+Use the system CLI to delete the OpenStack application definition:
.. parsed-literal::
system application-delete |prefix|-openstack
system application-list
@ -1,4 +1,4 @@
-.. _setup-simple-dns-server-in-lab:
+.. _setup-simple-dns-server-in-lab-r7:
=====================================
Set up a Simple DNS Server in the Lab
@ -1,14 +1,14 @@
===========================================
-Virtual All-in-one Duplex Installation R5.0
+Virtual All-in-one Duplex Installation R7.0
===========================================
--------
Overview
--------
-.. include:: /shared/_includes/r5_desc_aio_duplex.txt
+.. include:: /shared/_includes/desc_aio_duplex.txt
-.. include:: /shared/_includes/r5_ipv6_note.txt
+.. include:: /shared/_includes/ipv6_note.txt
------------
Installation
@ -3,7 +3,7 @@ Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
-for a **StarlingX R5.0 virtual All-in-one Duplex** deployment configuration.
+for a **StarlingX R7.0 virtual All-in-one Duplex** deployment configuration.
.. contents::
:local:
@ -3,7 +3,7 @@ Install StarlingX Kubernetes on Virtual AIO-DX
==============================================
This section describes the steps to install the StarlingX Kubernetes platform
-on a **StarlingX R5.0 virtual All-in-one Duplex** deployment configuration.
+on a **StarlingX R7.0 virtual All-in-one Duplex** deployment configuration.
.. contents::
:local:
@ -84,7 +84,7 @@ On virtual controller-0:
The default location where Ansible looks for and imports user
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
-.. include:: /shared/_includes/r5_ansible_install_time_only.txt
+.. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
@ -126,11 +126,11 @@ On virtual controller-0:
EOF
-Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
+Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r7>`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
-firewall, etc. Refer to :ref:`docker_proxy_config` for
-details about Docker proxy settings.
+firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
+for details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@ -191,8 +191,9 @@ Optionally, initialize a Ceph-based Persistent Storage Backend
A persistent storage backend is required if your application requires
Persistent Volume Claims (PVCs). The StarlingX OpenStack application
-requires PVCs, therefore if you plan on using the |prefix|-openstack
-application, then you must configure a persistent storage backend.
+(|prefix|-openstack) requires PVCs, therefore if you plan on using the
+|prefix|-openstack application, then you must configure a persistent storage
+backend.
There are two options for persistent storage backend:
1) the host-based Ceph solution and
@ -241,8 +242,8 @@ For Rook container-based Ceph:
.. important::
-This step is required only if the |prefix|-openstack application
-will be installed.
+This step is required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
1G Huge Pages are not supported in the virtual environment and there is no
virtual NIC supporting SRIOV. For that reason, data interfaces are not
@ -386,8 +387,8 @@ On virtual controller-0:
.. important::
-This step is required only if the |prefix|-openstack application
-will be installed.
+This step is required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
1G Huge Pages are not supported in the virtual environment and there is no
virtual NIC supporting SRIOV. For that reason, data interfaces are not
@ -453,17 +454,17 @@ OpenStack-specific host configuration
.. important::
-This step is required only if the |prefix|-openstack application
-will be installed.
+This step is required only if the StarlingX OpenStack application
+(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the |prefix|-openstack manifest/helm-charts later:
-.. parsed-literal::
+::
system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
-system host-label-assign controller-1 |vswitch-label|
+system host-label-assign controller-1 openvswitch=enabled
.. note::
@ -586,4 +587,4 @@ On **virtual** controller-0 and controller-1:
Next steps
----------
-.. include:: /_includes/r5_kubernetes_install_next.txt
+.. include:: /_includes/kubernetes_install_next.txt
@ -1,14 +1,14 @@
============================================
-Virtual All-in-one Simplex Installation R5.0
+Virtual All-in-one Simplex Installation R7.0
============================================
--------
Overview
--------
-.. include:: /shared/_includes/r5_desc_aio_simplex.txt
+.. include:: /shared/_includes/desc_aio_simplex.txt
-.. include:: /shared/_includes/r5_ipv6_note.txt
+.. include:: /shared/_includes/ipv6_note.txt
------------
Installation
@ -3,7 +3,7 @@ Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
-for a **StarlingX R5.0 virtual All-in-one Simplex** deployment configuration.
+for a **StarlingX R7.0 virtual All-in-one Simplex** deployment configuration.
.. contents::
:local:
@ -3,7 +3,7 @@ Install StarlingX Kubernetes on Virtual AIO-SX
==============================================
This section describes the steps to install the StarlingX Kubernetes platform
-on a **StarlingX R5.0 virtual All-in-one Simplex** deployment configuration.
+on a **StarlingX R7.0 virtual All-in-one Simplex** deployment configuration.
.. contents::
:local:
@ -85,7 +85,7 @@ On virtual controller-0:
The default location where Ansible looks for and imports user
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
-.. include:: /shared/_includes/r5_ansible_install_time_only.txt
+.. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
@ -125,11 +125,11 @@ On virtual controller-0:
EOF
-Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
+Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r7>`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
-firewall, etc. Refer to :ref:`docker_proxy_config` for
-details about Docker proxy settings.
+firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
+for details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@ -313,11 +313,11 @@ OpenStack-specific host configuration
#. **For OpenStack only:** A vSwitch is required.
-The default vSwitch is containerized OVS that is packaged with the
+The default vSwitch is containerized |OVS| that is packaged with the
|prefix|-openstack manifest/helm-charts. StarlingX provides the option to use
-OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT
-supported, only OVS is supported. Therefore, simply use the default OVS
-vSwitch here.
+|OVS-DPDK| on the host, however, in the virtual environment |OVS-DPDK| is
+NOT supported, only |OVS| is supported. Therefore, simply use the default
+|OVS| vSwitch here.
#. **For OpenStack Only:** Set up disk partition for nova-local volume group,
which is needed for |prefix|-openstack nova ephemeral disks.
@ -351,8 +351,8 @@ Unlock virtual controller-0 to bring it into service:
system host-unlock controller-0
-Controller-0 will reboot to apply configuration changes and come into
-service. This can take 5-10 minutes, depending on the performance of the host machine.
+Controller-0 will reboot to apply configuration changes and come into service.
+This can take 5-10 minutes, depending on the performance of the host machine.
--------------------------------------------------------------------------
Optionally, finish configuration of Ceph-based Persistent Storage Backend
@ -405,7 +405,7 @@ On **virtual** controller-0:
system application-apply rook-ceph-apps
-#. Wait for OSDs pod ready
+#. Wait for |OSDs| pod ready.
::
@ -423,4 +423,4 @@ On **virtual** controller-0:
Next steps
----------
-.. include:: /_includes/r5_kubernetes_install_next.txt
+.. include:: /_includes/kubernetes_install_next.txt
@ -29,7 +29,7 @@ configure the VM using Edit Settings as follows:
#. Go to Display and move the "Video memory" slider all the way to the right.
Then tick the "Acceleration" checkbox "Enable 3D Acceleration".
-#. Go to General/Advanced and set "Shared Clipboard" and drag-and-drop to
+#. Go to General/Advanced and set "Shared Clipboard" and "Drag'n Drop" to
Bidirectional.
#. Go to User Interface/Devices and select "Devices/Insert Guest Additions CD
image" from the drop down. Restart your VM.
@ -1,14 +1,14 @@
==========================================================
-Virtual Standard with Controller Storage Installation R5.0
+Virtual Standard with Controller Storage Installation R7.0
==========================================================
--------
Overview
--------
-.. include:: /shared/_includes/r5_desc_controller_storage.txt
+.. include:: /shared/_includes/desc_controller_storage.txt
-.. include:: /shared/_includes/r5_ipv6_note.txt
+.. include:: /shared/_includes/ipv6_note.txt
------------
Installation
@ -3,7 +3,7 @@ Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
-for a **StarlingX R5.0 virtual Standard with Controller Storage** deployment
+for a **StarlingX R7.0 virtual Standard with Controller Storage** deployment
configuration.
.. contents::
@ -3,7 +3,7 @@ Install StarlingX Kubernetes on Virtual Standard with Controller Storage
========================================================================
This section describes the steps to install the StarlingX Kubernetes platform
-on a **StarlingX R5.0 virtual Standard with Controller Storage** deployment
+on a **StarlingX R7.0 virtual Standard with Controller Storage** deployment
configuration.
.. contents::
@ -89,7 +89,7 @@ On virtual controller-0:
The default location where Ansible looks for and imports user
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
-.. include:: /shared/_includes/r5_ansible_install_time_only.txt
+.. include:: /_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
@ -131,11 +131,11 @@ On virtual controller-0:
EOF EOF
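For illustration, a minimal override file can be generated the same way. Every value below is a placeholder assumption (the OAM addresses, DNS server, and proxy URLs are not defaults from this guide); substitute values for your own lab network:

```shell
# Sketch only: hypothetical bootstrap override values -- replace with your
# lab's settings before running the Ansible bootstrap playbook.
cat <<EOF > $HOME/localhost.yml
system_mode: duplex
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.2
dns_servers:
  - 8.8.8.8
# Docker proxy settings are only needed when deploying behind a firewall:
docker_http_proxy: http://my.proxy.com:1080
docker_https_proxy: http://my.proxy.com:1443
EOF
```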
Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs` Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r7>`
for information on additional Ansible bootstrap configurations for advanced for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall, etc. Refer to :ref:`docker_proxy_config` for firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
details about Docker proxy settings. for details about Docker proxy settings.
#. Run the Ansible bootstrap playbook: #. Run the Ansible bootstrap playbook:
@ -510,6 +510,7 @@ OpenStack-specific host configuration
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE openvswitch=enabled system host-label-assign $NODE openvswitch=enabled
done done
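Since the ``system`` and ``kubectl`` commands in the loop above only work on a provisioned controller, the loop can be sanity-checked beforehand as a dry run that prints each command it would issue instead of executing it:

```shell
# Dry-run sketch: echo the label and taint commands rather than running them.
# On a live controller, drop the echo and quoting to apply them for real.
for NODE in worker-0 worker-1; do
    echo "system host-label-assign $NODE openstack-compute-node=enabled"
    echo "kubectl taint nodes $NODE openstack-compute-node:NoSchedule"
    echo "system host-label-assign $NODE openvswitch=enabled"
done
```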
@ -605,4 +606,4 @@ On virtual controller-0:
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt

View File

@ -1,14 +1,14 @@
========================================================= =========================================================
Virtual Standard with Dedicated Storage Installation R5.0 Virtual Standard with Dedicated Storage Installation R7.0
========================================================= =========================================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_dedicated_storage.txt .. include:: /shared/_includes/desc_dedicated_storage.txt
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------
Installation Installation

View File

@ -3,7 +3,7 @@ Prepare Host and Environment
============================ ============================
This section describes how to prepare the physical host and virtual environment This section describes how to prepare the physical host and virtual environment
for a **StarlingX R5.0 virtual Standard with Dedicated Storage** deployment for a **StarlingX R7.0 virtual Standard with Dedicated Storage** deployment
configuration. configuration.
.. contents:: .. contents::

View File

@ -3,7 +3,7 @@ Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage
======================================================================= =======================================================================
This section describes the steps to install the StarlingX Kubernetes platform This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 virtual Standard with Dedicated Storage** deployment on a **StarlingX R7.0 virtual Standard with Dedicated Storage** deployment
configuration. configuration.
.. contents:: .. contents::
@ -367,6 +367,7 @@ OpenStack-specific host configuration
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label| system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled system host-label-assign $NODE sriov=enabled
done done
@ -399,4 +400,4 @@ Unlock worker nodes
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt

View File

@ -276,10 +276,10 @@ Console usage:
Install`` and continue using the Virtual Box console. Install`` and continue using the Virtual Box console.
For details on how to specify installation parameters such as rootfs device For details on how to specify installation parameters such as rootfs device
and console port, see :ref:`config_install_parms_r5`. and console port, see :ref:`config_install_parms_r7`.
Follow the :ref:`StarlingX Installation and Deployment Guides Follow the :ref:`StarlingX Installation and Deployment Guides <index-install-e083ca818006>`
<index-install-e083ca818006>` to continue. to continue.
* Ensure that boot priority on all VMs is changed using the commands in the "Set * Ensure that boot priority on all VMs is changed using the commands in the "Set
the boot priority" step above. the boot priority" step above.
@ -288,7 +288,7 @@ Follow the :ref:`StarlingX Installation and Deployment Guides
* On Virtual Box, click F12 immediately when the VM starts to select a different * On Virtual Box, click F12 immediately when the VM starts to select a different
boot option. Select the ``lan`` option to force a network boot. boot option. Select the ``lan`` option to force a network boot.
.. _config_install_parms_r5: .. _config_install_parms_r7:
------------------------------------ ------------------------------------
Configurable installation parameters Configurable installation parameters
@ -324,7 +324,7 @@ Controller Node Install vs Graphics Controller Node Install), and hitting the
tab key to allow command line modification. The example below shows how to tab key to allow command line modification. The example below shows how to
modify the ``rootfs_device`` specification. modify the ``rootfs_device`` specification.
.. figure:: /deploy_install_guides/r5_release/figures/install_virtualbox_configparms.png .. figure:: /deploy_install_guides/r7_release/figures/install_virtualbox_configparms.png
:scale: 100% :scale: 100%
:alt: Install controller-0 :alt: Install controller-0
@ -361,6 +361,6 @@ If you'd prefer to install to the second disk on your node, use the command:
Alternatively, these values can be set from the GUI via the ``Edit Host`` Alternatively, these values can be set from the GUI via the ``Edit Host``
option. option.
.. figure:: /deploy_install_guides/r5_release/figures/install_virtualbox_guiscreen.png .. figure:: /deploy_install_guides/r7_release/figures/install_virtualbox_guiscreen.png
:scale: 100% :scale: 100%
:alt: Install controller-0 :alt: Install controller-0
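As a hypothetical illustration only (the device name ``sdb`` and the console values are assumptions for a typical VirtualBox VM, not defaults from this guide), a boot line edited at the installer menu might end up carrying parameters such as:

```text
boot_device=sdb rootfs_device=sdb console=ttyS0,115200
```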

View File

@ -1,14 +1,14 @@
==================================================== ====================================================
Virtual Standard with Rook Storage Installation R5.0 Virtual Standard with Rook Storage Installation R7.0
==================================================== ====================================================
-------- --------
Overview Overview
-------- --------
.. include:: /shared/_includes/r5_desc_rook_storage.txt .. include:: /shared/_includes/desc_rook_storage.txt
.. include:: /shared/_includes/r5_ipv6_note.txt .. include:: /shared/_includes/ipv6_note.txt
------------ ------------
Installation Installation

View File

@ -3,7 +3,7 @@ Prepare Host and Environment
============================ ============================
This section describes how to prepare the physical host and virtual environment This section describes how to prepare the physical host and virtual environment
for a **StarlingX R5.0 virtual Standard with Rook Storage** deployment for a **StarlingX R7.0 virtual Standard with Rook Storage** deployment
configuration. configuration.
.. contents:: .. contents::

View File

@ -3,7 +3,7 @@ Install StarlingX Kubernetes on Virtual Standard with Rook Storage
================================================================== ==================================================================
This section describes the steps to install the StarlingX Kubernetes platform This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 virtual Standard with Rook Storage** deployment configuration, on a **StarlingX R7.0 virtual Standard with Rook Storage** deployment configuration,
deploying a Rook Ceph cluster that replaces the default native Ceph cluster. deploying a Rook Ceph cluster that replaces the default native Ceph cluster.
.. contents:: .. contents::
@ -435,6 +435,7 @@ OpenStack-specific host configuration
for NODE in worker-0 worker-1; do for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE openvswitch=enabled system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled system host-label-assign $NODE sriov=enabled
done done
@ -543,4 +544,4 @@ On virtual storage-0 and storage-1:
Next steps Next steps
---------- ----------
.. include:: /_includes/r5_kubernetes_install_next.txt .. include:: /_includes/kubernetes_install_next.txt

View File

@ -28,7 +28,7 @@ Deployment
A system install is required to deploy StarlingX release 5.0.1. There is no A system install is required to deploy StarlingX release 5.0.1. There is no
upgrade path from previous StarlingX releases. upgrade path from previous StarlingX releases.
Use the :ref:`R5.0 Installation Guides <index-install-r5-ca4053cb3ab9>` Use the `R5.0 Installation Guides <https://docs.starlingx.io/r/stx.5.0/deploy_install_guides/r5_release/index-install-r5-ca4053cb3ab9.html>`_
to install R5.0.1. to install R5.0.1.
----------------------------- -----------------------------

View File

@ -27,7 +27,7 @@ Deployment
A system install is required to deploy StarlingX release 5.0. There is no A system install is required to deploy StarlingX release 5.0. There is no
upgrade path from previous StarlingX releases. For detailed instructions, see upgrade path from previous StarlingX releases. For detailed instructions, see
the :ref:`R5.0 Installation Guides <index-install-r5-ca4053cb3ab9>`. the `R5.0 Installation Guides <https://docs.starlingx.io/r/stx.5.0/deploy_install_guides/r5_release/index-install-r5-ca4053cb3ab9.html>`_.
----------------------------- -----------------------------
New features and enhancements New features and enhancements
@ -40,8 +40,8 @@ associated user guides (if applicable).
A new storage backend rook-ceph to provide storage service to StarlingX. A new storage backend rook-ceph to provide storage service to StarlingX.
Guide: :ref:`Install StarlingX Kubernetes on Bare Metal Standard with Rook Guide: `Install StarlingX Kubernetes on Bare Metal Standard with Rook
Storage <rook_storage_install_kubernetes>` Storage <https://docs.starlingx.io/r/stx.5.0/deploy_install_guides/r5_release/bare_metal/rook_storage.html>`_
* FPGA image update orchestration for distributed cloud * FPGA image update orchestration for distributed cloud