From f98eb51fb434efb0e1a47ca705942a560add4b66 Mon Sep 17 00:00:00 2001
From: Ron Stone
Date: Fri, 30 Dec 2022 09:05:46 -0500
Subject: [PATCH] Simplify install dirs

Simplify install doc dir structure

- Remove r6 directory
- Rename r7 directory to be non-release-specific
- Delete unused files
- Delete obsolete include files
- Delete obsolete commented sections in install topics
- Remove redundant version menu entry

Signed-off-by: Ron Stone
Change-Id: I59634826d4b3af41410e9d26cc182f6b4aed8ade
---
 doc/source/_includes/deb-tech-preview.rest | 21 -
 doc/source/_includes/docker-proxy-config.rest | 4 +-
 .../installing-software-on-controller-0.rest | 3 -
 doc/source/_includes/ironic.rest | 2 +-
 .../_includes/kubernetes_install_next.txt | 2 +-
 .../archive/configuration/cert_config.rst | 2 +-
 doc/source/conf.py | 2 +-
 ...ptions-all-in-one-duplex-configuration.rst | 2 +-
 .../index-install-e083ca818006.rst | 3 +-
 .../r6_release/ansible_bootstrap_configs.rst | 434 ------
 ...erver-files-for-a-custom-configuration.rst | 60 -
 .../bare_metal/adding-hosts-in-bulk.rst | 61 -
 ...dding-hosts-using-the-host-add-command.rst | 177 ---
 .../r6_release/bare_metal/aio_duplex.rst | 26 -
 .../bare_metal/aio_duplex_extend.rst | 340 -----
 .../bare_metal/aio_duplex_hardware.rst | 69 -
 .../aio_duplex_install_kubernetes.rst | 1182 -----------------
 .../r6_release/bare_metal/aio_simplex.rst | 21 -
 .../bare_metal/aio_simplex_hardware.rst | 71 -
 .../aio_simplex_install_kubernetes.rst | 718 ----------
 ...rapping-from-a-private-docker-registry.rst | 54 -
 .../bare_metal/bulk-host-xml-file-format.rst | 135 --
 .../configuring-a-pxe-boot-server.rst | 211 ---
 .../bare_metal/controller_storage.rst | 22 -
 .../controller_storage_hardware.rst | 67 -
 .../controller_storage_install_kubernetes.rst | 941 -------------
 .../bare_metal/dedicated_storage.rst | 22 -
 .../bare_metal/dedicated_storage_hardware.rst | 72 -
 .../dedicated_storage_install_kubernetes.rst | 536 --------
 ...g-the-host-delete-command-1729d2e3153b.rst | 33 -
 .../exporting-host-configurations.rst | 53 -
 .../r6_release/bare_metal/ironic.rst | 72 -
 .../r6_release/bare_metal/ironic_hardware.rst | 51 -
 .../r6_release/bare_metal/ironic_install.rst | 392 ------
 .../reinstalling-a-system-or-a-host.rst | 39 -
 ...ng-an-exported-host-configuration-file.rst | 45 -
 .../r6_release/bare_metal/rook_storage.rst | 22 -
 .../bare_metal/rook_storage_hardware.rst | 73 -
 .../rook_storage_install_kubernetes.rst | 752 -----------
 ...ndex-install-r6-distcloud-46f4880ec78b.rst | 317 -----
 .../index-install-r6-8966076f0e81.rst | 146 --
 .../r6_release/kubernetes_access.rst | 190 ---
 .../convert-worker-nodes-0007b1532308.rst | 105 --
 .../openstack/hybrid-cluster-c7a3134b6f2a.rst | 49 -
 .../index-install-r6-os-adc44604968c.rst | 29 -
 .../r6_release/openstack/install.rst | 134 --
 .../r6_release/openstack/uninstall_delete.rst | 48 -
 .../setup-simple-dns-server-in-lab.rst | 99 --
 .../r6_release/virtual/aio_duplex.rst | 21 -
 .../r6_release/virtual/aio_duplex_environ.rst | 57 -
 .../virtual/aio_duplex_install_kubernetes.rst | 590 --------
 .../virtual/aio_simplex_environ.rst | 55 -
 .../aio_simplex_install_kubernetes.rst | 426 ------
 .../r6_release/virtual/controller_storage.rst | 21 -
 .../virtual/controller_storage_environ.rst | 59 -
 .../controller_storage_install_kubernetes.rst | 609 ---------
 .../r6_release/virtual/dedicated_storage.rst | 21 -
 .../virtual/dedicated_storage_environ.rst | 61 -
 .../dedicated_storage_install_kubernetes.rst | 403 ------
 .../r6_release/virtual/install_virtualbox.rst | 366 -----
.../r6_release/virtual/rook_storage.rst | 21 - .../virtual/rook_storage_environ.rst | 61 - .../rook_storage_install_kubernetes.rst | 547 -------- ...erver-files-for-a-custom-configuration.txt | 60 - .../r7_release/bare_metal/aio_duplex.txt | 26 - .../bare_metal/aio_duplex_hardware.txt | 69 - .../r7_release/bare_metal/aio_simplex.txt | 21 - .../bare_metal/aio_simplex_hardware.txt | 71 - .../configuring-a-pxe-boot-server.txt | 209 --- .../bare_metal/controller_storage.txt | 22 - .../controller_storage_hardware.txt | 67 - .../bare_metal/dedicated_storage.txt | 22 - .../bare_metal/dedicated_storage_hardware.txt | 72 - .../r7_release/bare_metal/prep_servers.txt | 12 - .../install_virtualbox_configparms.png | Bin 13447 -> 0 bytes .../figures/install_virtualbox_guiscreen.png | Bin 23175 -> 0 bytes .../starlingx-access-openstack-command.png | Bin 366267 -> 0 bytes .../starlingx-access-openstack-flavorlist.png | Bin 447605 -> 0 bytes ...-deployment-options-controller-storage.png | Bin 98773 -> 0 bytes ...x-deployment-options-dedicated-storage.png | Bin 111169 -> 0 bytes ...x-deployment-options-distributed-cloud.png | Bin 320078 -> 0 bytes ...ngx-deployment-options-duplex-extended.png | Bin 101986 -> 0 bytes .../starlingx-deployment-options-duplex.png | Bin 103883 -> 0 bytes .../starlingx-deployment-options-ironic.png | Bin 129791 -> 0 bytes .../starlingx-deployment-options-simplex.png | Bin 72126 -> 0 bytes .../r7_release/openstack/access.rst | 357 ----- .../r7_release/virtual/aio_simplex.rst | 21 - .../virtual/config_virtualbox_netwk.rst | 161 --- .../r7_release/virtual/physical_host_req.txt | 72 - .../.vscode/settings.json | 0 .../ansible_bootstrap_configs.rst | 0 .../bare_metal/adding-hosts-in-bulk.rst | 0 ...dding-hosts-using-the-host-add-command.rst | 0 .../aio_duplex_install_kubernetes.rst | 103 -- .../aio_simplex_install_kubernetes.rst | 2 + ...rapping-from-a-private-docker-registry.rst | 0 .../bare_metal/bulk-host-xml-file-format.rst | 0 .../controller_storage_install_kubernetes.rst | 0 .../dedicated_storage_install_kubernetes.rst | 62 - ...g-the-host-delete-command-1729d2e3153b.rst | 0 .../exporting-host-configurations.rst | 0 .../bare_metal/ironic_install.rst | 0 .../bare_metal/prep_servers.txt | 0 .../pxe-boot-controller-0-d5da025c2524.rst | 45 - .../reinstalling-a-system-or-a-host.rst | 0 ...ng-an-exported-host-configuration-file.rst | 0 .../rook_storage_install_kubernetes.rst | 0 ...dex-install-r7-distcloud-46f4880ec78b.rest | 0 .../install_virtualbox_configparms.png | Bin .../figures/install_virtualbox_guiscreen.png | Bin .../starlingx-access-openstack-command.png | Bin .../starlingx-access-openstack-flavorlist.png | Bin ...-deployment-options-controller-storage.png | Bin ...x-deployment-options-dedicated-storage.png | Bin ...x-deployment-options-distributed-cloud.png | Bin ...ngx-deployment-options-duplex-extended.png | Bin .../starlingx-deployment-options-duplex.png | Bin .../starlingx-deployment-options-ironic.png | Bin .../starlingx-deployment-options-simplex.png | Bin .../index-install-r7-8966076f0e81.rst | 0 .../kubernetes_access.rst | 0 .../openstack/access.rst | 4 +- .../convert-worker-nodes-0007b1532308.rst | 0 .../openstack/hybrid-cluster-c7a3134b6f2a.rst | 0 .../index-install-r7-os-adc44604968c.rst | 0 .../openstack/install.rst | 0 .../openstack/uninstall_delete.rst | 0 .../setup-simple-dns-server-in-lab.rst | 0 .../virtual/aio_duplex.rst | 0 .../virtual/aio_duplex_environ.rst | 0 .../virtual/aio_duplex_install_kubernetes.rst | 0 .../virtual/aio_simplex.rst | 8 +- 
.../virtual/aio_simplex_environ.rst | 0 .../aio_simplex_install_kubernetes.rst | 0 .../virtual/config_virtualbox_netwk.rst | 0 .../virtual/controller_storage.rst | 0 .../virtual/controller_storage_environ.rst | 0 .../controller_storage_install_kubernetes.rst | 0 .../virtual/dedicated_storage.rst | 0 .../virtual/dedicated_storage_environ.rst | 0 .../dedicated_storage_install_kubernetes.rst | 0 .../virtual/install_virtualbox.rst | 4 +- .../virtual/physical_host_req.txt | 0 .../virtual/rook_storage.rst | 0 .../virtual/rook_storage_environ.rst | 0 .../rook_storage_install_kubernetes.rst | 0 doc/source/index.rst | 5 +- .../r6-0-release-notes-bc72d0b961e7.rst | 4 +- ...flexran-2107-on-starlingx-c4efa00b1b98.rst | 4 +- ...flexran-2111-on-starlingx-ca139fa4e285.rst | 4 +- ...flexran-2203-on-starlingx-1d1b15ecb16f.rst | 4 +- .../shared/_includes/desc_rook_storage.txt | 2 +- 152 files changed, 31 insertions(+), 12709 deletions(-) delete mode 100644 doc/source/_includes/deb-tech-preview.rest delete mode 100644 doc/source/_includes/installing-software-on-controller-0.rest delete mode 100644 doc/source/deploy_install_guides/r6_release/ansible_bootstrap_configs.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-in-bulk.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-using-the-host-add-command.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_extend.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_hardware.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_hardware.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/bulk-host-xml-file-format.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_hardware.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_hardware.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/exporting-host-configurations.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/ironic.rst delete mode 100644 
doc/source/deploy_install_guides/r6_release/bare_metal/ironic_hardware.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/ironic_install.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-or-a-host.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_hardware.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/distributed_cloud/index-install-r6-distcloud-46f4880ec78b.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/index-install-r6-8966076f0e81.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/kubernetes_access.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/openstack/convert-worker-nodes-0007b1532308.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/openstack/hybrid-cluster-c7a3134b6f2a.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/openstack/index-install-r6-os-adc44604968c.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/openstack/install.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/openstack/uninstall_delete.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/setup-simple-dns-server-in-lab.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/aio_duplex.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/aio_duplex_environ.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/aio_duplex_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/aio_simplex_environ.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/aio_simplex_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/controller_storage.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/controller_storage_environ.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/controller_storage_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/dedicated_storage.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/dedicated_storage_environ.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/dedicated_storage_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/install_virtualbox.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/rook_storage.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/rook_storage_environ.rst delete mode 100644 doc/source/deploy_install_guides/r6_release/virtual/rook_storage_install_kubernetes.rst delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/aio_duplex.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/aio_duplex_hardware.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/aio_simplex.txt delete mode 100644 
doc/source/deploy_install_guides/r7_release/bare_metal/aio_simplex_hardware.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/configuring-a-pxe-boot-server.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/controller_storage.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/controller_storage_hardware.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/dedicated_storage.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/dedicated_storage_hardware.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/bare_metal/prep_servers.txt delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/install_virtualbox_configparms.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/install_virtualbox_guiscreen.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-access-openstack-command.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-access-openstack-flavorlist.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-controller-storage.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-dedicated-storage.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-distributed-cloud.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-duplex-extended.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-duplex.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-ironic.png delete mode 100644 doc/source/deploy_install_guides/r7_release/figures/starlingx-deployment-options-simplex.png delete mode 100644 doc/source/deploy_install_guides/r7_release/openstack/access.rst delete mode 100644 doc/source/deploy_install_guides/r7_release/virtual/aio_simplex.rst delete mode 100644 doc/source/deploy_install_guides/r7_release/virtual/config_virtualbox_netwk.rst delete mode 100644 doc/source/deploy_install_guides/r7_release/virtual/physical_host_req.txt rename doc/source/deploy_install_guides/{r7_release => release}/.vscode/settings.json (100%) rename doc/source/deploy_install_guides/{r7_release => release}/ansible_bootstrap_configs.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/adding-hosts-in-bulk.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/adding-hosts-using-the-host-add-command.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/aio_duplex_install_kubernetes.rst (90%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/aio_simplex_install_kubernetes.rst (97%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/bootstrapping-from-a-private-docker-registry.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/bulk-host-xml-file-format.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/controller_storage_install_kubernetes.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/dedicated_storage_install_kubernetes.rst (87%) rename doc/source/deploy_install_guides/{r7_release => 
release}/bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/exporting-host-configurations.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/ironic_install.rst (100%) rename doc/source/deploy_install_guides/{r6_release => release}/bare_metal/prep_servers.txt (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/pxe-boot-controller-0-d5da025c2524.rst (87%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/reinstalling-a-system-or-a-host.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/bare_metal/rook_storage_install_kubernetes.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/distributed_cloud/index-install-r7-distcloud-46f4880ec78b.rest (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/install_virtualbox_configparms.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/install_virtualbox_guiscreen.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-access-openstack-command.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-access-openstack-flavorlist.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-controller-storage.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-dedicated-storage.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-distributed-cloud.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-duplex-extended.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-duplex.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-ironic.png (100%) rename doc/source/deploy_install_guides/{r6_release => release}/figures/starlingx-deployment-options-simplex.png (100%) rename doc/source/deploy_install_guides/{r7_release => release}/index-install-r7-8966076f0e81.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/kubernetes_access.rst (100%) rename doc/source/deploy_install_guides/{r6_release => release}/openstack/access.rst (95%) rename doc/source/deploy_install_guides/{r7_release => release}/openstack/convert-worker-nodes-0007b1532308.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/openstack/hybrid-cluster-c7a3134b6f2a.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/openstack/index-install-r7-os-adc44604968c.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/openstack/install.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/openstack/uninstall_delete.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/setup-simple-dns-server-in-lab.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/aio_duplex.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/aio_duplex_environ.rst (100%) rename 
doc/source/deploy_install_guides/{r7_release => release}/virtual/aio_duplex_install_kubernetes.rst (100%) rename doc/source/deploy_install_guides/{r6_release => release}/virtual/aio_simplex.rst (59%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/aio_simplex_environ.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/aio_simplex_install_kubernetes.rst (100%) rename doc/source/deploy_install_guides/{r6_release => release}/virtual/config_virtualbox_netwk.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/controller_storage.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/controller_storage_environ.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/controller_storage_install_kubernetes.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/dedicated_storage.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/dedicated_storage_environ.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/dedicated_storage_install_kubernetes.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/install_virtualbox.rst (96%) rename doc/source/deploy_install_guides/{r6_release => release}/virtual/physical_host_req.txt (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/rook_storage.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/rook_storage_environ.rst (100%) rename doc/source/deploy_install_guides/{r7_release => release}/virtual/rook_storage_install_kubernetes.rst (100%) diff --git a/doc/source/_includes/deb-tech-preview.rest b/doc/source/_includes/deb-tech-preview.rest deleted file mode 100644 index e607937d3..000000000 --- a/doc/source/_includes/deb-tech-preview.rest +++ /dev/null @@ -1,21 +0,0 @@ -.. begin-prod-an-1 -.. end-prod-an-1 - -.. begin-prod-an-2 -.. end-prod-an-2 - -.. begin-dec-and-imp -.. end-dec-and-imp - -.. begin-declarative -.. end-declarative - -.. begin-install-prereqs -.. end-install-prereqs - -.. begin-prep-servers -.. end-prep-servers - -.. begin-known-issues -.. end-known-issues - diff --git a/doc/source/_includes/docker-proxy-config.rest b/doc/source/_includes/docker-proxy-config.rest index fd78f570a..d32dd23e9 100644 --- a/doc/source/_includes/docker-proxy-config.rest +++ b/doc/source/_includes/docker-proxy-config.rest @@ -14,9 +14,9 @@ Set proxy at bootstrap ---------------------- -To set the Docker proxy at bootstrap time, refer to :doc:`Ansible Bootstrap +To set the Docker proxy at bootstrap time, refer to :ref:`Ansible Bootstrap Configurations -`. +`. .. r3_end diff --git a/doc/source/_includes/installing-software-on-controller-0.rest b/doc/source/_includes/installing-software-on-controller-0.rest deleted file mode 100644 index 795a74f0a..000000000 --- a/doc/source/_includes/installing-software-on-controller-0.rest +++ /dev/null @@ -1,3 +0,0 @@ -.. begin-install-ctl-0 -.. end-install-ctl-0 - diff --git a/doc/source/_includes/ironic.rest b/doc/source/_includes/ironic.rest index 5c0b5a2e3..d29f568d8 100644 --- a/doc/source/_includes/ironic.rest +++ b/doc/source/_includes/ironic.rest @@ -16,7 +16,7 @@ more bare metal servers. settings. Refer to :ref:`docker_proxy_config` for details. -.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-ironic.png +.. 
figure:: /deploy_install_guides/release/figures/starlingx-deployment-options-ironic.png :scale: 50% :alt: Standard with Ironic deployment configuration diff --git a/doc/source/_includes/kubernetes_install_next.txt b/doc/source/_includes/kubernetes_install_next.txt index 462bc2c43..8fcdd4656 100644 --- a/doc/source/_includes/kubernetes_install_next.txt +++ b/doc/source/_includes/kubernetes_install_next.txt @@ -4,4 +4,4 @@ For instructions on how to access StarlingX Kubernetes see :ref:`kubernetes_access_r7`. For instructions on how to install and access StarlingX OpenStack see -:ref:`index-install-r6-os-adc44604968c`. +:ref:`index-install-r7-os-adc44604968c`. diff --git a/doc/source/archive/configuration/cert_config.rst b/doc/source/archive/configuration/cert_config.rst index 48ea16693..b13570ad4 100644 --- a/doc/source/archive/configuration/cert_config.rst +++ b/doc/source/archive/configuration/cert_config.rst @@ -113,7 +113,7 @@ Certificate Authority. Currently the Kubernetes root CA certificate and key can only be updated at Ansible bootstrap time. For details, see -:ref:`Kubernetes root CA certificate and key `. +:ref:`Kubernetes root CA certificate and key `. --------------------- Local Docker registry diff --git a/doc/source/conf.py b/doc/source/conf.py index 11ac4493d..f69214d2f 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -191,4 +191,4 @@ starlingxdocs_plus_bug_project = 'starlingx' starlingxdocs_plus_bug_tag = 'stx.docs' starlingxdocs_plus_this_version = "Latest" # starlingxdocs_plus_other_versions = [("even later","even-later"),("sooner","sooner")] -starlingxdocs_plus_other_versions = [("Version 6.0","r/stx.6.0"),("Version 7.0","r/stx.7.0"),("Latest","master")] +starlingxdocs_plus_other_versions = [("Version 6.0","r/stx.6.0"),("Version 7.0","r/stx.7.0")] diff --git a/doc/source/deploy/kubernetes/deployment-config-options-all-in-one-duplex-configuration.rst b/doc/source/deploy/kubernetes/deployment-config-options-all-in-one-duplex-configuration.rst index d8e9d2c26..491de094c 100644 --- a/doc/source/deploy/kubernetes/deployment-config-options-all-in-one-duplex-configuration.rst +++ b/doc/source/deploy/kubernetes/deployment-config-options-all-in-one-duplex-configuration.rst @@ -97,7 +97,7 @@ Up to fifty worker/compute nodes can be added to the All-in-one Duplex deployment, allowing a capacity growth path if starting with an AIO-Duplex deployment. -.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-duplex-extended.png +.. image:: /deploy_install_guides/release/figures/starlingx-deployment-options-duplex-extended.png :width: 800 The extended capacity is limited up to fifty worker/compute nodes as the diff --git a/doc/source/deploy_install_guides/index-install-e083ca818006.rst b/doc/source/deploy_install_guides/index-install-e083ca818006.rst index 9a7c673bb..1163de7bd 100644 --- a/doc/source/deploy_install_guides/index-install-e083ca818006.rst +++ b/doc/source/deploy_install_guides/index-install-e083ca818006.rst @@ -45,8 +45,7 @@ To view the archived installation guides, see .. toctree:: :hidden: - r7_release/index-install-r7-8966076f0e81 - r6_release/index-install-r6-8966076f0e81 + release/index-install-r7-8966076f0e81 .. 
Add common files to toctree diff --git a/doc/source/deploy_install_guides/r6_release/ansible_bootstrap_configs.rst b/doc/source/deploy_install_guides/r6_release/ansible_bootstrap_configs.rst deleted file mode 100644 index bd169b13c..000000000 --- a/doc/source/deploy_install_guides/r6_release/ansible_bootstrap_configs.rst +++ /dev/null @@ -1,434 +0,0 @@ - -.. _ansible_bootstrap_configs_r6: - -================================ -Ansible Bootstrap Configurations -================================ - -This section describes Ansible bootstrap configuration options. - -.. contents:: - :local: - :depth: 1 - - -.. _install-time-only-params-r6: - ----------------------------- -Install-time-only parameters ----------------------------- - -Some Ansible bootstrap parameters can not be changed or are very difficult to -change after installation is complete. - -Review the set of install-time-only parameters before installation and confirm -that your values for these parameters are correct for the desired installation. - -.. note:: - - If you notice an incorrect install-time-only parameter value *before you - unlock controller-0 for the first time*, you can re-run the Ansible bootstrap - playbook with updated override values and the updated values will take effect. - -**************************** -Install-time-only parameters -**************************** - -**System Properties** - -* ``system_mode`` -* ``distributed_cloud_role`` - -**Network Properties** - -* ``pxeboot_subnet`` -* ``pxeboot_start_address`` -* ``pxeboot_end_address`` -* ``management_subnet`` -* ``management_start_address`` -* ``management_end_address`` -* ``cluster_host_subnet`` -* ``cluster_host_start_address`` -* ``cluster_host_end_address`` -* ``cluster_pod_subnet`` -* ``cluster_pod_start_address`` -* ``cluster_pod_end_address`` -* ``cluster_service_subnet`` -* ``cluster_service_start_address`` -* ``cluster_service_end_address`` -* ``management_multicast_subnet`` -* ``management_multicast_start_address`` -* ``management_multicast_end_address`` - -**Docker Proxies** - -* ``docker_http_proxy`` -* ``docker_https_proxy`` -* ``docker_no_proxy`` - -**Docker Registry Overrides** - -* ``docker_registries`` - - * ``k8s.gcr.io`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - - * ``gcr.io`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - - * ``ghcr.io`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - - * ``quay.io`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - - * ``docker.io`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - - * ``docker.elastic.co`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - - * ``defaults`` - - * ``url`` - * ``username`` - * ``password`` - * ``secure`` - -**Certificates** - -* ``k8s_root_ca_cert`` -* ``k8s_root_ca_key`` - -**Kubernetes Parameters** - -* ``apiserver_oidc`` - ----- -IPv6 ----- - -If you are using IPv6, provide IPv6 configuration overrides for the Ansible -bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be -updated to IPv6 addressing. 
- -Example IPv6 override values are shown below: - -:: - - dns_servers: - ‐ 2001:4860:4860::8888 - ‐ 2001:4860:4860::8844 - pxeboot_subnet: 169.254.202.0/24 - management_subnet: 2001:db8:2::/64 - cluster_host_subnet: 2001:db8:3::/64 - cluster_pod_subnet: 2001:db8:4::/64 - cluster_service_subnet: 2001:db8:4::/112 - external_oam_subnet: 2001:db8:1::/64 - external_oam_gateway_address: 2001:db8::1 - external_oam_floating_address: 2001:db8::2 - external_oam_node_0_address: 2001:db8::3 - external_oam_node_1_address: 2001:db8::4 - management_multicast_subnet: ff08::1:1:0/124 - -.. note:: - - The `external_oam_node_0_address`, and `external_oam_node_1_address` parameters - are not required for the AIO‐SX installation. - ----------------- -Private registry ----------------- - -To bootstrap StarlingX you must pull container images for multiple system -services. By default these container images are pulled from public registries: -k8s.gcr.io, gcr.io, quay.io, and docker.io. - -It may be required (or desired) to copy the container images to a private -registry and pull the images from the private registry (instead of the public -registries) as part of the StarlingX bootstrap. For example, a private registry -would be required if a StarlingX system was deployed in an air-gapped network -environment. - -Use the `docker_registries` structure in the bootstrap overrides file to specify -alternate registry(s) for the public registries from which container images are -pulled. These alternate registries are used during the bootstrapping of -controller-0, and on :command:`system application-apply` of application packages. - -The `docker_registries` structure is a map of public registries and the -alternate registry values for each public registry. For each public registry the -key is a fully scoped registry name of a public registry (for example "k8s.gcr.io") -and the alternate registry URL and username/password (if authenticated). - -url - The fully scoped registry name (and optionally namespace/) for the alternate - registry location where the images associated with this public registry - should now be pulled from. - - Valid formats for the `url` value are: - - * Domain. For example: - - :: - - example.domain - - * Domain with port. For example: - - :: - - example.domain:5000 - - * IPv4 address. For example: - - :: - - 1.2.3.4 - - * IPv4 address with port. For example: - - :: - - 1.2.3.4:5000 - - * IPv6 address. For example: - - :: - - FD01::0100 - - * IPv6 address with port. For example: - - :: - - [FD01::0100]:5000 - -username - The username for logging into the alternate registry, if authenticated. - -password - The password for logging into the alternate registry, if authenticated. - - -Additional configuration options in the `docker_registries` structure are: - -defaults - A special public registry key which defines common values to be applied to - all overrideable public registries. If only the `defaults` registry - is defined, it will apply `url`, `username`, and `password` for all - registries. - - If values under specific registries are defined, they will override the - values defined in the defaults registry. - - .. note:: - - The `defaults` key was formerly called `unified`. It was renamed - in StarlingX R3.0 and updated semantics were applied. - - This change affects anyone with a StarlingX installation prior to R3.0 that - specifies alternate Docker registries using the `unified` key. - -secure - Specifies whether the registry(s) supports HTTPS (secure) or HTTP (not secure). 
- Applies to all alternate registries. A boolean value. The default value is - True (secure, HTTPS). - -.. note:: - - The ``secure`` parameter was formerly called ``is_secure_registry``. It was - renamed in StarlingX R3.0. - -If an alternate registry is specified to be secure (using HTTPS), the certificate -used by the registry may not be signed by a well-known Certificate Authority (CA). -This results in the :command:`docker pull` of images from this registry to fail. -Use the `ssl_ca_cert` override to specify the public certificate of the CA that -signed the alternate registry’s certificate. This will add the CA as a trusted -CA to the StarlingX system. - -ssl_ca_cert - The `ssl_ca_cert` value is the absolute path of the certificate file. The - certificate must be in PEM format and the file may contain a single CA - certificate or multiple CA certificates in a bundle. - -The following example will apply `url`, `username`, and `password` to all -registries. - -:: - - docker_registries: - defaults: - url: my.registry.io - username: myreguser - password: myregP@ssw0rd - -The next example applies `username` and `password` from the defaults registry -to all public registries. `url` is different for each public registry. It -additionally specifies an alternate CA certificate. - -:: - - docker_registries: - k8s.gcr.io: - url: my.k8sregistry.io - gcr.io: - url: my.gcrregistry.io - ghcr.io: - url: my.ghrcregistry.io - docker.elastic.co - url: my.dockerregistry.io - quay.io: - url: my.quayregistry.io - docker.io: - url: my.dockerregistry.io - defaults: - url: my.registry.io - username: myreguser - password: myregP@ssw0rd - - ssl_ca_cert: /path/to/ssl_ca_cert_file - ------------- -Docker proxy ------------- - -If the StarlingX OAM interface or network is behind a http/https proxy, relative -to the Docker registries used by StarlingX or applications running on StarlingX, -then Docker within StarlingX must be configured to use these http/https proxies. - -Use the following configuration overrides to configure your Docker proxy settings. - -docker_http_proxy - Specify the HTTP proxy URL to use. For example: - - :: - - docker_http_proxy: http://my.proxy.com:1080 - -docker_https_proxy - Specify the HTTPS proxy URL to use. For example: - - :: - - docker_https_proxy: https://my.proxy.com:1443 - -docker_no_proxy - A no-proxy address list can be provided for registries not on the other side - of the proxies. This list will be added to the default no-proxy list derived - from localhost, loopback, management, and OAM floating addresses at run time. - Each address in the no-proxy list must neither contain a wildcard nor have - subnet format. For example: - - :: - - docker_no_proxy: - - 1.2.3.4 - - 5.6.7.8 - -.. _k8s-root-ca-cert-key-r6: - --------------------------------------- -Kubernetes root CA certificate and key --------------------------------------- - -By default the Kubernetes Root CA Certificate and Key are auto-generated and -result in the use of self-signed certificates for the Kubernetes API server. In -the case where self-signed certificates are not acceptable, use the bootstrap -override values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the -certificate and key for the Kubernetes root CA. - -k8s_root_ca_cert - Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert` - value is the absolute path of the certificate file. The certificate must be - in PEM format and the value must be provided as part of a pair with - `k8s_root_ca_key`. 
The playbook will not proceed if only one value is provided. - -k8s_root_ca_key - Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key` - value is the absolute path of the certificate file. The certificate must be - in PEM format and the value must be provided as part of a pair with - `k8s_root_ca_cert`. The playbook will not proceed if only one value is provided. - -.. important:: - - The default length for the generated Kubernetes root CA certificate is 10 - years. Replacing the root CA certificate is an involved process so the custom - certificate expiry should be as long as possible. We recommend ensuring root - CA certificate has an expiry of at least 5-10 years. - -The administrator can also provide values to add to the Kubernetes API server -certificate Subject Alternative Name list using the `apiserver_cert_sans` -override parameter. - -apiserver_cert_sans - Specifies a list of Subject Alternative Name entries that will be added to the - Kubernetes API server certificate. Each entry in the list must be an IP address - or domain name. For example: - - :: - - apiserver_cert_sans: - - hostname.domain - - 198.51.100.75 - -StarlingX automatically updates this parameter to include IP records for the OAM -floating IP and both OAM unit IP addresses. - ----------------------------------------------------- -OpenID Connect authentication for Kubernetes cluster ----------------------------------------------------- - -The Kubernetes cluster can be configured to use an external OpenID Connect -:abbr:`IDP (identity provider)`, such as Azure Active Directory, Salesforce, or -Google, for Kubernetes API authentication. - -By default, OpenID Connect authentication is disabled. To enable OpenID Connect, -use the following configuration values in the Ansible bootstrap overrides file -to specify the IDP for OpenID Connect: - -:: - - apiserver_oidc: - client_id: - issuer_url: - username_claim: - -When the three required fields of the `apiserver_oidc` parameter are defined, -OpenID Connect is considered active. The values will be used to configure the -Kubernetes cluster to use the specified external OpenID Connect IDP for -Kubernetes API authentication. - -In addition, you will need to configure the external OpenID Connect IDP and any -required OpenID client application according to the specific IDP's documentation. - -If not configuring OpenID Connect, all values should be absent from the -configuration file. - -.. note:: - - Default authentication via service account tokens is always supported, - even when OpenID Connect authentication is configured. \ No newline at end of file diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration.rst deleted file mode 100644 index be082b719..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration.rst +++ /dev/null @@ -1,60 +0,0 @@ - -.. jow1442253584837 -.. _accessing-pxe-boot-server-files-for-a-custom-configuration-r6: - -======================================================= -Access PXE Boot Server Files for a Custom Configuration -======================================================= - -If you prefer, you can create a custom |PXE| boot configuration using the -installation files provided with |prod|. - -.. 
rubric:: |context| - -You can use the setup script included with the ISO image to copy the boot -configuration files and distribution content to a working directory. You can -use the contents of the working directory to construct a |PXE| boot environment -according to your own requirements or preferences. - -For more information about using a |PXE| boot server, see :ref:`Configure a -PXE Boot Server `. - -.. rubric:: |proc| - -.. _accessing-pxe-boot-server-files-for-a-custom-configuration-steps-www-gcz-3t-r6: - -#. Copy the ISO image from the source \(product DVD, USB device, or - |dnload-loc|\) to a temporary location on the |PXE| boot server. - - This example assumes that the copied image file is - tmp/TS-host-installer-1.0.iso. - -#. Mount the ISO image and make it executable. - - .. code-block:: none - - $ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso - $ mount -o remount,exec,dev /media/iso - -#. Create and populate a working directory. - - Use a command of the following form: - - .. code-block:: none - - $ /media/iso/pxeboot_setup.sh -u http:/// <-w > - - where: - - **ip-addr** - is the Apache listening address. - - **symlink** - is a name for a symbolic link to be created under the Apache document - root directory, pointing to the directory specified by . - - **working-dir** - is the path to the working directory. - -#. Copy the required files from the working directory to your custom |PXE| - boot server directory. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-in-bulk.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-in-bulk.rst deleted file mode 100644 index e8eefd36e..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-in-bulk.rst +++ /dev/null @@ -1,61 +0,0 @@ - -.. ulc1552927930507 -.. _adding-hosts-in-bulk-r6: - -================= -Add Hosts in Bulk -================= - -You can add an arbitrary number of hosts using a single CLI command. - -.. rubric:: |proc| - -#. Prepare an XML file that describes the hosts to be added. - - For more information, see :ref:`Bulk Host XML File Format - `. - - You can also create the XML configuration file from an existing, running - configuration using the :command:`system host-bulk-export` command. - -#. Run the :command:`system host-bulk-add` utility. - - The command syntax is: - - .. code-block:: none - - ~[keystone_admin]$ system host-bulk-add - - where is the name of the prepared XML file. - -#. Power on the hosts to be added, if required. - - .. note:: - Hosts can be powered on automatically from board management controllers - using settings in the XML file. - -.. rubric:: |result| - -The hosts are configured. The utility provides a summary report, as shown in -the following example: - -.. code-block:: none - - Success: - worker-0 - worker-1 - Error: - controller-1: Host-add Rejected: Host with mgmt_mac 08:00:28:A9:54:19 already exists - -.. rubric:: |postreq| - -After adding the host, you must provision it according to the requirements of -the personality. - -.. xbooklink For more information, see :ref:`Installing, Configuring, and - Unlocking Nodes `, for your system, - and follow the *Configure* steps for the appropriate node personality. - -.. 
seealso:: - - :ref:`Bulk Host XML File Format ` diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-using-the-host-add-command.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-using-the-host-add-command.rst deleted file mode 100644 index 95dbdf6a1..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/adding-hosts-using-the-host-add-command.rst +++ /dev/null @@ -1,177 +0,0 @@ - -.. pyp1552927946441 -.. _adding-hosts-using-the-host-add-command-r6: - -================================ -Add Hosts Using the Command Line -================================ - -You can add hosts to the system inventory using the :command:`host-add` command. - -.. rubric:: |context| - -There are several ways to add hosts to |prod|; for an overview, see the -StarlingX Installation Guides, -`https://docs.starlingx.io/deploy_install_guides/index.html -`_ for your -system. Instead of powering up each host and then defining its personality and -other characteristics interactively, you can use the :command:`system host-add` -command to define hosts before you power them up. This can be useful for -scripting an initial setup. - -.. note:: - On systems that use static IP address assignment on the management network, - new hosts must be added to the inventory manually and assigned an IP - address using the :command:`system host-add` command. If a host is not - added successfully, the host console displays the following message at - power-on: - - .. code-block:: none - - This system has been configured with static management - and infrastructure IP address allocation. This requires - that the node be manually provisioned in System - Inventory using the 'system host-add' CLI, GUI, or - stx API equivalent. - -.. rubric:: |proc| - -#. Add the host to the system inventory. - - .. note:: - The host must be added to the system inventory before it is powered on. - - On **controller-0**, acquire Keystone administrative privileges: - - .. code-block:: none - - $ source /etc/platform/openrc - - Use the :command:`system host-add` command to add a host and specify its - personality. You can also specify the device used to display messages - during boot. - - .. note:: - The hostname parameter is required for worker hosts. For controller and - storage hosts, it is ignored. - - .. code-block:: none - - ~(keystone_admin)]$ system host-add -n \ - -p [-s ] \ - [-l ] [-o [-c ]] [-b ] \ - [-r ] [-m ] [-i ] [-D ] \ - [-T -I -U -P ] - - - where - - **** - is a name to assign to the host. This is used for worker nodes only. - Controller and storage node names are assigned automatically and - override user input. - - **** - is the host type. The following are valid values: - - - controller - - - worker - - - storage - - **** - are the host personality subfunctions \(used only for a worker host\). - - For a worker host, the only valid value is worker,lowlatency to enable - a low-latency performance profile. For a standard performance profile, - omit this option. - - For more information about performance profiles, see |deploy-doc|: - :ref:`Worker Function Performance Profiles - `. - - **** - is a string describing the location of the host - - **** - is the output device to use for message display on the host \(for - example, tty0\). The default is ttys0, 115200. - - **** - is the format for console output on the host \(text or graphical\). The - default is text. - - .. note:: - The graphical option currently has no effect. 
Text-based - installation is used regardless of this setting. - - **** - is the host device for boot partition, relative to /dev. The default is - sda. - - **** - is the host device for rootfs partition, relative to/dev. The default - is sda. - - **** - is the |MAC| address of the port connected to the internal management - or |PXE| boot network. - - **** - is the IP address of the port connected to the internal management or - |PXE| boot network, if static IP address allocation is used. - - .. note:: - The option is not used for a controller node. - - **** - is set to **True** to have any active console session automatically - logged out when the serial console cable is disconnected, or **False** - to disable this behavior. The server must support data carrier detect - on the serial console port. - - **** - is the board management controller type. Use bmc. - - **** - is the board management controller IP address \(used for external - access to board management controllers over the |OAM| network\) - - **** - is the username for board management controller access - - **** - is the password for board management controller access - - For example: - - .. code-block:: none - - ~(keystone_admin)]$ system host-add -n compute-0 -p worker -I 10.10.10.100 - -#. Verify that the host has been added successfully. - - Use the :command:`fm alarm-list` command to check if any alarms (major or - critical) events have occurred. You can also type :command:`fm event-list` - to see a log of events. For more information on alarms, see :ref:`Fault - Management Overview `. - -#. With **controller-0** running, start the host. - - The host is booted and configured with a personality. - -#. Verify that the host has started successfully. - - The command :command:`system host-list` shows a list of hosts. The - added host should be available, enabled, and unlocked. You can also - check alarms and events again. - -.. rubric:: |postreq| - -After adding the host, you must provision it according to the requirements of -the personality. - -.. xbooklink For more information, see :ref:`Install, Configure, and Unlock - Nodes ` and follow the *Configure* - steps for the appropriate node personality. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex.rst deleted file mode 100644 index fd013f4b9..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex.rst +++ /dev/null @@ -1,26 +0,0 @@ -============================================== -Bare metal All-in-one Duplex Installation R6.0 -============================================== - --------- -Overview --------- - -.. include:: /shared/_includes/desc_aio_duplex.txt - -The bare metal AIO-DX deployment configuration may be extended with up to four -worker nodes (not shown in the diagram). Installation instructions for -these additional nodes are described in :doc:`aio_duplex_extend`. - -.. include:: /shared/_includes/ipv6_note.txt - ------------- -Installation ------------- - -.. 
toctree:: - :maxdepth: 1 - - aio_duplex_hardware - aio_duplex_install_kubernetes - aio_duplex_extend \ No newline at end of file diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_extend.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_extend.rst deleted file mode 100644 index be48818d2..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_extend.rst +++ /dev/null @@ -1,340 +0,0 @@ -================================= -Extend Capacity with Worker Nodes -================================= - -This section describes the steps to extend capacity with worker nodes on a -|prod| All-in-one Duplex deployment configuration. - -.. contents:: - :local: - :depth: 1 - --------------------------------- -Install software on worker nodes --------------------------------- - -#. Power on the worker node servers and force them to network boot with the - appropriate BIOS boot options for your particular server. - -#. As the worker nodes boot, a message appears on their console instructing - you to configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered worker - node hosts (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | unlocked | enabled | available | - | 3 | None | None | locked | disabled | offline | - | 4 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'worker': - - .. code-block:: bash - - system host-update 3 personality=worker hostname=worker-0 - system host-update 4 personality=worker hostname=worker-1 - - This initiates the install of software on worker nodes. - This can take 5-10 minutes, depending on the performance of the host machine. - - .. only:: starlingx - - .. Note:: - - A node with Edgeworker personality is also available. See - :ref:`deploy-edgeworker-nodes` for details. - -#. Wait for the install of software on the worker nodes to complete, for the - worker nodes to reboot, and for both to show as locked/disabled/online in - 'system host-list'. - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | unlocked | enabled | available | - | 3 | worker-0 | worker | locked | disabled | online | - | 4 | worker-1 | worker | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - ----------------------- -Configure worker nodes ----------------------- - -#. The MGMT interfaces are partially set up by the network install procedure; - configuring the port used for network install as the MGMT port and - specifying the attached network of "mgmt". - - Complete the MGMT interface configuration of the worker nodes by specifying - the attached network of "cluster-host". - - .. 
code-block:: bash - - for NODE in worker-0 worker-1; do - system interface-network-assign $NODE mgmt0 cluster-host - done - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - **These steps are required only if the StarlingX OpenStack application - (|prefix|-openstack) will be installed.** - - #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. parsed-literal:: - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE openstack-compute-node=enabled - kubectl taint nodes $NODE openstack-compute-node:NoSchedule - system host-label-assign $NODE |vswitch-label| - system host-label-assign $NODE sriov=enabled - done - - #. **For OpenStack only:** Configure the host settings for the vSwitch. - - If using |OVS-DPDK| vswitch, run the following commands: - - Default recommendation for worker node is to use two cores on numa-node 0 - for |OVS-DPDK| vSwitch; physical |NICs| are typically on first numa-node. - This should have been automatically configured, if not run the following - command. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 2 cores on processor/numa-node 0 on worker-node to vswitch - system host-cpu-modify -f vswitch -p0 2 $NODE - - done - - When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on - each |NUMA| node on the host. It is recommended to configure 1x 1G huge - page (-1G 1) for vSwitch memory on each |NUMA| node on the host. - - However, due to a limitation with Kubernetes, only a single huge page - size is supported on any one host. If your application VMs require 2M - huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch - memory on each |NUMA| node on the host. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch - system host-memory-modify -f vswitch -1G 1 $NODE 0 - - # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch - system host-memory-modify -f vswitch -1G 1 $NODE 1 - - done - - - .. important:: - - |VMs| created in an |OVS-DPDK| environment must be configured to use - huge pages to enable networking and must use a flavor with property: - hw:mem_page_size=large - - Configure the huge pages for |VMs| in an |OVS-DPDK| environment on - this host, assuming 1G huge page size is being used on this host, with - the following commands: - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications - system host-memory-modify -f application -1G 10 $NODE 0 - - # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications - system host-memory-modify -f application -1G 10 $NODE 1 - - done - - #. **For OpenStack only:** Setup disk partition for nova-local volume group, - needed for |prefix|-openstack nova ephemeral disks. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-lvg-add ${NODE} nova-local - - # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group - # CEPH OSD Disks can NOT be used - # For best performance, do NOT use system/root disk, use a separate physical disk. 
- - # List host’s disks and take note of UUID of disk to be used - system host-disk-list ${NODE} - # ( if using ROOT DISK, select disk with device_path of - # ‘system host-show ${NODE} | fgrep rootfs’ ) - - # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response - # The size of the PARTITION needs to be large enough to hold the aggregate size of - # all nova ephemeral disks of all VMs that you want to be able to host on this host, - # but is limited by the size and space available on the physical disk you chose above. - # The following example uses a small PARTITION size such that you can fit it on the - # root disk, if that is what you chose above. - # Additional PARTITION(s) from additional disks can be added later if required. - PARTITION_SIZE=30 - - system host-disk-partition-add -t lvm_phys_vol ${NODE} ${PARTITION_SIZE} - - # Add new partition to ‘nova-local’ local volume group - system host-pv-add ${NODE} nova-local - sleep 2 - done - - #. **For OpenStack only:** Configure data interfaces for worker nodes. - Data class interfaces are vswitch interfaces used by vswitch to provide - |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the - underlying assigned Data Network. - - .. important:: - - A compute-labeled worker host **MUST** have at least one Data class interface. - - * Configure the data interfaces for worker nodes. - - .. code-block:: bash - - # Execute the following lines with - export NODE=worker-0 - # and then repeat with - export NODE=worker-1 - - # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data# - system host-if-modify -m 1500 -n data0 -c data ${NODE} - system host-if-modify -m 1500 -n data1 -c data ${NODE} - - # Create Data Networks that vswitch 'data' interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to Data Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - -***************************************** -Optionally Configure PCI-SRIOV Interfaces -***************************************** - -#. **Optionally**, configure pci-sriov interfaces for worker nodes. - - This step is **optional** for Kubernetes. Do this step if using |SRIOV| - network attachments in hosted application containers. - - .. only:: openstack - - This step is **optional** for OpenStack. Do this step if using |SRIOV| - vNICs in hosted application VMs. Note that pci-sriov interfaces can - have the same Data Networks assigned to them as vswitch data interfaces. - - - * Configure the pci-sriov interfaces for worker nodes. - - .. code-block:: bash - - # Execute the following lines with - export NODE=worker-0 - # and then repeat with - export NODE=worker-1 - - # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces, - # based on displayed linux port name, pci address and device type. 
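            # Hypothetical example of what to look for in the port listing below: an
            # SR-IOV capable NIC port, e.g. one named enp24s0f0 at PCI address
            # 0000:18:00.0; the auto-created 'ethernet' interface corresponding to
            # that port is the one reclassified as 'pci-sriov' further down. The
            # port name and PCI address shown here are examples only.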
- system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov# - system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N - system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N - - # If not already created, create Data Networks that the 'pci-sriov' - # interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to PCI-SRIOV Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - - * **For Kubernetes only** To enable using |SRIOV| network attachments for - the above interfaces in Kubernetes hosted application containers: - - * Configure the Kubernetes |SRIOV| device plugin. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE sriovdp=enabled - done - - * If planning on running |DPDK| in Kubernetes hosted application - containers on this host, configure the number of 1G Huge pages required - on both |NUMA| nodes. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications - system host-memory-modify -f application $NODE 0 -1G 10 - - # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications - system host-memory-modify -f application $NODE 1 -1G 10 - - done - - -------------------- -Unlock worker nodes -------------------- - -Unlock worker nodes in order to bring them into service: - -.. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-unlock $NODE - done - -The worker nodes will reboot to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_hardware.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_hardware.rst deleted file mode 100644 index 009a995bd..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_hardware.rst +++ /dev/null @@ -1,69 +0,0 @@ -===================== -Hardware Requirements -===================== - -This section describes the hardware requirements and server preparation for a -**StarlingX R6.0 bare metal All-in-one Duplex** deployment configuration. - -.. 
contents:: - :local: - :depth: 1 - ------------------------------ -Minimum hardware requirements ------------------------------ - -The recommended minimum hardware requirements for bare metal servers for various -host types are: - -+-------------------------+-----------------------------------------------------------+ -| Minimum Requirement | All-in-one Controller Node | -+=========================+===========================================================+ -| Number of servers | 2 | -+-------------------------+-----------------------------------------------------------+ -| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | -| | 8 cores/socket | -| | | -| | or | -| | | -| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | -| | (low-power/low-cost option) | -+-------------------------+-----------------------------------------------------------+ -| Minimum memory | 64 GB | -+-------------------------+-----------------------------------------------------------+ -| Primary disk | 500 GB SSD or NVMe (see :ref:`nvme_config`) | -+-------------------------+-----------------------------------------------------------+ -| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD | -| | - Recommended, but not required: 1 or more SSDs or NVMe | -| | drives for Ceph journals (min. 1024 MiB per OSD journal)| -| | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)| -| | for VM local ephemeral storage | -+-------------------------+-----------------------------------------------------------+ -| Minimum network ports | - Mgmt/Cluster: 1x10GE | -| | - OAM: 1x1GE | -| | - Data: 1 or more x 10GE | -+-------------------------+-----------------------------------------------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+-------------------------+-----------------------------------------------------------+ - --------------------------- -Prepare bare metal servers --------------------------- - -.. include:: prep_servers.txt - -* Cabled for networking - - * Far-end switch ports should be properly configured to realize the networking - shown in the following diagram. - - .. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-duplex.png - :scale: 50% - :alt: All-in-one Duplex deployment configuration - - *All-in-one Duplex deployment configuration* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_install_kubernetes.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_install_kubernetes.rst deleted file mode 100644 index 31c922226..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_duplex_install_kubernetes.rst +++ /dev/null @@ -1,1182 +0,0 @@ - -.. Greg updates required for -High Security Vulnerability Document Updates - -.. _aio_duplex_install_kubernetes_r6: - -================================================ -Install Kubernetes Platform on All-in-one Duplex -================================================ - -.. only:: partner - - .. include:: /_includes/install-kubernetes-null-labels.rest - -.. only:: starlingx - - This section describes the steps to install the StarlingX Kubernetes - platform on a **StarlingX R6.0 All-in-one Duplex** deployment - configuration. - - .. 
contents:: - :local: - :depth: 1 - - --------------------- - Create a bootable USB - --------------------- - - Refer to :ref:`Bootable USB ` for instructions on how - to create a bootable USB with the StarlingX ISO on your system. - - -------------------------------- - Install software on controller-0 - -------------------------------- - - .. include:: /shared/_includes/inc-install-software-on-controller.rest - :start-after: incl-install-software-controller-0-aio-start - :end-before: incl-install-software-controller-0-aio-end - --------------------------------- -Bootstrap system on controller-0 --------------------------------- - -#. Login using the username / password of "sysadmin" / "sysadmin". - When logging in for the first time, you will be forced to change the - password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. Verify and/or configure IP connectivity. - - External connectivity is required to run the Ansible bootstrap playbook. The - StarlingX boot image will |DHCP| out all interfaces so the server may have - obtained an IP address and have external IP connectivity if a |DHCP| server - is present in your environment. Verify this using the :command:`ip addr` and - :command:`ping 8.8.8.8` commands. - - Otherwise, manually configure an IP address and default IP route. Use the - PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your - deployment environment. - - :: - - sudo ip address add / dev - sudo ip link set up dev - sudo ip route add default via dev - ping 8.8.8.8 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for - Ansible configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml`` - The default configuration values for the bootstrap playbook. - - ``sysadmin home directory ($HOME)`` - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: - ``$HOME/.yml``. - - .. only:: starlingx - - .. include:: /shared/_includes/ansible_install_time_only.txt - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - .. note:: - - This Ansible Overrides file for the Bootstrap Playbook ($HOME/localhost.yml) - contains security sensitive information, use the - :command:`ansible-vault create $HOME/localhost.yml` command to create it. - You will be prompted for a password to protect/encrypt the file. - Use the :command:`ansible-vault edit $HOME/localhost.yml` command if the - file needs to be edited after it is created. - - #. Use a copy of the default.yml file listed above to provide your overrides. - - The default.yml file lists all available parameters for bootstrap - configuration with a brief description for each parameter in the file - comments. - - To use this method, run the :command:`ansible-vault create $HOME/localhost.yml` - command and copy the contents of the ``default.yml`` file into the - ansible-vault editor, and edit the configurable values as required. - - #. Create a minimal user configuration override file. 
- - To use this method, create your override file with - the :command:`ansible-vault create $HOME/localhost.yml` - command and provide the minimum required parameters for the deployment - configuration as shown in the example below. Use the OAM IP SUBNET and IP - ADDRESSing applicable to your deployment environment. - - .. include:: /_includes/min-bootstrap-overrides-non-simplex.rest - - - .. only:: starlingx - - In either of the above options, the bootstrap playbook’s default values - will pull all container images required for the |prod-p| from Docker hub. - - If you have setup a private Docker registry to use for bootstrapping - then you will need to add the following lines in $HOME/localhost.yml: - - .. only:: partner - - .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest - :start-after: docker-reg-begin - :end-before: docker-reg-end - - .. code-block:: yaml - - docker_registries: - quay.io: - url: myprivateregistry.abc.com:9001/quay.io - docker.elastic.co: - url: myprivateregistry.abc.com:9001/docker.elastic.co - gcr.io: - url: myprivateregistry.abc.com:9001/gcr.io - ghcr.io: - url: myprivateregistry.abc.com:9001/ghcr.io - k8s.gcr.io: - url: myprivateregistry.abc.com:9001/k8s.gcr.io - docker.io: - url: myprivateregistry.abc.com:9001/docker.io - defaults: - type: docker - username: - password: - - # Add the CA Certificate that signed myprivateregistry.abc.com’s - # certificate as a Trusted CA - ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem - - See :ref:`Use a Private Docker Registry ` - for more information. - - .. only:: starlingx - - If a firewall is blocking access to Docker hub or your private - registry from your StarlingX deployment, you will need to add the - following lines in $HOME/localhost.yml (see :ref:`Docker Proxy - Configuration ` for more details about Docker - proxy settings): - - .. only:: partner - - .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest - :start-after: firewall-begin - :end-before: firewall-end - - .. code-block:: bash - - # Add these lines to configure Docker to use a proxy server - docker_http_proxy: http://my.proxy.com:1080 - docker_https_proxy: https://my.proxy.com:1443 - docker_no_proxy: - - 1.2.3.4 - - - Refer to :ref:`Ansible Bootstrap Configurations ` - for information on additional Ansible bootstrap configurations for advanced - Ansible bootstrap scenarios. - -#. Run the Ansible bootstrap playbook: - - .. include:: /shared/_includes/ntp-update-note.rest - - :: - - ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. This can take 5-10 minutes, - depending on the performance of the host machine. - ----------------------- -Configure controller-0 ----------------------- - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the |OAM| interface of controller-0 and specify the - attached network as "oam". - - The following example configures the |OAM| interface on a physical untagged - ethernet port. Use the |OAM| port name that is applicable to your - deployment environment, for example eth0: - - .. code-block:: bash - - OAM_IF= - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. Configure the MGMT interface of controller-0 and specify the attached - networks of both "mgmt" and "cluster-host". 
- - The following example configures the MGMT interface on a physical untagged - ethernet port. Use the MGMT port name that is applicable to your deployment - environment, for example eth1: - - .. code-block:: bash - - MGMT_IF= - system host-if-modify controller-0 lo -c none - IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') - for UUID in $IFNET_UUIDS; do - system interface-network-remove ${UUID} - done - system host-if-modify controller-0 $MGMT_IF -c platform - system interface-network-assign controller-0 $MGMT_IF mgmt - system interface-network-assign controller-0 $MGMT_IF cluster-host - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. Configure |NTP| servers for network time synchronization: - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - - To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration - `. - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - These steps are required only if the StarlingX OpenStack application - (|prefix|-openstack) will be installed. - - #. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. only:: starlingx - - .. parsed-literal:: - - system host-label-assign controller-0 openstack-control-plane=enabled - system host-label-assign controller-0 openstack-compute-node=enabled - system host-label-assign controller-0 |vswitch-label| - - .. note:: - - If you have a |NIC| that supports |SRIOV|, then you can enable it by - using the following: - - .. code-block:: none - - system host-label-assign controller-0 sriov=enabled - - .. only:: partner - - .. include:: /_includes/aio_duplex_install_kubernetes.rest - :start-after: ref1-begin - :end-before: ref1-end - - #. **For OpenStack only:** Due to the additional OpenStack services running - on the |AIO| controller platform cores, a minimum of 4 platform cores are - required, 6 platform cores are recommended. - - Increase the number of platform cores with the following commands: - - .. code-block:: - - # assign 6 cores on processor/numa-node 0 on controller-0 to platform - system host-cpu-modify -f platform -p0 6 controller-0 - - #. Due to the additional OpenStack services' containers running on the - controller host, the size of the Docker filesystem needs to be - increased from the default size of 30G to 60G. - - .. code-block:: bash - - # check existing size of docker fs - system host-fs-list controller-0 - # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located - system host-lvg-list controller-0 - # if existing docker fs size + cgts-vg available space is less than - # 80G, you will need to add a new disk partition to cgts-vg. - # There must be at least 20GB of available space after the docker - # filesystem is increased. - - # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. 
- # ( if not use another unused disk ) - - # Get device path of ROOT DISK - system host-show controller-0 --nowrap | fgrep rootfs - - # Get UUID of ROOT DISK by listing disks - system host-disk-list controller-0 - - # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response - # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G - PARTITION_SIZE=30 - system host-disk-partition-add -t lvm_phys_vol ${NODE} ${PARTITION_SIZE} - - # Add new partition to 'cgts-vg' local volume group - system host-pv-add controller-0 cgts-vg - sleep 2 # wait for partition to be added - - # Increase docker filesystem to 60G - system host-fs-modify controller-0 docker=60 - - - #. **For OpenStack only:** Configure the system setting for the vSwitch. - - .. only:: starlingx - - StarlingX has |OVS| (kernel-based) vSwitch configured as default: - - * Runs in a container; defined within the helm charts of |prefix|-openstack - manifest. - * Shares the core(s) assigned to the platform. - - If you require better performance, |OVS-DPDK| (|OVS| with the Data - Plane Development Kit, which is supported only on bare metal hardware) - should be used: - - * Runs directly on the host (it is not containerized). - Requires that at least 1 core be assigned/dedicated to the vSwitch - function. - - To deploy the default containerized |OVS|: - - :: - - system modify --vswitch_type none - - This does not run any vSwitch directly on the host, instead, it uses - the containerized |OVS| defined in the helm charts of - |prefix|-openstack manifest. - - To deploy |OVS-DPDK|, run the following command: - - .. parsed-literal:: - - system modify --vswitch_type |ovs-dpdk| - - Default recommendation for an |AIO|-controller is to use a single - core for |OVS-DPDK| vSwitch. - - .. code-block:: bash - - # assign 1 core on processor/numa-node 0 on controller-0 to vswitch - system host-cpu-modify -f vswitch -p0 1 controller-0 - - Once vswitch_type is set to |OVS-DPDK|, any subsequent nodes created will - default to automatically assigning 1 vSwitch core for |AIO| controllers - and 2 vSwitch cores (both on numa-node 0; physical NICs are typically on - first numa-node) for compute-labeled worker nodes. - - When using |OVS-DPDK|, configure 1G huge page for vSwitch memory on each - |NUMA| node on the host. It is recommended to configure 1x 1G huge page - (-1G 1) for vSwitch memory on each |NUMA| node on the host. - - However, due to a limitation with Kubernetes, only a single huge page - size is supported on any one host. If your application |VMs| require 2M - huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch - memory on each |NUMA| node on the host. - - - .. code-block:: - - # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch - system host-memory-modify -f vswitch -1G 1 controller-0 0 - - # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch - system host-memory-modify -f vswitch -1G 1 controller-0 1 - - - - .. important:: - - |VMs| created in an |OVS-DPDK| environment must be configured to use - huge pages to enable networking and must use a flavor with property: - ``hw:mem_page_size=large`` - - Configure the huge pages for |VMs| in an |OVS-DPDK| environment on - this host, the following commands are an example that assumes that 1G - huge page size is being used on this host: - - .. 
code-block:: bash - - - # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications - system host-memory-modify -f application -1G 10 controller-0 0 - - # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications - system host-memory-modify -f application -1G 10 controller-0 1 - - - .. note:: - - After controller-0 is unlocked, changing vswitch_type requires - locking and unlocking controller-0 to apply the change. - - - #. **For OpenStack only:** Set up disk partition for nova-local volume - group, which is needed for |prefix|-openstack nova ephemeral disks. - - .. code-block:: bash - - export NODE=controller-0 - - # Create ‘nova-local’ local volume group - system host-lvg-add ${NODE} nova-local - - # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group - # CEPH OSD Disks can NOT be used - # For best performance, do NOT use system/root disk, use a separate physical disk. - - # List host’s disks and take note of UUID of disk to be used - system host-disk-list ${NODE} - # ( if using ROOT DISK, select disk with device_path of - # ‘system host-show ${NODE} | grep rootfs’ ) - - # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response - # The size of the PARTITION needs to be large enough to hold the aggregate size of - # all nova ephemeral disks of all VMs that you want to be able to host on this host, - # but is limited by the size and space available on the physical disk you chose above. - # The following example uses a small PARTITION size such that you can fit it on the - # root disk, if that is what you chose above. - # Additional PARTITION(s) from additional disks can be added later if required. - PARTITION_SIZE=30 - - system host-disk-partition-add -t lvm_phys_vol ${NODE} ${PARTITION_SIZE} - - # Add new partition to ‘nova-local’ local volume group - system host-pv-add ${NODE} nova-local - sleep 2 - - #. **For OpenStack only:** Configure data interfaces for controller-0. - Data class interfaces are vswitch interfaces used by vswitch to provide - |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the - underlying assigned Data Network. - - .. important:: - - A compute-labeled All-in-one controller host **MUST** have at least - one Data class interface. - - * Configure the data interfaces for controller-0. - - .. code-block:: bash - - export NODE=controller-0 - - # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data# - system host-if-modify -m 1500 -n data0 -c data ${NODE} - system host-if-modify -m 1500 -n data1 -c data ${NODE} - - # Create Data Networks that vswitch 'data' interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - - # Assign Data Networks to Data Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - -***************************************** -Optionally Configure PCI-SRIOV Interfaces -***************************************** - -#. **Optionally**, configure |PCI|-|SRIOV| interfaces for controller-0. 
- - This step is **optional** for Kubernetes. Do this step if using |SRIOV| - network attachments in hosted application containers. - - .. only:: openstack - - This step is **optional** for OpenStack. Do this step if using |SRIOV| - vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can - have the same Data Networks assigned to them as vswitch data interfaces. - - - * Configure the pci-sriov interfaces for controller-0. - - .. code-block:: bash - - export NODE=controller-0 - - # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov# - system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N - system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N - - # If not already created, create Data Networks that the 'pci-sriov' - # interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to PCI-SRIOV Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - - * **For Kubernetes Only:** To enable using |SRIOV| network attachments for - the above interfaces in Kubernetes hosted application containers: - - * Configure the Kubernetes |SRIOV| device plugin. - - :: - - system host-label-assign controller-0 sriovdp=enabled - - * If planning on running |DPDK| in Kubernetes hosted application - containers on this host, configure the number of 1G Huge pages required - on both |NUMA| nodes. - - .. code-block:: bash - - # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications - system host-memory-modify -f application controller-0 0 -1G 10 - - # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications - system host-memory-modify -f application controller-0 1 -1G 10 - -*************************************************************** -If required, initialize a Ceph-based Persistent Storage Backend -*************************************************************** - -A persistent storage backend is required if your application requires |PVCs|. - -.. only:: openstack - - .. important:: - - The StarlingX OpenStack application **requires** |PVCs|. - -.. only:: starlingx - - There are two options for persistent storage backend: the host-based Ceph - solution and the Rook container-based Ceph solution. - -For host-based Ceph: - -#. Initialize with add ceph backend: - - :: - - system storage-backend-add ceph --confirmed - -#. Add an |OSD| on controller-0 for host-based Ceph: - - .. code-block:: bash - - # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list controller-0 - - # Add disk as an OSD storage - system host-stor-add controller-0 osd - - # List OSD storage devices - system host-stor-list controller-0 - -.. only:: starlingx - - For Rook container-based Ceph: - - #. 
Initialize with add ceph-rook backend: - - :: - - system storage-backend-add ceph-rook --confirmed - - #. Assign Rook host labels to controller-0 in support of installing the - rook-ceph-apps manifest/helm-charts later: - - :: - - system host-label-assign controller-0 ceph-mon-placement=enabled - system host-label-assign controller-0 ceph-mgr-placement=enabled - - -------------------- -Unlock controller-0 -------------------- - -.. include:: aio_simplex_install_kubernetes.rst - :start-after: incl-unlock-controller-0-aio-simplex-start: - :end-before: incl-unlock-controller-0-aio-simplex-end: - -.. only:: openstack - - * **For OpenStack Only** Due to the additional OpenStack services’ - containers running on the controller host, the size of the Docker - filesystem needs to be increased from the default size of 30G to 60G. - - .. code-block:: bash - - # check existing size of docker fs - system host-fs-list controller-0 - - # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located - system host-lvg-list controller-0 - - # if existing docker fs size + cgts-vg available space is less than - # 80G, you will need to add a new disk partition to cgts-vg. - # There must be at least 20GB of available space after the docker - # filesystem is increased. - - # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. - # ( if not use another unused disk ) - - # Get device path of ROOT DISK - system host-show controller-0 | grep rootfs - - # Get UUID of ROOT DISK by listing disks - system host-disk-list controller-0 - - # Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response - # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G - PARTITION_SIZE=30 - system host-disk-partition-add -t lvm_phys_vol controller-0 ${PARTITION_SIZE} - - # Add new partition to 'cgts-vg' local volume group - system host-pv-add controller-0 cgts-vg - sleep 2 # wait for partition to be added - - # Increase docker filesystem to 60G - system host-fs-modify controller-0 docker=60 - -------------------------------------- -Install software on controller-1 node -------------------------------------- - -#. Power on the controller-1 server and force it to network boot with the - appropriate BIOS boot options for your particular server. - -#. As controller-1 boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered controller-1 - host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 2 personality=controller - -#. Wait for the software installation on controller-1 to complete, for - controller-1 to reboot, and for controller-1 to show as - locked/disabled/online in 'system host-list'. - - This can take 5-10 minutes, depending on the performance of the host machine. 
- - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - ----------------------- -Configure controller-1 ----------------------- - -#. Configure the |OAM| interface of controller-1 and specify the - attached network of "oam". - - The following example configures the |OAM| interface on a physical untagged - ethernet port, use the |OAM| port name that is applicable to your - deployment environment, for example eth0: - - :: - - OAM_IF= - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. The MGMT interface is partially set up by the network install procedure; - configuring the port used for network install as the MGMT port and - specifying the attached network of "mgmt". - - Complete the MGMT interface configuration of controller-1 by specifying the - attached network of "cluster-host". - - :: - - system interface-network-assign controller-1 mgmt0 cluster-host - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - These steps are required only if the |prod-os| application - (|prefix|-openstack) will be installed. - - #. **For OpenStack only:** Assign OpenStack host labels to controller-1 in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. only:: starlingx - - .. parsed-literal:: - - system host-label-assign controller-1 openstack-control-plane=enabled - system host-label-assign controller-1 openstack-compute-node=enabled - system host-label-assign controller-1 |vswitch-label| - - .. note:: - - If you have a |NIC| that supports |SRIOV|, then you can enable it by - using the following: - - .. code-block:: none - - system host-label-assign controller-0 sriov=enabled - - .. only:: partner - - .. include:: /_includes/aio_duplex_install_kubernetes.rest - :start-after: ref2-begin - :end-before: ref2-end - - #. **For OpenStack only:** Due to the additional openstack services running - on the |AIO| controller platform cores, a minimum of 4 platform cores are - required, 6 platform cores are recommended. - - Increase the number of platform cores with the following commands: - - .. code-block:: - - # assign 6 cores on processor/numa-node 0 on controller-1 to platform - system host-cpu-modify -f platform -p0 6 controller-1 - - #. Due to the additional openstack services' containers running on the - controller host, the size of the docker filesystem needs to be - increased from the default size of 30G to 60G. - - .. code-block:: bash - - # check existing size of docker fs - system host-fs-list controller-0 - # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located - system host-lvg-list controller-0 - # if existing docker fs size + cgts-vg available space is less than - # 80G, you will need to add a new disk partition to cgts-vg. - # There must be at least 20GB of available space after the docker - # filesystem is increased. 
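            # Worked example with hypothetical sizes: if 'system host-fs-list' reports
            # docker = 30 GiB and 'system host-lvg-list' reports cgts-vg Avail Size =
            # 25 GiB, then 30 + 25 = 55 GiB, which is below the 80 GiB threshold, so a
            # partition of roughly 25-30 GiB must be added to cgts-vg before the docker
            # filesystem can be grown to 60 GiB while keeping at least 20 GiB free.
            # These sizing commands apply to the host being configured in this section
            # (controller-1); substitute the appropriate hostname when running them.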
- - # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. - # ( if not use another unused disk ) - - # Get device path of ROOT DISK - system host-show controller-0 --nowrap | fgrep rootfs - - # Get UUID of ROOT DISK by listing disks - system host-disk-list controller-0 - - # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response - # Use a partition size such that you'll be able to increase docker fs size from 30G to 60G - PARTITION_SIZE=30 - system host-disk-partition-add -t lvm_phys_vol ${NODE} ${PARTITION_SIZE} - - # Add new partition to 'cgts-vg' local volume group - system host-pv-add controller-0 cgts-vg - sleep 2 # wait for partition to be added - - # Increase docker filesystem to 60G - system host-fs-modify controller-0 docker=60 - - #. **For OpenStack only:** Configure the host settings for the vSwitch. - - If using |OVS-DPDK| vswitch, run the following commands: - - Default recommendation for an |AIO|-controller is to use a single core - for |OVS-DPDK| vSwitch. This should have been automatically configured, - if not run the following command. - - .. code-block:: bash - - # assign 1 core on processor/numa-node 0 on controller-1 to vswitch - system host-cpu-modify -f vswitch -p0 1 controller-1 - - - When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on - each |NUMA| node on the host. It is recommended - to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA| - node on the host. - - However, due to a limitation with Kubernetes, only a single huge page - size is supported on any one host. If your application VMs require 2M - huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch - memory on each |NUMA| node on the host. - - - .. code-block:: bash - - # assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch - system host-memory-modify -f vswitch -1G 1 controller-1 0 - - # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch - system host-memory-modify -f vswitch -1G 1 controller-1 1 - - - .. important:: - - |VMs| created in an |OVS-DPDK| environment must be configured to use - huge pages to enable networking and must use a flavor with property: - hw:mem_page_size=large - - Configure the huge pages for |VMs| in an |OVS-DPDK| environment on - this host, assuming 1G huge page size is being used on this host, with - the following commands: - - .. code-block:: bash - - # assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications - system host-memory-modify -f application -1G 10 controller-1 0 - - # assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications - system host-memory-modify -f application -1G 10 controller-1 1 - - - #. **For OpenStack only:** Set up disk partition for nova-local volume group, - which is needed for |prefix|-openstack nova ephemeral disks. - - .. code-block:: bash - - export NODE=controller-1 - - # Create 'nova-local' local volume group - system host-lvg-add ${NODE} nova-local - - # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group - # CEPH OSD Disks can NOT be used - # For best performance, do NOT use system/root disk, use a separate physical disk. 
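            # Illustrative example (device names are hypothetical): if host-disk-list
            # shows /dev/sda as the root/system disk and /dev/sdb as an unused disk,
            # prefer /dev/sdb for the nova-local partition; disks already assigned as
            # Ceph OSDs must not be selected.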
- - # List host’s disks and take note of UUID of disk to be used - system host-disk-list ${NODE} - # ( if using ROOT DISK, select disk with device_path of - # ‘system host-show ${NODE} | grep rootfs’ ) - - # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response - # The size of the PARTITION needs to be large enough to hold the aggregate size of - # all nova ephemeral disks of all VMs that you want to be able to host on this host, - # but is limited by the size and space available on the physical disk you chose above. - # The following example uses a small PARTITION size such that you can fit it on the - # root disk, if that is what you chose above. - # Additional PARTITION(s) from additional disks can be added later if required. - PARTITION_SIZE=30 - - system host-disk-partition-add -t lvm_phys_vol ${NODE} ${PARTITION_SIZE} - - # Add new partition to 'nova-local' local volume group - system host-pv-add ${NODE} nova-local - sleep 2 - - #. **For OpenStack only:** Configure data interfaces for controller-1. - Data class interfaces are vswitch interfaces used by vswitch to provide - VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the - underlying assigned Data Network. - - .. important:: - - A compute-labeled All-in-one controller host **MUST** have at least one Data class interface. - - * Configure the data interfaces for controller-1. - - .. code-block:: bash - - export NODE=controller-1 - - # List inventoried host's ports and identify ports to be used as 'data' interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as 'data' class interfaces, MTU of 1500 and named data# - system host-if-modify -m 1500 -n data0 -c data ${NODE} - system host-if-modify -m 1500 -n data1 -c data ${NODE} - - # Create Data Networks that vswitch 'data' interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - - # Assign Data Networks to Data Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - -***************************************** -Optionally Configure PCI-SRIOV Interfaces -***************************************** - -#. **Optionally**, configure |PCI|-|SRIOV| interfaces for controller-1. - - This step is **optional** for Kubernetes. Do this step if using |SRIOV| - network attachments in hosted application containers. - - .. only:: openstack - - This step is **optional** for OpenStack. Do this step if using |SRIOV| - vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can - have the same Data Networks assigned to them as vswitch data interfaces. - - - * Configure the |PCI|-|SRIOV| interfaces for controller-1. - - .. code-block:: bash - - export NODE=controller-1 - - # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces, - # based on displayed linux port name, pci address and device type. 
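            # A numeric value follows -N in the host-if-modify commands further below;
            # it sets how many SR-IOV virtual functions (VFs) are configured on the
            # interface. Choose a VF count supported by the NIC, and confirm the exact
            # option syntax with the CLI help on your release.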
- system host-port-list ${NODE} - - # List host’s auto-configured 'ethernet' interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as 'pci-sriov' class interfaces, MTU of 1500 and named sriov# - system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N - system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N - - # If not already created, create Data Networks that the 'pci-sriov' interfaces - # will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - - # Assign Data Networks to PCI-SRIOV Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - - * **For Kubernetes only:** To enable using |SRIOV| network attachments for - the above interfaces in Kubernetes hosted application containers: - - * Configure the Kubernetes |SRIOV| device plugin. - - .. code-block:: bash - - system host-label-assign controller-1 sriovdp=enabled - - * If planning on running |DPDK| in Kubernetes hosted application - containers on this host, configure the number of 1G Huge pages required - on both |NUMA| nodes. - - .. code-block:: bash - - # assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications - system host-memory-modify -f application controller-1 0 -1G 10 - - # assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications - system host-memory-modify -f application controller-1 1 -1G 10 - - -*************************************************************************************** -If configuring a Ceph-based Persistent Storage Backend, configure host-specific details -*************************************************************************************** - -For host-based Ceph: - -#. Add an |OSD| on controller-1 for host-based Ceph: - - .. code-block:: bash - - # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list controller-1 - - # Add disk as an OSD storage - system host-stor-add controller-1 osd - - # List OSD storage devices - system host-stor-list controller-1 - - .. only:: starlingx - - For Rook container-based Ceph: - - #. Assign Rook host labels to controller-1 in support of installing the - rook-ceph-apps manifest/helm-charts later: - - .. code-block:: bash - - system host-label-assign controller-1 ceph-mon-placement=enabled - system host-label-assign controller-1 ceph-mgr-placement=enabled - - -------------------- -Unlock controller-1 -------------------- - -Unlock controller-1 in order to bring it into service: - -.. code-block:: bash - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -.. only:: starlingx - - ----------------------------------------------------------------------------------------------- - If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend - ----------------------------------------------------------------------------------------------- - - For Rook container-based Ceph: - - On active controller: - - #. 
Wait for the ``rook-ceph-apps`` application to be uploaded - - :: - - $ source /etc/platform/openrc - $ system application-list - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - | application | version | manifest name | manifest file | status | progress | - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - | oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed | - | platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed | - | rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed | - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - - #. Configure Rook to use /dev/sdb on controller-0 and controller-1 as a ceph - |OSD|. - - .. code-block:: bash - - $ system host-disk-wipe -s --confirm controller-0 /dev/sdb - $ system host-disk-wipe -s --confirm controller-1 /dev/sdb - - values.yaml for rook-ceph-apps. - - .. code-block:: yaml - - cluster: - storage: - nodes: - - name: controller-0 - devices: - - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 - - name: controller-1 - devices: - - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 - - :: - - system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml - - #. Apply the rook-ceph-apps application. - - :: - - system application-apply rook-ceph-apps - - #. Wait for |OSDs| pod to be ready. - - :: - - kubectl get pods -n kube-system - rook-ceph-crashcollector-controller-0-f984688ff-jsr8t 1/1 Running 0 4m9s - rook-ceph-crashcollector-controller-1-7f9b6f55b6-699bb 1/1 Running 0 2m5s - rook-ceph-mgr-a-7f9d588c5b-49cbg 1/1 Running 0 3m5s - rook-ceph-mon-a-75bcbd8664-pvq99 1/1 Running 0 4m27s - rook-ceph-mon-b-86c67658b4-f4snf 1/1 Running 0 4m10s - rook-ceph-mon-c-7f48b58dfb-4nx2n 1/1 Running 0 3m30s - rook-ceph-operator-77b64588c5-bhfg7 1/1 Running 0 7m6s - rook-ceph-osd-0-6949657cf7-dkfp2 1/1 Running 0 2m6s - rook-ceph-osd-1-5d4b58cf69-kdg82 1/1 Running 0 2m4s - rook-ceph-osd-prepare-controller-0-wcvsn 0/1 Completed 0 2m27s - rook-ceph-osd-prepare-controller-1-98h76 0/1 Completed 0 2m26s - rook-ceph-tools-5778d7f6c-2h8s8 1/1 Running 0 5m55s - rook-discover-xc22t 1/1 Running 0 6m2s - rook-discover-xndld 1/1 Running 0 6m2s - storage-init-rook-ceph-provisioner-t868q 0/1 Completed 0 108s - - -.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest - -.. only:: starlingx - - ---------- - Next steps - ---------- - - .. include:: /_includes/kubernetes_install_next.txt - - -.. only:: partner - - .. include:: /_includes/72hr-to-license.rest diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex.rst deleted file mode 100644 index ee4cedda3..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex.rst +++ /dev/null @@ -1,21 +0,0 @@ -=============================================== -Bare metal All-in-one Simplex Installation R6.0 -=============================================== - --------- -Overview --------- - -.. include:: /shared/_includes/desc_aio_simplex.txt - -.. include:: /shared/_includes/ipv6_note.txt - ------------- -Installation ------------- - -.. 
toctree:: - :maxdepth: 1 - - aio_simplex_hardware - aio_simplex_install_kubernetes diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_hardware.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_hardware.rst deleted file mode 100644 index 6ed3f6511..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_hardware.rst +++ /dev/null @@ -1,71 +0,0 @@ -.. _aio_simplex_hardware_r6: - -===================== -Hardware Requirements -===================== - -This section describes the hardware requirements and server preparation for a -**StarlingX R6.0 bare metal All-in-one Simplex** deployment configuration. - -.. contents:: - :local: - :depth: 1 - ------------------------------ -Minimum hardware requirements ------------------------------ - -The recommended minimum hardware requirements for bare metal servers for various -host types are: - -+-------------------------+-----------------------------------------------------------+ -| Minimum Requirement | All-in-one Controller Node | -+=========================+===========================================================+ -| Number of servers | 1 | -+-------------------------+-----------------------------------------------------------+ -| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | -| | 8 cores/socket | -| | | -| | or | -| | | -| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | -| | (low-power/low-cost option) | -+-------------------------+-----------------------------------------------------------+ -| Minimum memory | 64 GB | -+-------------------------+-----------------------------------------------------------+ -| Primary disk | 500 GB SSD or NVMe (see :ref:`nvme_config`) | -+-------------------------+-----------------------------------------------------------+ -| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD | -| | - Recommended, but not required: 1 or more SSDs or NVMe | -| | drives for Ceph journals (min. 1024 MiB per OSD | -| | journal) | -| | - For OpenStack, recommend 1 or more 500 GB (min. 10K | -| | RPM) for VM local ephemeral storage | -+-------------------------+-----------------------------------------------------------+ -| Minimum network ports | - OAM: 1x1GE | -| | - Data: 1 or more x 10GE | -+-------------------------+-----------------------------------------------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+-------------------------+-----------------------------------------------------------+ - --------------------------- -Prepare bare metal servers --------------------------- - -.. include:: prep_servers.txt - -* Cabled for networking - - * Far-end switch ports should be properly configured to realize the networking - shown in the following diagram. - - .. 
figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-simplex.png - :scale: 50% - :alt: All-in-one Simplex deployment configuration - - *All-in-one Simplex deployment configuration* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_install_kubernetes.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_install_kubernetes.rst deleted file mode 100644 index 306479703..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/aio_simplex_install_kubernetes.rst +++ /dev/null @@ -1,718 +0,0 @@ - -.. Greg updates required for High Security Vulnerability Document Updates - -.. _aio_simplex_install_kubernetes_r6: - - -================================================= -Install Kubernetes Platform on All-in-one Simplex -================================================= - -.. only:: partner - - .. include:: /_includes/install-kubernetes-null-labels.rest - -.. only:: starlingx - - This section describes the steps to install the StarlingX Kubernetes - platform on a **StarlingX R6.0 All-in-one Simplex** deployment - configuration. - - .. contents:: - :local: - :depth: 1 - - --------------------- - Create a bootable USB - --------------------- - - Refer to :ref:`Bootable USB ` for instructions on how - to create a bootable USB with the StarlingX ISO on your system. - - -------------------------------- - Install software on controller-0 - -------------------------------- - - .. include:: /shared/_includes/inc-install-software-on-controller.rest - :start-after: incl-install-software-controller-0-aio-start - :end-before: incl-install-software-controller-0-aio-end - --------------------------------- -Bootstrap system on controller-0 --------------------------------- - -#. Login using the username / password of "sysadmin" / "sysadmin". - When logging in for the first time, you will be forced to change the - password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. Verify and/or configure IP connectivity. - - External connectivity is required to run the Ansible bootstrap playbook. The - StarlingX boot image will |DHCP| out all interfaces so the server may have - obtained an IP address and have external IP connectivity if a |DHCP| server - is present in your environment. Verify this using the :command:`ip addr` and - :command:`ping 8.8.8.8` commands. - - Otherwise, manually configure an IP address and default IP route. Use the - PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your - deployment environment. - - :: - - sudo ip address add / dev - sudo ip link set up dev - sudo ip route add default via dev - ping 8.8.8.8 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for - Ansible configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml`` - The default configuration values for the bootstrap playbook. - - ``sysadmin home directory ($HOME)`` - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: - ``$HOME/.yml``. - - .. only:: starlingx - - .. 
include:: /shared/_includes/ansible_install_time_only.txt - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - .. note:: - - This Ansible Overrides file for the Bootstrap Playbook ($HOME/localhost.yml) - contains security sensitive information, use the - :command:`ansible-vault create $HOME/localhost.yml` command to create it. - You will be prompted for a password to protect/encrypt the file. - Use the :command:`ansible-vault edit $HOME/localhost.yml` command if the - file needs to be edited after it is created. - - #. Use a copy of the default.yml file listed above to provide your overrides. - - The default.yml file lists all available parameters for bootstrap - configuration with a brief description for each parameter in the file - comments. - - To use this method, run the :command:`ansible-vault create $HOME/localhost.yml` - command and copy the contents of the ``default.yml`` file into the - ansible-vault editor, and edit the configurable values as required. - - #. Create a minimal user configuration override file. - - To use this method, create your override file with - the :command:`ansible-vault create $HOME/localhost.yml` - command and provide the minimum required parameters for the deployment - configuration as shown in the example below. Use the OAM IP SUBNET and IP - ADDRESSing applicable to your deployment environment. - - .. include:: /_includes/min-bootstrap-overrides-simplex.rest - - .. only:: starlingx - - In either of the above options, the bootstrap playbook’s default - values will pull all container images required for the |prod-p| from - Docker hub - - If you have setup a private Docker registry to use for bootstrapping - then you will need to add the following lines in $HOME/localhost.yml: - - .. only:: partner - - .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest - :start-after: docker-reg-begin - :end-before: docker-reg-end - - .. code-block:: - - docker_registries: - quay.io: - url: myprivateregistry.abc.com:9001/quay.io - docker.elastic.co: - url: myprivateregistry.abc.com:9001/docker.elastic.co - gcr.io: - url: myprivateregistry.abc.com:9001/gcr.io - ghcr.io: - url: myprivateregistry.abc.com:9001/ghcr.io - k8s.gcr.io: - url: myprivateregistry.abc.com:9001/k8s.gcr.io - docker.io: - url: myprivateregistry.abc.com:9001/docker.io - defaults: - type: docker - username: - password: - - # Add the CA Certificate that signed myprivateregistry.abc.com’s - # certificate as a Trusted CA - ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem - - See :ref:`Use a Private Docker Registry ` - for more information. - - - .. only:: starlingx - - If a firewall is blocking access to Docker hub or your private - registry from your StarlingX deployment, you will need to add the - following lines in $HOME/localhost.yml (see :ref:`Docker Proxy - Configuration ` for more details about Docker - proxy settings): - - .. only:: partner - - .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest - :start-after: firewall-begin - :end-before: firewall-end - - .. code-block:: - - # Add these lines to configure Docker to use a proxy server - docker_http_proxy: http://my.proxy.com:1080 - docker_https_proxy: https://my.proxy.com:1443 - docker_no_proxy: - - 1.2.3.4 - - - Refer to :ref:`Ansible Bootstrap Configurations ` - for information on additional Ansible bootstrap configurations for advanced - Ansible bootstrap scenarios. - -#. Run the Ansible bootstrap playbook: - - .. 
include:: /shared/_includes/ntp-update-note.rest - - :: - - ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. This can take 5-10 minutes, - depending on the performance of the host machine. - ----------------------- -Configure controller-0 ----------------------- - -The newly installed controller needs to be configured. - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the |OAM| interface of controller-0 and specify the attached - network as "oam". The following example configures the OAM interface on a - physical untagged ethernet port, use |OAM| port name that is applicable to - your deployment environment, for example eth0: - - :: - - OAM_IF= - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. Configure |NTP| servers for network time synchronization: - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - - To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration - `. - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. incl-config-controller-0-openstack-specific-aio-simplex-start: - - .. important:: - - These steps are required only if the StarlingX OpenStack application - (|prefix|-openstack) will be installed. - - #. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. only:: starlingx - - .. parsed-literal:: - - system host-label-assign controller-0 openstack-control-plane=enabled - system host-label-assign controller-0 openstack-compute-node=enabled - system host-label-assign controller-0 |vswitch-label| - - .. note:: - - If you have a |NIC| that supports |SRIOV|, then you can enable it by - using the following: - - .. code-block:: none - - system host-label-assign controller-0 sriov=enabled - - .. only:: partner - - .. include:: /_includes/aio_simplex_install_kubernetes.rest - :start-after: ref1-begin - :end-before: ref1-end - - #. **For OpenStack only:** Due to the additional OpenStack services running - on the |AIO| controller platform cores, a minimum of 4 platform cores are - required, 6 platform cores are recommended. - - Increase the number of platform cores with the following commands: - - .. code-block:: - - # Assign 6 cores on processor/numa-node 0 on controller-0 to platform - system host-cpu-modify -f platform -p0 6 controller-0 - - #. Due to the additional OpenStack services' containers running on the - controller host, the size of the Docker filesystem needs to be - increased from the default size of 30G to 60G. - - .. code-block:: bash - - # check existing size of docker fs - system host-fs-list controller-0 - # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located - system host-lvg-list controller-0 - # if existing docker fs size + cgts-vg available space is less than - # 80G, you will need to add a new disk partition to cgts-vg. - # There must be at least 20GB of available space after the docker - # filesystem is increased. - - # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. 
- # ( if not use another unused disk ) - - # Get device path of ROOT DISK - system host-show controller-0 | fgrep rootfs - - # Get UUID of ROOT DISK by listing disks - system host-disk-list controller-0 - - # Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response - # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G - PARTITION_SIZE=30 - system host-disk-partition-add -t lvm_phys_vol controller-0 ${PARTITION_SIZE} - - # Add new partition to ‘cgts-vg’ local volume group - system host-pv-add controller-0 cgts-vg - sleep 2 # wait for partition to be added - - # Increase docker filesystem to 60G - system host-fs-modify controller-0 docker=60 - - #. **For OpenStack only:** Configure the system setting for the vSwitch. - - .. only:: starlingx - - StarlingX has |OVS| (kernel-based) vSwitch configured as default: - - * Runs in a container; defined within the helm charts of |prefix|-openstack - manifest. - * Shares the core(s) assigned to the platform. - - If you require better performance, |OVS-DPDK| (|OVS| with the Data - Plane Development Kit, which is supported only on bare metal hardware) - should be used: - - * Runs directly on the host (it is not containerized). - Requires that at least 1 core be assigned/dedicated to the vSwitch - function. - - To deploy the default containerized |OVS|: - - :: - - system modify --vswitch_type none - - This does not run any vSwitch directly on the host, instead, it uses - the containerized |OVS| defined in the helm charts of - |prefix|-openstack manifest. - - To deploy |OVS-DPDK|, run the following command: - - .. parsed-literal:: - - system modify --vswitch_type |ovs-dpdk| - - Default recommendation for an |AIO|-controller is to use a single core - for |OVS-DPDK| vSwitch. - - .. code-block:: bash - - # assign 1 core on processor/numa-node 0 on controller-0 to vswitch - system host-cpu-modify -f vswitch -p0 1 controller-0 - - When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on - each |NUMA| node on the host. It is recommended - to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA| - node on the host. - - However, due to a limitation with Kubernetes, only a single huge page - size is supported on any one host. If your application |VMs| require 2M - huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch - memory on each |NUMA| node on the host. - - - .. code-block:: - - # Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch - system host-memory-modify -f vswitch -1G 1 controller-0 0 - - # Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch - system host-memory-modify -f vswitch -1G 1 controller-0 1 - - .. important:: - - |VMs| created in an |OVS-DPDK| environment must be configured to use - huge pages to enable networking and must use a flavor with property: - hw:mem_page_size=large - - Configure the huge pages for |VMs| in an |OVS-DPDK| environment on - this host, the following commands are an example that assumes that 1G - huge page size is being used on this host: - - .. code-block:: bash - - # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to applications - system host-memory-modify -f application -1G 10 controller-0 0 - - # assign 1x 1G huge page on processor/numa-node 1 on controller-0 to applications - system host-memory-modify -f application -1G 10 controller-0 1 - - .. 
note:: - - After controller-0 is unlocked, changing vswitch_type requires - locking and unlocking controller-0 to apply the change. - - #. **For OpenStack only:** Set up disk partition for nova-local volume - group, which is needed for |prefix|-openstack nova ephemeral disks. - - .. code-block:: bash - - export NODE=controller-0 - - # Create ‘nova-local’ local volume group - system host-lvg-add ${NODE} nova-local - - # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group - # CEPH OSD Disks can NOT be used - # For best performance, do NOT use system/root disk, use a separate physical disk. - - # List host’s disks and take note of UUID of disk to be used - system host-disk-list ${NODE} - # ( if using ROOT DISK, select disk with device_path of - # ‘system host-show ${NODE} | fgrep rootfs’ ) - - # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response - # The size of the PARTITION needs to be large enough to hold the aggregate size of - # all nova ephemeral disks of all VMs that you want to be able to host on this host, - # but is limited by the size and space available on the physical disk you chose above. - # The following example uses a small PARTITION size such that you can fit it on the - # root disk, if that is what you chose above. - # Additional PARTITION(s) from additional disks can be added later if required. - PARTITION_SIZE=30 - - system host-disk-partition-add -t lvm_phys_vol ${NODE} ${PARTITION_SIZE} - - # Add new partition to ‘nova-local’ local volume group - system host-pv-add ${NODE} nova-local - sleep 2 - - - #. **For OpenStack only:** Configure data interfaces for controller-0. - Data class interfaces are vSwitch interfaces used by vSwitch to provide - VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the - underlying assigned Data Network. - - .. important:: - - A compute-labeled |AIO|-controller host **MUST** have at least one - Data class interface. - - * Configure the data interfaces for controller-0. - - .. code-block:: bash - - export NODE=controller-0 - - # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data# - system host-if-modify -m 1500 -n data0 -c data ${NODE} - system host-if-modify -m 1500 -n data1 -c data ${NODE} - - # Create Data Networks that vswitch 'data' interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to Data Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - -***************************************** -Optionally Configure PCI-SRIOV Interfaces -***************************************** - -#. **Optionally**, configure |PCI|-SRIOV interfaces for controller-0. - - This step is **optional** for Kubernetes. Do this step if using |SRIOV| - network attachments in hosted application containers. - - .. only:: openstack - - This step is **optional** for OpenStack. 
Do this step if using |SRIOV| - vNICs in hosted application VMs. Note that |PCI|-SRIOV interfaces can - have the same Data Networks assigned to them as vswitch data interfaces. - - - * Configure the |PCI|-SRIOV interfaces for controller-0. - - .. code-block:: bash - - export NODE=controller-0 - - # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov# - system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N - system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N - - # If not already created, create Data Networks that the 'pci-sriov' interfaces will - # be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to PCI-SRIOV Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - - * **For Kubernetes Only:** To enable using |SRIOV| network attachments for - the above interfaces in Kubernetes hosted application containers: - - * Configure the Kubernetes |SRIOV| device plugin. - - :: - - system host-label-assign controller-0 sriovdp=enabled - - * If planning on running |DPDK| in Kubernetes hosted application - containers on this host, configure the number of 1G Huge pages required - on both |NUMA| nodes. - - .. code-block:: bash - - # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications - system host-memory-modify -f application controller-0 0 -1G 10 - - # assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications - system host-memory-modify -f application controller-0 1 -1G 10 - - -*************************************************************** -If required, initialize a Ceph-based Persistent Storage Backend -*************************************************************** - -A persistent storage backend is required if your application requires -|PVCs|. - -.. only:: openstack - - .. important:: - - The StarlingX OpenStack application **requires** |PVCs|. - -.. only:: starlingx - - There are two options for persistent storage backend: the host-based Ceph - solution and the Rook container-based Ceph solution. - -For host-based Ceph: - -#. Add host-based Ceph backend: - - :: - - system storage-backend-add ceph --confirmed - -#. Add an |OSD| on controller-0 for host-based Ceph: - - .. code-block:: bash - - # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list controller-0 - - # Add disk as an OSD storage - system host-stor-add controller-0 osd - - # List OSD storage devices - system host-stor-list controller-0 - - -.. only:: starlingx - - For Rook container-based Ceph: - - #. Add Rook container-based backend: - - :: - - system storage-backend-add ceph-rook --confirmed - - #. 
Assign Rook host labels to controller-0 in support of installing the - rook-ceph-apps manifest/helm-charts later: - - :: - - system host-label-assign controller-0 ceph-mon-placement=enabled - system host-label-assign controller-0 ceph-mgr-placement=enabled - - - .. incl-config-controller-0-openstack-specific-aio-simplex-end: - - -------------------- -Unlock controller-0 -------------------- - -.. incl-unlock-controller-0-aio-simplex-start: - -Unlock controller-0 to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -.. incl-unlock-controller-0-aio-simplex-end: - -.. only:: starlingx - - ----------------------------------------------------------------------------------------------- - If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend - ----------------------------------------------------------------------------------------------- - - On controller-0: - - #. Wait for application rook-ceph-apps to be uploaded - - :: - - $ source /etc/platform/openrc - $ system application-list - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - | application | version | manifest name | manifest file | status | progress | - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - | oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed | - | platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed | - | rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed | - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - - #. Configure rook to use /dev/sdb disk on controller-0 as a ceph |OSD|. - - :: - - system host-disk-wipe -s --confirm controller-0 /dev/sdb - - values.yaml for rook-ceph-apps. - - .. code-block:: yaml - - cluster: - storage: - nodes: - - name: controller-0 - devices: - - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 - - :: - - system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml - - #. Apply the rook-ceph-apps application. - - :: - - system application-apply rook-ceph-apps - - #. Wait for |OSDs| pod to be ready. - - :: - - kubectl get pods -n kube-system - rook--ceph-crashcollector-controller-0-764c7f9c8-bh5c7 1/1 Running 0 62m - rook--ceph-mgr-a-69df96f57-9l28p 1/1 Running 0 63m - rook--ceph-mon-a-55fff49dcf-ljfnx 1/1 Running 0 63m - rook--ceph-operator-77b64588c5-nlsf2 1/1 Running 0 66m - rook--ceph-osd-0-7d5785889f-4rgmb 1/1 Running 0 62m - rook--ceph-osd-prepare-controller-0-cmwt5 0/1 Completed 0 2m14s - rook--ceph-tools-5778d7f6c-22tms 1/1 Running 0 64m - rook--discover-kmv6c 1/1 Running 0 65m - -.. only:: starlingx - - ---------- - Next steps - ---------- - - .. include:: /_includes/kubernetes_install_next.txt - - -.. only:: partner - - .. 
include:: /_includes/72hr-to-license.rest diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst deleted file mode 100644 index 68e7e1cd6..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst +++ /dev/null @@ -1,54 +0,0 @@ - -.. vqr1569420650576 -.. _bootstrapping-from-a-private-docker-registry-r6: - -============================================ -Bootstrapping from a Private Docker Registry -============================================ - -You can bootstrap controller-0 from a private Docker registry in the event that -your server is isolated from the public Internet. - -.. rubric:: |proc| - -#. Update your /home/sysadmin/localhost.yml bootstrap overrides file with the - following lines to use a Private Docker Registry pre-populated from the - |org| Docker Registry: - - .. code-block:: none - - docker_registries: - k8s.gcr.io: - url: /k8s.gcr.io - gcr.io: - url: /gcr.io - ghcr.io: - url: /ghcr.io - quay.io: - url: /quay.io - docker.io: - url: /docker.io - docker.elastic.co: - url: /docker.elastic.co - defaults: - type: docker - username: - password: - - Where ```` and - ```` are your login credentials for the - ```` private Docker registry. - - .. note:: - ```` must be a DNS name resolvable by the dns servers - configured in the ``dns_servers:`` structure of the ansible bootstrap - override file /home/sysadmin/localhost.yml. - -#. For any additional local registry images required, use the full image name - as shown below. - - .. code-block:: none - - additional_local_registry_images: - docker.io/wind-river/: - diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/bulk-host-xml-file-format.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/bulk-host-xml-file-format.rst deleted file mode 100644 index 405a8eab9..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/bulk-host-xml-file-format.rst +++ /dev/null @@ -1,135 +0,0 @@ - -.. hzf1552927866550 -.. _bulk-host-xml-file-format-r6: - -========================= -Bulk Host XML File Format -========================= - -Hosts for bulk addition are described using an XML document. - -The document root is **hosts**. Within the root, each host is described using a -**host** node. To provide details, child elements are used, corresponding to -the parameters for the :command:`host-add` command. - -The following elements are accepted. Each element takes a text string. For -valid values, refer to the CLI documentation. - - -.. _bulk-host-xml-file-format-simpletable-tc3-w15-ht: - - -.. 
table:: - :widths: auto - - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Element | Remarks | - +=========================================================================================================================================================================================+=========================================================================================================================================================================================+ - | hostname | A unique name for the host. | - | | | - | | .. note:: | - | | Controller and storage node names are assigned automatically and override user input. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | personality | The type of host. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | subfunctions | For a worker host, an optional element to enable a low-latency performance profile. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | mgmt\_mac | The MAC address of the management interface. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | mgmt\_ip | The IP address of the management interface. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bm\_ip | The IP address of the board management controller. 
| - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bm\_type | The board management controller type. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bm\_username | The username for board management controller authentication. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | bm\_password | The password for board management controller authentication. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | power\_on | An empty element. If present, powers on the host automatically using the specified board management controller. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | install\_output | The display mode to use during installation \(text or graphical\). The default is **text**. | - | | | - | | .. note:: | - | | The graphical option currently has no effect. Text-based installation is used regardless of this setting. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | console | If present, this element specifies the port, and if applicable the baud, for displaying messages. If the element is empty or not present, the default setting **ttyS0,115200** is used. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | rootfs\_device | The device to use for the rootfs partition, relative to /dev. 
| - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | boot\_device | The device to use for the boot partition, relative to /dev. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | location | A description of the host location. | - +-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -The following sample describes a controller, three worker nodes, and two storage nodes: - -.. code-block:: none - - - - - controller - 08:00:27:19:b0:c5 - 10.10.10.100 - bmc - tsmith1 - mypass1 - text - System12/A4 - - - worker-0 - worker - 08:00:27:dc:42:46 - 192.168.204.50 - 10.10.10.101 - tsmith1 - mypass1 - bmc - text - - - - worker-1 - worker - 08:00:27:87:82:3E - 192.168.204.51 - 10.10.10.102 - bmc - tsmith1 - mypass1 - sda - text - - - worker-2 - worker - 08:00:27:b9:16:0d - 192.168.204.52 - sda - text - - - 10.10.10.103 - bmc - tsmith1 - mypass1 - - - storage - 08:00:27:dd:e3:3f - 10.10.10.104 - bmc - tsmith1 - mypass1 - - - storage - 08:00:27:8e:f1:b8 - 10.10.10.105 - bmc - tsmith1 - mypass1 - - diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.rst deleted file mode 100644 index 734219c03..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.rst +++ /dev/null @@ -1,211 +0,0 @@ - -.. jow1440534908675 - -.. _configuring-a-pxe-boot-server: - -.. _configuring-a-pxe-boot-server-r6: - - - -=========================== -Configure a PXE Boot Server -=========================== - -You can optionally set up a |PXE| Boot Server to support **controller-0** -initialization. - -.. rubric:: |context| - -|prod| includes a setup script to simplify configuring a |PXE| boot server. If -you prefer, you can manually apply a custom configuration; for more -information, see :ref:`Access PXE Boot Server Files for a Custom Configuration -`. - -The |prod| setup script accepts a path to the root TFTP directory as a -parameter, and copies all required files for BIOS and |UEFI| clients into this -directory. - -The |PXE| boot server serves a boot loader file to the requesting client from a -specified path on the server. The path depends on whether the client uses BIOS -or |UEFI|. The appropriate path is selected by conditional logic in the |DHCP| -configuration file. - -The boot loader runs on the client, and reads boot parameters, including the -location of the kernel and initial ramdisk image files, from a boot file -contained on the server. 
To find the boot file, the boot loader searches a -known directory on the server. This search directory can contain more than one -entry, supporting the use of separate boot files for different clients. - -The file names and locations depend on the BIOS or |UEFI| implementation. - -.. _configuring-a-pxe-boot-server-table-mgq-xlh-2cb-r6: - -.. table:: Table 1. |PXE| boot server file locations for BIOS and |UEFI| implementations - :widths: auto - - +------------------------------------------+------------------------+-------------------------------+ - | Resource | BIOS | UEFI | - +==========================================+========================+===============================+ - | **boot loader** | ./pxelinux.0 | ./EFI/grubx64.efi | - +------------------------------------------+------------------------+-------------------------------+ - | **boot file search directory** | ./pxelinux.cfg | ./ or ./EFI | - | | | | - | | | \(system-dependent\) | - +------------------------------------------+------------------------+-------------------------------+ - | **boot file** and path | ./pxelinux.cfg/default | ./grub.cfg and ./EFI/grub.cfg | - +------------------------------------------+------------------------+-------------------------------+ - | \(./ indicates the root TFTP directory\) | - +------------------------------------------+------------------------+-------------------------------+ - -.. rubric:: |prereq| - -Use a Linux workstation as the |PXE| Boot server. - - -.. _configuring-a-pxe-boot-server-ul-mrz-jlj-dt-r6: - -- On the workstation, install the packages required to support |DHCP|, TFTP, - and Apache. - -- Configure |DHCP|, TFTP, and Apache according to your system requirements. - For details, refer to the documentation included with the packages. - -- Additionally, configure |DHCP| to support both BIOS and |UEFI| client - architectures. For example: - - .. code-block:: none - - option arch code 93 = unsigned integer 16; # ref RFC4578 - # ... - subnet 192.168.1.0 netmask 255.255.255.0 { - if option arch = 00:07 { - filename "EFI/grubx64.efi"; - # NOTE: substitute the full tftp-boot-dir specified in the setup script - } - else { - filename "pxelinux.0"; - } - # ... - } - - -- Start the |DHCP|, TFTP, and Apache services. - -- Connect the |PXE| boot server to the |prod| management or |PXE| boot - network. - - -.. rubric:: |proc| - - -.. _configuring-a-pxe-boot-server-steps-qfb-kyh-2cb-r6: - -#. Copy the ISO image from the source \(product DVD, USB device, or - |dnload-loc| to a temporary location on the |PXE| boot server. - - This example assumes that the copied image file is - ``tmp/TS-host-installer-1.0.iso``. - -#. Mount the ISO image and make it executable. - - .. code-block:: none - - $ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso - $ mount -o remount,exec,dev /media/iso - -#. Set up the |PXE| boot configuration. - - .. important:: - - |PXE| configuration steps differ for |prod| |deb-eval-release| - evaluation on the Debian distribution. See the :ref:`Debian Technology - Preview ` |PXE| configuration procedure for details. - - The ISO image includes a setup script, which you can run to complete the - configuration. - - .. code-block:: none - - $ /media/iso/pxeboot_setup.sh -u http:/// \ - -t - - where - - ``ip-addr`` - is the Apache listening address. - - ``symlink`` - is the name of a user-created symbolic link under the Apache document - root directory, pointing to the directory specified by . 
- - ``tftp-boot-dir`` - is the path from which the boot loader is served \(the TFTP root - directory\). - - The script creates the directory specified by . - - For example: - - .. code-block:: none - - $ /media/iso/pxeboot_setup.sh -u http://192.168.100.100/BIOS-client -t /export/pxeboot - -#. To serve a specific boot file to a specific controller, assign a special - name to the file. - - The boot loader searches for a file name that uses a string based on the - client interface |MAC| address. The string uses lower case, substitutes - dashes for colons, and includes the prefix "01-". - - - - For a BIOS client, use the |MAC| address string as the file name: - - .. code-block:: none - - $ cd /pxelinux.cfg/ - $ cp pxeboot.cfg - - where: - - ```` - is the path from which the boot loader is served. - - ```` - is a lower-case string formed from the |MAC| address of the client - |PXE| boot interface, using dashes instead of colons, and prefixed - by "01-". - - For example, to represent the |MAC| address ``08:00:27:dl:63:c9``, - use the string ``01-08-00-27-d1-63-c9`` in the file name. - - For example: - - .. code-block:: none - - $ cd /export/pxeboot/pxelinux.cfg/ - $ cp pxeboot.cfg 01-08-00-27-d1-63-c9 - - If the boot loader does not find a file named using this convention, it - looks for a file with the name default. - - - For a |UEFI| client, use the |MAC| address string prefixed by - "grub.cfg-". To ensure the file is found, copy it to both search - directories used by the |UEFI| convention. - - .. code-block:: none - - $ cd - $ cp grub.cfg grub.cfg- - $ cp grub.cfg ./EFI/grub.cfg- - - For example: - - .. code-block:: none - - $ cd /export/pxeboot - $ cp grub.cfg grub.cfg-01-08-00-27-d1-63-c9 - $ cp grub.cfg ./EFI/grub.cfg-01-08-00-27-d1-63-c9 - - .. note:: - Alternatively, you can use symlinks in the search directories to - ensure the file is found. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage.rst deleted file mode 100644 index 322398436..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage.rst +++ /dev/null @@ -1,22 +0,0 @@ -============================================================= -Bare metal Standard with Controller Storage Installation R6.0 -============================================================= - --------- -Overview --------- - -.. include:: /shared/_includes/desc_controller_storage.txt - -.. include:: /shared/_includes/ipv6_note.txt - - ------------- -Installation ------------- - -.. toctree:: - :maxdepth: 1 - - controller_storage_hardware - controller_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_hardware.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_hardware.rst deleted file mode 100644 index b4dc1b059..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_hardware.rst +++ /dev/null @@ -1,67 +0,0 @@ -===================== -Hardware Requirements -===================== - -This section describes the hardware requirements and server preparation for a -**StarlingX R6.0 bare metal Standard with Controller Storage** deployment -configuration. - -.. 
contents:: - :local: - :depth: 1 - ------------------------------ -Minimum hardware requirements ------------------------------ - -The recommended minimum hardware requirements for bare metal servers for various -host types are: - -+-------------------------+-----------------------------+-----------------------------+ -| Minimum Requirement | Controller Node | Worker Node | -+=========================+=============================+=============================+ -| Number of servers | 2 | 2-10 | -+-------------------------+-----------------------------+-----------------------------+ -| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | -| | 8 cores/socket | -+-------------------------+-----------------------------+-----------------------------+ -| Minimum memory | 64 GB | 32 GB | -+-------------------------+-----------------------------+-----------------------------+ -| Primary disk | 500 GB SSD or NVMe (see | 120 GB (Minimum 10k RPM) | -| | :ref:`nvme_config`) | | -+-------------------------+-----------------------------+-----------------------------+ -| Additional disks | - 1 or more 500 GB (min. | - For OpenStack, recommend | -| | 10K RPM) for Ceph OSD | 1 or more 500 GB (min. | -| | - Recommended, but not | 10K RPM) for VM local | -| | required: 1 or more SSDs | ephemeral storage | -| | or NVMe drives for Ceph | | -| | journals (min. 1024 MiB | | -| | per OSD journal) | | -+-------------------------+-----------------------------+-----------------------------+ -| Minimum network ports | - Mgmt/Cluster: 1x10GE | - Mgmt/Cluster: 1x10GE | -| | - OAM: 1x1GE | - Data: 1 or more x 10GE | -+-------------------------+-----------------------------+-----------------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+-------------------------+-----------------------------+-----------------------------+ - --------------------------- -Prepare bare metal servers --------------------------- - -.. include:: prep_servers.txt - -* Cabled for networking - - * Far-end switch ports should be properly configured to realize the networking - shown in the following diagram. - - .. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-controller-storage.png - :scale: 50% - :alt: Controller storage deployment configuration - - *Controller storage deployment configuration* diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_install_kubernetes.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_install_kubernetes.rst deleted file mode 100644 index 7b7390791..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/controller_storage_install_kubernetes.rst +++ /dev/null @@ -1,941 +0,0 @@ - -.. Greg updates required for -High Security Vulnerability Document Updates - -.. _controller_storage_install_kubernetes_r6: - -=============================================================== -Install Kubernetes Platform on Standard with Controller Storage -=============================================================== - -.. contents:: - :local: - :depth: 1 - -.. only:: starlingx - - This section describes the steps to install the StarlingX Kubernetes - platform on a **StarlingX R6.0 Standard with Controller Storage** - deployment configuration. 
- - ------------------- - Create bootable USB - ------------------- - - Refer to :ref:`Bootable USB ` for instructions on how to - create a bootable USB with the StarlingX ISO on your system. - - -------------------------------- - Install software on controller-0 - -------------------------------- - - .. include:: /shared/_includes/inc-install-software-on-controller.rest - :start-after: incl-install-software-controller-0-standard-start - :end-before: incl-install-software-controller-0-standard-end - --------------------------------- -Bootstrap system on controller-0 --------------------------------- - -.. incl-bootstrap-sys-controller-0-standard-start: - -#. Login using the username / password of "sysadmin" / "sysadmin". - - When logging in for the first time, you will be forced to change the - password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. Verify and/or configure IP connectivity. - - External connectivity is required to run the Ansible bootstrap playbook. The - StarlingX boot image will |DHCP| out all interfaces so the server may have - obtained an IP address and have external IP connectivity if a |DHCP| server - is present in your environment. Verify this using the :command:`ip addr` and - :command:`ping 8.8.8.8` commands. - - Otherwise, manually configure an IP address and default IP route. Use the - PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your - deployment environment. - - .. code-block:: bash - - sudo ip address add / dev - sudo ip link set up dev - sudo ip route add default via dev - ping 8.8.8.8 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for - Ansible configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml`` - The default configuration values for the bootstrap playbook. - - ``sysadmin home directory ($HOME)`` - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: - ``$HOME/.yml``. - - .. only:: starlingx - - .. include:: /shared/_includes/ansible_install_time_only.txt - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - .. note:: - - This Ansible Overrides file for the Bootstrap Playbook ($HOME/localhost.yml) - contains security sensitive information, use the - :command:`ansible-vault create $HOME/localhost.yml` command to create it. - You will be prompted for a password to protect/encrypt the file. - Use the :command:`ansible-vault edit $HOME/localhost.yml` command if the - file needs to be edited after it is created. - - #. Use a copy of the default.yml file listed above to provide your overrides. - - The default.yml file lists all available parameters for bootstrap - configuration with a brief description for each parameter in the file - comments. - - To use this method, run the :command:`ansible-vault create $HOME/localhost.yml` - command and copy the contents of the ``default.yml`` file into the - ansible-vault editor, and edit the configurable values as required. - - #. Create a minimal user configuration override file. 
- - To use this method, create your override file with - the :command:`ansible-vault create $HOME/localhost.yml` - command and provide the minimum required parameters for the deployment - configuration as shown in the example below. Use the OAM IP SUBNET and IP - ADDRESSing applicable to your deployment environment. - - .. include:: /_includes/min-bootstrap-overrides-non-simplex.rest - - .. only:: starlingx - - In either of the above options, the bootstrap playbook’s default - values will pull all container images required for the |prod-p| from - Docker hub. - - If you have setup a private Docker registry to use for bootstrapping - then you will need to add the following lines in $HOME/localhost.yml: - - .. only:: partner - - .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest - :start-after: docker-reg-begin - :end-before: docker-reg-end - - .. code-block:: yaml - - docker_registries: - quay.io: - url: myprivateregistry.abc.com:9001/quay.io - docker.elastic.co: - url: myprivateregistry.abc.com:9001/docker.elastic.co - gcr.io: - url: myprivateregistry.abc.com:9001/gcr.io - ghcr.io: - url: myprivateregistry.abc.com:9001/gcr.io - k8s.gcr.io: - url: myprivateregistry.abc.com:9001/k8s.ghcr.io - docker.io: - url: myprivateregistry.abc.com:9001/docker.io - defaults: - type: docker - username: - password: - - # Add the CA Certificate that signed myprivateregistry.abc.com’s - # certificate as a Trusted CA - ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem - - See :ref:`Use a Private Docker Registry ` - for more information. - - .. only:: starlingx - - If a firewall is blocking access to Docker hub or your private - registry from your StarlingX deployment, you will need to add the - following lines in $HOME/localhost.yml (see :ref:`Docker Proxy - Configuration ` for more details about Docker - proxy settings): - - .. only:: partner - - .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest - :start-after: firewall-begin - :end-before: firewall-end - - .. code-block:: bash - - # Add these lines to configure Docker to use a proxy server - docker_http_proxy: http://my.proxy.com:1080 - docker_https_proxy: https://my.proxy.com:1443 - docker_no_proxy: - - 1.2.3.4 - - Refer to :ref:`Ansible Bootstrap Configurations - ` for information on additional Ansible - bootstrap configurations for advanced Ansible bootstrap scenarios. - -#. Run the Ansible bootstrap playbook: - - .. include:: /shared/_includes/ntp-update-note.rest - - :: - - ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. - This can take 5-10 minutes, depending on the performance of the host machine. - -.. incl-bootstrap-sys-controller-0-standard-end: - - ----------------------- -Configure controller-0 ----------------------- - -.. incl-config-controller-0-storage-start: - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the |OAM| interface of controller-0 and specify the - attached network as "oam". - - The following example configures the |OAM| interface on a physical untagged - ethernet port, use the |OAM| port name that is applicable to your deployment - environment, for example eth0: - - .. code-block:: bash - - OAM_IF= - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. 
Configure the MGMT interface of controller-0 and specify the attached - networks of both "mgmt" and "cluster-host". - - The following example configures the MGMT interface on a physical untagged - ethernet port, use the MGMT port name that is applicable to your deployment - environment, for example eth1: - - .. code-block:: bash - - MGMT_IF= - - # De-provision loopback interface and - # remove mgmt and cluster-host networks from loopback interface - system host-if-modify controller-0 lo -c none - IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') - for UUID in $IFNET_UUIDS; do - system interface-network-remove ${UUID} - done - - # Configure management interface and assign mgmt and cluster-host networks to it - system host-if-modify controller-0 $MGMT_IF -c platform - system interface-network-assign controller-0 $MGMT_IF mgmt - system interface-network-assign controller-0 $MGMT_IF cluster-host - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. Configure |NTP| servers for network time synchronization: - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - - To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration - `. - -#. If required, configure Ceph storage backend: - - A persistent storage backend is required if your application requires |PVCs|. - - .. only:: openstack - - .. important:: - - The StarlingX OpenStack application **requires** |PVCs|. - - :: - - system storage-backend-add ceph --confirmed - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - These steps are required only if the |prod-os| application - (|prefix|-openstack) will be installed. - - #. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the |prefix|-openstack manifest and helm-charts later. - - :: - - system host-label-assign controller-0 openstack-control-plane=enabled - - #. **For OpenStack only:** Configure the system setting for the vSwitch. - - .. only:: starlingx - - StarlingX has |OVS| (kernel-based) vSwitch configured as default: - - * Runs in a container; defined within the helm charts of |prefix|-openstack - manifest. - * Shares the core(s) assigned to the platform. - - If you require better performance, |OVS-DPDK| (|OVS| with the Data - Plane Development Kit, which is supported only on bare metal hardware) - should be used: - - * Runs directly on the host (it is not containerized). - Requires that at least 1 core be assigned/dedicated to the vSwitch - function. - - To deploy the default containerized |OVS|: - - :: - - system modify --vswitch_type none - - This does not run any vSwitch directly on the host, instead, it uses - the containerized |OVS| defined in the helm charts of |prefix|-openstack - manifest. - - To deploy |OVS-DPDK|, run the following command: - - .. parsed-literal:: - - system modify --vswitch_type |ovs-dpdk| - - Once vswitch_type is set to |OVS-DPDK|, any subsequent |AIO|-controller - or worker nodes created will default to automatically assigning 1 vSwitch - core for |AIO| controllers and 2 vSwitch cores (both on numa-node 0; - physical |NICs| are typically on first numa-node) for compute-labeled - worker nodes. - - .. note:: - After controller-0 is unlocked, changing vswitch_type requires - locking and unlocking controller-0 to apply the change. - - - .. 
incl-config-controller-0-storage-end: - -------------------- -Unlock controller-0 -------------------- - -Unlock controller-0 in order to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -.. only:: openstack - - * **For OpenStack only:** Due to the additional openstack services’ - containers running on the controller host, the size of the docker - filesystem needs to be increased from the default size of 30G to 60G. - - .. code-block:: bash - - # check existing size of docker fs - system host-fs-list controller-0 - - # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located - system host-lvg-list controller-0 - - # if existing docker fs size + cgts-vg available space is less than - # 60G, you will need to add a new disk partition to cgts-vg. - - # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. - # ( if not use another unused disk ) - - # Get device path of ROOT DISK - system host-show controller-0 | fgrep rootfs - - # Get UUID of ROOT DISK by listing disks - system host-disk-list controller-0 - - # Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response - # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G - PARTITION_SIZE=30 - system system host-disk-partition-add -t lvm_phys_vol controller-0 ${PARTITION_SIZE} - - # Add new partition to ‘cgts-vg’ local volume group - system host-pv-add controller-0 cgts-vg - sleep 2 # wait for partition to be added - - # Increase docker filesystem to 60G - system host-fs-modify controller-0 docker=60 - -------------------------------------------------- -Install software on controller-1 and worker nodes -------------------------------------------------- - -#. Power on the controller-1 server and force it to network boot with the - appropriate BIOS boot options for your particular server. - -#. As controller-1 boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered - controller-1 host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 2 personality=controller - - This initiates the install of software on controller-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. While waiting for the previous step to complete, power on the worker nodes. - Set the personality to 'worker' and assign a unique hostname for each. - - For example, power on worker-0 and wait for the new host (hostname=None) to - be discovered by checking 'system host-list': - - :: - - system host-update 3 personality=worker hostname=worker-0 - - Repeat for worker-1. 
Power on worker-1 and wait for the new host - (hostname=None) to be discovered by checking 'system host-list': - - :: - - system host-update 4 personality=worker hostname=worker-1 - - .. only:: starlingx - - .. Note:: - - A node with Edgeworker personality is also available. See - :ref:`deploy-edgeworker-nodes` for details. - -#. Wait for the software installation on controller-1, worker-0, and worker-1 - to complete, for all servers to reboot, and for all to show as - locked/disabled/online in 'system host-list'. - - :: - - system host-list - - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - | 3 | worker-0 | worker | locked | disabled | online | - | 4 | worker-1 | worker | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - ----------------------- -Configure controller-1 ----------------------- - -.. incl-config-controller-1-start: - -#. Configure the |OAM| interface of controller-1 and specify the - attached network of "oam". - - The following example configures the |OAM| interface on a physical untagged - ethernet port, use the |OAM| port name that is applicable to your deployment - environment, for example eth0: - - .. code-block:: bash - - OAM_IF= - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - - To configure a vlan or aggregated ethernet interface, see :ref:`Node - Interfaces `. - -#. The MGMT interface is partially set up by the network install procedure; - configuring the port used for network install as the MGMT port and - specifying the attached network of "mgmt". - - Complete the MGMT interface configuration of controller-1 by specifying the - attached network of "cluster-host". - - :: - - system interface-network-assign controller-1 mgmt0 cluster-host - - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - This step is required only if the |prod-os| application - (|prefix|-openstack) will be installed. - - **For OpenStack only:** Assign OpenStack host labels to controller-1 in - support of installing the |prefix|-openstack manifest and helm-charts later. - - :: - - system host-label-assign controller-1 openstack-control-plane=enabled - -.. incl-config-controller-1-end: - -------------------- -Unlock controller-1 -------------------- - -.. incl-unlock-controller-1-start: - -Unlock controller-1 in order to bring it into service: - -:: - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -.. only:: openstack - - * **For OpenStack only:** Due to the additional openstack services’ containers - running on the controller host, the size of the docker filesystem needs to be - increased from the default size of 30G to 60G. - - .. 
code-block:: bash - - # check existing size of docker fs - system host-fs-list controller-1 - - # check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located - system host-lvg-list controller-1 - - # if existing docker fs size + cgts-vg available space is less than - # 60G, you will need to add a new disk partition to cgts-vg. - - # Assuming you have unused space on ROOT DISK, add partition to ROOT DISK. - # ( if not use another unused disk ) - - # Get device path of ROOT DISK - system host-show controller-1 | fgrep rootfs - - # Get UUID of ROOT DISK by listing disks - system host-disk-list controller-1 - - # Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response - # Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G - PARTITION_SIZE=30 - system host-disk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE} - - # Add new partition to ‘cgts-vg’ local volume group - system host-pv-add controller-1 cgts-vg <new-partition-uuid> - sleep 2 # wait for partition to be added - - # Increase docker filesystem to 60G - system host-fs-modify controller-1 docker=60 - -.. incl-unlock-controller-1-end: - -.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest - ----------------------- -Configure worker nodes ----------------------- - -#. Add the third Ceph monitor to a worker node: - - (The first two Ceph monitors are automatically assigned to controller-0 and - controller-1.) - - :: - - system ceph-mon-add worker-0 - -#. Wait for the worker node monitor to complete configuration: - - :: - - system ceph-mon-list - +--------------------------------------+-------+--------------+------------+------+ - | uuid | ceph_ | hostname | state | task | - | | mon_g | | | | - | | ib | | | | - +--------------------------------------+-------+--------------+------------+------+ - | 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None | - | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None | - | f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | worker-0 | configured | None | - +--------------------------------------+-------+--------------+------------+------+ - -#. Assign the cluster-host network to the MGMT interface for the worker nodes: - - (Note that the MGMT interfaces are partially set up automatically by the - network install procedure.) - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system interface-network-assign $NODE mgmt0 cluster-host - done - -.. only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - These steps are required only if the |prod-os| application - (|prefix|-openstack) will be installed. - - #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. parsed-literal:: - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE openstack-compute-node=enabled - kubectl taint nodes $NODE openstack-compute-node:NoSchedule - system host-label-assign $NODE |vswitch-label| - done - - .. note:: - - If you have a |NIC| that supports |SRIOV|, then you can enable it by - using the following: - - .. code-block:: none - - system host-label-assign controller-0 sriov=enabled - - #. **For OpenStack only:** Configure the host settings for the vSwitch.
- - If using |OVS-DPDK| vSwitch, run the following commands: - - Default recommendation for worker node is to use two cores on numa-node 0 - for |OVS-DPDK| vSwitch; physical NICs are typically on first numa-node. - This should have been automatically configured; if not, run the following - command. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 2 cores on processor/numa-node 0 on worker-node to vswitch - system host-cpu-modify -f vswitch -p0 2 $NODE - - done - - - When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on - each |NUMA| node on the host. It is recommended to configure 1x 1G huge - page (-1G 1) for vSwitch memory on each |NUMA| node on the host. - - However, due to a limitation with Kubernetes, only a single huge page - size is supported on any one host. If your application |VMs| require 2M - huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch - memory on each |NUMA| node on the host. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch - system host-memory-modify -f vswitch -1G 1 $NODE 0 - - # assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch - system host-memory-modify -f vswitch -1G 1 $NODE 1 - - done - - - .. important:: - - |VMs| created in an |OVS-DPDK| environment must be configured to use - huge pages to enable networking and must use a flavor with the - property ``hw:mem_page_size=large``. - - Configure the huge pages for |VMs| in an |OVS-DPDK| environment on - this host. The following commands are an example that assumes that 1G - huge page size is being used on this host: - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications - system host-memory-modify -f application -1G 10 $NODE 0 - - # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications - system host-memory-modify -f application -1G 10 $NODE 1 - - done - - #. **For OpenStack only:** Set up a disk partition for the nova-local volume group, - needed for |prefix|-openstack nova ephemeral disks. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-lvg-add ${NODE} nova-local - - # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group - # CEPH OSD Disks can NOT be used - # For best performance, do NOT use system/root disk, use a separate physical disk. - - # List host’s disks and take note of UUID of disk to be used - system host-disk-list ${NODE} - # ( if using ROOT DISK, select disk with device_path of - # ‘system host-show ${NODE} | fgrep rootfs’ ) - - # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response - # The size of the PARTITION needs to be large enough to hold the aggregate size of - # all nova ephemeral disks of all VMs that you want to be able to host on this host, - # but is limited by the size and space available on the physical disk you chose above. - # The following example uses a small PARTITION size such that you can fit it on the - # root disk, if that is what you chose above. - # Additional PARTITION(s) from additional disks can be added later if required. - PARTITION_SIZE=30 - - system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE} - - # Add new partition to ‘nova-local’ local volume group - system host-pv-add ${NODE} nova-local <new-partition-uuid> - sleep 2 - done - - #.
**For OpenStack only:** Configure data interfaces for worker nodes. - Data class interfaces are vswitch interfaces used by vswitch to provide - |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the - underlying assigned Data Network. - - .. important:: - - A compute-labeled worker host **MUST** have at least one Data class - interface. - - * Configure the data interfaces for worker nodes. - - .. code-block:: bash - - # Execute the following lines with - export NODE=worker-0 - # and then repeat with - export NODE=worker-1 - - # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data# - system host-if-modify -m 1500 -n data0 -c data ${NODE} - system host-if-modify -m 1500 -n data1 -c data ${NODE} - - # Create Data Networks that vswitch 'data' interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to Data Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - -***************************************** -Optionally Configure PCI-SRIOV Interfaces -***************************************** - -#. **Optionally**, configure pci-sriov interfaces for worker nodes. - - This step is **optional** for Kubernetes. Do this step if using |SRIOV| - network attachments in hosted application containers. - - .. only:: openstack - - This step is **optional** for OpenStack. Do this step if using |SRIOV| - vNICs in hosted application |VMs|. Note that pci-sriov interfaces can - have the same Data Networks assigned to them as vswitch data interfaces. - - - * Configure the pci-sriov interfaces for worker nodes. - - .. code-block:: bash - - # Execute the following lines with - export NODE=worker-0 - # and then repeat with - export NODE=worker-1 - - # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces, - # based on displayed linux port name, pci address and device type. 
- system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov# - system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N - system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N - - # If not already created, create Data Networks that the 'pci-sriov' - # interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to PCI-SRIOV Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - - * **For Kubernetes only:** To enable using |SRIOV| network attachments for - the above interfaces in Kubernetes hosted application containers: - - * Configure the Kubernetes |SRIOV| device plugin. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE sriovdp=enabled - done - - * If planning on running |DPDK| in Kubernetes hosted application - containers on this host, configure the number of 1G Huge pages required - on both |NUMA| nodes. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications - system host-memory-modify -f application $NODE 0 -1G 10 - - # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications - system host-memory-modify -f application $NODE 1 -1G 10 - - done - - --------------------- -Unlock worker nodes --------------------- - -Unlock worker nodes in order to bring them into service: - -.. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-unlock $NODE - done - -The worker nodes will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - ------------------------------------------------------------------ -If configuring Ceph Storage Backend, Add Ceph OSDs to controllers ------------------------------------------------------------------ - -#. Add |OSDs| to controller-0. The following example adds |OSDs| to the `sdb` disk: - - .. code-block:: bash - - HOST=controller-0 - - # List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list ${HOST} - - # Add disk as an OSD storage - system host-stor-add ${HOST} osd - - # List OSD storage devices and wait for configuration of newly added OSD to complete. - system host-stor-list ${HOST} - -#. Add |OSDs| to controller-1. The following example adds |OSDs| to the `sdb` disk: - - .. code-block:: bash - - HOST=controller-1 - - # List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list ${HOST} - - # Add disk as an OSD storage - system host-stor-add ${HOST} osd - - # List OSD storage devices and wait for configuration of newly added OSD to complete. - system host-stor-list ${HOST} - -.. only:: starlingx - - ---------- - Next steps - ---------- - - .. 
include:: /_includes/kubernetes_install_next.txt - - -.. only:: partner - - .. include:: /_includes/72hr-to-license.rest diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage.rst deleted file mode 100644 index 7a0a68cdd..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage.rst +++ /dev/null @@ -1,22 +0,0 @@ - -============================================================ -Bare metal Standard with Dedicated Storage Installation R6.0 -============================================================ - --------- -Overview --------- - -.. include:: /shared/_includes/desc_dedicated_storage.txt - -.. include:: /shared/_includes/ipv6_note.txt - ------------- -Installation ------------- - -.. toctree:: - :maxdepth: 1 - - dedicated_storage_hardware - dedicated_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_hardware.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_hardware.rst deleted file mode 100644 index ffac52bbf..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_hardware.rst +++ /dev/null @@ -1,72 +0,0 @@ -===================== -Hardware Requirements -===================== - -This section describes the hardware requirements and server preparation for a -**StarlingX R6.0 bare metal Standard with Dedicated Storage** deployment -configuration. - -.. contents:: - :local: - :depth: 1 - ------------------------------ -Minimum hardware requirements ------------------------------ - -The recommended minimum hardware requirements for bare metal servers for various -host types are: - -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum Requirement | Controller Node | Storage Node | Worker Node | -+=====================+===========================+=======================+=======================+ -| Number of servers | 2 | 2-9 | 2-100 | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket | -| class | | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum memory | 64 GB | 64 GB | 32 GB | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Primary disk | 500 GB SSD or NVMe (see | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) | -| | :ref:`nvme_config`) | | | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Additional disks | None | - 1 or more 500 GB | - For OpenStack, | -| | | (min. 10K RPM) for | recommend 1 or more | -| | | Ceph OSD | 500 GB (min. 10K | -| | | - Recommended, but | RPM) for VM | -| | | not required: 1 or | ephemeral storage | -| | | more SSDs or NVMe | | -| | | drives for Ceph | | -| | | journals (min. 
1024 | | -| | | MiB per OSD | | -| | | journal) | | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: | -| ports | 1x10GE | 1x10GE | 1x10GE | -| | - OAM: 1x1GE | | - Data: 1 or more | -| | | | x 10GE | -+---------------------+---------------------------+-----------------------+-----------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+---------------------+---------------------------+-----------------------+-----------------------+ - --------------------------- -Prepare bare metal servers --------------------------- - -.. include:: prep_servers.txt - -* Cabled for networking - - * Far-end switch ports should be properly configured to realize the networking - shown in the following diagram. - - .. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-dedicated-storage.png - :scale: 50% - :alt: Standard with dedicated storage - - *Standard with dedicated storage* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_install_kubernetes.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_install_kubernetes.rst deleted file mode 100644 index 050a0af1d..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/dedicated_storage_install_kubernetes.rst +++ /dev/null @@ -1,536 +0,0 @@ - -.. Greg updates required for -High Security Vulnerability Document Updates - -.. _dedicated_storage_install_kubernetes_r6: - -.. only:: partner - - .. include:: /_includes/install-kubernetes-null-labels.rest - -============================================================== -Install Kubernetes Platform on Standard with Dedicated Storage -============================================================== - -This section describes the steps to install the |prod| Kubernetes platform on a -**Standard with Dedicated Storage** deployment configuration. - -.. contents:: - :local: - :depth: 1 - -.. only:: starlingx - - ------------------- - Create bootable USB - ------------------- - - Refer to :ref:`Bootable USB ` for instructions on how to - create a bootable USB with the StarlingX ISO on your system. - - -------------------------------- - Install software on controller-0 - -------------------------------- - - .. include:: /shared/_includes/inc-install-software-on-controller.rest - :start-after: incl-install-software-controller-0-standard-start - :end-before: incl-install-software-controller-0-standard-end - --------------------------------- -Bootstrap system on controller-0 --------------------------------- - -.. include:: controller_storage_install_kubernetes.rst - :start-after: incl-bootstrap-sys-controller-0-standard-start: - :end-before: incl-bootstrap-sys-controller-0-standard-end: - ----------------------- -Configure controller-0 ----------------------- - -.. include:: controller_storage_install_kubernetes.rst - :start-after: incl-config-controller-0-storage-start: - :end-before: incl-config-controller-0-storage-end: - -------------------- -Unlock controller-0 -------------------- - -.. important:: - - Make sure the Ceph storage backend is configured. If it is - not configured, you will not be able to configure storage - nodes. 
- -Unlock controller-0 in order to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - ------------------------------------------------------------------ -Install software on controller-1, storage nodes, and worker nodes ------------------------------------------------------------------ - -#. Power on the controller-1 server and force it to network boot with the - appropriate BIOS boot options for your particular server. - -#. As controller-1 boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered controller-1 - host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 2 personality=controller - - This initiates the install of software on controller-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. While waiting for the previous step to complete, power on the storage-0 and - storage-1 servers. Set the personality to 'storage' and assign a unique - hostname for each. - - For example, power on storage-0 and wait for the new host (hostname=None) to - be discovered by checking 'system host-list': - - :: - - system host-update 3 personality=storage - - Repeat for storage-1. Power on storage-1 and wait for the new host - (hostname=None) to be discovered by checking 'system host-list': - - :: - - system host-update 4 personality=storage - - This initiates the software installation on storage-0 and storage-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. While waiting for the previous step to complete, power on the worker nodes. - Set the personality to 'worker' and assign a unique hostname for each. - - For example, power on worker-0 and wait for the new host (hostname=None) to - be discovered by checking 'system host-list': - - :: - - system host-update 5 personality=worker hostname=worker-0 - - Repeat for worker-1. Power on worker-1 and wait for the new host - (hostname=None) to be discovered by checking 'system host-list': - - :: - - system host-update 6 personality=worker hostname=worker-1 - - This initiates the install of software on worker-0 and worker-1. - - .. only:: starlingx - - .. Note:: - - A node with Edgeworker personality is also available. See - :ref:`deploy-edgeworker-nodes` for details. - -#. Wait for the software installation on controller-1, storage-0, storage-1, - worker-0, and worker-1 to complete, for all servers to reboot, and for all to - show as locked/disabled/online in 'system host-list'. 
- - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - | 3 | storage-0 | storage | locked | disabled | online | - | 4 | storage-1 | storage | locked | disabled | online | - | 5 | worker-0 | worker | locked | disabled | online | - | 6 | worker-1 | worker | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - ----------------------- -Configure controller-1 ----------------------- - -.. include:: controller_storage_install_kubernetes.rst - :start-after: incl-config-controller-1-start: - :end-before: incl-config-controller-1-end: - -------------------- -Unlock controller-1 -------------------- - -.. include:: controller_storage_install_kubernetes.rst - :start-after: incl-unlock-controller-1-start: - :end-before: incl-unlock-controller-1-end: - -.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest - ------------------------ -Configure storage nodes ------------------------ - -#. Assign the cluster-host network to the MGMT interface for the storage nodes: - - (Note that the MGMT interfaces are partially set up automatically by the - network install procedure.) - - .. code-block:: bash - - for NODE in storage-0 storage-1; do - system interface-network-assign $NODE mgmt0 cluster-host - done - -#. Add |OSDs| to storage-0. - - .. code-block:: bash - - HOST=storage-0 - - # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list ${HOST} - - # Add disk as an OSD storage - system host-stor-add ${HOST} osd - - # List OSD storage devices and wait for configuration of newly added OSD to complete. - system host-stor-list ${HOST} - -#. Add |OSDs| to storage-1. - - .. code-block:: bash - - HOST=storage-1 - - # List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID - # By default, /dev/sda is being used as system disk and can not be used for OSD. - system host-disk-list ${HOST} - - # Add disk as an OSD storage - system host-stor-add ${HOST} osd - - # List OSD storage devices and wait for configuration of newly added OSD to complete. - system host-stor-list ${HOST} - --------------------- -Unlock storage nodes --------------------- - -Unlock storage nodes in order to bring them into service: - -.. code-block:: bash - - for STORAGE in storage-0 storage-1; do - system host-unlock $STORAGE - done - -The storage nodes will reboot in order to apply configuration changes and come -into service. This can take 5-10 minutes, depending on the performance of the -host machine. - ----------------------- -Configure worker nodes ----------------------- - -#. The MGMT interfaces are partially set up by the network install procedure; - configuring the port used for network install as the MGMT port and - specifying the attached network of "mgmt". - - Complete the MGMT interface configuration of the worker nodes by specifying - the attached network of "cluster-host". - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system interface-network-assign $NODE mgmt0 cluster-host - done - -.. 
only:: openstack - - ************************************* - OpenStack-specific host configuration - ************************************* - - .. important:: - - These steps are required only if the |prod-os| application - (|prefix|-openstack) will be installed. - - #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. parsed-literal:: - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE openstack-compute-node=enabled - kubectl taint nodes $NODE openstack-compute-node:NoSchedule - system host-label-assign $NODE |vswitch-label| - system host-label-assign $NODE sriov=enabled - done - - #. **For OpenStack only:** Configure the host settings for the vSwitch. - - If using |OVS-DPDK| vSwitch, run the following commands: - - Default recommendation for worker node is to use node two cores on - numa-node 0 for |OVS-DPDK| vSwitch; physical NICs are typically on first - numa-node. This should have been automatically configured, if not run - the following command. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 2 cores on processor/numa-node 0 on worker-node to vswitch - system host-cpu-modify -f vswitch -p0 2 $NODE - - done - - When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on - each |NUMA| node on the host. It is recommended to configure 1x 1G huge - page (-1G 1) for vSwitch memory on each |NUMA| node on the host. - - However, due to a limitation with Kubernetes, only a single huge page - size is supported on any one host. If your application |VMs| require 2M - huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch - memory on each |NUMA| node on the host. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch - system host-memory-modify -f vswitch -1G 1 $NODE 0 - - # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch - system host-memory-modify -f vswitch -1G 1 $NODE 1 - - done - - - .. important:: - - |VMs| created in an |OVS-DPDK| environment must be configured to use - huge pages to enable networking and must use a flavor with property: - hw:mem_page_size=large - - Configure the huge pages for |VMs| in an |OVS-DPDK| environment on - this host, the following commands are an example that assumes that 1G - huge page size is being used on this host: - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications - system host-memory-modify -f application -1G 10 $NODE 0 - - # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications - system host-memory-modify -f application -1G 10 $NODE 1 - - done - - #. **For OpenStack only:** Setup disk partition for nova-local volume group, - needed for |prefix|-openstack nova ephemeral disks. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-lvg-add ${NODE} nova-local - - # Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group - # CEPH OSD Disks can NOT be used - # For best performance, do NOT use system/root disk, use a separate physical disk. 
- - # List host’s disks and take note of UUID of disk to be used - system host-disk-list ${NODE} - # ( if using ROOT DISK, select disk with device_path of - # ‘system host-show ${NODE} | fgrep rootfs’ ) - - # Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response - # The size of the PARTITION needs to be large enough to hold the aggregate size of - # all nova ephemeral disks of all VMs that you want to be able to host on this host, - # but is limited by the size and space available on the physical disk you chose above. - # The following example uses a small PARTITION size such that you can fit it on the - # root disk, if that is what you chose above. - # Additional PARTITION(s) from additional disks can be added later if required. - PARTITION_SIZE=30 - - system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE} - - # Add new partition to ‘nova-local’ local volume group - system host-pv-add ${NODE} nova-local <new-partition-uuid> - sleep 2 - done - - #. **For OpenStack only:** Configure data interfaces for worker nodes. - Data class interfaces are vswitch interfaces used by vswitch to provide - |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the - underlying assigned Data Network. - - .. important:: - - A compute-labeled worker host **MUST** have at least one Data class - interface. - - * Configure the data interfaces for worker nodes. - - .. code-block:: bash - - # Execute the following lines with - export NODE=worker-0 - # and then repeat with - export NODE=worker-1 - - # List inventoried host’s ports and identify ports to be used as ‘data’ interfaces, - # based on displayed linux port name, pci address and device type. - system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘data’ class interfaces, MTU of 1500 and named data# - system host-if-modify -m 1500 -n data0 -c data ${NODE} - system host-if-modify -m 1500 -n data1 -c data ${NODE} - - # Create Data Networks that vswitch 'data' interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to Data Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - -***************************************** -Optionally Configure PCI-SRIOV Interfaces -***************************************** - -#. **Optionally**, configure pci-sriov interfaces for worker nodes. - - This step is **optional** for Kubernetes. Do this step if using |SRIOV| - network attachments in hosted application containers. - - .. only:: openstack - - This step is **optional** for OpenStack. Do this step if using |SRIOV| - vNICs in hosted application |VMs|. Note that pci-sriov interfaces can - have the same Data Networks assigned to them as vswitch data interfaces. - - - * Configure the pci-sriov interfaces for worker nodes. - - .. code-block:: bash - - # Execute the following lines with - export NODE=worker-0 - # and then repeat with - export NODE=worker-1 - - # List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces, - # based on displayed linux port name, pci address and device type.
- system host-port-list ${NODE} - - # List host’s auto-configured ‘ethernet’ interfaces, - # find the interfaces corresponding to the ports identified in previous step, and - # take note of their UUID - system host-if-list -a ${NODE} - - # Modify configuration for these interfaces - # Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov# - system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N - system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N - - # If not created already, create Data Networks that the 'pci-sriov' - # interfaces will be connected to - DATANET0='datanet0' - DATANET1='datanet1' - system datanetwork-add ${DATANET0} vlan - system datanetwork-add ${DATANET1} vlan - - # Assign Data Networks to PCI-SRIOV Interfaces - system interface-datanetwork-assign ${NODE} ${DATANET0} - system interface-datanetwork-assign ${NODE} ${DATANET1} - - - * **For Kubernetes only:** To enable using |SRIOV| network attachments for - the above interfaces in Kubernetes hosted application containers: - - * Configure the Kubernetes |SRIOV| device plugin. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE sriovdp=enabled - done - - * If planning on running |DPDK| in Kubernetes hosted application - containers on this host, configure the number of 1G Huge pages required - on both |NUMA| nodes. - - .. code-block:: bash - - for NODE in worker-0 worker-1; do - - # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications - system host-memory-modify -f application $NODE 0 -1G 10 - - # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications - system host-memory-modify -f application $NODE 1 -1G 10 - - done - - -------------------- -Unlock worker nodes -------------------- - -Unlock worker nodes in order to bring them into service: - -.. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-unlock $NODE - done - -The worker nodes will reboot in order to apply configuration changes and come -into service. This can take 5-10 minutes, depending on the performance of the -host machine. - -.. only:: starlingx - - ---------- - Next steps - ---------- - - .. include:: /_includes/kubernetes_install_next.txt - - -.. only:: partner - - .. include:: /_includes/72hr-to-license.rest diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b.rst deleted file mode 100644 index cb96a354a..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b.rst +++ /dev/null @@ -1,33 +0,0 @@ -.. _delete-hosts-using-the-host-delete-command-1729d2e3153b: - -=================================== -Delete Hosts Using the Command Line -=================================== - -You can delete hosts from the system inventory using the :command:`host-delete` command. - -.. rubric:: |proc| - -#. Check for alarms related to the host. - - Use the :command:`fm alarm-list` command to check for any alarms (major - or critical events). You can also type :command:`fm event-list` to see a log - of events. For more information on alarms, see :ref:`Fault Management - Overview `. - -#. Lock the host that will be deleted. - - Use the :command:`system host-lock` command. Only locked hosts can be deleted. - -#. Delete the host from the system inventory. 
- - Use the command :command:`system host-delete`. This command accepts one - parameter: the hostname or ID. Make sure that the remaining hosts have - sufficient capacity and workload to account for the deleted host. - -#. Verify that the host has been deleted successfully. - - Use the :command:`fm alarm-list` command to check for any alarms (major - or critical events). You can also type :command:`fm event-list` to see a log - of events. For more information on alarms, see :ref:`Fault Management - Overview `. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/exporting-host-configurations.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/exporting-host-configurations.rst deleted file mode 100644 index b25cb67f9..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/exporting-host-configurations.rst +++ /dev/null @@ -1,53 +0,0 @@ - -.. fdm1552927801987 -.. _exporting-host-configurations-r6: - -========================== -Export Host Configurations -========================== - -You can generate a host configuration file from an existing system for -re-installation, upgrade, or maintenance purposes. - -.. rubric:: |context| - -You can generate a host configuration file using the :command:`system -host-bulk-export` command, and then use this file with the :command:`system -host-bulk-add` command to re-create the system. If required, you can modify the -file before using it. - -The configuration settings \(management |MAC| address, BM IP address, and so -on\) for all nodes except **controller-0** are written to the file. - -.. note:: - To ensure that the hosts are not powered on unexpectedly, the **power-on** - element for each host is commented out by default. - -.. rubric:: |prereq| - -To perform this procedure, you must be logged in as the **admin** user. - -.. rubric:: |proc| - -.. _exporting-host-configurations-steps-unordered-ntw-nw1-c2b-r6: - -- Run the :command:`system host-bulk-export` command to create the host - configuration file. - - .. code-block:: none - - system host-bulk-export [--filename - - - - where is the path and name of the output file. If the - ``--filename`` option is not present, the default path ./hosts.xml is - used. - -.. rubric:: |postreq| - -To use the host configuration file, see :ref:`Reinstall a System Using an -Exported Host Configuration File -`. - -For details on the structure and elements of the file, see :ref:`Bulk Host XML -File Format `. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/ironic.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/ironic.rst deleted file mode 100644 index 17515c5de..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/ironic.rst +++ /dev/null @@ -1,72 +0,0 @@ -==================================== -Bare metal Standard with Ironic R6.0 -==================================== - --------- -Overview --------- - -Ironic is an OpenStack project that provisions bare metal machines. For -information about the Ironic project, see -`Ironic Documentation `__. - -End user applications can be deployed on bare metal servers (instead of -virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or -more bare metal servers. - -.. note:: - - If you are behind a corporate firewall or proxy, you need to set proxy - settings. Refer to :ref:`docker_proxy_config` for - details. - -.. 
figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-ironic.png - :scale: 50% - :alt: Standard with Ironic deployment configuration - - *Figure 1: Standard with Ironic deployment configuration* - -Bare metal servers must be connected to: - -* IPMI for OpenStack Ironic control -* ironic-provisioning-net tenant network via their untagged physical interface, - which supports PXE booting - -As part of configuring OpenStack Ironic in StarlingX: - -* An ironic-provisioning-net tenant network must be identified as the boot - network for bare metal nodes. -* An additional untagged physical interface must be configured on controller - nodes and connected to the ironic-provisioning-net tenant network. The - OpenStack Ironic tftpboot server will PXE boot the bare metal servers over - this interface. - -.. note:: - - Bare metal servers are NOT: - - * Running any OpenStack / StarlingX software; they are running end user - applications (for example, Glance Images). - * To be connected to the internal management network. - ------------- -Installation ------------- - -StarlingX currently supports only a bare metal installation of Ironic with a -standard configuration, either: - -* :doc:`controller_storage` - -* :doc:`dedicated_storage` - - -This guide assumes that you have a standard deployment installed and configured -with 2x controllers and at least 1x compute-labeled worker node, with the -StarlingX OpenStack application (|prefix|-openstack) applied. - -.. toctree:: - :maxdepth: 1 - - ironic_hardware - ironic_install diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/ironic_hardware.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/ironic_hardware.rst deleted file mode 100644 index 3e2af7430..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/ironic_hardware.rst +++ /dev/null @@ -1,51 +0,0 @@ -===================== -Hardware Requirements -===================== - -This section describes the hardware requirements and server preparation for a -**StarlingX R6.0 bare metal Ironic** deployment configuration. - -.. contents:: - :local: - :depth: 1 - ------------------------------ -Minimum hardware requirements ------------------------------ - -* One or more bare metal hosts as Ironic nodes as well as tenant instance node. - -* BMC support on bare metal host and controller node connectivity to the BMC IP - address of bare metal hosts. - -For controller nodes: - -* Additional NIC port on both controller nodes for connecting to the - ironic-provisioning-net. - -For worker nodes: - -* If using a flat data network for the Ironic provisioning network, an additional - NIC port on one of the worker nodes is required. - -* Alternatively, use a VLAN data network for the Ironic provisioning network and - simply add the new data network to an existing interface on the worker node. - -* Additional switch ports / configuration for new ports on controller, worker, - and Ironic nodes, for connectivity to the Ironic provisioning network. - ------------------------------------ -BMC configuration of Ironic node(s) ------------------------------------ - -Enable BMC and allocate a static IP, username, and password in the BIOS settings. 
-For example, set: - -IP address - 10.10.10.126 - -username - root - -password - test123 diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/ironic_install.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/ironic_install.rst deleted file mode 100644 index ecd027a6e..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/ironic_install.rst +++ /dev/null @@ -1,392 +0,0 @@ -================================ -Install Ironic on StarlingX R6.0 -================================ - -This section describes the steps to install Ironic on a standard configuration, -either: - -* **StarlingX R6.0 bare metal Standard with Controller Storage** deployment - configuration - -* **StarlingX R6.0 bare metal Standard with Dedicated Storage** deployment - configuration - -.. contents:: - :local: - :depth: 1 - ---------------------- -Enable Ironic service ---------------------- - -This section describes the pre-configuration required to enable the Ironic service. -All the commands in this section are for the StarlingX platform. - -First acquire administrative privileges: - -:: - - source /etc/platform/openrc - -******************************** -Download Ironic deployment image -******************************** - -The Ironic service requires a deployment image (kernel and ramdisk) which is -used to clean Ironic nodes and install the end-user's image. The cleaning done -by the deployment image wipes the disks and tests connectivity to the Ironic -conductor on the controller nodes via the Ironic Python Agent (IPA). - -The latest Ironic deployment image (**Ironic-kernel** and **Ironic-ramdisk**) -can be found here: - -* `Ironic-kernel ipa-centos8-master.kernel - `__ -* `Ironic-ramdisk ipa-centos8.initramfs - `__ - - -******************************************************* -Configure Ironic network on deployed standard StarlingX -******************************************************* - -#. Add an address pool for the Ironic network. This example uses `ironic-pool`: - - :: - - system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24 - -#. Add the Ironic platform network. This example uses `ironic-net`: - - :: - - system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false - -#. Add the Ironic tenant network. This example uses `ironic-data`: - - .. note:: - - The tenant network is not the same as the platform network described in - the previous step. You can specify any name for the tenant network other - than ‘ironic’. If the name 'ironic' is used, a user override must be - generated to indicate the tenant network name. - - Refer to section `Generate user Helm overrides`_ for details. - - :: - - system datanetwork-add ironic-data flat - -#. Configure the new interfaces (for Ironic) on controller nodes and assign - them to the platform network. Host must be locked. This example uses the - platform network `ironic-net` that was named in a previous step. 
- - These new interfaces to the controllers are used to connect to the Ironic - provisioning network: - - **controller-0** - - :: - - system interface-network-assign controller-0 enp2s0 ironic-net - system host-if-modify -n ironic -c platform \ - --ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0 - - # Apply the OpenStack Ironic node labels - system host-label-assign controller-0 openstack-ironic=enabled - - # Unlock the node to apply changes - system host-unlock controller-0 - - - **controller-1** - - :: - - system interface-network-assign controller-1 enp2s0 ironic-net - system host-if-modify -n ironic -c platform \ - --ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0 - - # Apply the OpenStack Ironic node labels - system host-label-assign controller-1 openstack-ironic=enabled - - # Unlock the node to apply changes - system host-unlock controller-1 - -#. Configure the new interface (for Ironic) on one of the compute-labeled worker - nodes and assign it to the Ironic data network. This example uses the data - network `ironic-data` that was named in a previous step. - - :: - - system interface-datanetwork-assign worker-0 eno1 ironic-data - system host-if-modify -n ironicdata -c data worker-0 eno1 - -**************************** -Generate user Helm overrides -**************************** - -Ironic Helm Charts are included in the |prefix|-openstack application. By -default, Ironic is disabled. - -To enable Ironic, update the following Ironic Helm Chart attributes: - -.. parsed-literal:: - - system helm-override-update |prefix|-openstack ironic openstack \ - --set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \ - --set network.pxe.neutron_subnet_gateway=10.10.20.1 \ - --set network.pxe.neutron_provider_network=ironic-data - -:command:`network.pxe.neutron_subnet_alloc_start` sets the DHCP start IP to -Neutron for Ironic node provision, and reserves several IPs for the platform. - -If the data network name for Ironic is changed, modify -:command:`network.pxe.neutron_provider_network` to the command above: - -:: - - --set network.pxe.neutron_provider_network=ironic-data - -*************************** -Apply OpenStack application -*************************** - -Re-apply the |prefix|-openstack application to apply the changes to Ironic: - -.. parsed-literal:: - - system helm-chart-attribute-modify |prefix|-openstack ironic openstack \ - --enabled true - - system application-apply |prefix|-openstack - --------------------- -Start an Ironic node --------------------- - -All the commands in this section are for the OpenStack application with -administrative privileges. - -From a new shell as a root user, without sourcing ``/etc/platform/openrc``: - -:: - - mkdir -p /etc/openstack - - tee /etc/openstack/clouds.yaml << EOF - clouds: - openstack_helm: - region_name: RegionOne - identity_api_version: 3 - endpoint_type: internalURL - auth: - username: 'admin' - password: 'Li69nux*' - project_name: 'admin' - project_domain_name: 'default' - user_domain_name: 'default' - auth_url: 'http://keystone.openstack.svc.cluster.local/v3' - EOF - - export OS_CLOUD=openstack_helm - -******************** -Create Glance images -******************** - -#. Create the **ironic-kernel** image: - - :: - - openstack image create \ - --file ~/coreos_production_pxe-stable-stein.vmlinuz \ - --disk-format aki \ - --container-format aki \ - --public \ - ironic-kernel - -#. 
Create the **ironic-ramdisk** image: - - :: - - openstack image create \ - --file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \ - --disk-format ari \ - --container-format ari \ - --public \ - ironic-ramdisk - -#. Create the end user application image (for example, CentOS): - - :: - - openstack image create \ - --file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \ - --public --disk-format \ - qcow2 --container-format bare centos - -********************* -Create an Ironic node -********************* - -#. Create a node: - - :: - - openstack baremetal node create --driver ipmi --name ironic-test0 - -#. Add IPMI information: - - :: - - openstack baremetal node set \ - --driver-info ipmi_address=10.10.10.126 \ - --driver-info ipmi_username=root \ - --driver-info ipmi_password=test123 \ - --driver-info ipmi_terminal_port=623 ironic-test0 - -#. Set `ironic-kernel` and `ironic-ramdisk` images driver information, - on this bare metal node: - - :: - - openstack baremetal node set \ - --driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \ - --driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \ - ironic-test0 - -#. Set resource properties on this bare metal node based on actual Ironic node - capacities: - - :: - - openstack baremetal node set \ - --property cpus=4 \ - --property cpu_arch=x86_64\ - --property capabilities="boot_option:local" \ - --property memory_mb=65536 \ - --property local_gb=400 \ - --resource-class bm ironic-test0 - -#. Add pxe_template location: - - :: - - openstack baremetal node set --driver-info \ - pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \ - ironic-test0 - -#. Create a port to identify the specific port used by the Ironic node. - Substitute **a4:bf:01:2b:3b:c8** with the MAC address for the Ironic node - port which connects to the Ironic network: - - :: - - openstack baremetal port create \ - --node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \ - --pxe-enabled true a4:bf:01:2b:3b:c8 - -#. Change node state to `manage`: - - :: - - openstack baremetal node manage ironic-test0 - -#. Make node available for deployment: - - :: - - openstack baremetal node provide ironic-test0 - -#. Wait for ironic-test0 provision-state: available: - - :: - - openstack baremetal node show ironic-test0 - ---------------------------------- -Deploy an instance on Ironic node ---------------------------------- - -All the commands in this section are for the OpenStack application, but this -time with *tenant* specific privileges. - -#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``: - - :: - - mkdir -p /etc/openstack - - tee /etc/openstack/clouds.yaml << EOF - clouds: - openstack_helm: - region_name: RegionOne - identity_api_version: 3 - endpoint_type: internalURL - auth: - username: 'joeuser' - password: 'mypasswrd' - project_name: 'intel' - project_domain_name: 'default' - user_domain_name: 'default' - auth_url: 'http://keystone.openstack.svc.cluster.local/v3' - EOF - - export OS_CLOUD=openstack_helm - -#. Create flavor. 
- - Set resource CUSTOM_BM corresponding to **--resource-class bm**: - - :: - - openstack flavor create --ram 4096 --vcpus 4 --disk 400 \ - --property resources:CUSTOM_BM=1 \ - --property resources:VCPU=0 \ - --property resources:MEMORY_MB=0 \ - --property resources:DISK_GB=0 \ - --property capabilities:boot_option='local' \ - bm-flavor - - See `Adding scheduling information - `__ - and `Configure Nova flavors - `__ - for more information. - -#. Enable service - - List the compute services: - - :: - - openstack compute service list - - Set compute service properties: - - :: - - openstack compute service set --enable controller-0 nova-compute - -#. Create instance - - .. note:: - - The :command:`keypair create` command is optional. It is not required to - enable a bare metal instance. - - :: - - openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey - - - Create 2 new servers, one bare metal and one virtual: - - :: - - openstack server create --image centos --flavor bm-flavor \ - --network baremetal --key-name mykey bm - - openstack server create --image centos --flavor m1.small \ - --network baremetal --key-name mykey vm diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-or-a-host.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-or-a-host.rst deleted file mode 100644 index e40f2eea8..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-or-a-host.rst +++ /dev/null @@ -1,39 +0,0 @@ - -.. deo1552927844327 -.. _reinstalling-a-system-or-a-host-r6: - -============================ -Reinstall a System or a Host -============================ - -You can reinstall individual hosts or the entire system if necessary. -Reinstalling host software or deleting and re-adding a host node may be -required to complete certain configuration changes. - -.. rubric:: |context| - -For a summary of changes that require system or host reinstallation, see -|node-doc|: :ref:`Configuration Changes Requiring Re-installation -`. - -To reinstall an entire system, refer to the Installation Guide for your system -type \(for example, Standard or All-in-one\). - -.. note:: - To simplify system reinstallation, you can export and reuse an existing - system configuration. For more information, see :ref:`Reinstalling a System - Using an Exported Host Configuration File - `. - -To reinstall the software on a host using the Host Inventory controls, see -|node-doc|: :ref:`Host Inventory `. In some cases, you must delete -the host instead, and then re-add it using the standard host installation -procedure. This applies if the system inventory record must be corrected to -complete the configuration change \(for example, if the |MAC| address of the -management interface has changed\). - -- :ref:`Reinstalling a System Using an Exported Host Configuration File - ` - -- :ref:`Exporting Host Configurations ` - diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file.rst deleted file mode 100644 index 2fd44676d..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file.rst +++ /dev/null @@ -1,45 +0,0 @@ - -.. wuh1552927822054 -.. 
_reinstalling-a-system-using-an-exported-host-configuration-file-r6: - -============================================================ -Reinstall a System Using an Exported Host Configuration File -============================================================ - -You can reinstall a system using the host configuration file that is generated -using the :command:`host-bulk-export` command. - -.. rubric:: |prereq| - -For the following procedure, **controller-0** must be the active controller. - -.. rubric:: |proc| - -#. Create a host configuration file using the :command:`system - host-bulk-export` command, as described in :ref:`Exporting Host - Configurations `. - -#. Copy the host configuration file to a USB drive or somewhere off the - controller hard disk. - -#. Edit the host configuration file as needed, for example to specify power-on - or |BMC| information. - -#. Delete all the hosts except **controller-0** from the inventory. - -#. Reinstall the |prod| software on **controller-0**, which must be the active - controller. - -#. Run :command:`Ansible Bootstrap playbook`. - -#. Follow the instructions for using the :command:`system host-bulk-add` - command, as detailed in :ref:`Adding Hosts in Bulk `. - -.. rubric:: |postreq| - -After adding the host, you must provision it according to the requirements of -the personality. - -.. xbooklink For more information, see :ref:`Installing, Configuring, and - Unlocking Nodes `, for your system, - and follow the *Configure* steps for the appropriate node personality. diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage.rst deleted file mode 100644 index 4e85600fe..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage.rst +++ /dev/null @@ -1,22 +0,0 @@ -======================================================= -Bare metal Standard with Rook Storage Installation R6.0 -======================================================= - --------- -Overview --------- - -.. include:: /shared/_includes/desc_rook_storage.txt - -.. include:: /shared/_includes/ipv6_note.txt - - ------------- -Installation ------------- - -.. toctree:: - :maxdepth: 1 - - rook_storage_hardware - rook_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_hardware.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_hardware.rst deleted file mode 100644 index 8bea97547..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_hardware.rst +++ /dev/null @@ -1,73 +0,0 @@ -===================== -Hardware Requirements -===================== - -This section describes the hardware requirements and server preparation for a -**StarlingX R6.0 bare metal Standard with Rook Storage** deployment -configuration. - -.. 
contents:: - :local: - :depth: 1 - ------------------------------ -Minimum hardware requirements ------------------------------ - -The recommended minimum hardware requirements for bare metal servers for various -host types are: - -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum Requirement | Controller Node | Worker Node for rook | Worker Node for | -| | | storage | application | -+=====================+===========================+=======================+=======================+ -| Number of servers | 2 | 2-9 | 2-100 | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket | -| class | | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum memory | 64 GB | 64 GB | 32 GB | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Primary disk | 500 GB SSD or NVMe (see | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) | -| | :ref:`nvme_config`) | | | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Additional disks | None | - 1 or more 500 GB | - For OpenStack, | -| | | (min. 10K RPM) for | recommend 1 or more | -| | | Ceph OSD | 500 GB (min. 10K | -| | | - Recommended, but | RPM) for VM | -| | | not required: 1 or | ephemeral storage | -| | | more SSDs or NVMe | | -| | | drives for Ceph | | -| | | journals (min. 1024 | | -| | | MiB per OSD | | -| | | journal) | | -+---------------------+---------------------------+-----------------------+-----------------------+ -| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: | -| ports | 1x10GE | 1x10GE | 1x10GE | -| | - OAM: 1x1GE | | - Data: 1 or more | -| | | | x 10GE | -+---------------------+---------------------------+-----------------------+-----------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+---------------------+---------------------------+-----------------------+-----------------------+ - --------------------------- -Prepare bare metal servers --------------------------- - -.. include:: prep_servers.txt - -* Cabled for networking - - * Far-end switch ports should be properly configured to realize the networking - shown in the following diagram. - - .. 
figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-controller-storage.png - :scale: 50% - :alt: Standard with Rook Storage deployment configuration - - *Standard with Rook Storage deployment configuration* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_install_kubernetes.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_install_kubernetes.rst deleted file mode 100644 index c65a5012b..000000000 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/rook_storage_install_kubernetes.rst +++ /dev/null @@ -1,752 +0,0 @@ -===================================================================== -Install StarlingX Kubernetes on Bare Metal Standard with Rook Storage -===================================================================== - -This section describes the steps to install the StarlingX Kubernetes platform -on a **StarlingX R6.0 bare metal Standard with Rook Storage** deployment -configuration. - -.. contents:: - :local: - :depth: 1 - -------------------- -Create bootable USB -------------------- - -Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to -create a bootable USB with the StarlingX ISO on your system. - --------------------------------- -Install software on controller-0 --------------------------------- - -.. incl-install-software-controller-0-standard-start: - -#. Insert the bootable USB into a bootable USB port on the host you are - configuring as controller-0. - -#. Power on the host. - -#. Attach to a console, ensure the host boots from the USB, and wait for the - StarlingX Installer Menus. - -#. Make the following menu selections in the installer: - - #. First menu: Select 'Standard Controller Configuration' - #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on - your terminal access to the console port - -#. Wait for non-interactive install of software to complete and server to reboot. - This can take 5-10 minutes, depending on the performance of the server. - -.. incl-install-software-controller-0-standard-end: - --------------------------------- -Bootstrap system on controller-0 --------------------------------- - -.. incl-bootstrap-sys-controller-0-standard-start: - -#. Login using the username / password of "sysadmin" / "sysadmin". - - When logging in for the first time, you will be forced to change the password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. Verify and/or configure IP connectivity. - - External connectivity is required to run the Ansible bootstrap playbook. The - StarlingX boot image will DHCP out all interfaces so the server may have - obtained an IP address and have external IP connectivity if a DHCP server is - present in your environment. Verify this using the :command:`ip addr` and - :command:`ping 8.8.8.8` commands. - - Otherwise, manually configure an IP address and default IP route. Use the - PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your - deployment environment. - - :: - - sudo ip address add / dev - sudo ip link set up dev - sudo ip route add default via dev - ping 8.8.8.8 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible - configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. 
Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml`` - The default configuration values for the bootstrap playbook. - - ``sysadmin home directory ($HOME)`` - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: ``$HOME/.yml``. - - .. include:: /shared/_includes/ansible_install_time_only.txt - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - #. Use a copy of the default.yml file listed above to provide your overrides. - - The default.yml file lists all available parameters for bootstrap - configuration with a brief description for each parameter in the file comments. - - To use this method, copy the default.yml file listed above to - ``$HOME/localhost.yml`` and edit the configurable values as desired. - - #. Create a minimal user configuration override file. - - To use this method, create your override file at ``$HOME/localhost.yml`` - and provide the minimum required parameters for the deployment configuration - as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing - applicable to your deployment environment. - - :: - - cd ~ - cat < localhost.yml - system_mode: duplex - - dns_servers: - - 8.8.8.8 - - 8.8.4.4 - - external_oam_subnet: / - external_oam_gateway_address: - external_oam_floating_address: - external_oam_node_0_address: - external_oam_node_1_address: - - admin_username: admin - admin_password: - ansible_become_pass: - - # Add these lines to configure Docker to use a proxy server - # docker_http_proxy: http://my.proxy.com:1080 - # docker_https_proxy: https://my.proxy.com:1443 - # docker_no_proxy: - # - 1.2.3.4 - - EOF - - Refer to :ref:`Ansible Bootstrap Configurations ` - for information on additional Ansible bootstrap configurations for advanced - Ansible bootstrap scenarios, such as Docker proxies when deploying behind a - firewall, etc. Refer to :ref:`Docker Proxy Configuration ` - for details about Docker proxy settings. - -#. Run the Ansible bootstrap playbook: - - :: - - ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. - This can take 5-10 minutes, depending on the performance of the host machine. - -.. incl-bootstrap-sys-controller-0-standard-end: - - ----------------------- -Configure controller-0 ----------------------- - -.. incl-config-controller-0-storage-start: - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the OAM and MGMT interfaces of controller-0 and specify the - attached networks. Use the OAM and MGMT port names, for example eth0, that are - applicable to your deployment environment. - - .. code-block:: bash - - OAM_IF= - MGMT_IF= - system host-if-modify controller-0 lo -c none - IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') - for UUID in $IFNET_UUIDS; do - system interface-network-remove ${UUID} - done - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - system host-if-modify controller-0 $MGMT_IF -c platform - system interface-network-assign controller-0 $MGMT_IF mgmt - system interface-network-assign controller-0 $MGMT_IF cluster-host - -#. 
Configure NTP servers for network time synchronization: - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - -#. If required, and not already done as part of bootstrap, configure Docker to - use a proxy server. - - #. List Docker proxy parameters: - - :: - - system service-parameter-list platform docker - - #. Refer to :ref:`docker_proxy_config` for - details about Docker proxy settings. - -************************************* -OpenStack-specific host configuration -************************************* - -.. important:: - - This step is required only if the StarlingX OpenStack application - (|prefix|-openstack) will be installed. - -#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the |prefix|-openstack manifest and helm-charts later. - - :: - - system host-label-assign controller-0 openstack-control-plane=enabled - -#. **For OpenStack only:** Configure the system setting for the vSwitch. - - StarlingX has |OVS| (kernel-based) vSwitch configured as default: - - * Runs in a container; defined within the helm charts of |prefix|-openstack - manifest. - * Shares the core(s) assigned to the platform. - - If you require better performance, |OVS|-|DPDK| (|OVS| with the Data Plane - Development Kit, which is supported only on bare metal hardware) should be - used: - - * Runs directly on the host (it is not containerized). - * Requires that at least 1 core be assigned/dedicated to the vSwitch - function. - - To deploy the default containerized |OVS|: - - :: - - system modify --vswitch_type none - - Do not run any vSwitch directly on the host, instead, use the containerized - |OVS| defined in the helm charts of |prefix|-openstack manifest. - - To deploy |OVS|-|DPDK|, run the following command: - - :: - - system modify --vswitch_type ovs-dpdk - system host-cpu-modify -f vswitch -p0 1 controller-0 - - Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created will - default to automatically assigning 1 vSwitch core for |AIO| controllers and - 2 vSwitch cores for compute-labeled worker nodes. - - When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with the - following command: - - :: - - system host-memory-modify -f -1G <1G hugepages number> - - For example: - - :: - - system host-memory-modify -f vswitch -1G 1 worker-0 0 - - |VMs| created in an |OVS|-|DPDK| environment must be configured to use huge - pages to enable networking and must use a flavor with property: - hw:mem_page_size=large - - Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with the - command: - - :: - - system host-memory-modify -1G <1G hugepages number> - - For example: - - :: - - system host-memory-modify worker-0 0 -1G 10 - - .. note:: - - After controller-0 is unlocked, changing vswitch_type requires - locking and unlocking all compute-labeled worker nodes (and/or |AIO| - controllers) to apply the change. - -.. incl-config-controller-0-storage-end: - -******************************** -Rook-specific host configuration -******************************** - -.. 
important:: - - **This step is required only if the StarlingX Rook application will be - installed.** - -**For Rook only:** Assign Rook host labels to controller-0 in support of -installing the rook-ceph-apps manifest/helm-charts later, and add ceph-rook -as the storage backend: - -:: - - system host-label-assign controller-0 ceph-mon-placement=enabled - system host-label-assign controller-0 ceph-mgr-placement=enabled - system storage-backend-add ceph-rook --confirmed - -------------------- -Unlock controller-0 -------------------- - -Unlock controller-0 in order to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -------------------------------------------------- -Install software on controller-1 and worker nodes -------------------------------------------------- - -#. Power on the controller-1 server and force it to network boot with the - appropriate BIOS boot options for your particular server. - -#. As controller-1 boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered - controller-1 host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 2 personality=controller - - This initiates the install of software on controller-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. While waiting for the previous step to complete, power on the worker nodes. - Set the personality to 'worker' and assign a unique hostname for each. - - For example, power on worker-0 and wait for the new host (hostname=None) to - be discovered by checking 'system host-list': - - :: - - system host-update 3 personality=worker hostname=worker-0 - - Repeat for worker-1. Power on worker-1 and wait for the new host - (hostname=None) to be discovered by checking 'system host-list': - - :: - - system host-update 4 personality=worker hostname=worker-1 - - For Rook storage, there is no storage personality; storage services are - provided by hosts with the worker personality, which are named storage-x - here. Repeat for storage-0 and storage-1. Power on storage-0, storage-1 - and wait for the new host (hostname=None) to be discovered by checking - 'system host-list': - - :: - - system host-update 5 personality=worker hostname=storage-0 - system host-update 6 personality=worker hostname=storage-1 - - .. only:: starlingx - - .. Note:: - - A node with Edgeworker personality is also available. See - :ref:`deploy-edgeworker-nodes` for details. - -#. Wait for the software installation on controller-1, worker-0, and worker-1 - to complete, for all servers to reboot, and for all to show as - locked/disabled/online in 'system host-list'.
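   While those installs proceed, the host states can be polled periodically
   instead of re-running the command by hand. A minimal convenience sketch,
   assuming the ``watch`` utility is present on the controller and the admin
   credentials have already been sourced (the 60-second interval is arbitrary);
   the expected final state is shown in the listing that follows:

   ::

       # Re-run 'system host-list' every 60 seconds until all new hosts
       # show locked/disabled/online
       watch -n 60 system host-list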
- - :: - - system host-list - - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - | 3 | worker-0 | worker | locked | disabled | online | - | 4 | worker-1 | worker | locked | disabled | online | - | 5 | storage-0 | worker | locked | disabled | online | - | 6 | storage-1 | worker | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - ----------------------- -Configure controller-1 ----------------------- - -.. incl-config-controller-1-start: - -Configure the |OAM| and MGMT interfaces of controller-0 and specify the -attached networks. Use the |OAM| and MGMT port names, for example eth0, that -are applicable to your deployment environment. - -(Note that the MGMT interface is partially set up automatically by the network -install procedure.) - -:: - - OAM_IF= - MGMT_IF= - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - system interface-network-assign controller-1 $MGMT_IF cluster-host - -************************************* -OpenStack-specific host configuration -************************************* - -.. important:: - - This step is required only if the StarlingX OpenStack application - (|prefix|-openstack) will be installed. - -**For OpenStack only:** Assign OpenStack host labels to controller-1 in support -of installing the |prefix|-openstack manifest and helm-charts later. - -:: - - system host-label-assign controller-1 openstack-control-plane=enabled - -.. incl-config-controller-1-end: - -******************************** -Rook-specific host configuration -******************************** - -.. important:: - - **This step is required only if the StarlingX Rook application will be - installed.** - -**For Rook only:** Assign Rook host labels to controller-1 in support of -installing the rook-ceph-apps manifest/helm-charts later: - -:: - - system host-label-assign controller-1 ceph-mon-placement=enabled - system host-label-assign controller-1 ceph-mgr-placement=enabled - -------------------- -Unlock controller-1 -------------------- - -.. incl-unlock-controller-1-start: - -Unlock controller-1 in order to bring it into service: - -:: - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -.. incl-unlock-controller-1-end: - ----------------------- -Configure worker nodes ----------------------- - -#. Assign the cluster-host network to the MGMT interface for the worker nodes: - - (Note that the MGMT interfaces are partially set up automatically by the - network install procedure.) - - :: - - for NODE in worker-0 worker-1; do - system interface-network-assign $NODE mgmt0 cluster-host - done - -#. Configure data interfaces for worker nodes. Use the DATA port names, for - example eth0, that are applicable to your deployment environment. - - .. important:: - - This step is **required** for OpenStack. - - This step is optional for Kubernetes: Do this step if using |SRIOV| - network attachments in hosted application containers. 
- - For Kubernetes |SRIOV| network attachments: - - * Configure |SRIOV| device plug in: - - :: - - for NODE in worker-0 worker-1; do - system host-label-assign ${NODE} sriovdp=enabled - done - - * If planning on running |DPDK| in containers on this host, configure the - number of 1G Huge pages required on both |NUMA| nodes: - - :: - - for NODE in worker-0 worker-1; do - system host-memory-modify ${NODE} 0 -1G 100 - system host-memory-modify ${NODE} 1 -1G 100 - done - - For both Kubernetes and OpenStack: - - :: - - DATA0IF= - DATA1IF= - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - - # configure the datanetworks in sysinv, prior to referencing it - # in the ``system host-if-modify`` command'. - system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - for NODE in worker-0 worker-1; do - echo "Configuring interface for: $NODE" - set -ex - system host-port-list ${NODE} --nowrap > ${SPL} - system host-if-list -a ${NODE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID} - system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1} - set +ex - done - -************************************* -OpenStack-specific host configuration -************************************* - -.. important:: - - This step is required only if the StarlingX OpenStack application - (|prefix|-openstack) will be installed. - -#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in - support of installing the |prefix|-openstack manifest and helm-charts later. - - .. only:: starlingx - - .. parsed-literal:: - - for NODE in worker-0 worker-1; do - system host-label-assign $NODE openstack-compute-node=enabled - kubectl taint nodes $NODE openstack-compute-node:NoSchedule - system host-label-assign $NODE |vswitch-label| - system host-label-assign $NODE sriov=enabled - done - - .. only:: partner - - .. include:: /_includes/rook_storage_install_kubernetes.rest - :start-after: ref1-begin - :end-before: ref1-end - -#. **For OpenStack only:** Set up disk partition for nova-local volume group, - which is needed for |prefix|-openstack nova ephemeral disks. 
- - :: - - for NODE in worker-0 worker-1; do - echo "Configuring Nova local for: $NODE" - ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - PARTITION_SIZE=10 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${NODE} nova-local - system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID} - done - --------------------- -Unlock worker nodes --------------------- - -Unlock worker nodes in order to bring them into service: - -.. code-block:: bash - - for NODE in worker-0 worker-1; do - system host-unlock $NODE - done - -The worker nodes will reboot in order to apply configuration changes and come -into service. This can take 5-10 minutes, depending on the performance of the -host machine. - ------------------------ -Configure storage nodes ------------------------ - -#. Assign the cluster-host network to the MGMT interface for the storage nodes. - - Note that the MGMT interfaces are partially set up by the network install - procedure. - - .. code-block:: bash - - for NODE in storage-0 storage-1; do - system interface-network-assign $NODE mgmt0 cluster-host - done - -#. **For Rook only:** Assign Rook host labels to storage-0 in support - of installing the rook-ceph-apps manifest/helm-charts later: - - :: - - system host-label-assign storage-0 ceph-mon-placement=enabled - --------------------- -Unlock storage nodes --------------------- - -Unlock storage nodes in order to bring them into service: - -.. code-block:: bash - - for STORAGE in storage-0 storage-1; do - system host-unlock $STORAGE - done - -The storage nodes will reboot in order to apply configuration changes and come -into service. This can take 5-10 minutes, depending on the performance of the -host machine. - -------------------------------------------------- -Install Rook application manifest and helm-charts -------------------------------------------------- - -On host storage-0 and storage-1: - -#. Erase gpt header of disk sdb. - - :: - - $ system host-disk-wipe -s --confirm storage-0 /dev/sdb - $ system host-disk-wipe -s --confirm storage-1 /dev/sdb - -#. Wait for application "rook-ceph-apps" uploaded - - :: - - $ source /etc/platform/openrc - $ system application-list - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - | application | version | manifest name | manifest file | status | progress | - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - | oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed | - | platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed | - | rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed | - +---------------------+---------+-------------------------------+---------------+----------+-----------+ - -#. Edit values.yaml for rook-ceph-apps. - - :: - - cluster: - storage: - nodes: - - name: storage-0 - devices: - - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 - - name: storage-1 - devices: - - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 - -#. Update rook-ceph-apps override value. 
- - :: - - system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml - -#. Apply the rook-ceph-apps application. - - :: - - system application-apply rook-ceph-apps - -#. Wait for OSDs pod ready. - - :: - - kubectl get pods -n kube-system - rook-ceph-mgr-a-ddffc8fbb-zkvln 1/1 Running 0 66s - rook-ceph-mon-a-c67fdb6c8-tlbvk 1/1 Running 0 2m11s - rook-ceph-mon-b-76969d8685-wcq62 1/1 Running 0 2m2s - rook-ceph-mon-c-5bc47c6cb9-vm4j8 1/1 Running 0 97s - rook-ceph-operator-6fc8cfb68b-bb57z 1/1 Running 1 7m9s - rook-ceph-osd-0-689b6f65b-2nvcx 1/1 Running 0 12s - rook-ceph-osd-1-7bfd69fdf9-vjqmp 1/1 Running 0 4s - rook-ceph-osd-prepare-rook-storage-0-hf28p 0/1 Completed 0 50s - rook-ceph-osd-prepare-rook-storage-1-r6lsd 0/1 Completed 0 50s - rook-ceph-tools-84c7fff88c-x5trx 1/1 Running 0 6m11s - ----------- -Next steps ----------- - -.. include:: /_includes/kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/r6_release/distributed_cloud/index-install-r6-distcloud-46f4880ec78b.rst b/doc/source/deploy_install_guides/r6_release/distributed_cloud/index-install-r6-distcloud-46f4880ec78b.rst deleted file mode 100644 index eab073fef..000000000 --- a/doc/source/deploy_install_guides/r6_release/distributed_cloud/index-install-r6-distcloud-46f4880ec78b.rst +++ /dev/null @@ -1,317 +0,0 @@ -.. _index-install-r6-distcloud-46f4880ec78b: - -=================================== -Distributed Cloud Installation R6.0 -=================================== - -This section describes how to install and configure the StarlingX distributed -cloud deployment. - -.. contents:: - :local: - :depth: 1 - --------- -Overview --------- - -Distributed cloud configuration supports an edge computing solution by -providing central management and orchestration for a geographically -distributed network of StarlingX Kubernetes edge systems/clusters. - -The StarlingX distributed cloud implements the OpenStack Edge Computing -Groups's MVP `Edge Reference Architecture -`_, -specifically the "Distributed Control Plane" scenario. - -The StarlingX distributed cloud deployment is designed to meet the needs of -edge-based data centers with centralized orchestration and independent control -planes, and in which Network Function Cloudification (NFC) worker resources -are localized for maximum responsiveness. The architecture features: - -- Centralized orchestration of edge cloud control planes. -- Full synchronized control planes at edge clouds (that is, Kubernetes cluster - master and nodes), with greater benefits for local services, such as: - - - Reduced network latency. - - Operational availability, even if northbound connectivity - to the central cloud is lost. - -The system supports a scalable number of StarlingX Kubernetes edge -systems/clusters, which are centrally managed and synchronized over L3 -networks from a central cloud. Each edge system is also highly scalable, from -a single node StarlingX Kubernetes deployment to a full standard cloud -configuration with controller, worker and storage nodes. - ------------------------------- -Distributed cloud architecture ------------------------------- - -A distributed cloud system consists of a central cloud, and one or more -subclouds connected to the SystemController region central cloud over L3 -networks, as shown in Figure 1. 
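The two regions described in the list that follows are also what an operator
switches between on the command line of the central cloud. The snippet below is
a brief, hedged sketch of that workflow; the ``OS_REGION_NAME`` handling shown
here is an assumption and may vary by release:

.. code:: sh

   # Acquire platform credentials on the central cloud (RegionOne by default)
   source /etc/platform/openrc

   # RegionOne: manage the physical platform of the central cloud
   system host-list

   # SystemController: manage and orchestrate the subclouds
   export OS_REGION_NAME=SystemController
   dcmanager subcloud list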
- -- **Central cloud** - - The central cloud provides a *RegionOne* region for managing the physical - platform of the central cloud and the *SystemController* region for managing - and orchestrating over the subclouds. - - - **RegionOne** - - In the Horizon GUI, RegionOne is the name of the access mode, or region, - used to manage the nodes in the central cloud. - - - **SystemController** - - In the Horizon GUI, SystemController is the name of the access mode, or - region, used to manage the subclouds. - - You can use the System Controller to add subclouds, synchronize select - configuration data across all subclouds and monitor subcloud operations - and alarms. System software updates for the subclouds are also centrally - managed and applied from the System Controller. - - DNS, NTP, and other select configuration settings are centrally managed - at the System Controller and pushed to the subclouds in parallel to - maintain synchronization across the distributed cloud. - -- **Subclouds** - - The subclouds are StarlingX Kubernetes edge systems/clusters used to host - containerized applications. Any type of StarlingX Kubernetes configuration, - (including simplex, duplex, or standard with or without storage nodes), can - be used for a subcloud. - - The two edge clouds shown in Figure 1 are subclouds. - - Alarms raised at the subclouds are sent to the System Controller for - central reporting. - -.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-distributed-cloud.png - :scale: 45% - :alt: Distributed cloud deployment configuration - - *Figure 1: Distributed cloud deployment configuration* - - --------------------- -Network requirements --------------------- - -Subclouds are connected to the System Controller through both the OAM and the -Management interfaces. Because each subcloud is on a separate L3 subnet, the -OAM, Management and PXE boot L2 networks are local to the subclouds. They are -not connected via L2 to the central cloud, they are only connected via L3 -routing. The settings required to connect a subcloud to the System Controller -are specified when a subcloud is defined. A gateway router is required to -complete the L3 connections, which will provide IP routing between the -subcloud Management and OAM IP subnet and the System Controller Management and -OAM IP subnet, respectively. The System Controller bootstraps the subclouds via -the OAM network, and manages them via the management network. For more -information, see the `Install a Subcloud`_ section later in this guide. - -.. note:: - - All messaging between System Controllers and Subclouds uses the ``admin`` - REST API service endpoints which, in this distributed cloud environment, - are all configured for secure HTTPS. Certificates for these HTTPS - connections are managed internally by StarlingX. - ---------------------------------------- -Install and provision the central cloud ---------------------------------------- - -Installing the central cloud is similar to installing a standard -StarlingX Kubernetes system. The central cloud supports either an AIO-duplex -deployment configuration or a standard with dedicated storage nodes deployment -configuration. - -To configure controller-0 as a distributed cloud central controller, you must -set certain system parameters during the initial bootstrapping of -controller-0. Set the system parameter *distributed_cloud_role* to -*systemcontroller* in the Ansible bootstrap override file. 
Also, set the -management network IP address range to exclude IP addresses reserved for -gateway routers providing routing to the subclouds' management subnets. - -Procedure: - -- Follow the StarlingX R6.0 installation procedures with the extra step noted below: - - - AIO-duplex: - `Bare metal All-in-one Duplex Installation R6.0 `_ - - - Standard with dedicated storage nodes: - `Bare metal Standard with Dedicated Storage Installation R6.0 `_ - -- For the step "Bootstrap system on controller-0", add the following - parameters to the Ansible bootstrap override file. - - .. code:: yaml - - distributed_cloud_role: systemcontroller - management_start_address: - management_end_address: - ------------------- -Install a subcloud ------------------- - -At the subcloud location: - -#. Physically install and cable all subcloud servers. - -#. Physically install the top of rack switch and configure it for the - required networks. - -#. Physically install the gateway routers which will provide IP routing - between the subcloud OAM and Management subnets and the System Controller - OAM and management subnets. - -#. On the server designated for controller-0, install the StarlingX - Kubernetes software from USB or a PXE Boot server. - -#. Establish an L3 connection to the System Controller by enabling the OAM - interface (with OAM IP/subnet) on the subcloud controller using the - ``config_management`` script. This step is for subcloud ansible bootstrap - preparation. - - .. note:: This step should **not** use an interface that uses the MGMT - IP/subnet because the MGMT IP subnet will get moved to the loopback - address by the Ansible bootstrap playbook during installation. - - Be prepared to provide the following information: - - - Subcloud OAM interface name (for example, enp0s3). - - Subcloud OAM interface address, in CIDR format (for example, 10.10.10.12/24). - - .. note:: This must match the *external_oam_floating_address* supplied in - the subcloud's ansible bootstrap override file. - - - Subcloud gateway address on the OAM network - (for example, 10.10.10.1). A default value is shown. - - System Controller OAM subnet (for example, 10,10.10.0/24). - - .. note:: To exit without completing the script, use ``CTRL+C``. Allow a few minutes for - the script to finish. - - .. note:: The `config_management` in the code snippet configures the OAM - interface/address/gateway. - - .. code:: sh - - $ sudo config_management - Enabling interfaces... DONE - Waiting 120 seconds for LLDP neighbor discovery... Retrieving neighbor details... DONE - Available interfaces: - local interface remote port - --------------- ---------- - enp0s3 08:00:27:c4:6c:7a - enp0s8 08:00:27:86:7a:13 - enp0s9 unknown - - Enter management interface name: enp0s3 - Enter management address CIDR: 10.10.10.12/24 - Enter management gateway address [10.10.10.1]: - Enter System Controller subnet: 10.10.10.0/24 - Disabling non-management interfaces... DONE - Configuring management interface... DONE - RTNETLINK answers: File exists - Adding route to System Controller... DONE - -At the System Controller: - -#. Create a ``bootstrap-values.yml`` override file for the subcloud. For - example: - - .. 
code:: yaml - - system_mode: duplex - name: "subcloud1" - description: "Ottawa Site" - location: "YOW" - - management_subnet: 192.168.101.0/24 - management_start_address: 192.168.101.2 - management_end_address: 192.168.101.50 - management_gateway_address: 192.168.101.1 - - external_oam_subnet: 10.10.10.0/24 - external_oam_gateway_address: 10.10.10.1 - external_oam_floating_address: 10.10.10.12 - - systemcontroller_gateway_address: 192.168.204.101 - - .. important:: The `management_*` entries in the override file are required - for all installation types, including AIO-Simplex. - - .. important:: The `management_subnet` must not overlap with that of any other subcloud. - - .. note:: The `systemcontroller_gateway_address` is the address of the central - cloud's management network gateway. - -#. Add the subcloud using the CLI command below: - - .. code:: sh - - dcmanager subcloud add --bootstrap-address <bootstrap-address> - --bootstrap-values <bootstrap-values> - - Where: - - - **<bootstrap-address>** is the OAM interface address set earlier on the subcloud. - - **<bootstrap-values>** is the Ansible override configuration file, ``bootstrap-values.yml``, - created earlier in step 1. - - You will be prompted for the Linux password of the subcloud. This command - will take 5-10 minutes to complete. You can monitor the progress of the - subcloud bootstrap through logs: - - .. code:: sh - - tail -f /var/log/dcmanager/_bootstrap_
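   Once the playbook completes, the subcloud's deployment state can be verified
   centrally from the System Controller. The commands below are a hedged sketch
   of that verification; ``subcloud1`` is the example name from the override
   file above, and the exact state values reported may vary by release:

   .. code:: sh

      # List all subclouds with their management, availability and deploy status
      dcmanager subcloud list

      # Inspect a single subcloud in detail
      dcmanager subcloud show subcloud1

      # After the subcloud's controller-0 is unlocked and the subcloud reports
      # online, bring it under central management
      dcmanager subcloud manage subcloud1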