Merge "Simplify install dirs"
@@ -1,21 +0,0 @@

.. begin-prod-an-1
.. end-prod-an-1

.. begin-prod-an-2
.. end-prod-an-2

.. begin-dec-and-imp
.. end-dec-and-imp

.. begin-declarative
.. end-declarative

.. begin-install-prereqs
.. end-install-prereqs

.. begin-prep-servers
.. end-prep-servers

.. begin-known-issues
.. end-known-issues
@@ -14,9 +14,9 @@

Set proxy at bootstrap
----------------------

To set the Docker proxy at bootstrap time, refer to :doc:`Ansible Bootstrap
To set the Docker proxy at bootstrap time, refer to :ref:`Ansible Bootstrap
Configurations
</deploy_install_guides/r6_release/ansible_bootstrap_configs>`.
<ansible_bootstrap_configs_r7>`.

.. r3_end
@@ -1,3 +0,0 @@

.. begin-install-ctl-0
.. end-install-ctl-0
@@ -16,7 +16,7 @@ more bare metal servers.

settings. Refer to :ref:`docker_proxy_config` for
details.

.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-ironic.png
.. figure:: /deploy_install_guides/release/figures/starlingx-deployment-options-ironic.png
   :scale: 50%
   :alt: Standard with Ironic deployment configuration
@@ -4,4 +4,4 @@ For instructions on how to access StarlingX Kubernetes see

:ref:`kubernetes_access_r7`.

For instructions on how to install and access StarlingX OpenStack see
:ref:`index-install-r6-os-adc44604968c`.
:ref:`index-install-r7-os-adc44604968c`.
@@ -113,7 +113,7 @@ Certificate Authority.

Currently the Kubernetes root CA certificate and key can only be updated at
Ansible bootstrap time. For details, see
:ref:`Kubernetes root CA certificate and key <k8s-root-ca-cert-key-r6>`.
:ref:`Kubernetes root CA certificate and key <k8s-root-ca-cert-key-r7>`.

---------------------
Local Docker registry
@@ -191,4 +191,4 @@ starlingxdocs_plus_bug_project = 'starlingx'

starlingxdocs_plus_bug_tag = 'stx.docs'
starlingxdocs_plus_this_version = "Latest"
# starlingxdocs_plus_other_versions = [("even later","even-later"),("sooner","sooner")]
starlingxdocs_plus_other_versions = [("Version 6.0","r/stx.6.0"),("Version 7.0","r/stx.7.0"),("Latest","master")]
starlingxdocs_plus_other_versions = [("Version 6.0","r/stx.6.0"),("Version 7.0","r/stx.7.0")]
@@ -97,7 +97,7 @@ Up to fifty worker/compute nodes can be added to the All-in-one Duplex

deployment, allowing a capacity growth path if starting with an AIO-Duplex
deployment.

.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-duplex-extended.png
.. image:: /deploy_install_guides/release/figures/starlingx-deployment-options-duplex-extended.png
   :width: 800

The extended capacity is limited up to fifty worker/compute nodes as the
@@ -45,8 +45,7 @@ To view the archived installation guides, see

.. toctree::
   :hidden:

   r7_release/index-install-r7-8966076f0e81
   r6_release/index-install-r6-8966076f0e81
   release/index-install-r7-8966076f0e81

.. Add common files to toctree
@@ -1,434 +0,0 @@

.. _ansible_bootstrap_configs_r6:

================================
Ansible Bootstrap Configurations
================================

This section describes Ansible bootstrap configuration options.

.. contents::
   :local:
   :depth: 1


.. _install-time-only-params-r6:

----------------------------
Install-time-only parameters
----------------------------

Some Ansible bootstrap parameters cannot be changed, or are very difficult to
change, after installation is complete.

Review the set of install-time-only parameters before installation and confirm
that your values for these parameters are correct for the desired installation.

.. note::

   If you notice an incorrect install-time-only parameter value *before you
   unlock controller-0 for the first time*, you can re-run the Ansible bootstrap
   playbook with updated override values and the updated values will take effect.
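   For example, after correcting the value in your overrides file, the playbook
   can be re-run with the same command used in the installation guides (add
   ``--ask-vault-pass`` only if the overrides file was created with
   ansible-vault):

   ::

      ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml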
****************************
Install-time-only parameters
****************************

**System Properties**

* ``system_mode``
* ``distributed_cloud_role``

**Network Properties**

* ``pxeboot_subnet``
* ``pxeboot_start_address``
* ``pxeboot_end_address``
* ``management_subnet``
* ``management_start_address``
* ``management_end_address``
* ``cluster_host_subnet``
* ``cluster_host_start_address``
* ``cluster_host_end_address``
* ``cluster_pod_subnet``
* ``cluster_pod_start_address``
* ``cluster_pod_end_address``
* ``cluster_service_subnet``
* ``cluster_service_start_address``
* ``cluster_service_end_address``
* ``management_multicast_subnet``
* ``management_multicast_start_address``
* ``management_multicast_end_address``

**Docker Proxies**

* ``docker_http_proxy``
* ``docker_https_proxy``
* ``docker_no_proxy``

**Docker Registry Overrides**

* ``docker_registries``

  * ``k8s.gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``ghcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``quay.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.elastic.co``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``defaults``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

**Certificates**

* ``k8s_root_ca_cert``
* ``k8s_root_ca_key``

**Kubernetes Parameters**

* ``apiserver_oidc``
----
IPv6
----

If you are using IPv6, provide IPv6 configuration overrides for the Ansible
bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be
updated to IPv6 addressing.

Example IPv6 override values are shown below:

::

   dns_servers:
   - 2001:4860:4860::8888
   - 2001:4860:4860::8844
   pxeboot_subnet: 169.254.202.0/24
   management_subnet: 2001:db8:2::/64
   cluster_host_subnet: 2001:db8:3::/64
   cluster_pod_subnet: 2001:db8:4::/64
   cluster_service_subnet: 2001:db8:4::/112
   external_oam_subnet: 2001:db8:1::/64
   external_oam_gateway_address: 2001:db8::1
   external_oam_floating_address: 2001:db8::2
   external_oam_node_0_address: 2001:db8::3
   external_oam_node_1_address: 2001:db8::4
   management_multicast_subnet: ff08::1:1:0/124

.. note::

   The `external_oam_node_0_address` and `external_oam_node_1_address` parameters
   are not required for the AIO-SX installation.
----------------
Private registry
----------------

To bootstrap StarlingX you must pull container images for multiple system
services. By default these container images are pulled from the public registries
k8s.gcr.io, gcr.io, quay.io, and docker.io.

It may be required (or desired) to copy the container images to a private
registry and pull the images from the private registry (instead of the public
registries) as part of the StarlingX bootstrap. For example, a private registry
would be required if a StarlingX system was deployed in an air-gapped network
environment.

Use the `docker_registries` structure in the bootstrap overrides file to specify
alternate registries for the public registries from which container images are
pulled. These alternate registries are used during the bootstrapping of
controller-0, and on :command:`system application-apply` of application packages.

The `docker_registries` structure is a map of public registries to the
alternate registry values for each public registry. For each public registry,
the key is the fully scoped name of the public registry (for example
"k8s.gcr.io"), and the value holds the alternate registry URL and
username/password (if authenticated).
url
   The fully scoped registry name (and optionally namespace/) for the alternate
   registry location from which the images associated with this public registry
   should now be pulled.

   Valid formats for the `url` value are:

   * Domain. For example:

     ::

        example.domain

   * Domain with port. For example:

     ::

        example.domain:5000

   * IPv4 address. For example:

     ::

        1.2.3.4

   * IPv4 address with port. For example:

     ::

        1.2.3.4:5000

   * IPv6 address. For example:

     ::

        FD01::0100

   * IPv6 address with port. For example:

     ::

        [FD01::0100]:5000

username
   The username for logging into the alternate registry, if authenticated.

password
   The password for logging into the alternate registry, if authenticated.

Additional configuration options in the `docker_registries` structure are:

defaults
   A special public registry key which defines common values to be applied to
   all overrideable public registries. If only the `defaults` registry
   is defined, it will apply its `url`, `username`, and `password` to all
   registries.

   If values under specific registries are defined, they will override the
   values defined in the `defaults` registry.

   .. note::

      The `defaults` key was formerly called `unified`. It was renamed
      in StarlingX R3.0 and updated semantics were applied.

      This change affects anyone with a StarlingX installation prior to R3.0 that
      specifies alternate Docker registries using the `unified` key.

secure
   Specifies whether the registry supports HTTPS (secure) or HTTP (not secure).
   Applies to all alternate registries. A boolean value. The default value is
   True (secure, HTTPS).

   .. note::

      The ``secure`` parameter was formerly called ``is_secure_registry``. It was
      renamed in StarlingX R3.0.
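   As a minimal sketch, an insecure (HTTP-only) mirror for all registries could
   be declared as follows; the registry address is hypothetical:

   ::

      docker_registries:
        defaults:
          url: 10.10.10.5:5000
          secure: False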
If an alternate registry is specified to be secure (using HTTPS), the certificate
used by the registry may not be signed by a well-known Certificate Authority (CA).
This would cause the :command:`docker pull` of images from this registry to fail.
Use the `ssl_ca_cert` override to specify the public certificate of the CA that
signed the alternate registry's certificate. This will add the CA as a trusted
CA to the StarlingX system.

ssl_ca_cert
   The `ssl_ca_cert` value is the absolute path of the certificate file. The
   certificate must be in PEM format and the file may contain a single CA
   certificate or multiple CA certificates in a bundle.
The following example will apply `url`, `username`, and `password` to all
registries.

::

   docker_registries:
     defaults:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd

The next example applies `username` and `password` from the `defaults` registry
to all public registries. `url` is different for each public registry. It
additionally specifies an alternate CA certificate.

::

   docker_registries:
     k8s.gcr.io:
       url: my.k8sregistry.io
     gcr.io:
       url: my.gcrregistry.io
     ghcr.io:
       url: my.ghrcregistry.io
     docker.elastic.co:
       url: my.dockerregistry.io
     quay.io:
       url: my.quayregistry.io
     docker.io:
       url: my.dockerregistry.io
     defaults:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd

   ssl_ca_cert: /path/to/ssl_ca_cert_file
------------
Docker proxy
------------

If the StarlingX OAM interface or network is behind an http/https proxy, relative
to the Docker registries used by StarlingX or applications running on StarlingX,
then Docker within StarlingX must be configured to use these http/https proxies.

Use the following configuration overrides to configure your Docker proxy settings.

docker_http_proxy
   Specify the HTTP proxy URL to use. For example:

   ::

      docker_http_proxy: http://my.proxy.com:1080

docker_https_proxy
   Specify the HTTPS proxy URL to use. For example:

   ::

      docker_https_proxy: https://my.proxy.com:1443

docker_no_proxy
   A no-proxy address list can be provided for registries not on the other side
   of the proxies. This list will be added to the default no-proxy list derived
   from localhost, loopback, management, and OAM floating addresses at run time.
   Each address in the no-proxy list must neither contain a wildcard nor have
   subnet format. For example:

   ::

      docker_no_proxy:
        - 1.2.3.4
        - 5.6.7.8
.. _k8s-root-ca-cert-key-r6:

--------------------------------------
Kubernetes root CA certificate and key
--------------------------------------

By default the Kubernetes Root CA Certificate and Key are auto-generated and
result in the use of self-signed certificates for the Kubernetes API server. In
the case where self-signed certificates are not acceptable, use the bootstrap
override values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the
certificate and key for the Kubernetes root CA.

k8s_root_ca_cert
   Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert`
   value is the absolute path of the certificate file. The certificate must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_key`. The playbook will not proceed if only one value is provided.

k8s_root_ca_key
   Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key`
   value is the absolute path of the key file. The key must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_cert`. The playbook will not proceed if only one value is provided.
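As an illustration, the corresponding overrides in the bootstrap overrides file
might look like the following; the file names are hypothetical:

::

   k8s_root_ca_cert: /home/sysadmin/new-k8s-root-ca-cert.pem
   k8s_root_ca_key: /home/sysadmin/new-k8s-root-ca-key.pem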
.. important::

   The default expiry of the generated Kubernetes root CA certificate is 10
   years. Replacing the root CA certificate is an involved process, so the custom
   certificate expiry should be as long as possible. We recommend ensuring the
   root CA certificate has an expiry of at least 5-10 years.

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans
   Specifies a list of Subject Alternative Name entries that will be added to the
   Kubernetes API server certificate. Each entry in the list must be an IP address
   or domain name. For example:

   ::

      apiserver_cert_sans:
        - hostname.domain
        - 198.51.100.75

StarlingX automatically updates this parameter to include IP records for the OAM
floating IP and both OAM unit IP addresses.
----------------------------------------------------
OpenID Connect authentication for Kubernetes cluster
----------------------------------------------------

The Kubernetes cluster can be configured to use an external OpenID Connect
:abbr:`IDP (identity provider)`, such as Azure Active Directory, Salesforce, or
Google, for Kubernetes API authentication.

By default, OpenID Connect authentication is disabled. To enable OpenID Connect,
use the following configuration values in the Ansible bootstrap overrides file
to specify the IDP for OpenID Connect:

::

   apiserver_oidc:
     client_id:
     issuer_url:
     username_claim:

When the three required fields of the `apiserver_oidc` parameter are defined,
OpenID Connect is considered active. The values will be used to configure the
Kubernetes cluster to use the specified external OpenID Connect IDP for
Kubernetes API authentication.

In addition, you will need to configure the external OpenID Connect IDP and any
required OpenID client application according to the specific IDP's documentation.
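For illustration only, a filled-in override might look like the following; the
client ID, issuer URL, and claim are hypothetical and entirely IDP-specific:

::

   apiserver_oidc:
     client_id: stx-oidc-client-app
     issuer_url: https://idp.example.com/dex
     username_claim: email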
If not configuring OpenID Connect, all values should be absent from the
configuration file.

.. note::

   Default authentication via service account tokens is always supported,
   even when OpenID Connect authentication is configured.
@@ -1,60 +0,0 @@

.. jow1442253584837
.. _accessing-pxe-boot-server-files-for-a-custom-configuration-r6:

=======================================================
Access PXE Boot Server Files for a Custom Configuration
=======================================================

If you prefer, you can create a custom |PXE| boot configuration using the
installation files provided with |prod|.

.. rubric:: |context|

You can use the setup script included with the ISO image to copy the boot
configuration files and distribution content to a working directory. You can
use the contents of the working directory to construct a |PXE| boot environment
according to your own requirements or preferences.

For more information about using a |PXE| boot server, see :ref:`Configure a
PXE Boot Server <configuring-a-pxe-boot-server-r6>`.

.. rubric:: |proc|

.. _accessing-pxe-boot-server-files-for-a-custom-configuration-steps-www-gcz-3t-r6:

#. Copy the ISO image from the source \(product DVD, USB device, or
   |dnload-loc|\) to a temporary location on the |PXE| boot server.

   This example assumes that the copied image file is
   /tmp/TS-host-installer-1.0.iso.

#. Mount the ISO image and make it executable.

   .. code-block:: none

      $ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso
      $ mount -o remount,exec,dev /media/iso

#. Create and populate a working directory.

   Use a command of the following form (a worked example follows this
   procedure):

   .. code-block:: none

      $ /media/iso/pxeboot_setup.sh -u http://<ip-addr>/<symlink> -w <working-dir>

   where:

   **ip-addr**
      is the Apache listening address.

   **symlink**
      is a name for a symbolic link to be created under the Apache document
      root directory, pointing to the directory specified by <working-dir>.

   **working-dir**
      is the path to the working directory.

#. Copy the required files from the working directory to your custom |PXE|
   boot server directory.
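For illustration, a hypothetical invocation of the setup script with an Apache
listening address of 192.168.100.10, a symlink name of ``pxeboot``, and a
working directory of ``/export/pxeboot`` would be:

.. code-block:: none

   $ /media/iso/pxeboot_setup.sh -u http://192.168.100.10/pxeboot -w /export/pxeboot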
@@ -1,61 +0,0 @@

.. ulc1552927930507
.. _adding-hosts-in-bulk-r6:

=================
Add Hosts in Bulk
=================

You can add an arbitrary number of hosts using a single CLI command.

.. rubric:: |proc|

#. Prepare an XML file that describes the hosts to be added.

   For more information, see :ref:`Bulk Host XML File Format
   <bulk-host-xml-file-format-r6>`.

   You can also create the XML configuration file from an existing, running
   configuration using the :command:`system host-bulk-export` command.

#. Run the :command:`system host-bulk-add` utility.

   The command syntax is:

   .. code-block:: none

      ~[keystone_admin]$ system host-bulk-add <xml_file>

   where <xml\_file> is the name of the prepared XML file.

#. Power on the hosts to be added, if required.

   .. note::
      Hosts can be powered on automatically from board management controllers
      using settings in the XML file.

.. rubric:: |result|

The hosts are configured. The utility provides a summary report, as shown in
the following example:

.. code-block:: none

   Success:
   worker-0
   worker-1
   Error:
   controller-1: Host-add Rejected: Host with mgmt_mac 08:00:28:A9:54:19 already exists

.. rubric:: |postreq|

After adding the hosts, you must provision each one according to the
requirements of its personality.

.. xbooklink For more information, see :ref:`Installing, Configuring, and
   Unlocking Nodes <installing-configuring-and-unlocking-nodes>`, for your system,
   and follow the *Configure* steps for the appropriate node personality.

.. seealso::

   :ref:`Bulk Host XML File Format <bulk-host-xml-file-format-r6>`
@@ -1,177 +0,0 @@

.. pyp1552927946441
.. _adding-hosts-using-the-host-add-command-r6:

================================
Add Hosts Using the Command Line
================================

You can add hosts to the system inventory using the :command:`host-add` command.

.. rubric:: |context|

There are several ways to add hosts to |prod|; for an overview, see the
StarlingX Installation Guides,
`https://docs.starlingx.io/deploy_install_guides/index.html
<https://docs.starlingx.io/deploy_install_guides/index.html>`_ for your
system. Instead of powering up each host and then defining its personality and
other characteristics interactively, you can use the :command:`system host-add`
command to define hosts before you power them up. This can be useful for
scripting an initial setup.

.. note::
   On systems that use static IP address assignment on the management network,
   new hosts must be added to the inventory manually and assigned an IP
   address using the :command:`system host-add` command. If a host is not
   added successfully, the host console displays the following message at
   power-on:

   .. code-block:: none

      This system has been configured with static management
      and infrastructure IP address allocation. This requires
      that the node be manually provisioned in System
      Inventory using the 'system host-add' CLI, GUI, or
      stx API equivalent.

.. rubric:: |proc|

#. Add the host to the system inventory.

   .. note::
      The host must be added to the system inventory before it is powered on.

   On **controller-0**, acquire Keystone administrative privileges:

   .. code-block:: none

      $ source /etc/platform/openrc

   Use the :command:`system host-add` command to add a host and specify its
   personality. You can also specify the device used to display messages
   during boot.

   .. note::
      The hostname parameter is required for worker hosts. For controller and
      storage hosts, it is ignored.

   .. code-block:: none

      ~(keystone_admin)]$ system host-add -n <hostname> \
      -p <personality> [-s <subfunctions>] \
      [-l <location>] [-o <install_output> [-c <console>]] [-b <boot_device>] \
      [-r <rootfs_device>] [-m <mgmt_mac>] [-i <mgmt_ip>] [-D <ttys_dcd>] \
      [-T <bm_type> -I <bm_ip> -U <bm_username> -P <bm_password>]

   where

   **<hostname>**
      is a name to assign to the host. This is used for worker nodes only.
      Controller and storage node names are assigned automatically and
      override user input.

   **<personality>**
      is the host type. The following are valid values:

      - controller
      - worker
      - storage

   **<subfunctions>**
      are the host personality subfunctions \(used only for a worker host\).

      For a worker host, the only valid value is worker,lowlatency to enable
      a low-latency performance profile. For a standard performance profile,
      omit this option.

      For more information about performance profiles, see |deploy-doc|:
      :ref:`Worker Function Performance Profiles
      <worker-function-performance-profiles>`.

   **<location>**
      is a string describing the location of the host.

   **<console>**
      is the output device to use for message display on the host \(for
      example, tty0\). The default is ttys0,115200.

   **<install\_output>**
      is the format for console output on the host \(text or graphical\). The
      default is text.

      .. note::
         The graphical option currently has no effect. Text-based
         installation is used regardless of this setting.

   **<boot\_device>**
      is the host device for the boot partition, relative to /dev. The default
      is sda.

   **<rootfs\_device>**
      is the host device for the rootfs partition, relative to /dev. The default
      is sda.

   **<mgmt\_mac>**
      is the |MAC| address of the port connected to the internal management
      or |PXE| boot network.

   **<mgmt\_ip>**
      is the IP address of the port connected to the internal management or
      |PXE| boot network, if static IP address allocation is used.

      .. note::
         The <mgmt\_ip> option is not used for a controller node.

   **<ttys\_dcd>**
      is set to **True** to have any active console session automatically
      logged out when the serial console cable is disconnected, or **False**
      to disable this behavior. The server must support data carrier detect
      on the serial console port.

   **<bm\_type>**
      is the board management controller type. Use bmc.

   **<bm\_ip>**
      is the board management controller IP address \(used for external
      access to board management controllers over the |OAM| network\).

   **<bm\_username>**
      is the username for board management controller access.

   **<bm\_password>**
      is the password for board management controller access.

   For example:

   .. code-block:: none

      ~(keystone_admin)]$ system host-add -n compute-0 -p worker -I 10.10.10.100

#. Verify that the host has been added successfully.

   Use the :command:`fm alarm-list` command to check whether any major or
   critical alarms have been raised. You can also use :command:`fm event-list`
   to see a log of events. For more information on alarms, see :ref:`Fault
   Management Overview <fault-management-overview>`.

#. With **controller-0** running, start the host.

   The host is booted and configured with a personality.

#. Verify that the host has started successfully.

   The command :command:`system host-list` shows a list of hosts. The
   added host should be available, enabled, and unlocked; a sample listing is
   shown after this procedure. You can also check alarms and events again.
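For illustration, after the added host has been started and has come into
service, it appears in the :command:`system host-list` output similar to the
following (hypothetical output for the compute-0 example above):

.. code-block:: none

   ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 5  | compute-0    | worker      | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+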
.. rubric:: |postreq|

After adding the host, you must provision it according to the requirements of
the personality.

.. xbooklink For more information, see :ref:`Install, Configure, and Unlock
   Nodes <installing-configuring-and-unlocking-nodes>` and follow the *Configure*
   steps for the appropriate node personality.
@@ -1,26 +0,0 @@

==============================================
Bare metal All-in-one Duplex Installation R6.0
==============================================

--------
Overview
--------

.. include:: /shared/_includes/desc_aio_duplex.txt

The bare metal AIO-DX deployment configuration may be extended with up to four
worker nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.

.. include:: /shared/_includes/ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_hardware
   aio_duplex_install_kubernetes
   aio_duplex_extend
@@ -1,340 +0,0 @@

=================================
Extend Capacity with Worker Nodes
=================================

This section describes the steps to extend capacity with worker nodes on a
|prod| All-in-one Duplex deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on worker nodes
--------------------------------

#. Power on the worker node servers and force them to network boot with the
   appropriate BIOS boot options for your particular server.

#. As the worker nodes boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered worker
   node hosts (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | None         | None        | locked         | disabled    | offline      |
      | 4  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'worker':

   .. code-block:: bash

      system host-update 3 personality=worker hostname=worker-0
      system host-update 4 personality=worker hostname=worker-1

   This initiates the install of software on the worker nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

   .. only:: starlingx

      .. Note::

         A node with Edgeworker personality is also available. See
         :ref:`deploy-edgeworker-nodes` for details.

#. Wait for the install of software on the worker nodes to complete, for the
   worker nodes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | worker-0     | worker      | locked         | disabled    | online       |
      | 4  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure worker nodes
----------------------

#. The MGMT interfaces are partially set up by the network install procedure,
   which configures the port used for network install as the MGMT port and
   specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of the worker nodes by specifying
   the attached network of "cluster-host".

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done

.. only:: openstack

   *************************************
   OpenStack-specific host configuration
   *************************************

   .. important::

      **These steps are required only if the StarlingX OpenStack application
      (|prefix|-openstack) will be installed.**

   #. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
      support of installing the |prefix|-openstack manifest and helm-charts later.

      .. parsed-literal::

         for NODE in worker-0 worker-1; do
           system host-label-assign $NODE openstack-compute-node=enabled
           kubectl taint nodes $NODE openstack-compute-node:NoSchedule
           system host-label-assign $NODE |vswitch-label|
           system host-label-assign $NODE sriov=enabled
         done

   #. **For OpenStack only:** Configure the host settings for the vSwitch.

      If using the |OVS-DPDK| vswitch, run the following commands.

      The default recommendation for a worker node is to use two cores on
      numa-node 0 for the |OVS-DPDK| vSwitch; physical |NICs| are typically on
      the first numa-node. This should have been configured automatically; if
      not, run the following command.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do

            # assign 2 cores on processor/numa-node 0 on worker-node to vswitch
            system host-cpu-modify -f vswitch -p0 2 $NODE

         done

      When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
      each |NUMA| node on the host. It is recommended to configure 1x 1G huge
      page (-1G 1) for vSwitch memory on each |NUMA| node on the host.

      However, due to a limitation with Kubernetes, only a single huge page
      size is supported on any one host. If your application VMs require 2M
      huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
      memory on each |NUMA| node on the host.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do

            # assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
            system host-memory-modify -f vswitch -1G 1 $NODE 0

            # assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
            system host-memory-modify -f vswitch -1G 1 $NODE 1

         done

      .. important::

         |VMs| created in an |OVS-DPDK| environment must be configured to use
         huge pages to enable networking and must use a flavor with property:
         hw:mem_page_size=large

         Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
         this host, assuming 1G huge page size is being used on this host, with
         the following commands:

         .. code-block:: bash

            for NODE in worker-0 worker-1; do

               # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
               system host-memory-modify -f application -1G 10 $NODE 0

               # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
               system host-memory-modify -f application -1G 10 $NODE 1

            done

   #. **For OpenStack only:** Set up a disk partition for the nova-local volume
      group, which is needed for |prefix|-openstack nova ephemeral disks.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do
           system host-lvg-add ${NODE} nova-local

           # Get UUID of DISK to create PARTITION to be added to 'nova-local' local volume group
           # CEPH OSD Disks can NOT be used
           # For best performance, do NOT use system/root disk, use a separate physical disk.

           # List host's disks and take note of UUID of disk to be used
           system host-disk-list ${NODE}
           # ( if using ROOT DISK, select disk with device_path of
           #   'system host-show ${NODE} | fgrep rootfs' )

           # Create new PARTITION on selected disk, and take note of new partition's 'uuid' in response
           # The size of the PARTITION needs to be large enough to hold the aggregate size of
           # all nova ephemeral disks of all VMs that you want to be able to host on this host,
           # but is limited by the size and space available on the physical disk you chose above.
           # The following example uses a small PARTITION size such that you can fit it on the
           # root disk, if that is what you chose above.
           # Additional PARTITION(s) from additional disks can be added later if required.
           PARTITION_SIZE=30

           system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

           # Add new partition to 'nova-local' local volume group
           system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
           sleep 2
         done

   #. **For OpenStack only:** Configure data interfaces for worker nodes.
      Data class interfaces are vswitch interfaces used by vswitch to provide
      |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
      underlying assigned Data Network.

      .. important::

         A compute-labeled worker host **MUST** have at least one Data class interface.

      * Configure the data interfaces for worker nodes.

        .. code-block:: bash

           # Execute the following lines with
           export NODE=worker-0
           # and then repeat with
           export NODE=worker-1

           # List inventoried host's ports and identify ports to be used as 'data' interfaces,
           # based on displayed linux port name, pci address and device type.
           system host-port-list ${NODE}

           # List host's auto-configured 'ethernet' interfaces,
           # find the interfaces corresponding to the ports identified in previous step, and
           # take note of their UUID
           system host-if-list -a ${NODE}

           # Modify configuration for these interfaces
           # Configuring them as 'data' class interfaces, MTU of 1500 and named data#
           system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
           system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

           # Create Data Networks that vswitch 'data' interfaces will be connected to
           DATANET0='datanet0'
           DATANET1='datanet1'
           system datanetwork-add ${DATANET0} vlan
           system datanetwork-add ${DATANET1} vlan

           # Assign Data Networks to Data Interfaces
           system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
           system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}

*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************

#. **Optionally**, configure pci-sriov interfaces for worker nodes.

   This step is **optional** for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: openstack

      This step is **optional** for OpenStack. Do this step if using |SRIOV|
      vNICs in hosted application VMs. Note that pci-sriov interfaces can
      have the same Data Networks assigned to them as vswitch data interfaces.

   * Configure the pci-sriov interfaces for worker nodes.

     .. code-block:: bash

        # Execute the following lines with
        export NODE=worker-0
        # and then repeat with
        export NODE=worker-1

        # List inventoried host's ports and identify ports to be used as 'pci-sriov' interfaces,
        # based on displayed linux port name, pci address and device type.
        system host-port-list ${NODE}

        # List host's auto-configured 'ethernet' interfaces,
        # find the interfaces corresponding to the ports identified in previous step, and
        # take note of their UUID
        system host-if-list -a ${NODE}

        # Modify configuration for these interfaces
        # Configuring them as 'pci-sriov' class interfaces, MTU of 1500 and named sriov#
        system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
        system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>

        # If not already created, create Data Networks that the 'pci-sriov'
        # interfaces will be connected to
        DATANET0='datanet0'
        DATANET1='datanet1'
        system datanetwork-add ${DATANET0} vlan
        system datanetwork-add ${DATANET1} vlan

        # Assign Data Networks to PCI-SRIOV Interfaces
        system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
        system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments for
     the above interfaces in Kubernetes hosted application containers:

     * Configure the Kubernetes |SRIOV| device plugin.

       .. code-block:: bash

          for NODE in worker-0 worker-1; do
             system host-label-assign $NODE sriovdp=enabled
          done

     * If planning on running |DPDK| in Kubernetes hosted application
       containers on this host, configure the number of 1G huge pages required
       on both |NUMA| nodes.

       .. code-block:: bash

          for NODE in worker-0 worker-1; do

             # assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
             system host-memory-modify -f application $NODE 0 -1G 10

             # assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
             system host-memory-modify -f application $NODE 1 -1G 10

          done

-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
     system host-unlock $NODE
   done

The worker nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
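A quick way to confirm that the worker nodes have come into service is to
re-check the host list; illustrative output only, IDs and timing will vary:

::

   system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
   | 3  | worker-0     | worker      | unlocked       | enabled     | available    |
   | 4  | worker-1     | worker      | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+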
@@ -1,69 +0,0 @@

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R6.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+------------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                  |
+=========================+============================================================+
| Number of servers       | 2                                                           |
+-------------------------+------------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)        |
|                         |   8 cores/socket                                            |
|                         |                                                             |
|                         |   or                                                        |
|                         |                                                             |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores            |
|                         |   (low-power/low-cost option)                               |
+-------------------------+------------------------------------------------------------+
| Minimum memory          | 64 GB                                                       |
+-------------------------+------------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :ref:`nvme_config`)                 |
+-------------------------+------------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD              |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe     |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)  |
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)  |
|                         |   for VM local ephemeral storage                            |
+-------------------------+------------------------------------------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE                                      |
|                         | - OAM: 1x1GE                                                |
|                         | - Data: 1 or more x 10GE                                    |
+-------------------------+------------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                        |
|                         | - Virtualization technology enabled                         |
|                         | - VT for directed I/O enabled                               |
|                         | - CPU power and performance policy set to performance       |
|                         | - CPU C state control disabled                              |
|                         | - Plug & play BMC detection disabled                        |
+-------------------------+------------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

* Cabled for networking

* Far-end switch ports should be properly configured to realize the networking
  shown in the following diagram.

.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-duplex.png
   :scale: 50%
   :alt: All-in-one Duplex deployment configuration

   *All-in-one Duplex deployment configuration*
@@ -1,21 +0,0 @@

===============================================
Bare metal All-in-one Simplex Installation R6.0
===============================================

--------
Overview
--------

.. include:: /shared/_includes/desc_aio_simplex.txt

.. include:: /shared/_includes/ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_hardware
   aio_simplex_install_kubernetes
@@ -1,71 +0,0 @@

.. _aio_simplex_hardware_r6:

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R6.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+------------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                  |
+=========================+============================================================+
| Number of servers       | 1                                                           |
+-------------------------+------------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)        |
|                         |   8 cores/socket                                            |
|                         |                                                             |
|                         |   or                                                        |
|                         |                                                             |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores            |
|                         |   (low-power/low-cost option)                               |
+-------------------------+------------------------------------------------------------+
| Minimum memory          | 64 GB                                                       |
+-------------------------+------------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :ref:`nvme_config`)                 |
+-------------------------+------------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD              |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe     |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)  |
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)  |
|                         |   for VM local ephemeral storage                            |
+-------------------------+------------------------------------------------------------+
| Minimum network ports   | - OAM: 1x1GE                                                |
|                         | - Data: 1 or more x 10GE                                    |
+-------------------------+------------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                        |
|                         | - Virtualization technology enabled                         |
|                         | - VT for directed I/O enabled                               |
|                         | - CPU power and performance policy set to performance       |
|                         | - CPU C state control disabled                              |
|                         | - Plug & play BMC detection disabled                        |
+-------------------------+------------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

* Cabled for networking

* Far-end switch ports should be properly configured to realize the networking
  shown in the following diagram.

.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-simplex.png
   :scale: 50%
   :alt: All-in-one Simplex deployment configuration

   *All-in-one Simplex deployment configuration*
@@ -1,718 +0,0 @@

.. Greg updates required for High Security Vulnerability Document Updates

.. _aio_simplex_install_kubernetes_r6:


=================================================
Install Kubernetes Platform on All-in-one Simplex
=================================================

.. only:: partner

   .. include:: /_includes/install-kubernetes-null-labels.rest

.. only:: starlingx

   This section describes the steps to install the StarlingX Kubernetes
   platform on a **StarlingX R6.0 All-in-one Simplex** deployment
   configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :ref:`Bootable USB <bootable_usb>` for instructions on how
to create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: /shared/_includes/inc-install-software-on-controller.rest
   :start-after: incl-install-software-controller-0-aio-start
   :end-before: incl-install-software-controller-0-aio-end

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the
   password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will |DHCP| out all interfaces so the server may have
   obtained an IP address and have external IP connectivity if a |DHCP| server
   is present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for
   Ansible configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example:
      ``$HOME/<hostname>.yml``.

   .. only:: starlingx

      .. include:: /shared/_includes/ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   .. note::

      This Ansible overrides file for the bootstrap playbook
      ($HOME/localhost.yml) contains security-sensitive information; use the
      :command:`ansible-vault create $HOME/localhost.yml` command to create it.
      You will be prompted for a password to protect/encrypt the file.
      Use the :command:`ansible-vault edit $HOME/localhost.yml` command if the
      file needs to be edited after it is created.
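   For illustration, creating the vault-encrypted overrides file looks like
   this; the prompt wording may vary by Ansible version:

   .. code-block:: none

      $ ansible-vault create $HOME/localhost.yml
      New Vault password:
      Confirm New Vault password: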
#. Use a copy of the default.yml file listed above to provide your overrides.
|
||||
|
||||
The default.yml file lists all available parameters for bootstrap
|
||||
configuration with a brief description for each parameter in the file
|
||||
comments.
|
||||
|
||||
To use this method, run the :command:`ansible-vault create $HOME/localhost.yml`
|
||||
command and copy the contents of the ``default.yml`` file into the
|
||||
ansible-vault editor, and edit the configurable values as required.
|
||||
|
||||
#. Create a minimal user configuration override file.
|
||||
|
||||
To use this method, create your override file with
|
||||
the :command:`ansible-vault create $HOME/localhost.yml`
|
||||
command and provide the minimum required parameters for the deployment
|
||||
configuration as shown in the example below. Use the OAM IP SUBNET and IP
|
||||
ADDRESSing applicable to your deployment environment.
|
||||
|
||||
.. include:: /_includes/min-bootstrap-overrides-simplex.rest
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
In either of the above options, the bootstrap playbook’s default
|
||||
values will pull all container images required for the |prod-p| from
|
||||
Docker hub
|
||||
|
||||
If you have setup a private Docker registry to use for bootstrapping
|
||||
then you will need to add the following lines in $HOME/localhost.yml:
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
|
||||
:start-after: docker-reg-begin
|
||||
:end-before: docker-reg-end
|
||||
|
||||
.. code-block::
|
||||
|
||||
docker_registries:
|
||||
quay.io:
|
||||
url: myprivateregistry.abc.com:9001/quay.io
|
||||
docker.elastic.co:
|
||||
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
||||
gcr.io:
|
||||
url: myprivateregistry.abc.com:9001/gcr.io
|
||||
ghcr.io:
|
||||
url: myprivateregistry.abc.com:9001/ghcr.io
|
||||
k8s.gcr.io:
|
||||
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
||||
docker.io:
|
||||
url: myprivateregistry.abc.com:9001/docker.io
|
||||
defaults:
|
||||
type: docker
|
||||
username: <your_myprivateregistry.abc.com_username>
|
||||
password: <your_myprivateregistry.abc.com_password>
|
||||
|
||||
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
||||
# certificate as a Trusted CA
|
||||
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
|
||||
|
||||
See :ref:`Use a Private Docker Registry <use-private-docker-registry-r6>`
|
||||
for more information.
|
||||
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
If a firewall is blocking access to Docker hub or your private
|
||||
registry from your StarlingX deployment, you will need to add the
|
||||
following lines in $HOME/localhost.yml (see :ref:`Docker Proxy
|
||||
Configuration <docker_proxy_config>` for more details about Docker
|
||||
proxy settings):
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
|
||||
:start-after: firewall-begin
|
||||
:end-before: firewall-end
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
docker_http_proxy: http://my.proxy.com:1080
|
||||
docker_https_proxy: https://my.proxy.com:1443
|
||||
docker_no_proxy:
|
||||
- 1.2.3.4
|
||||
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
.. include:: /shared/_includes/ntp-update-note.rest
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for the Ansible bootstrap playbook to complete. This can take 5-10 minutes,
|
||||
depending on the performance of the host machine.
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
The newly installed controller needs to be configured.
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the |OAM| interface of controller-0 and specify the attached
|
||||
network as "oam". The following example configures the OAM interface on a
|
||||
physical untagged ethernet port; use the |OAM| port name that is applicable to
|
||||
your deployment environment, for example eth0:
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
|
||||
To configure a vlan or aggregated ethernet interface, see :ref:`Node
|
||||
Interfaces <node-interfaces-index>`.
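Optionally, you can confirm the interface class and network assignment
before proceeding, for example:

::

system host-if-list -a controller-0
system interface-network-list controller-0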
|
||||
|
||||
#. Configure |NTP| servers for network time synchronization:
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
|
||||
To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
|
||||
<ptp-server-config-index>`.
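To confirm the configured servers, display the NTP configuration; this
assumes the :command:`system ntp-show` command is available in your release:

::

system ntp-show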
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. incl-config-controller-0-openstack-specific-aio-simplex-start:
|
||||
|
||||
.. important::
|
||||
|
||||
These steps are required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system host-label-assign controller-0 openstack-control-plane=enabled
|
||||
system host-label-assign controller-0 openstack-compute-node=enabled
|
||||
system host-label-assign controller-0 |vswitch-label|
|
||||
|
||||
.. note::
|
||||
|
||||
If you have a |NIC| that supports |SRIOV|, then you can enable it by
|
||||
using the following:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-assign controller-0 sriov=enabled
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/aio_simplex_install_kubernetes.rest
|
||||
:start-after: ref1-begin
|
||||
:end-before: ref1-end
|
||||
|
||||
#. **For OpenStack only:** Due to the additional OpenStack services running
|
||||
on the |AIO| controller platform cores, a minimum of 4 platform cores are
|
||||
required; 6 platform cores are recommended.
|
||||
|
||||
Increase the number of platform cores with the following commands:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Assign 6 cores on processor/numa-node 0 on controller-0 to platform
|
||||
system host-cpu-modify -f platform -p0 6 controller-0
|
||||
|
||||
#. Due to the additional OpenStack services' containers running on the
|
||||
controller host, the size of the Docker filesystem needs to be
|
||||
increased from the default size of 30G to 60G.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# check existing size of docker fs
|
||||
system host-fs-list controller-0
|
||||
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
|
||||
system host-lvg-list controller-0
|
||||
# if existing docker fs size + cgts-vg available space is less than
|
||||
# 80G, you will need to add a new disk partition to cgts-vg.
|
||||
# There must be at least 20GB of available space after the docker
|
||||
# filesystem is increased.
|
||||
|
||||
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
|
||||
# ( if not use another unused disk )
|
||||
|
||||
# Get device path of ROOT DISK
|
||||
system host-show controller-0 | fgrep rootfs
|
||||
|
||||
# Get UUID of ROOT DISK by listing disks
|
||||
system host-disk-list controller-0
|
||||
|
||||
# Create new PARTITION on ROOT DISK, and take note of new partition's 'uuid' in response
|
||||
# Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
|
||||
PARTITION_SIZE=30
|
||||
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
|
||||
|
||||
# Add new partition to ‘cgts-vg’ local volume group
|
||||
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
|
||||
sleep 2 # wait for partition to be added
|
||||
|
||||
# Increase docker filesystem to 60G
|
||||
system host-fs-modify controller-0 docker=60
|
||||
|
||||
#. **For OpenStack only:** Configure the system setting for the vSwitch.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
|
||||
|
||||
* Runs in a container; defined within the helm charts of |prefix|-openstack
|
||||
manifest.
|
||||
* Shares the core(s) assigned to the platform.
|
||||
|
||||
If you require better performance, |OVS-DPDK| (|OVS| with the Data
|
||||
Plane Development Kit, which is supported only on bare metal hardware)
|
||||
should be used:
|
||||
|
||||
* Runs directly on the host (it is not containerized).
|
||||
Requires that at least 1 core be assigned/dedicated to the vSwitch
|
||||
function.
|
||||
|
||||
To deploy the default containerized |OVS|:
|
||||
|
||||
::
|
||||
|
||||
system modify --vswitch_type none
|
||||
|
||||
This does not run any vSwitch directly on the host; instead, it uses
|
||||
the containerized |OVS| defined in the helm charts of
|
||||
the |prefix|-openstack manifest.
|
||||
|
||||
To deploy |OVS-DPDK|, run the following command:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system modify --vswitch_type |ovs-dpdk|
|
||||
|
||||
The default recommendation for an |AIO|-controller is to use a single core
|
||||
for the |OVS-DPDK| vSwitch.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 1 controller-0
|
||||
|
||||
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
|
||||
each |NUMA| node on the host; it is recommended
|
||||
to configure this as 1x 1G huge page (-1G 1) for vSwitch memory on each
|
||||
|NUMA| node.
|
||||
|
||||
However, due to a limitation with Kubernetes, only a single huge page
|
||||
size is supported on any one host. If your application |VMs| require 2M
|
||||
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
|
||||
memory on each |NUMA| node on the host.
|
||||
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 controller-0 0
|
||||
|
||||
# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 controller-0 1
|
||||
|
||||
.. important::
|
||||
|
||||
|VMs| created in an |OVS-DPDK| environment must be configured to use
|
||||
huge pages to enable networking and must use a flavor with property:
|
||||
hw:mem_page_size=large
|
||||
|
||||
To configure huge pages for |VMs| in an |OVS-DPDK| environment on
|
||||
this host, use the following commands. The example assumes that a 1G
|
||||
huge page size is being used on this host:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application -1G 10 controller-0 0
|
||||
|
||||
# assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
|
||||
system host-memory-modify -f application -1G 10 controller-0 1
|
||||
|
||||
.. note::
|
||||
|
||||
After controller-0 is unlocked, changing vswitch_type requires
|
||||
locking and unlocking controller-0 to apply the change.
|
||||
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume
|
||||
group, which is needed for |prefix|-openstack nova ephemeral disks.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
# Create ‘nova-local’ local volume group
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
|
||||
# Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
|
||||
# CEPH OSD Disks can NOT be used
|
||||
# For best performance, do NOT use system/root disk, use a separate physical disk.
|
||||
|
||||
# List host’s disks and take note of UUID of disk to be used
|
||||
system host-disk-list ${NODE}
|
||||
# ( if using ROOT DISK, select disk with device_path of
|
||||
# ‘system host-show ${NODE} | fgrep rootfs’ )
|
||||
|
||||
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
||||
# The size of the PARTITION needs to be large enough to hold the aggregate size of
|
||||
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
|
||||
# but is limited by the size and space available on the physical disk you chose above.
|
||||
# The following example uses a small PARTITION size such that you can fit it on the
|
||||
# root disk, if that is what you chose above.
|
||||
# Additional PARTITION(s) from additional disks can be added later if required.
|
||||
PARTITION_SIZE=30
|
||||
|
||||
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
||||
|
||||
# Add new partition to ‘nova-local’ local volume group
|
||||
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
|
||||
sleep 2
|
||||
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for controller-0.
|
||||
Data class interfaces are vSwitch interfaces used by vSwitch to provide
|
||||
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
|
||||
underlying assigned Data Network.
|
||||
|
||||
.. important::
|
||||
|
||||
A compute-labeled |AIO|-controller host **MUST** have at least one
|
||||
Data class interface.
|
||||
|
||||
* Configure the data interfaces for controller-0.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||
# based on displayed linux port name, pci address and device type.
|
||||
system host-port-list ${NODE}
|
||||
|
||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||
# find the interfaces corresponding to the ports identified in previous step, and
|
||||
# take note of their UUID
|
||||
system host-if-list -a ${NODE}
|
||||
|
||||
# Modify configuration for these interfaces
|
||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||
|
||||
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||
DATANET0='datanet0'
|
||||
DATANET1='datanet1'
|
||||
system datanetwork-add ${DATANET0} vlan
|
||||
system datanetwork-add ${DATANET1} vlan
|
||||
|
||||
# Assign Data Networks to Data Interfaces
|
||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
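To confirm the configuration, the following listing commands can be used;
:command:`system interface-datanetwork-list` is assumed to be available in
your release:

.. code-block:: bash

# List defined data networks
system datanetwork-list

# List data network assignments on this host
system interface-datanetwork-list ${NODE}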
|
||||
|
||||
|
||||
*****************************************
|
||||
Optionally Configure PCI-SRIOV Interfaces
|
||||
*****************************************
|
||||
|
||||
#. **Optionally**, configure |PCI|-SRIOV interfaces for controller-0.
|
||||
|
||||
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||
network attachments in hosted application containers.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||
vNICs in hosted application VMs. Note that |PCI|-SRIOV interfaces can
|
||||
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||
|
||||
|
||||
* Configure the |PCI|-SRIOV interfaces for controller-0.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||
# based on displayed linux port name, pci address and device type.
|
||||
system host-port-list ${NODE}
|
||||
|
||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||
# find the interfaces corresponding to the ports identified in previous step, and
|
||||
# take note of their UUID
|
||||
system host-if-list -a ${NODE}
|
||||
|
||||
# Modify configuration for these interfaces
|
||||
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
|
||||
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
|
||||
|
||||
# If not already created, create Data Networks that the 'pci-sriov' interfaces will
|
||||
# be connected to
|
||||
DATANET0='datanet0'
|
||||
DATANET1='datanet1'
|
||||
system datanetwork-add ${DATANET0} vlan
|
||||
system datanetwork-add ${DATANET1} vlan
|
||||
|
||||
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||
|
||||
|
||||
* **For Kubernetes Only:** To enable using |SRIOV| network attachments for
|
||||
the above interfaces in Kubernetes hosted application containers:
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 sriovdp=enabled
|
||||
|
||||
* If planning on running |DPDK| in Kubernetes hosted application
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
|
||||
system host-memory-modify -f application controller-0 0 -1G 10
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
|
||||
system host-memory-modify -f application controller-0 1 -1G 10
|
||||
|
||||
|
||||
***************************************************************
|
||||
If required, initialize a Ceph-based Persistent Storage Backend
|
||||
***************************************************************
|
||||
|
||||
A persistent storage backend is required if your application requires
|
||||
|PVCs|.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
.. important::
|
||||
|
||||
The StarlingX OpenStack application **requires** |PVCs|.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
There are two options for persistent storage backend: the host-based Ceph
|
||||
solution and the Rook container-based Ceph solution.
|
||||
|
||||
For host-based Ceph:
|
||||
|
||||
#. Add host-based Ceph backend:
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph --confirmed
|
||||
|
||||
#. Add an |OSD| on controller-0 for host-based Ceph:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
|
||||
system host-disk-list controller-0
|
||||
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add controller-0 osd <disk-uuid>
|
||||
|
||||
# List OSD storage devices
|
||||
system host-stor-list controller-0
|
||||
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
For Rook container-based Ceph:
|
||||
|
||||
#. Add Rook container-based backend:
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph-rook --confirmed
|
||||
|
||||
#. Assign Rook host labels to controller-0 in support of installing the
|
||||
rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||
|
||||
|
||||
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
|
||||
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
.. incl-unlock-controller-0-aio-simplex-start:
|
||||
|
||||
Unlock controller-0 to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
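You can optionally watch for controller-0 to come back into service; repeat
the following command until the host reports an ``available`` state:

::

system host-list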
|
||||
|
||||
.. incl-unlock-controller-0-aio-simplex-end:
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
-----------------------------------------------------------------------------------------------
|
||||
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
|
||||
-----------------------------------------------------------------------------------------------
|
||||
|
||||
On controller-0:
|
||||
|
||||
#. Wait for the rook-ceph-apps application to be uploaded:
|
||||
|
||||
::
|
||||
|
||||
$ source /etc/platform/openrc
|
||||
$ system application-list
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| application | version | manifest name | manifest file | status | progress |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
|
||||
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
|
||||
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
|
||||
#. Configure Rook to use the /dev/sdb disk on controller-0 as a Ceph |OSD|.
|
||||
|
||||
::
|
||||
|
||||
system host-disk-wipe -s --confirm controller-0 /dev/sdb
|
||||
|
||||
Create a values.yaml file for rook-ceph-apps that specifies the |OSD| device, for example:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
nodes:
|
||||
- name: controller-0
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
|
||||
::
|
||||
|
||||
system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
|
||||
|
||||
#. Apply the rook-ceph-apps application.
|
||||
|
||||
::
|
||||
|
||||
system application-apply rook-ceph-apps
|
||||
|
||||
#. Wait for the |OSD| pods to be ready:
|
||||
|
||||
::
|
||||
|
||||
kubectl get pods -n kube-system
|
||||
rook-ceph-crashcollector-controller-0-764c7f9c8-bh5c7 1/1 Running 0 62m
|
||||
rook-ceph-mgr-a-69df96f57-9l28p 1/1 Running 0 63m
|
||||
rook-ceph-mon-a-55fff49dcf-ljfnx 1/1 Running 0 63m
|
||||
rook-ceph-operator-77b64588c5-nlsf2 1/1 Running 0 66m
|
||||
rook-ceph-osd-0-7d5785889f-4rgmb 1/1 Running 0 62m
|
||||
rook-ceph-osd-prepare-controller-0-cmwt5 0/1 Completed 0 2m14s
|
||||
rook-ceph-tools-5778d7f6c-22tms 1/1 Running 0 64m
|
||||
rook-discover-kmv6c 1/1 Running 0 65m
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
||||
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/72hr-to-license.rest
|
@ -1,54 +0,0 @@
|
||||
|
||||
.. vqr1569420650576
|
||||
.. _bootstrapping-from-a-private-docker-registry-r6:
|
||||
|
||||
============================================
|
||||
Bootstrapping from a Private Docker Registry
|
||||
============================================
|
||||
|
||||
You can bootstrap controller-0 from a private Docker registry in the event that
|
||||
your server is isolated from the public Internet.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
#. Update your /home/sysadmin/localhost.yml bootstrap overrides file with the
|
||||
following lines to use a Private Docker Registry pre-populated from the
|
||||
|org| Docker Registry:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
docker_registries:
|
||||
k8s.gcr.io:
|
||||
url: <my-registry.io>/k8s.gcr.io
|
||||
gcr.io:
|
||||
url: <my-registry.io>/gcr.io
|
||||
ghcr.io:
|
||||
url: <my-registry.io>/ghcr.io
|
||||
quay.io:
|
||||
url: <my-registry.io>/quay.io
|
||||
docker.io:
|
||||
url: <my-registry.io>/docker.io
|
||||
docker.elastic.co:
|
||||
url: <my-registry.io>/docker.elastic.co
|
||||
defaults:
|
||||
type: docker
|
||||
username: <your_my-registry.io_username>
|
||||
password: <your_my-registry.io_password>
|
||||
|
||||
Where ``<your_my-registry.io_username>`` and
|
||||
``<your_my-registry.io_password>`` are your login credentials for the
|
||||
``<my-registry.io>`` private Docker registry.
|
||||
|
||||
.. note::
|
||||
``<my-registry.io>`` must be a DNS name resolvable by the DNS servers
|
||||
configured in the ``dns_servers:`` structure of the Ansible bootstrap
|
||||
override file /home/sysadmin/localhost.yml.
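For example, a minimal sketch of such a ``dns_servers:`` entry; the
addresses shown are placeholders only:

.. code-block:: none

dns_servers:
  - 8.8.8.8
  - 8.8.4.4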
|
||||
|
||||
#. For any additional local registry images required, use the full image name
|
||||
as shown below.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
additional_local_registry_images:
|
||||
docker.io/wind-river/<imageName>:<tag>
|
||||
|
@ -1,135 +0,0 @@
|
||||
|
||||
.. hzf1552927866550
|
||||
.. _bulk-host-xml-file-format-r6:
|
||||
|
||||
=========================
|
||||
Bulk Host XML File Format
|
||||
=========================
|
||||
|
||||
Hosts for bulk addition are described using an XML document.
|
||||
|
||||
The document root is **hosts**. Within the root, each host is described using a
|
||||
**host** node. To provide details, child elements are used, corresponding to
|
||||
the parameters for the :command:`host-add` command.
|
||||
|
||||
The following elements are accepted. Each element takes a text string. For
|
||||
valid values, refer to the CLI documentation.
|
||||
|
||||
|
||||
.. _bulk-host-xml-file-format-simpletable-tc3-w15-ht:
|
||||
|
||||
|
||||
.. table::
|
||||
:widths: auto
|
||||
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| Element | Remarks |
|
||||
+=========================================================================================================================================================================================+=========================================================================================================================================================================================+
|
||||
| hostname | A unique name for the host. |
|
||||
| | |
|
||||
| | .. note:: |
|
||||
| | Controller and storage node names are assigned automatically and override user input. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| personality | The type of host. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| subfunctions | For a worker host, an optional element to enable a low-latency performance profile. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| mgmt\_mac | The MAC address of the management interface. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| mgmt\_ip | The IP address of the management interface. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| bm\_ip | The IP address of the board management controller. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| bm\_type | The board management controller type. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| bm\_username | The username for board management controller authentication. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| bm\_password | The password for board management controller authentication. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| power\_on | An empty element. If present, powers on the host automatically using the specified board management controller. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| install\_output | The display mode to use during installation \(text or graphical\). The default is **text**. |
|
||||
| | |
|
||||
| | .. note:: |
|
||||
| | The graphical option currently has no effect. Text-based installation is used regardless of this setting. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| console | If present, this element specifies the port, and if applicable the baud, for displaying messages. If the element is empty or not present, the default setting **ttyS0,115200** is used. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| rootfs\_device | The device to use for the rootfs partition, relative to /dev. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| boot\_device | The device to use for the boot partition, relative to /dev. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| location | A description of the host location. |
|
||||
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
|
||||
The following sample describes a controller, three worker nodes, and two storage nodes:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
<?xml version="1.0" encoding="UTF-8" ?>
|
||||
<hosts>
|
||||
<host>
|
||||
<personality>controller</personality>
|
||||
<mgmt_mac>08:00:27:19:b0:c5</mgmt_mac>
|
||||
<bm_ip>10.10.10.100</bm_ip>
|
||||
<bm_type>bmc</bm_type>
|
||||
<bm_username>tsmith1</bm_username>
|
||||
<bm_password>mypass1</bm_password>
|
||||
<install_output>text</install_output>
|
||||
<location>System12/A4</location>
|
||||
</host>
|
||||
<host>
|
||||
<hostname>worker-0</hostname>
|
||||
<personality>worker</personality>
|
||||
<mgmt_mac>08:00:27:dc:42:46</mgmt_mac>
|
||||
<mgmt_ip>192.168.204.50</mgmt_ip>
|
||||
<bm_ip>10.10.10.101</bm_ip>
|
||||
<bm_username>tsmith1</bm_username>
|
||||
<bm_password>mypass1</bm_password>
|
||||
<bm_type>bmc</bm_type>
|
||||
<install_output>text</install_output>
|
||||
<console></console>
|
||||
</host>
|
||||
<host>
|
||||
<hostname>worker-1</hostname>
|
||||
<personality>worker</personality>
|
||||
<mgmt_mac>08:00:27:87:82:3E</mgmt_mac>
|
||||
<mgmt_ip>192.168.204.51</mgmt_ip>
|
||||
<bm_ip>10.10.10.102</bm_ip>
|
||||
<bm_type>bmc</bm_type>
|
||||
<bm_username>tsmith1</bm_username>
|
||||
<bm_password>mypass1</bm_password>
|
||||
<rootfs_device>sda</rootfs_device>
|
||||
<install_output>text</install_output>
|
||||
</host>
|
||||
<host>
|
||||
<hostname>worker-2</hostname>
|
||||
<personality>worker</personality>
|
||||
<mgmt_mac>08:00:27:b9:16:0d</mgmt_mac>
|
||||
<mgmt_ip>192.168.204.52</mgmt_ip>
|
||||
<rootfs_device>sda</rootfs_device>
|
||||
<install_output>text</install_output>
|
||||
<console></console>
|
||||
<power_on/>
|
||||
<bm_ip>10.10.10.103</bm_ip>
|
||||
<bm_type>bmc</bm_type>
|
||||
<bm_username>tsmith1</bm_username>
|
||||
<bm_password>mypass1</bm_password>
|
||||
</host>
|
||||
<host>
|
||||
<personality>storage</personality>
|
||||
<mgmt_mac>08:00:27:dd:e3:3f</mgmt_mac>
|
||||
<bm_ip>10.10.10.104</bm_ip>
|
||||
<bm_type>bmc</bm_type>
|
||||
<bm_username>tsmith1</bm_username>
|
||||
<bm_password>mypass1</bm_password>
|
||||
</host>
|
||||
<host>
|
||||
<personality>storage</personality>
|
||||
<mgmt_mac>08:00:27:8e:f1:b8</mgmt_mac>
|
||||
<bm_ip>10.10.10.105</bm_ip>
|
||||
<bm_type>bmc</bm_type>
|
||||
<bm_username>tsmith1</bm_username>
|
||||
<bm_password>mypass1</bm_password>
|
||||
</host>
|
||||
</hosts>
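Once the host definitions are saved to a file (for example ``hosts.xml``),
they can be added in bulk; this assumes the :command:`system host-bulk-add`
command available in your release:

.. code-block:: none

system host-bulk-add hosts.xml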
|
@ -1,211 +0,0 @@
|
||||
|
||||
.. jow1440534908675
|
||||
|
||||
.. _configuring-a-pxe-boot-server:
|
||||
|
||||
.. _configuring-a-pxe-boot-server-r6:
|
||||
|
||||
|
||||
|
||||
===========================
|
||||
Configure a PXE Boot Server
|
||||
===========================
|
||||
|
||||
You can optionally set up a |PXE| Boot Server to support **controller-0**
|
||||
initialization.
|
||||
|
||||
.. rubric:: |context|
|
||||
|
||||
|prod| includes a setup script to simplify configuring a |PXE| boot server. If
|
||||
you prefer, you can manually apply a custom configuration; for more
|
||||
information, see :ref:`Access PXE Boot Server Files for a Custom Configuration
|
||||
<accessing-pxe-boot-server-files-for-a-custom-configuration-r6>`.
|
||||
|
||||
The |prod| setup script accepts a path to the root TFTP directory as a
|
||||
parameter, and copies all required files for BIOS and |UEFI| clients into this
|
||||
directory.
|
||||
|
||||
The |PXE| boot server serves a boot loader file to the requesting client from a
|
||||
specified path on the server. The path depends on whether the client uses BIOS
|
||||
or |UEFI|. The appropriate path is selected by conditional logic in the |DHCP|
|
||||
configuration file.
|
||||
|
||||
The boot loader runs on the client, and reads boot parameters, including the
|
||||
location of the kernel and initial ramdisk image files, from a boot file
|
||||
contained on the server. To find the boot file, the boot loader searches a
|
||||
known directory on the server. This search directory can contain more than one
|
||||
entry, supporting the use of separate boot files for different clients.
|
||||
|
||||
The file names and locations depend on the BIOS or |UEFI| implementation.
|
||||
|
||||
.. _configuring-a-pxe-boot-server-table-mgq-xlh-2cb-r6:
|
||||
|
||||
.. table:: Table 1. |PXE| boot server file locations for BIOS and |UEFI| implementations
|
||||
:widths: auto
|
||||
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| Resource | BIOS | UEFI |
|
||||
+==========================================+========================+===============================+
|
||||
| **boot loader** | ./pxelinux.0 | ./EFI/grubx64.efi |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| **boot file search directory** | ./pxelinux.cfg | ./ or ./EFI |
|
||||
| | | |
|
||||
| | | \(system-dependent\) |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| **boot file** and path | ./pxelinux.cfg/default | ./grub.cfg and ./EFI/grub.cfg |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| \(./ indicates the root TFTP directory\) |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
|
||||
.. rubric:: |prereq|
|
||||
|
||||
Use a Linux workstation as the |PXE| Boot server.
|
||||
|
||||
|
||||
.. _configuring-a-pxe-boot-server-ul-mrz-jlj-dt-r6:
|
||||
|
||||
- On the workstation, install the packages required to support |DHCP|, TFTP,
|
||||
and Apache.
|
||||
|
||||
- Configure |DHCP|, TFTP, and Apache according to your system requirements.
|
||||
For details, refer to the documentation included with the packages.
|
||||
|
||||
- Additionally, configure |DHCP| to support both BIOS and |UEFI| client
|
||||
architectures. For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
option arch code 93 = unsigned integer 16; # ref RFC4578
|
||||
# ...
|
||||
subnet 192.168.1.0 netmask 255.255.255.0 {
|
||||
if option arch = 00:07 {
|
||||
filename "EFI/grubx64.efi";
|
||||
# NOTE: substitute the full tftp-boot-dir specified in the setup script
|
||||
}
|
||||
else {
|
||||
filename "pxelinux.0";
|
||||
}
|
||||
# ...
|
||||
}
|
||||
|
||||
|
||||
- Start the |DHCP|, TFTP, and Apache services.
|
||||
|
||||
- Connect the |PXE| boot server to the |prod| management or |PXE| boot
|
||||
network.
|
||||
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
|
||||
.. _configuring-a-pxe-boot-server-steps-qfb-kyh-2cb-r6:
|
||||
|
||||
#. Copy the ISO image from the source \(product DVD, USB device, or
|
||||
|dnload-loc|\) to a temporary location on the |PXE| boot server.
|
||||
|
||||
This example assumes that the copied image file is
|
||||
``/tmp/TS-host-installer-1.0.iso``.
|
||||
|
||||
#. Mount the ISO image and make it executable.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso
|
||||
$ mount -o remount,exec,dev /media/iso
|
||||
|
||||
#. Set up the |PXE| boot configuration.
|
||||
|
||||
.. important::
|
||||
|
||||
|PXE| configuration steps differ for |prod| |deb-eval-release|
|
||||
evaluation on the Debian distribution. See the :ref:`Debian Technology
|
||||
Preview <deb-grub-deltas>` |PXE| configuration procedure for details.
|
||||
|
||||
The ISO image includes a setup script, which you can run to complete the
|
||||
configuration.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ /media/iso/pxeboot_setup.sh -u http://<ip-addr>/<symlink> \
|
||||
-t <tftp-boot-dir>
|
||||
|
||||
where
|
||||
|
||||
``ip-addr``
|
||||
is the Apache listening address.
|
||||
|
||||
``symlink``
|
||||
is the name of a user-created symbolic link under the Apache document
|
||||
root directory, pointing to the directory specified by <tftp-boot-dir>.
|
||||
|
||||
``tftp-boot-dir``
|
||||
is the path from which the boot loader is served \(the TFTP root
|
||||
directory\).
|
||||
|
||||
The script creates the directory specified by <tftp-boot-dir>.
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ /media/iso/pxeboot_setup.sh -u http://192.168.100.100/BIOS-client -t /export/pxeboot
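The ``<symlink>`` in the URL must exist under the Apache document root and
point at the TFTP boot directory. A sketch of creating it, assuming a
document root of ``/var/www/html``:

.. code-block:: none

$ sudo ln -s /export/pxeboot /var/www/html/BIOS-client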
|
||||
|
||||
#. To serve a specific boot file to a specific controller, assign a special
|
||||
name to the file.
|
||||
|
||||
The boot loader searches for a file name that uses a string based on the
|
||||
client interface |MAC| address. The string uses lower case, substitutes
|
||||
dashes for colons, and includes the prefix "01-".
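As an aid, the string can be derived from a |MAC| address with a small
shell sketch; the address shown is only an example:

.. code-block:: none

$ MAC=08:00:27:D1:63:C9
$ echo "01-$(echo ${MAC} | tr 'A-Z:' 'a-z-')"
01-08-00-27-d1-63-c9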
|
||||
|
||||
|
||||
- For a BIOS client, use the |MAC| address string as the file name:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd <tftp-boot-dir>/pxelinux.cfg/
|
||||
$ cp pxeboot.cfg <mac-address-string>
|
||||
|
||||
where:
|
||||
|
||||
``<tftp-boot-dir>``
|
||||
is the path from which the boot loader is served.
|
||||
|
||||
``<mac-address-string>``
|
||||
is a lower-case string formed from the |MAC| address of the client
|
||||
|PXE| boot interface, using dashes instead of colons, and prefixed
|
||||
by "01-".
|
||||
|
||||
For example, to represent the |MAC| address ``08:00:27:d1:63:c9``,
|
||||
use the string ``01-08-00-27-d1-63-c9`` in the file name.
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd /export/pxeboot/pxelinux.cfg/
|
||||
$ cp pxeboot.cfg 01-08-00-27-d1-63-c9
|
||||
|
||||
If the boot loader does not find a file named using this convention, it
|
||||
looks for a file with the name default.
|
||||
|
||||
- For a |UEFI| client, use the |MAC| address string prefixed by
|
||||
"grub.cfg-". To ensure the file is found, copy it to both search
|
||||
directories used by the |UEFI| convention.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd <tftp-boot-dir>
|
||||
$ cp grub.cfg grub.cfg-<mac-address-string>
|
||||
$ cp grub.cfg ./EFI/grub.cfg-<mac-address-string>
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd /export/pxeboot
|
||||
$ cp grub.cfg grub.cfg-01-08-00-27-d1-63-c9
|
||||
$ cp grub.cfg ./EFI/grub.cfg-01-08-00-27-d1-63-c9
|
||||
|
||||
.. note::
|
||||
Alternatively, you can use symlinks in the search directories to
|
||||
ensure the file is found.
|
@ -1,22 +0,0 @@
|
||||
=============================================================
|
||||
Bare metal Standard with Controller Storage Installation R6.0
|
||||
=============================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_controller_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
controller_storage_hardware
|
||||
controller_storage_install_kubernetes
|
@ -1,67 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R6.0 bare metal Standard with Controller Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum Requirement | Controller Node | Worker Node |
|
||||
+=========================+=============================+=============================+
|
||||
| Number of servers | 2 | 2-10 |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
|
||||
| | 8 cores/socket |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum memory | 64 GB | 32 GB |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see | 120 GB (Minimum 10k RPM) |
|
||||
| | :ref:`nvme_config`) | |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Additional disks | - 1 or more 500 GB (min. | - For OpenStack, recommend |
|
||||
| | 10K RPM) for Ceph OSD | 1 or more 500 GB (min. |
|
||||
| | - Recommended, but not | 10K RPM) for VM local |
|
||||
| | required: 1 or more SSDs | ephemeral storage |
|
||||
| | or NVMe drives for Ceph | |
|
||||
| | journals (min. 1024 MiB | |
|
||||
| | per OSD journal) | |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum network ports | - Mgmt/Cluster: 1x10GE | - Mgmt/Cluster: 1x10GE |
|
||||
| | - OAM: 1x1GE | - Data: 1 or more x 10GE |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the following diagram.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-controller-storage.png
|
||||
:scale: 50%
|
||||
:alt: Controller storage deployment configuration
|
||||
|
||||
*Controller storage deployment configuration*
|
@ -1,941 +0,0 @@
|
||||
|
||||
.. Greg updates required for -High Security Vulnerability Document Updates
|
||||
|
||||
.. _controller_storage_install_kubernetes_r6:
|
||||
|
||||
===============================================================
|
||||
Install Kubernetes Platform on Standard with Controller Storage
|
||||
===============================================================
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes
|
||||
platform on a **StarlingX R6.0 Standard with Controller Storage**
|
||||
deployment configuration.
|
||||
|
||||
-------------------
|
||||
Create bootable USB
|
||||
-------------------
|
||||
|
||||
Refer to :ref:`Bootable USB <bootable_usb>` for instructions on how to
|
||||
create a bootable USB with the StarlingX ISO on your system.
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: /shared/_includes/inc-install-software-on-controller.rest
|
||||
:start-after: incl-install-software-controller-0-standard-start
|
||||
:end-before: incl-install-software-controller-0-standard-end
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. incl-bootstrap-sys-controller-0-standard-start:
|
||||
|
||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
|
||||
|
||||
When logging in for the first time, you will be forced to change the
|
||||
password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. Verify and/or configure IP connectivity.
|
||||
|
||||
External connectivity is required to run the Ansible bootstrap playbook. The
|
||||
StarlingX boot image will |DHCP| out all interfaces so the server may have
|
||||
obtained an IP address and have external IP connectivity if a |DHCP| server
|
||||
is present in your environment. Verify this using the :command:`ip addr` and
|
||||
:command:`ping 8.8.8.8` commands.
|
||||
|
||||
Otherwise, manually configure an IP address and default IP route. Use the
|
||||
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
|
||||
deployment environment.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
|
||||
sudo ip link set up dev <PORT>
|
||||
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
|
||||
ping 8.8.8.8
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for
|
||||
Ansible configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
``sysadmin home directory ($HOME)``
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example:
|
||||
``$HOME/<hostname>.yml``.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. include:: /shared/_includes/ansible_install_time_only.txt
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
.. note::
|
||||
|
||||
This Ansible Overrides file for the Bootstrap Playbook ($HOME/localhost.yml)
|
||||
contains security sensitive information, use the
|
||||
:command:`ansible-vault create $HOME/localhost.yml` command to create it.
|
||||
You will be prompted for a password to protect/encrypt the file.
|
||||
Use the :command:`ansible-vault edit $HOME/localhost.yml` command if the
|
||||
file needs to be edited after it is created.
|
||||
|
||||
#. Use a copy of the default.yml file listed above to provide your overrides.
|
||||
|
||||
The default.yml file lists all available parameters for bootstrap
|
||||
configuration with a brief description for each parameter in the file
|
||||
comments.
|
||||
|
||||
To use this method, run the :command:`ansible-vault create $HOME/localhost.yml`
|
||||
command and copy the contents of the ``default.yml`` file into the
|
||||
ansible-vault editor, and edit the configurable values as required.
|
||||
|
||||
#. Create a minimal user configuration override file.
|
||||
|
||||
To use this method, create your override file with
|
||||
the :command:`ansible-vault create $HOME/localhost.yml`
|
||||
command and provide the minimum required parameters for the deployment
|
||||
configuration as shown in the example below. Use the OAM IP subnet and IP
|
||||
addressing applicable to your deployment environment.
|
||||
|
||||
.. include:: /_includes/min-bootstrap-overrides-non-simplex.rest
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
In either of the above options, the bootstrap playbook’s default
|
||||
values will pull all container images required for the |prod-p| from
|
||||
Docker hub.
|
||||
|
||||
If you have set up a private Docker registry to use for bootstrapping
|
||||
then you will need to add the following lines in $HOME/localhost.yml:
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
|
||||
:start-after: docker-reg-begin
|
||||
:end-before: docker-reg-end
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
docker_registries:
|
||||
quay.io:
|
||||
url: myprivateregistry.abc.com:9001/quay.io
|
||||
docker.elastic.co:
|
||||
url: myprivateregistry.abc.com:9001/docker.elastic.co
|
||||
gcr.io:
|
||||
url: myprivateregistry.abc.com:9001/gcr.io
|
||||
ghcr.io:
|
||||
url: myprivateregistry.abc.com:9001/ghcr.io
|
||||
k8s.gcr.io:
|
||||
url: myprivateregistry.abc.com:9001/k8s.gcr.io
|
||||
docker.io:
|
||||
url: myprivateregistry.abc.com:9001/docker.io
|
||||
defaults:
|
||||
type: docker
|
||||
username: <your_myprivateregistry.abc.com_username>
|
||||
password: <your_myprivateregistry.abc.com_password>
|
||||
|
||||
# Add the CA Certificate that signed myprivateregistry.abc.com’s
|
||||
# certificate as a Trusted CA
|
||||
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
|
||||
|
||||
See :ref:`Use a Private Docker Registry <use-private-docker-registry-r6>`
|
||||
for more information.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
If a firewall is blocking access to Docker hub or your private
|
||||
registry from your StarlingX deployment, you will need to add the
|
||||
following lines in $HOME/localhost.yml (see :ref:`Docker Proxy
|
||||
Configuration <docker_proxy_config>` for more details about Docker
|
||||
proxy settings):
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
|
||||
:start-after: firewall-begin
|
||||
:end-before: firewall-end
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
docker_http_proxy: http://my.proxy.com:1080
|
||||
docker_https_proxy: https://my.proxy.com:1443
|
||||
docker_no_proxy:
|
||||
- 1.2.3.4
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations
|
||||
<ansible_bootstrap_configs_r6>` for information on additional Ansible
|
||||
bootstrap configurations for advanced Ansible bootstrap scenarios.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
.. include:: /shared/_includes/ntp-update-note.rest
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for the Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
.. incl-bootstrap-sys-controller-0-standard-end:
|
||||
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
.. incl-config-controller-0-storage-start:
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the |OAM| interface of controller-0 and specify the
|
||||
attached network as "oam".
|
||||
|
||||
The following example configures the |OAM| interface on a physical untagged
|
||||
ethernet port; use the |OAM| port name that is applicable to your deployment
|
||||
environment, for example eth0:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
|
||||
To configure a vlan or aggregated ethernet interface, see :ref:`Node
|
||||
Interfaces <node-interfaces-index>`.
|
||||
|
||||
#. Configure the MGMT interface of controller-0 and specify the attached
|
||||
networks of both "mgmt" and "cluster-host".
|
||||
|
||||
The following example configures the MGMT interface on a physical untagged
|
||||
ethernet port; use the MGMT port name that is applicable to your deployment
|
||||
environment, for example eth1:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
MGMT_IF=<MGMT-PORT>
|
||||
|
||||
# De-provision loopback interface and
|
||||
# remove mgmt and cluster-host networks from loopback interface
|
||||
system host-if-modify controller-0 lo -c none
|
||||
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
|
||||
for UUID in $IFNET_UUIDS; do
|
||||
system interface-network-remove ${UUID}
|
||||
done
|
||||
|
||||
# Configure management interface and assign mgmt and cluster-host networks to it
|
||||
system host-if-modify controller-0 $MGMT_IF -c platform
|
||||
system interface-network-assign controller-0 $MGMT_IF mgmt
|
||||
system interface-network-assign controller-0 $MGMT_IF cluster-host
|
||||
|
||||
To configure a vlan or aggregated ethernet interface, see :ref:`Node
|
||||
Interfaces <node-interfaces-index>`.
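Optionally confirm that both the mgmt and cluster-host networks are now
assigned to the management interface, using the same listing command used
above:

::

system interface-network-list controller-0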
|
||||
|
||||
#. Configure |NTP| servers for network time synchronization:
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
|
||||
To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
|
||||
<ptp-server-config-index>`.
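
The applied |NTP| configuration can be reviewed afterwards; a minimal check
(output format varies by release):

.. code-block:: bash

   # Display the currently configured NTP servers
   system ntp-show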
|
||||
|
||||
#. If required, configure Ceph storage backend:
|
||||
|
||||
A persistent storage backend is required if your application requires |PVCs|.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
.. important::
|
||||
|
||||
The StarlingX OpenStack application **requires** |PVCs|.
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph --confirmed
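
Configuring the backend takes a few minutes; its state can be monitored until
it reports configured. A minimal check:

.. code-block:: bash

   # Wait for the ceph storage backend to reach the 'configured' state
   system storage-backend-list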
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
These steps are required only if the |prod-os| application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 openstack-control-plane=enabled
|
||||
|
||||
#. **For OpenStack only:** Configure the system setting for the vSwitch.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
|
||||
|
||||
* Runs in a container; defined within the helm charts of |prefix|-openstack
|
||||
manifest.
|
||||
* Shares the core(s) assigned to the platform.
|
||||
|
||||
If you require better performance, |OVS-DPDK| (|OVS| with the Data
|
||||
Plane Development Kit, which is supported only on bare metal hardware)
|
||||
should be used:
|
||||
|
||||
* Runs directly on the host (it is not containerized).
|
||||
Requires that at least 1 core be assigned/dedicated to the vSwitch
|
||||
function.
|
||||
|
||||
To deploy the default containerized |OVS|:
|
||||
|
||||
::
|
||||
|
||||
system modify --vswitch_type none
|
||||
|
||||
This does not run any vSwitch directly on the host; instead, it uses the
containerized |OVS| defined in the helm charts of the |prefix|-openstack
manifest.
|
||||
|
||||
To deploy |OVS-DPDK|, run the following command:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system modify --vswitch_type |ovs-dpdk|
|
||||
|
||||
Once vswitch_type is set to |OVS-DPDK|, any subsequent |AIO|-controller
|
||||
or worker nodes created will default to automatically assigning 1 vSwitch
|
||||
core for |AIO| controllers and 2 vSwitch cores (both on numa-node 0;
|
||||
physical |NICs| are typically on first numa-node) for compute-labeled
|
||||
worker nodes.
|
||||
|
||||
.. note::
|
||||
After controller-0 is unlocked, changing vswitch_type requires
|
||||
locking and unlocking controller-0 to apply the change.
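
If |OVS-DPDK| was selected, the resulting vSwitch core assignments can be
confirmed after the unlock described below; a hedged check:

.. code-block:: bash

   # Cores assigned the 'vswitch' function are listed per processor
   system host-cpu-list controller-0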
|
||||
|
||||
|
||||
.. incl-config-controller-0-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
Unlock controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
* **For OpenStack only:** Due to the additional OpenStack services'
containers running on the controller host, the size of the docker
filesystem needs to be increased from the default size of 30G to 60G.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# check existing size of docker fs
|
||||
system host-fs-list controller-0
|
||||
|
||||
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
|
||||
system host-lvg-list controller-0
|
||||
|
||||
# if existing docker fs size + cgts-vg available space is less than
|
||||
# 60G, you will need to add a new disk partition to cgts-vg.
|
||||
|
||||
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
|
||||
# ( if not use another unused disk )
|
||||
|
||||
# Get device path of ROOT DISK
|
||||
system host-show controller-0 | fgrep rootfs
|
||||
|
||||
# Get UUID of ROOT DISK by listing disks
|
||||
system host-disk-list controller-0
|
||||
|
||||
# Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response
|
||||
# Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
|
||||
PARTITION_SIZE=30
|
||||
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
|
||||
|
||||
# Add new partition to ‘cgts-vg’ local volume group
|
||||
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
|
||||
sleep 2 # wait for partition to be added
|
||||
|
||||
# Increase docker filesystem to 60G
|
||||
system host-fs-modify controller-0 docker=60
|
||||
|
||||
-------------------------------------------------
|
||||
Install software on controller-1 and worker nodes
|
||||
-------------------------------------------------
|
||||
|
||||
#. Power on the controller-1 server and force it to network boot with the
|
||||
appropriate BIOS boot options for your particular server.
|
||||
|
||||
#. As controller-1 boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of controller-0, list hosts to see newly discovered
|
||||
controller-1 host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. Using the host id, set the personality of this host to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates the install of software on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting for the previous step to complete, power on the worker nodes.
|
||||
Set the personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, power on worker-0 and wait for the new host (hostname=None) to
|
||||
be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for worker-1. Power on worker-1 and wait for the new host
|
||||
(hostname=None) to be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=worker hostname=worker-1
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. Note::
|
||||
|
||||
A node with Edgeworker personality is also available. See
|
||||
:ref:`deploy-edgeworker-nodes` for details.
|
||||
|
||||
#. Wait for the software installation on controller-1, worker-0, and worker-1
|
||||
to complete, for all servers to reboot, and for all to show as
|
||||
locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
| 3 | worker-0 | worker | locked | disabled | online |
|
||||
| 4 | worker-1 | worker | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. incl-config-controller-1-start:
|
||||
|
||||
#. Configure the |OAM| interface of controller-1 and specify the
|
||||
attached network of "oam".
|
||||
|
||||
The following example configures the |OAM| interface on a physical untagged
ethernet port. Use the |OAM| port name that is applicable to your deployment
environment, for example eth0:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
system host-if-modify controller-1 $OAM_IF -c platform
|
||||
system interface-network-assign controller-1 $OAM_IF oam
|
||||
|
||||
To configure a vlan or aggregated ethernet interface, see :ref:`Node
|
||||
Interfaces <node-interfaces-index>`.
|
||||
|
||||
#. The MGMT interface is partially set up by the network install procedure,
which configures the port used for network install as the MGMT port and
specifies the attached network of "mgmt".
|
||||
|
||||
Complete the MGMT interface configuration of controller-1 by specifying the
|
||||
attached network of "cluster-host".
|
||||
|
||||
::
|
||||
|
||||
system interface-network-assign controller-1 mgmt0 cluster-host
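
The resulting assignments can be verified with the same listing command used
earlier for controller-0:

.. code-block:: bash

   # mgmt and cluster-host should both appear against the management interface
   system interface-network-list controller-1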
|
||||
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the |prod-os| application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
**For OpenStack only:** Assign OpenStack host labels to controller-1 in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 openstack-control-plane=enabled
|
||||
|
||||
.. incl-config-controller-1-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. incl-unlock-controller-1-start:
|
||||
|
||||
Unlock controller-1 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-1
|
||||
|
||||
Controller-1 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
* **For OpenStack only:** Due to the additional OpenStack services' containers
running on the controller host, the size of the docker filesystem needs to be
increased from the default size of 30G to 60G.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# check existing size of docker fs
|
||||
system host-fs-list controller-1
|
||||
|
||||
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
|
||||
system host-lvg-list controller-1
|
||||
|
||||
# if existing docker fs size + cgts-vg available space is less than
|
||||
# 60G, you will need to add a new disk partition to cgts-vg.
|
||||
|
||||
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
|
||||
# ( if not use another unused disk )
|
||||
|
||||
# Get device path of ROOT DISK
|
||||
system host-show controller-1 | fgrep rootfs
|
||||
|
||||
# Get UUID of ROOT DISK by listing disks
|
||||
system host-disk-list controller-1
|
||||
|
||||
# Create new PARTITION on ROOT DISK, and take note of new partition’s ‘uuid’ in response
|
||||
# Use a partition size such that you’ll be able to increase docker fs size from 30G to 60G
|
||||
PARTITION_SIZE=30
|
||||
system host-disk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE}
|
||||
|
||||
# Add new partition to ‘cgts-vg’ local volume group
|
||||
system host-pv-add controller-1 cgts-vg <NEW_PARTITION_UUID>
|
||||
sleep 2 # wait for partition to be added
|
||||
|
||||
# Increase docker filesystem to 60G
|
||||
system host-fs-modify controller-1 docker=60
|
||||
|
||||
.. incl-unlock-controller-1-end:
|
||||
|
||||
.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
#. Add the third Ceph monitor to a worker node:
|
||||
|
||||
(The first two Ceph monitors are automatically assigned to controller-0 and
|
||||
controller-1.)
|
||||
|
||||
::
|
||||
|
||||
system ceph-mon-add worker-0
|
||||
|
||||
#. Wait for the worker node monitor to complete configuration:
|
||||
|
||||
::
|
||||
|
||||
system ceph-mon-list
|
||||
+--------------------------------------+-------+--------------+------------+------+
|
||||
| uuid | ceph_ | hostname | state | task |
|
||||
| | mon_g | | | |
|
||||
| | ib | | | |
|
||||
+--------------------------------------+-------+--------------+------------+------+
|
||||
| 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None |
|
||||
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None |
|
||||
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | worker-0 | configured | None |
|
||||
+--------------------------------------+-------+--------------+------------+------+
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
|
||||
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
These steps are required only if the |prod-os| application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
|
||||
system host-label-assign $NODE |vswitch-label|
|
||||
done
|
||||
|
||||
.. note::
|
||||
|
||||
If you have a |NIC| that supports |SRIOV|, then you can enable it by
|
||||
using the following:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-assign controller-0 sriov=enabled
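
Labels already applied to a host can be reviewed at any time; a hedged check:

.. code-block:: bash

   # List the labels currently assigned to controller-0
   system host-label-list controller-0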
|
||||
|
||||
#. **For OpenStack only:** Configure the host settings for the vSwitch.
|
||||
|
||||
If using |OVS-DPDK| vswitch, run the following commands:
|
||||
|
||||
The default recommendation for a worker node is to use two cores on
numa-node 0 for the |OVS-DPDK| vSwitch; physical |NICs| are typically on the
first numa-node. This should have been configured automatically; if not, run
the following command.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 2 $NODE
|
||||
|
||||
done
|
||||
|
||||
|
||||
When using |OVS-DPDK|, configure huge pages for vSwitch memory on each
|NUMA| node on the host. It is recommended to configure 1x 1G huge page
(-1G 1) for vSwitch memory on each |NUMA| node.
|
||||
|
||||
However, due to a limitation with Kubernetes, only a single huge page
|
||||
size is supported on any one host. If your application |VMs| require 2M
|
||||
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
|
||||
memory on each |NUMA| node on the host.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 $NODE 0
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 $NODE 1
|
||||
|
||||
done
|
||||
|
||||
|
||||
.. important::
|
||||
|
||||
|VMs| created in an |OVS-DPDK| environment must be configured to use
|
||||
huge pages to enable networking and must use a flavor with the
|
||||
property ``hw:mem_page_size=large``
|
||||
|
||||
To configure huge pages for |VMs| in an |OVS-DPDK| environment on this
host, use the following example commands, which assume that the 1G huge
page size is being used on this host:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
||||
system host-memory-modify -f application -1G 10 $NODE 0
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
||||
system host-memory-modify -f application -1G 10 $NODE 1
|
||||
|
||||
done
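
The ``hw:mem_page_size=large`` property mentioned above is set on a flavor
with the standard OpenStack client once |prefix|-openstack is deployed; a
hedged example using a placeholder flavor name:

.. code-block:: bash

   # 'my-flavor' is a hypothetical flavor name; substitute the flavor used by your VMs
   openstack flavor set --property hw:mem_page_size=large my-flavor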
|
||||
|
||||
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
group, needed for |prefix|-openstack nova ephemeral disks.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
|
||||
# Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
|
||||
# CEPH OSD Disks can NOT be used
|
||||
# For best performance, do NOT use system/root disk, use a separate physical disk.
|
||||
|
||||
# List host’s disks and take note of UUID of disk to be used
|
||||
system host-disk-list ${NODE}
|
||||
# ( if using ROOT DISK, select disk with device_path of
|
||||
# ‘system host-show ${NODE} | fgrep rootfs’ )
|
||||
|
||||
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
||||
# The size of the PARTITION needs to be large enough to hold the aggregate size of
|
||||
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
|
||||
# but is limited by the size and space available on the physical disk you chose above.
|
||||
# The following example uses a small PARTITION size such that you can fit it on the
|
||||
# root disk, if that is what you chose above.
|
||||
# Additional PARTITION(s) from additional disks can be added later if required.
|
||||
PARTITION_SIZE=30
|
||||
|
||||
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
||||
|
||||
# Add new partition to ‘nova-local’ local volume group
|
||||
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
|
||||
sleep 2
|
||||
done
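
The new physical volume and its volume group can be confirmed on each node; a
hedged check:

.. code-block:: bash

   # nova-local should now contain the newly added partition
   system host-pv-list worker-0
   system host-lvg-list worker-0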
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vSwitch interfaces that provide |VM| virtio vNIC
connectivity to OpenStack Neutron Tenant Networks on the underlying assigned
Data Network.
|
||||
|
||||
.. important::
|
||||
|
||||
A compute-labeled worker host **MUST** have at least one Data class
|
||||
interface.
|
||||
|
||||
* Configure the data interfaces for worker nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
# and then repeat with
|
||||
export NODE=worker-1
|
||||
|
||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||
# based on displayed linux port name, pci address and device type.
|
||||
system host-port-list ${NODE}
|
||||
|
||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||
# find the interfaces corresponding to the ports identified in previous step, and
|
||||
# take note of their UUID
|
||||
system host-if-list -a ${NODE}
|
||||
|
||||
# Modify configuration for these interfaces
|
||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||
|
||||
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||
DATANET0='datanet0'
|
||||
DATANET1='datanet1'
|
||||
system datanetwork-add ${DATANET0} vlan
|
||||
system datanetwork-add ${DATANET1} vlan
|
||||
|
||||
# Assign Data Networks to Data Interfaces
|
||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
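
The data network assignments can be verified per node; a hedged check,
assuming the listing form of the assign/remove commands used above:

.. code-block:: bash

   # List data network assignments for the node
   system interface-datanetwork-list ${NODE}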
|
||||
|
||||
*****************************************
|
||||
Optionally Configure PCI-SRIOV Interfaces
|
||||
*****************************************
|
||||
|
||||
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
|
||||
|
||||
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||
network attachments in hosted application containers.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||
vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
|
||||
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||
|
||||
|
||||
* Configure the pci-sriov interfaces for worker nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
# and then repeat with
|
||||
export NODE=worker-1
|
||||
|
||||
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||
# based on displayed linux port name, pci address and device type.
|
||||
system host-port-list ${NODE}
|
||||
|
||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||
# find the interfaces corresponding to the ports identified in previous step, and
|
||||
# take note of their UUID
|
||||
system host-if-list -a ${NODE}
|
||||
|
||||
# Modify configuration for these interfaces
|
||||
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
|
||||
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
|
||||
|
||||
# If not already created, create Data Networks that the 'pci-sriov'
|
||||
# interfaces will be connected to
|
||||
DATANET0='datanet0'
|
||||
DATANET1='datanet1'
|
||||
system datanetwork-add ${DATANET0} vlan
|
||||
system datanetwork-add ${DATANET1} vlan
|
||||
|
||||
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||
|
||||
|
||||
* **For Kubernetes only:** To enable using |SRIOV| network attachments for
|
||||
the above interfaces in Kubernetes hosted application containers:
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE sriovdp=enabled
|
||||
done
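
After the worker nodes are unlocked and join Kubernetes, the device plugin
advertises the configured SR-IOV virtual functions as allocatable node
resources; a hedged check with kubectl:

.. code-block:: bash

   # Inspect allocatable resources on a worker; SR-IOV VF pools appear alongside cpu/memory
   kubectl get node worker-0 -o jsonpath='{.status.allocatable}'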
|
||||
|
||||
* If planning on running |DPDK| in Kubernetes hosted application
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
||||
system host-memory-modify -f application $NODE 0 -1G 10
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
||||
system host-memory-modify -f application $NODE 1 -1G 10
|
||||
|
||||
done
|
||||
|
||||
|
||||
--------------------
|
||||
Unlock worker nodes
|
||||
--------------------
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
done
|
||||
|
||||
The worker nodes will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
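
Progress can be followed from controller-0 until both workers report
unlocked/enabled/available:

.. code-block:: bash

   # Repeat until worker-0 and worker-1 are unlocked/enabled/available
   system host-list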
|
||||
|
||||
-----------------------------------------------------------------
|
||||
If configuring Ceph Storage Backend, add Ceph OSDs to controllers
|
||||
-----------------------------------------------------------------
|
||||
|
||||
#. Add |OSDs| to controller-0. The following example adds |OSDs| to the `sdb` disk:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=controller-0
|
||||
|
||||
# List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||
system host-disk-list ${HOST}
|
||||
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add ${HOST} osd <disk-uuid>
|
||||
|
||||
# List OSD storage devices and wait for configuration of newly added OSD to complete.
|
||||
system host-stor-list ${HOST}
|
||||
|
||||
#. Add |OSDs| to controller-1. The following example adds |OSDs| to the `sdb` disk:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=controller-1
|
||||
|
||||
# List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||
system host-disk-list ${HOST}
|
||||
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add ${HOST} osd <disk-uuid>
|
||||
|
||||
# List OSD storage devices and wait for configuration of newly added OSD to complete.
|
||||
system host-stor-list ${HOST}
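
Once |OSDs| have been added on both controllers, overall Ceph health can be
checked from either controller; a minimal check:

.. code-block:: bash

   # HEALTH_OK is expected once all OSDs are up and in
   ceph -s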
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
||||
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/72hr-to-license.rest
|
@ -1,22 +0,0 @@
|
||||
|
||||
============================================================
|
||||
Bare metal Standard with Dedicated Storage Installation R6.0
|
||||
============================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_dedicated_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
dedicated_storage_hardware
|
||||
dedicated_storage_install_kubernetes
|
@ -1,72 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R6.0 bare metal Standard with Dedicated Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum Requirement | Controller Node | Storage Node | Worker Node |
|
||||
+=====================+===========================+=======================+=======================+
|
||||
| Number of servers | 2 | 2-9 | 2-100 |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket |
|
||||
| class | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum memory | 64 GB | 64 GB | 32 GB |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|
||||
| | :ref:`nvme_config`) | | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Additional disks | None | - 1 or more 500 GB | - For OpenStack, |
|
||||
| | | (min. 10K RPM) for | recommend 1 or more |
|
||||
| | | Ceph OSD | 500 GB (min. 10K |
|
||||
| | | - Recommended, but | RPM) for VM |
|
||||
| | | not required: 1 or | ephemeral storage |
|
||||
| | | more SSDs or NVMe | |
|
||||
| | | drives for Ceph | |
|
||||
| | | journals (min. 1024 | |
|
||||
| | | MiB per OSD | |
|
||||
| | | journal) | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: |
|
||||
| ports | 1x10GE | 1x10GE | 1x10GE |
|
||||
| | - OAM: 1x1GE | | - Data: 1 or more |
|
||||
| | | | x 10GE |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the following diagram.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-dedicated-storage.png
|
||||
:scale: 50%
|
||||
:alt: Standard with dedicated storage
|
||||
|
||||
*Standard with dedicated storage*
|
@ -1,536 +0,0 @@
|
||||
|
||||
.. Greg updates required for -High Security Vulnerability Document Updates
|
||||
|
||||
.. _dedicated_storage_install_kubernetes_r6:
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/install-kubernetes-null-labels.rest
|
||||
|
||||
==============================================================
|
||||
Install Kubernetes Platform on Standard with Dedicated Storage
|
||||
==============================================================
|
||||
|
||||
This section describes the steps to install the |prod| Kubernetes platform on a
|
||||
**Standard with Dedicated Storage** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
-------------------
|
||||
Create bootable USB
|
||||
-------------------
|
||||
|
||||
Refer to :ref:`Bootable USB <bootable_usb>` for instructions on how to
|
||||
create a bootable USB with the StarlingX ISO on your system.
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: /shared/_includes/inc-install-software-on-controller.rest
|
||||
:start-after: incl-install-software-controller-0-standard-start
|
||||
:end-before: incl-install-software-controller-0-standard-end
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-bootstrap-sys-controller-0-standard-start:
|
||||
:end-before: incl-bootstrap-sys-controller-0-standard-end:
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-0-storage-start:
|
||||
:end-before: incl-config-controller-0-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
.. important::
|
||||
|
||||
Make sure the Ceph storage backend is configured. If it is
|
||||
not configured, you will not be able to configure storage
|
||||
nodes.
|
||||
|
||||
Unlock controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
-----------------------------------------------------------------
|
||||
Install software on controller-1, storage nodes, and worker nodes
|
||||
-----------------------------------------------------------------
|
||||
|
||||
#. Power on the controller-1 server and force it to network boot with the
|
||||
appropriate BIOS boot options for your particular server.
|
||||
|
||||
#. As controller-1 boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of controller-0, list hosts to see newly discovered controller-1
|
||||
host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. Using the host id, set the personality of this host to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates the install of software on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting for the previous step to complete, power on the storage-0 and
|
||||
storage-1 servers. Set the personality to 'storage' and assign a unique
|
||||
hostname for each.
|
||||
|
||||
For example, power on storage-0 and wait for the new host (hostname=None) to
|
||||
be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=storage
|
||||
|
||||
Repeat for storage-1. Power on storage-1 and wait for the new host
|
||||
(hostname=None) to be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=storage
|
||||
|
||||
This initiates the software installation on storage-0 and storage-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting for the previous step to complete, power on the worker nodes.
|
||||
Set the personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, power on worker-0 and wait for the new host (hostname=None) to
|
||||
be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 5 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for worker-1. Power on worker-1 and wait for the new host
|
||||
(hostname=None) to be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 6 personality=worker hostname=worker-1
|
||||
|
||||
This initiates the install of software on worker-0 and worker-1.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. Note::
|
||||
|
||||
A node with Edgeworker personality is also available. See
|
||||
:ref:`deploy-edgeworker-nodes` for details.
|
||||
|
||||
#. Wait for the software installation on controller-1, storage-0, storage-1,
|
||||
worker-0, and worker-1 to complete, for all servers to reboot, and for all to
|
||||
show as locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
| 3 | storage-0 | storage | locked | disabled | online |
|
||||
| 4 | storage-1 | storage | locked | disabled | online |
|
||||
| 5 | worker-0 | worker | locked | disabled | online |
|
||||
| 6 | worker-1 | worker | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-1-start:
|
||||
:end-before: incl-config-controller-1-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-unlock-controller-1-start:
|
||||
:end-before: incl-unlock-controller-1-end:
|
||||
|
||||
.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest
|
||||
|
||||
-----------------------
|
||||
Configure storage nodes
|
||||
-----------------------
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the storage nodes:
|
||||
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in storage-0 storage-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Add |OSDs| to storage-0.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=storage-0
|
||||
|
||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||
system host-disk-list ${HOST}
|
||||
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add ${HOST} osd <disk-uuid>
|
||||
|
||||
# List OSD storage devices and wait for configuration of newly added OSD to complete.
|
||||
system host-stor-list ${HOST}
|
||||
|
||||
#. Add |OSDs| to storage-1.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
HOST=storage-1
|
||||
|
||||
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
|
||||
# By default, /dev/sda is being used as system disk and can not be used for OSD.
|
||||
system host-disk-list ${HOST}
|
||||
|
||||
# Add disk as an OSD storage
|
||||
system host-stor-add ${HOST} osd <disk-uuid>
|
||||
|
||||
# List OSD storage devices and wait for configuration of newly added OSD to complete.
|
||||
system host-stor-list ${HOST}
|
||||
|
||||
--------------------
|
||||
Unlock storage nodes
|
||||
--------------------
|
||||
|
||||
Unlock storage nodes in order to bring them into service:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for STORAGE in storage-0 storage-1; do
|
||||
system host-unlock $STORAGE
|
||||
done
|
||||
|
||||
The storage nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the
|
||||
host machine.
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
#. The MGMT interfaces are partially set up by the network install procedure,
which configures the port used for network install as the MGMT port and
specifies the attached network of "mgmt".
|
||||
|
||||
Complete the MGMT interface configuration of the worker nodes by specifying
|
||||
the attached network of "cluster-host".
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
These steps are required only if the |prod-os| application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
|
||||
system host-label-assign $NODE |vswitch-label|
|
||||
system host-label-assign $NODE sriov=enabled
|
||||
done
|
||||
|
||||
#. **For OpenStack only:** Configure the host settings for the vSwitch.
|
||||
|
||||
If using |OVS-DPDK| vSwitch, run the following commands:
|
||||
|
||||
The default recommendation for a worker node is to use two cores on
numa-node 0 for the |OVS-DPDK| vSwitch; physical |NICs| are typically on the
first numa-node. This should have been configured automatically; if not, run
the following command.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
|
||||
system host-cpu-modify -f vswitch -p0 2 $NODE
|
||||
|
||||
done
|
||||
|
||||
When using |OVS-DPDK|, configure huge pages for vSwitch memory on each
|NUMA| node on the host. It is recommended to configure 1x 1G huge page
(-1G 1) for vSwitch memory on each |NUMA| node.
|
||||
|
||||
However, due to a limitation with Kubernetes, only a single huge page
|
||||
size is supported on any one host. If your application |VMs| require 2M
|
||||
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
|
||||
memory on each |NUMA| node on the host.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 $NODE 0
|
||||
|
||||
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
|
||||
system host-memory-modify -f vswitch -1G 1 $NODE 1
|
||||
|
||||
done
|
||||
|
||||
|
||||
.. important::
|
||||
|
||||
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with the property
``hw:mem_page_size=large``.
|
||||
|
||||
To configure huge pages for |VMs| in an |OVS-DPDK| environment on this
host, use the following example commands, which assume that the 1G huge
page size is being used on this host:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
||||
system host-memory-modify -f application -1G 10 $NODE 0
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
||||
system host-memory-modify -f application -1G 10 $NODE 1
|
||||
|
||||
done
|
||||
|
||||
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
group, needed for |prefix|-openstack nova ephemeral disks.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
|
||||
# Get UUID of DISK to create PARTITION to be added to ‘nova-local’ local volume group
|
||||
# CEPH OSD Disks can NOT be used
|
||||
# For best performance, do NOT use system/root disk, use a separate physical disk.
|
||||
|
||||
# List host’s disks and take note of UUID of disk to be used
|
||||
system host-disk-list ${NODE}
|
||||
# ( if using ROOT DISK, select disk with device_path of
|
||||
# ‘system host-show ${NODE} | fgrep rootfs’ )
|
||||
|
||||
# Create new PARTITION on selected disk, and take note of new partition’s ‘uuid’ in response
|
||||
# The size of the PARTITION needs to be large enough to hold the aggregate size of
|
||||
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
|
||||
# but is limited by the size and space available on the physical disk you chose above.
|
||||
# The following example uses a small PARTITION size such that you can fit it on the
|
||||
# root disk, if that is what you chose above.
|
||||
# Additional PARTITION(s) from additional disks can be added later if required.
|
||||
PARTITION_SIZE=30
|
||||
|
||||
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
|
||||
|
||||
# Add new partition to ‘nova-local’ local volume group
|
||||
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
|
||||
sleep 2
|
||||
done
|
||||
|
||||
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vSwitch interfaces that provide |VM| virtio vNIC
connectivity to OpenStack Neutron Tenant Networks on the underlying assigned
Data Network.
|
||||
|
||||
.. important::
|
||||
|
||||
A compute-labeled worker host **MUST** have at least one Data class
|
||||
interface.
|
||||
|
||||
* Configure the data interfaces for worker nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
# and then repeat with
|
||||
export NODE=worker-1
|
||||
|
||||
# List inventoried host’s ports and identify ports to be used as ‘data’ interfaces,
|
||||
# based on displayed linux port name, pci address and device type.
|
||||
system host-port-list ${NODE}
|
||||
|
||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||
# find the interfaces corresponding to the ports identified in previous step, and
|
||||
# take note of their UUID
|
||||
system host-if-list -a ${NODE}
|
||||
|
||||
# Modify configuration for these interfaces
|
||||
# Configuring them as ‘data’ class interfaces, MTU of 1500 and named data#
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
|
||||
|
||||
# Create Data Networks that vswitch 'data' interfaces will be connected to
|
||||
DATANET0='datanet0'
|
||||
DATANET1='datanet1'
|
||||
system datanetwork-add ${DATANET0} vlan
|
||||
system datanetwork-add ${DATANET1} vlan
|
||||
|
||||
# Assign Data Networks to Data Interfaces
|
||||
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
|
||||
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
|
||||
|
||||
*****************************************
|
||||
Optionally Configure PCI-SRIOV Interfaces
|
||||
*****************************************
|
||||
|
||||
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
|
||||
|
||||
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
|
||||
network attachments in hosted application containers.
|
||||
|
||||
.. only:: openstack
|
||||
|
||||
This step is **optional** for OpenStack. Do this step if using |SRIOV|
|
||||
vNICs in hosted application |VMs|. Note that pci-sriov interfaces can
|
||||
have the same Data Networks assigned to them as vswitch data interfaces.
|
||||
|
||||
|
||||
* Configure the pci-sriov interfaces for worker nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Execute the following lines with
|
||||
export NODE=worker-0
|
||||
# and then repeat with
|
||||
export NODE=worker-1
|
||||
|
||||
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
|
||||
# based on displayed linux port name, pci address and device type.
|
||||
system host-port-list ${NODE}
|
||||
|
||||
# List host’s auto-configured ‘ethernet’ interfaces,
|
||||
# find the interfaces corresponding to the ports identified in previous step, and
|
||||
# take note of their UUID
|
||||
system host-if-list -a ${NODE}
|
||||
|
||||
# Modify configuration for these interfaces
|
||||
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
|
||||
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
|
||||
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
|
||||
|
||||
# If not created already, create Data Networks that the 'pci-sriov'
|
||||
# interfaces will be connected to
|
||||
DATANET0='datanet0'
|
||||
DATANET1='datanet1'
|
||||
system datanetwork-add ${DATANET0} vlan
|
||||
system datanetwork-add ${DATANET1} vlan
|
||||
|
||||
# Assign Data Networks to PCI-SRIOV Interfaces
|
||||
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
|
||||
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
|
||||
|
||||
|
||||
* **For Kubernetes only:** To enable using |SRIOV| network attachments for
|
||||
the above interfaces in Kubernetes hosted application containers:
|
||||
|
||||
* Configure the Kubernetes |SRIOV| device plugin.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE sriovdp=enabled
|
||||
done
|
||||
|
||||
* If planning on running |DPDK| in Kubernetes hosted application
|
||||
containers on this host, configure the number of 1G Huge pages required
|
||||
on both |NUMA| nodes.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
|
||||
system host-memory-modify -f application $NODE 0 -1G 10
|
||||
|
||||
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
|
||||
system host-memory-modify -f application $NODE 1 -1G 10
|
||||
|
||||
done
|
||||
|
||||
|
||||
-------------------
|
||||
Unlock worker nodes
|
||||
-------------------
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
done
|
||||
|
||||
The worker nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the
|
||||
host machine.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
||||
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/72hr-to-license.rest
|
@ -1,33 +0,0 @@
|
||||
.. _delete-hosts-using-the-host-delete-command-1729d2e3153b:
|
||||
|
||||
===================================
|
||||
Delete Hosts Using the Command Line
|
||||
===================================
|
||||
|
||||
You can delete hosts from the system inventory using the :command:`host-delete` command.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
#. Check for alarms related to the host.
|
||||
|
||||
Use the :command:`fm alarm-list` command to check for any alarms (major
|
||||
or critical events). You can also type :command:`fm event-list` to see a log
|
||||
of events. For more information on alarms, see :ref:`Fault Management
|
||||
Overview <fault-management-overview>`.
|
||||
|
||||
#. Lock the host that will be deleted.
|
||||
|
||||
Use the :command:`system host-lock` command. Only locked hosts can be deleted.
|
||||
|
||||
#. Delete the host from the system inventory.
|
||||
|
||||
Use the :command:`system host-delete` command. This command accepts one
parameter: the hostname or ID. Make sure that the remaining hosts have
sufficient capacity to absorb the workload of the deleted host. A combined
example is shown after this procedure.
|
||||
|
||||
#. Verify that the host has been deleted successfully.
|
||||
|
||||
Use the :command:`fm alarm-list` command to check for any alarms (major
|
||||
or critical events). You can also type :command:`fm event-list` to see a log
|
||||
of events. For more information on alarms, see :ref:`Fault Management
|
||||
Overview <fault-management-overview>`.
|
@ -1,53 +0,0 @@
|
||||
|
||||
.. fdm1552927801987
|
||||
.. _exporting-host-configurations-r6:
|
||||
|
||||
==========================
|
||||
Export Host Configurations
|
||||
==========================
|
||||
|
||||
You can generate a host configuration file from an existing system for
|
||||
re-installation, upgrade, or maintenance purposes.
|
||||
|
||||
.. rubric:: |context|
|
||||
|
||||
You can generate a host configuration file using the :command:`system
|
||||
host-bulk-export` command, and then use this file with the :command:`system
|
||||
host-bulk-add` command to re-create the system. If required, you can modify the
|
||||
file before using it.
|
||||
|
||||
The configuration settings \(management |MAC| address, BM IP address, and so
|
||||
on\) for all nodes except **controller-0** are written to the file.
|
||||
|
||||
.. note::
|
||||
To ensure that the hosts are not powered on unexpectedly, the **power-on**
|
||||
element for each host is commented out by default.
|
||||
|
||||
.. rubric:: |prereq|
|
||||
|
||||
To perform this procedure, you must be logged in as the **admin** user.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
.. _exporting-host-configurations-steps-unordered-ntw-nw1-c2b-r6:
|
||||
|
||||
- Run the :command:`system host-bulk-export` command to create the host
|
||||
configuration file.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-bulk-export [--filename <FILENAME>]
|
||||
|
||||
|
||||
- where <FILENAME> is the path and name of the output file. If the
|
||||
``--filename`` option is not present, the default path ./hosts.xml is
|
||||
used.
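
To re-create the hosts later, the exported file is passed to the companion
command on the newly installed system; a hedged example assuming the default
file name:

.. code-block:: bash

   # Run on controller-0 of the target system after bootstrap
   system host-bulk-add hosts.xml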
|
||||
|
||||
.. rubric:: |postreq|
|
||||
|
||||
To use the host configuration file, see :ref:`Reinstall a System Using an
|
||||
Exported Host Configuration File
|
||||
<reinstalling-a-system-using-an-exported-host-configuration-file-r6>`.
|
||||
|
||||
For details on the structure and elements of the file, see :ref:`Bulk Host XML
|
||||
File Format <bulk-host-xml-file-format-r6>`.
|
@ -1,72 +0,0 @@
|
||||
====================================
|
||||
Bare metal Standard with Ironic R6.0
|
||||
====================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
Ironic is an OpenStack project that provisions bare metal machines. For
|
||||
information about the Ironic project, see
|
||||
`Ironic Documentation <https://docs.openstack.org/ironic>`__.
|
||||
|
||||
End user applications can be deployed on bare metal servers (instead of
|
||||
virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or
|
||||
more bare metal servers.
|
||||
|
||||
.. note::
|
||||
|
||||
If you are behind a corporate firewall or proxy, you need to set proxy
|
||||
settings. Refer to :ref:`docker_proxy_config` for
|
||||
details.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-ironic.png
|
||||
:scale: 50%
|
||||
:alt: Standard with Ironic deployment configuration
|
||||
|
||||
*Figure 1: Standard with Ironic deployment configuration*
|
||||
|
||||
Bare metal servers must be connected to:
|
||||
|
||||
* IPMI for OpenStack Ironic control
|
||||
* ironic-provisioning-net tenant network via their untagged physical interface,
|
||||
which supports PXE booting
|
||||
|
||||
As part of configuring OpenStack Ironic in StarlingX:
|
||||
|
||||
* An ironic-provisioning-net tenant network must be identified as the boot
|
||||
network for bare metal nodes.
|
||||
* An additional untagged physical interface must be configured on controller
|
||||
nodes and connected to the ironic-provisioning-net tenant network. The
|
||||
OpenStack Ironic tftpboot server will PXE boot the bare metal servers over
|
||||
this interface.
|
||||
|
||||
.. note::
|
||||
|
||||
Bare metal servers are NOT:
|
||||
|
||||
* Running any OpenStack / StarlingX software; they are running end user
|
||||
applications (for example, Glance Images).
|
||||
* To be connected to the internal management network.
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
StarlingX currently supports only a bare metal installation of Ironic with a
|
||||
standard configuration, either:
|
||||
|
||||
* :doc:`controller_storage`
|
||||
|
||||
* :doc:`dedicated_storage`
|
||||
|
||||
|
||||
This guide assumes that you have a standard deployment installed and configured
|
||||
with 2x controllers and at least 1x compute-labeled worker node, with the
|
||||
StarlingX OpenStack application (|prefix|-openstack) applied.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
ironic_hardware
|
||||
ironic_install
|
@ -1,51 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R6.0 bare metal Ironic** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
* One or more bare metal hosts to act as Ironic nodes; these also serve as the tenant instance nodes.
|
||||
|
||||
* BMC support on bare metal host and controller node connectivity to the BMC IP
|
||||
address of bare metal hosts.
|
||||
|
||||
For controller nodes:
|
||||
|
||||
* Additional NIC port on both controller nodes for connecting to the
|
||||
ironic-provisioning-net.
|
||||
|
||||
For worker nodes:
|
||||
|
||||
* If using a flat data network for the Ironic provisioning network, an additional
|
||||
NIC port on one of the worker nodes is required.
|
||||
|
||||
* Alternatively, use a VLAN data network for the Ironic provisioning network and
|
||||
simply add the new data network to an existing interface on the worker node.
|
||||
|
||||
* Additional switch ports / configuration for new ports on controller, worker,
|
||||
and Ironic nodes, for connectivity to the Ironic provisioning network.
|
||||
|
||||
-----------------------------------
|
||||
BMC configuration of Ironic node(s)
|
||||
-----------------------------------
|
||||
|
||||
Enable the BMC and assign a static IP address, username, and password in the
BIOS settings. For example, set:
|
||||
|
||||
IP address
|
||||
10.10.10.126
|
||||
|
||||
username
|
||||
root
|
||||
|
||||
password
|
||||
test123
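
Once the BMC is configured, you can optionally confirm that it is reachable
from the controller before enrolling the node in Ironic. This is a minimal
sketch using ``ipmitool`` (not part of the StarlingX procedure itself) with
the example credentials above:

::

  ipmitool -I lanplus -H 10.10.10.126 -U root -P test123 chassis power status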
|
@ -1,392 +0,0 @@
|
||||
================================
|
||||
Install Ironic on StarlingX R6.0
|
||||
================================
|
||||
|
||||
This section describes the steps to install Ironic on a standard configuration,
|
||||
either:
|
||||
|
||||
* **StarlingX R6.0 bare metal Standard with Controller Storage** deployment
|
||||
configuration
|
||||
|
||||
* **StarlingX R6.0 bare metal Standard with Dedicated Storage** deployment
|
||||
configuration
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
---------------------
|
||||
Enable Ironic service
|
||||
---------------------
|
||||
|
||||
This section describes the pre-configuration required to enable the Ironic service.
|
||||
All the commands in this section are for the StarlingX platform.
|
||||
|
||||
First acquire administrative privileges:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
********************************
|
||||
Download Ironic deployment image
|
||||
********************************
|
||||
|
||||
The Ironic service requires a deployment image (kernel and ramdisk) which is
|
||||
used to clean Ironic nodes and install the end-user's image. The cleaning done
|
||||
by the deployment image wipes the disks and tests connectivity to the Ironic
|
||||
conductor on the controller nodes via the Ironic Python Agent (IPA).
|
||||
|
||||
The latest Ironic deployment image (**Ironic-kernel** and **Ironic-ramdisk**)
|
||||
can be found here:
|
||||
|
||||
* `Ironic-kernel ipa-centos8-master.kernel
|
||||
<https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.kernel>`__
|
||||
* `Ironic-ramdisk ipa-centos8.initramfs
|
||||
<https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.initramfs>`__
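
For example, a minimal sketch for downloading both files to the sysadmin home
directory (assuming the controller has external network access, possibly via
the configured proxy):

::

  wget https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.kernel
  wget https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.initramfs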
|
||||
|
||||
|
||||
*******************************************************
|
||||
Configure Ironic network on deployed standard StarlingX
|
||||
*******************************************************
|
||||
|
||||
#. Add an address pool for the Ironic network. This example uses `ironic-pool`:
|
||||
|
||||
::
|
||||
|
||||
system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24
|
||||
|
||||
#. Add the Ironic platform network. This example uses `ironic-net`:
|
||||
|
||||
::
|
||||
|
||||
system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false
|
||||
|
||||
#. Add the Ironic tenant network. This example uses `ironic-data`:
|
||||
|
||||
.. note::
|
||||
|
||||
The tenant network is not the same as the platform network described in
|
||||
the previous step. You can specify any name for the tenant network other
|
||||
than ‘ironic’. If the name 'ironic' is used, a user override must be
|
||||
generated to indicate the tenant network name.
|
||||
|
||||
Refer to section `Generate user Helm overrides`_ for details.
|
||||
|
||||
::
|
||||
|
||||
system datanetwork-add ironic-data flat
|
||||
|
||||
#. Configure the new interfaces (for Ironic) on the controller nodes and assign
   them to the platform network. The hosts must be locked. This example uses the
   platform network `ironic-net` that was named in a previous step.
|
||||
|
||||
These new interfaces to the controllers are used to connect to the Ironic
|
||||
provisioning network:
|
||||
|
||||
**controller-0**
|
||||
|
||||
::
|
||||
|
||||
system interface-network-assign controller-0 enp2s0 ironic-net
|
||||
system host-if-modify -n ironic -c platform \
|
||||
--ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0
|
||||
|
||||
# Apply the OpenStack Ironic node labels
|
||||
system host-label-assign controller-0 openstack-ironic=enabled
|
||||
|
||||
# Unlock the node to apply changes
|
||||
system host-unlock controller-0
|
||||
|
||||
|
||||
**controller-1**
|
||||
|
||||
::
|
||||
|
||||
system interface-network-assign controller-1 enp2s0 ironic-net
|
||||
system host-if-modify -n ironic -c platform \
|
||||
--ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0
|
||||
|
||||
# Apply the OpenStack Ironic node labels
|
||||
system host-label-assign controller-1 openstack-ironic=enabled
|
||||
|
||||
# Unlock the node to apply changes
|
||||
system host-unlock controller-1
|
||||
|
||||
#. Configure the new interface (for Ironic) on one of the compute-labeled worker
|
||||
nodes and assign it to the Ironic data network. This example uses the data
|
||||
network `ironic-data` that was named in a previous step.
|
||||
|
||||
::
|
||||
|
||||
system interface-datanetwork-assign worker-0 eno1 ironic-data
|
||||
system host-if-modify -n ironicdata -c data worker-0 eno1
|
||||
|
||||
****************************
|
||||
Generate user Helm overrides
|
||||
****************************
|
||||
|
||||
Ironic Helm Charts are included in the |prefix|-openstack application. By
|
||||
default, Ironic is disabled.
|
||||
|
||||
To enable Ironic, update the following Ironic Helm Chart attributes:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system helm-override-update |prefix|-openstack ironic openstack \
|
||||
--set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \
|
||||
--set network.pxe.neutron_subnet_gateway=10.10.20.1 \
|
||||
--set network.pxe.neutron_provider_network=ironic-data
|
||||
|
||||
:command:`network.pxe.neutron_subnet_alloc_start` sets the start of the DHCP
allocation range that Neutron uses when provisioning Ironic nodes, reserving
the lower addresses in the subnet for the platform.
|
||||
|
||||
If the data network name for Ironic is changed, update
:command:`network.pxe.neutron_provider_network` in the command above
accordingly:
|
||||
|
||||
::
|
||||
|
||||
--set network.pxe.neutron_provider_network=ironic-data
|
||||
|
||||
***************************
|
||||
Apply OpenStack application
|
||||
***************************
|
||||
|
||||
Re-apply the |prefix|-openstack application to apply the changes to Ironic:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system helm-chart-attribute-modify |prefix|-openstack ironic openstack \
|
||||
--enabled true
|
||||
|
||||
system application-apply |prefix|-openstack
|
||||
|
||||
--------------------
|
||||
Start an Ironic node
|
||||
--------------------
|
||||
|
||||
All the commands in this section are for the OpenStack application with
|
||||
administrative privileges.
|
||||
|
||||
From a new shell as a root user, without sourcing ``/etc/platform/openrc``:
|
||||
|
||||
::
|
||||
|
||||
mkdir -p /etc/openstack
|
||||
|
||||
tee /etc/openstack/clouds.yaml << EOF
|
||||
clouds:
|
||||
openstack_helm:
|
||||
region_name: RegionOne
|
||||
identity_api_version: 3
|
||||
endpoint_type: internalURL
|
||||
auth:
|
||||
username: 'admin'
|
||||
password: 'Li69nux*'
|
||||
project_name: 'admin'
|
||||
project_domain_name: 'default'
|
||||
user_domain_name: 'default'
|
||||
auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
|
||||
EOF
|
||||
|
||||
export OS_CLOUD=openstack_helm
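
As an optional sanity check (not part of the original procedure), you can
confirm that the admin credentials in ``clouds.yaml`` work before continuing:

::

  openstack token issue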
|
||||
|
||||
********************
|
||||
Create Glance images
|
||||
********************
|
||||
|
||||
#. Create the **ironic-kernel** image:
|
||||
|
||||
::
|
||||
|
||||
openstack image create \
|
||||
--file ~/coreos_production_pxe-stable-stein.vmlinuz \
|
||||
--disk-format aki \
|
||||
--container-format aki \
|
||||
--public \
|
||||
ironic-kernel
|
||||
|
||||
#. Create the **ironic-ramdisk** image:
|
||||
|
||||
::
|
||||
|
||||
openstack image create \
|
||||
--file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \
|
||||
--disk-format ari \
|
||||
--container-format ari \
|
||||
--public \
|
||||
ironic-ramdisk
|
||||
|
||||
#. Create the end user application image (for example, CentOS):
|
||||
|
||||
::
|
||||
|
||||
openstack image create \
|
||||
--file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \
|
||||
--public --disk-format \
|
||||
qcow2 --container-format bare centos
|
||||
|
||||
*********************
|
||||
Create an Ironic node
|
||||
*********************
|
||||
|
||||
#. Create a node:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node create --driver ipmi --name ironic-test0
|
||||
|
||||
#. Add IPMI information:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node set \
|
||||
--driver-info ipmi_address=10.10.10.126 \
|
||||
--driver-info ipmi_username=root \
|
||||
--driver-info ipmi_password=test123 \
|
||||
--driver-info ipmi_terminal_port=623 ironic-test0
|
||||
|
||||
#. Set the `ironic-kernel` and `ironic-ramdisk` image driver information
   on this bare metal node:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node set \
|
||||
--driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \
|
||||
--driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \
|
||||
ironic-test0
|
||||
|
||||
#. Set resource properties on this bare metal node based on actual Ironic node
|
||||
capacities:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node set \
|
||||
--property cpus=4 \
|
||||
--property cpu_arch=x86_64 \
|
||||
--property capabilities="boot_option:local" \
|
||||
--property memory_mb=65536 \
|
||||
--property local_gb=400 \
|
||||
--resource-class bm ironic-test0
|
||||
|
||||
#. Add pxe_template location:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node set --driver-info \
|
||||
pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \
|
||||
ironic-test0
|
||||
|
||||
#. Create a port to identify the specific port used by the Ironic node.
|
||||
Substitute **a4:bf:01:2b:3b:c8** with the MAC address for the Ironic node
|
||||
port which connects to the Ironic network:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal port create \
|
||||
--node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \
|
||||
--pxe-enabled true a4:bf:01:2b:3b:c8
|
||||
|
||||
#. Change node state to `manage`:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node manage ironic-test0
|
||||
|
||||
#. Make node available for deployment:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node provide ironic-test0
|
||||
|
||||
#. Wait for ironic-test0 to reach the provision state ``available``:
|
||||
|
||||
::
|
||||
|
||||
openstack baremetal node show ironic-test0
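
   If you prefer to wait for the state change from a script, a minimal polling
   sketch (the ``provision_state`` field is standard Ironic output) is:

   ::

     until [ "$(openstack baremetal node show ironic-test0 -f value -c provision_state)" = "available" ]; do
         sleep 10
     done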
|
||||
|
||||
---------------------------------
|
||||
Deploy an instance on Ironic node
|
||||
---------------------------------
|
||||
|
||||
All the commands in this section are for the OpenStack application, but this
|
||||
time with *tenant* specific privileges.
|
||||
|
||||
#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``:
|
||||
|
||||
::
|
||||
|
||||
mkdir -p /etc/openstack
|
||||
|
||||
tee /etc/openstack/clouds.yaml << EOF
|
||||
clouds:
|
||||
openstack_helm:
|
||||
region_name: RegionOne
|
||||
identity_api_version: 3
|
||||
endpoint_type: internalURL
|
||||
auth:
|
||||
username: 'joeuser'
|
||||
password: 'mypasswrd'
|
||||
project_name: 'intel'
|
||||
project_domain_name: 'default'
|
||||
user_domain_name: 'default'
|
||||
auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
|
||||
EOF
|
||||
|
||||
export OS_CLOUD=openstack_helm
|
||||
|
||||
#. Create flavor.
|
||||
|
||||
Set resource CUSTOM_BM corresponding to **--resource-class bm**:
|
||||
|
||||
::
|
||||
|
||||
openstack flavor create --ram 4096 --vcpus 4 --disk 400 \
|
||||
--property resources:CUSTOM_BM=1 \
|
||||
--property resources:VCPU=0 \
|
||||
--property resources:MEMORY_MB=0 \
|
||||
--property resources:DISK_GB=0 \
|
||||
--property capabilities:boot_option='local' \
|
||||
bm-flavor
|
||||
|
||||
See `Adding scheduling information
|
||||
<https://docs.openstack.org/ironic/latest/install/enrollment.html#adding-scheduling-information>`__
|
||||
and `Configure Nova flavors
|
||||
<https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__
|
||||
for more information.
|
||||
|
||||
#. Enable the compute service
|
||||
|
||||
List the compute services:
|
||||
|
||||
::
|
||||
|
||||
openstack compute service list
|
||||
|
||||
Set compute service properties:
|
||||
|
||||
::
|
||||
|
||||
openstack compute service set --enable controller-0 nova-compute
|
||||
|
||||
#. Create instance
|
||||
|
||||
.. note::
|
||||
|
||||
The :command:`keypair create` command is optional. It is not required to
|
||||
enable a bare metal instance.
|
||||
|
||||
::
|
||||
|
||||
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
|
||||
|
||||
|
||||
Create 2 new servers, one bare metal and one virtual:
|
||||
|
||||
::
|
||||
|
||||
openstack server create --image centos --flavor bm-flavor \
|
||||
--network baremetal --key-name mykey bm
|
||||
|
||||
openstack server create --image centos --flavor m1.small \
|
||||
--network baremetal --key-name mykey vm
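
   You can then monitor both instances as they build, for example:

   ::

     openstack server list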
|
@ -1,39 +0,0 @@
|
||||
|
||||
.. deo1552927844327
|
||||
.. _reinstalling-a-system-or-a-host-r6:
|
||||
|
||||
============================
|
||||
Reinstall a System or a Host
|
||||
============================
|
||||
|
||||
You can reinstall individual hosts or the entire system if necessary.
|
||||
Reinstalling host software or deleting and re-adding a host node may be
|
||||
required to complete certain configuration changes.
|
||||
|
||||
.. rubric:: |context|
|
||||
|
||||
For a summary of changes that require system or host reinstallation, see
|
||||
|node-doc|: :ref:`Configuration Changes Requiring Re-installation
|
||||
<configuration-changes-requiring-re-installation>`.
|
||||
|
||||
To reinstall an entire system, refer to the Installation Guide for your system
|
||||
type \(for example, Standard or All-in-one\).
|
||||
|
||||
.. note::
|
||||
To simplify system reinstallation, you can export and reuse an existing
|
||||
system configuration. For more information, see :ref:`Reinstalling a System
|
||||
Using an Exported Host Configuration File
|
||||
<reinstalling-a-system-using-an-exported-host-configuration-file-r6>`.
|
||||
|
||||
To reinstall the software on a host using the Host Inventory controls, see
|
||||
|node-doc|: :ref:`Host Inventory <hosts-tab>`. In some cases, you must delete
|
||||
the host instead, and then re-add it using the standard host installation
|
||||
procedure. This applies if the system inventory record must be corrected to
|
||||
complete the configuration change \(for example, if the |MAC| address of the
|
||||
management interface has changed\).
|
||||
|
||||
- :ref:`Reinstalling a System Using an Exported Host Configuration File
|
||||
<reinstalling-a-system-using-an-exported-host-configuration-file-r6>`
|
||||
|
||||
- :ref:`Exporting Host Configurations <exporting-host-configurations-r6>`
|
||||
|
@ -1,45 +0,0 @@
|
||||
|
||||
.. wuh1552927822054
|
||||
.. _reinstalling-a-system-using-an-exported-host-configuration-file-r6:
|
||||
|
||||
============================================================
|
||||
Reinstall a System Using an Exported Host Configuration File
|
||||
============================================================
|
||||
|
||||
You can reinstall a system using the host configuration file that is generated
|
||||
using the :command:`host-bulk-export` command.
|
||||
|
||||
.. rubric:: |prereq|
|
||||
|
||||
For the following procedure, **controller-0** must be the active controller.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
#. Create a host configuration file using the :command:`system
|
||||
host-bulk-export` command, as described in :ref:`Exporting Host
|
||||
Configurations <exporting-host-configurations-r6>`.
|
||||
|
||||
#. Copy the host configuration file to a USB drive or somewhere off the
|
||||
controller hard disk.
|
||||
|
||||
#. Edit the host configuration file as needed, for example to specify power-on
|
||||
or |BMC| information.
|
||||
|
||||
#. Delete all the hosts except **controller-0** from the inventory.
|
||||
|
||||
#. Reinstall the |prod| software on **controller-0**, which must be the active
|
||||
controller.
|
||||
|
||||
#. Run the Ansible bootstrap playbook.
|
||||
|
||||
#. Follow the instructions for using the :command:`system host-bulk-add`
|
||||
command, as detailed in :ref:`Adding Hosts in Bulk <adding-hosts-in-bulk-r6>`.
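
   For example, assuming the edited configuration file was copied back to the
   sysadmin home directory as hosts.xml, the hosts can be re-added in bulk with:

   ::

     system host-bulk-add hosts.xml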
|
||||
|
||||
.. rubric:: |postreq|
|
||||
|
||||
After adding the host, you must provision it according to the requirements of
|
||||
the personality.
|
||||
|
||||
.. xbooklink For more information, see :ref:`Installing, Configuring, and
|
||||
Unlocking Nodes <installing-configuring-and-unlocking-nodes>`, for your system,
|
||||
and follow the *Configure* steps for the appropriate node personality.
|
@ -1,22 +0,0 @@
|
||||
=======================================================
|
||||
Bare metal Standard with Rook Storage Installation R6.0
|
||||
=======================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_rook_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
rook_storage_hardware
|
||||
rook_storage_install_kubernetes
|
@ -1,73 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R6.0 bare metal Standard with Rook Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum Requirement | Controller Node | Worker Node for rook | Worker Node for |
|
||||
| | | storage | application |
|
||||
+=====================+===========================+=======================+=======================+
|
||||
| Number of servers | 2 | 2-9 | 2-100 |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket |
|
||||
| class | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum memory | 64 GB | 64 GB | 32 GB |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|
||||
| | :ref:`nvme_config`) | | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Additional disks | None | - 1 or more 500 GB | - For OpenStack, |
|
||||
| | | (min. 10K RPM) for | recommend 1 or more |
|
||||
| | | Ceph OSD | 500 GB (min. 10K |
|
||||
| | | - Recommended, but | RPM) for VM |
|
||||
| | | not required: 1 or | ephemeral storage |
|
||||
| | | more SSDs or NVMe | |
|
||||
| | | drives for Ceph | |
|
||||
| | | journals (min. 1024 | |
|
||||
| | | MiB per OSD | |
|
||||
| | | journal) | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: |
|
||||
| ports | 1x10GE | 1x10GE | 1x10GE |
|
||||
| | - OAM: 1x1GE | | - Data: 1 or more |
|
||||
| | | | x 10GE |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the following diagram.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-controller-storage.png
|
||||
:scale: 50%
|
||||
:alt: Standard with Rook Storage deployment configuration
|
||||
|
||||
*Standard with Rook Storage deployment configuration*
|
@ -1,752 +0,0 @@
|
||||
=====================================================================
|
||||
Install StarlingX Kubernetes on Bare Metal Standard with Rook Storage
|
||||
=====================================================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R6.0 bare metal Standard with Rook Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-------------------
|
||||
Create bootable USB
|
||||
-------------------
|
||||
|
||||
Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
|
||||
create a bootable USB with the StarlingX ISO on your system.
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. incl-install-software-controller-0-standard-start:
|
||||
|
||||
#. Insert the bootable USB into a bootable USB port on the host you are
|
||||
configuring as controller-0.
|
||||
|
||||
#. Power on the host.
|
||||
|
||||
#. Attach to a console, ensure the host boots from the USB, and wait for the
|
||||
StarlingX Installer Menus.
|
||||
|
||||
#. Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'Standard Controller Configuration'
|
||||
#. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
|
||||
your terminal access to the console port
|
||||
|
||||
#. Wait for non-interactive install of software to complete and server to reboot.
|
||||
This can take 5-10 minutes, depending on the performance of the server.
|
||||
|
||||
.. incl-install-software-controller-0-standard-end:
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. incl-bootstrap-sys-controller-0-standard-start:
|
||||
|
||||
#. Login using the username / password of "sysadmin" / "sysadmin".
|
||||
|
||||
When logging in for the first time, you will be forced to change the password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. Verify and/or configure IP connectivity.
|
||||
|
||||
External connectivity is required to run the Ansible bootstrap playbook. The
|
||||
StarlingX boot image will DHCP out all interfaces so the server may have
|
||||
obtained an IP address and have external IP connectivity if a DHCP server is
|
||||
present in your environment. Verify this using the :command:`ip addr` and
|
||||
:command:`ping 8.8.8.8` commands.
|
||||
|
||||
Otherwise, manually configure an IP address and default IP route. Use the
|
||||
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
|
||||
deployment environment.
|
||||
|
||||
::
|
||||
|
||||
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
|
||||
sudo ip link set up dev <PORT>
|
||||
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
|
||||
ping 8.8.8.8
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
|
||||
configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
``sysadmin home directory ($HOME)``
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
|
||||
|
||||
.. include:: /shared/_includes/ansible_install_time_only.txt
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
#. Use a copy of the default.yml file listed above to provide your overrides.
|
||||
|
||||
The default.yml file lists all available parameters for bootstrap
|
||||
configuration with a brief description for each parameter in the file comments.
|
||||
|
||||
To use this method, copy the default.yml file listed above to
|
||||
``$HOME/localhost.yml`` and edit the configurable values as desired.
|
||||
|
||||
#. Create a minimal user configuration override file.
|
||||
|
||||
To use this method, create your override file at ``$HOME/localhost.yml``
|
||||
and provide the minimum required parameters for the deployment configuration
|
||||
as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
|
||||
applicable to your deployment environment.
|
||||
|
||||
::
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
system_mode: duplex
|
||||
|
||||
dns_servers:
|
||||
- 8.8.8.8
|
||||
- 8.8.4.4
|
||||
|
||||
external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
|
||||
external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
|
||||
external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
|
||||
external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
|
||||
external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>
|
||||
|
||||
admin_username: admin
|
||||
admin_password: <admin-password>
|
||||
ansible_become_pass: <sysadmin-password>
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
# docker_http_proxy: http://my.proxy.com:1080
|
||||
# docker_https_proxy: https://my.proxy.com:1443
|
||||
# docker_no_proxy:
|
||||
# - 1.2.3.4
|
||||
|
||||
EOF
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
|
||||
firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
|
||||
for details about Docker proxy settings.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
.. incl-bootstrap-sys-controller-0-standard-end:
|
||||
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
.. incl-config-controller-0-storage-start:
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the OAM and MGMT interfaces of controller-0 and specify the
|
||||
attached networks. Use the OAM and MGMT port names, for example eth0, that are
|
||||
applicable to your deployment environment.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
MGMT_IF=<MGMT-PORT>
|
||||
system host-if-modify controller-0 lo -c none
|
||||
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
|
||||
for UUID in $IFNET_UUIDS; do
|
||||
system interface-network-remove ${UUID}
|
||||
done
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
system host-if-modify controller-0 $MGMT_IF -c platform
|
||||
system interface-network-assign controller-0 $MGMT_IF mgmt
|
||||
system interface-network-assign controller-0 $MGMT_IF cluster-host
|
||||
|
||||
#. Configure NTP servers for network time synchronization:
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
|
||||
#. If required, and not already done as part of bootstrap, configure Docker to
|
||||
use a proxy server.
|
||||
|
||||
#. List Docker proxy parameters:
|
||||
|
||||
::
|
||||
|
||||
system service-parameter-list platform docker
|
||||
|
||||
#. Refer to :ref:`docker_proxy_config` for
|
||||
details about Docker proxy settings.
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 openstack-control-plane=enabled
|
||||
|
||||
#. **For OpenStack only:** Configure the system setting for the vSwitch.
|
||||
|
||||
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
|
||||
|
||||
* Runs in a container; defined within the helm charts of |prefix|-openstack
|
||||
manifest.
|
||||
* Shares the core(s) assigned to the platform.
|
||||
|
||||
If you require better performance, |OVS|-|DPDK| (|OVS| with the Data Plane
|
||||
Development Kit, which is supported only on bare metal hardware) should be
|
||||
used:
|
||||
|
||||
* Runs directly on the host (it is not containerized).
|
||||
* Requires that at least 1 core be assigned/dedicated to the vSwitch
|
||||
function.
|
||||
|
||||
To deploy the default containerized |OVS|:
|
||||
|
||||
::
|
||||
|
||||
system modify --vswitch_type none
|
||||
|
||||
Do not run any vSwitch directly on the host, instead, use the containerized
|
||||
|OVS| defined in the helm charts of |prefix|-openstack manifest.
|
||||
|
||||
To deploy |OVS|-|DPDK|, run the following command:
|
||||
|
||||
::
|
||||
|
||||
system modify --vswitch_type ovs-dpdk
|
||||
system host-cpu-modify -f vswitch -p0 1 controller-0
|
||||
|
||||
Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created will
|
||||
default to automatically assigning 1 vSwitch core for |AIO| controllers and
|
||||
2 vSwitch cores for compute-labeled worker nodes.
|
||||
|
||||
When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with the
|
||||
following command:
|
||||
|
||||
::
|
||||
|
||||
system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
|
||||
|
||||
For example:
|
||||
|
||||
::
|
||||
|
||||
system host-memory-modify -f vswitch -1G 1 worker-0 0
|
||||
|
||||
|VMs| created in an |OVS|-|DPDK| environment must be configured to use huge
pages to enable networking, and must use a flavor with the property
hw:mem_page_size=large (a flavor example is sketched at the end of this
section).
|
||||
|
||||
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with the
|
||||
command:
|
||||
|
||||
::
|
||||
|
||||
system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>
|
||||
|
||||
For example:
|
||||
|
||||
::
|
||||
|
||||
system host-memory-modify worker-0 0 -1G 10
|
||||
|
||||
.. note::
|
||||
|
||||
After controller-0 is unlocked, changing vswitch_type requires
|
||||
locking and unlocking all compute-labeled worker nodes (and/or |AIO|
|
||||
controllers) to apply the change.
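
For reference, a minimal sketch of setting the hw:mem_page_size property
required for |OVS|-|DPDK| on an existing flavor (the flavor name ``m1.small``
here is only an example):

::

  openstack flavor set --property hw:mem_page_size=large m1.small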
|
||||
|
||||
.. incl-config-controller-0-storage-end:
|
||||
|
||||
********************************
|
||||
Rook-specific host configuration
|
||||
********************************
|
||||
|
||||
.. important::
|
||||
|
||||
**This step is required only if the StarlingX Rook application will be
|
||||
installed.**
|
||||
|
||||
**For Rook only:** Assign Rook host labels to controller-0 in support of
installing the rook-ceph-apps manifest/helm-charts later, and add ceph-rook
as the storage backend:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||
system storage-backend-add ceph-rook --confirmed
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
Unlock controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
-------------------------------------------------
|
||||
Install software on controller-1 and worker nodes
|
||||
-------------------------------------------------
|
||||
|
||||
#. Power on the controller-1 server and force it to network boot with the
|
||||
appropriate BIOS boot options for your particular server.
|
||||
|
||||
#. As controller-1 boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of controller-0, list hosts to see newly discovered
|
||||
controller-1 host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. Using the host id, set the personality of this host to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates the install of software on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting for the previous step to complete, power on the worker nodes.
|
||||
Set the personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, power on worker-0 and wait for the new host (hostname=None) to
|
||||
be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for worker-1. Power on worker-1 and wait for the new host
|
||||
(hostname=None) to be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=worker hostname=worker-1
|
||||
|
||||
For Rook storage there is no dedicated storage personality; hosts with the
worker personality provide the storage service. These worker hosts are still
named storage-x here. Repeat for storage-0 and storage-1. Power on storage-0
and storage-1, and wait for the new hosts (hostname=None) to be discovered by
checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 5 personality=worker hostname=storage-0
|
||||
system host-update 6 personality=worker hostname=storage-1
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. Note::
|
||||
|
||||
A node with Edgeworker personality is also available. See
|
||||
:ref:`deploy-edgeworker-nodes` for details.
|
||||
|
||||
#. Wait for the software installation on controller-1, worker-0, and worker-1
|
||||
to complete, for all servers to reboot, and for all to show as
|
||||
locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
| 3 | worker-0 | worker | locked | disabled | online |
|
||||
| 4 | worker-1 | worker | locked | disabled | online |
|
||||
| 5 | storage-0 | worker | locked | disabled | online |
|
||||
| 6 | storage-1 | worker | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. incl-config-controller-1-start:
|
||||
|
||||
Configure the |OAM| and MGMT interfaces of controller-1 and specify the
attached networks. Use the |OAM| and MGMT port names, for example eth0, that
are applicable to your deployment environment.
|
||||
|
||||
(Note that the MGMT interface is partially set up automatically by the network
|
||||
install procedure.)
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=<OAM-PORT>
|
||||
MGMT_IF=<MGMT-PORT>
|
||||
system host-if-modify controller-1 $OAM_IF -c platform
|
||||
system interface-network-assign controller-1 $OAM_IF oam
|
||||
system interface-network-assign controller-1 $MGMT_IF cluster-host
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
|
||||
of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 openstack-control-plane=enabled
|
||||
|
||||
.. incl-config-controller-1-end:
|
||||
|
||||
********************************
|
||||
Rook-specific host configuration
|
||||
********************************
|
||||
|
||||
.. important::
|
||||
|
||||
**This step is required only if the StarlingX Rook application will be
|
||||
installed.**
|
||||
|
||||
**For Rook only:** Assign Rook host labels to controller-1 in support of
|
||||
installing the rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-1 ceph-mgr-placement=enabled
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. incl-unlock-controller-1-start:
|
||||
|
||||
Unlock controller-1 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-1
|
||||
|
||||
Controller-1 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
.. incl-unlock-controller-1-end:
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
|
||||
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Configure data interfaces for worker nodes. Use the DATA port names, for
|
||||
example eth0, that are applicable to your deployment environment.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is **required** for OpenStack.
|
||||
|
||||
This step is optional for Kubernetes: Do this step if using |SRIOV|
|
||||
network attachments in hosted application containers.
|
||||
|
||||
For Kubernetes |SRIOV| network attachments:
|
||||
|
||||
* Configure |SRIOV| device plug in:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign ${NODE} sriovdp=enabled
|
||||
done
|
||||
|
||||
* If planning on running |DPDK| in containers on this host, configure the
|
||||
number of 1G Huge pages required on both |NUMA| nodes:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-memory-modify ${NODE} 0 -1G 100
|
||||
system host-memory-modify ${NODE} 1 -1G 100
|
||||
done
|
||||
|
||||
For both Kubernetes and OpenStack:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=<DATA-0-PORT>
|
||||
DATA1IF=<DATA-1-PORT>
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
|
||||
# configure the datanetworks in sysinv, prior to referencing it
|
||||
# in the ``system host-if-modify`` command.
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring interface for: $NODE"
|
||||
set -ex
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
set +ex
|
||||
done
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the |prefix|-openstack manifest and helm-charts later.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
|
||||
system host-label-assign $NODE |vswitch-label|
|
||||
system host-label-assign $NODE sriov=enabled
|
||||
done
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/rook_storage_install_kubernetes.rest
|
||||
:start-after: ref1-begin
|
||||
:end-before: ref1-end
|
||||
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
|
||||
which is needed for |prefix|-openstack nova ephemeral disks.
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring Nova local for: $NODE"
|
||||
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||
PARTITION_SIZE=10
|
||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||
done
|
||||
|
||||
--------------------
|
||||
Unlock worker nodes
|
||||
--------------------
|
||||
|
||||
Unlock worker nodes in order to bring them into service:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
done
|
||||
|
||||
The worker nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the
|
||||
host machine.
|
||||
|
||||
-----------------------
|
||||
Configure storage nodes
|
||||
-----------------------
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the storage nodes.
|
||||
|
||||
Note that the MGMT interfaces are partially set up by the network install
|
||||
procedure.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for NODE in storage-0 storage-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. **For Rook only:** Assign Rook host labels to storage-0 in support
|
||||
of installing the rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign storage-0 ceph-mon-placement=enabled
|
||||
|
||||
--------------------
|
||||
Unlock storage nodes
|
||||
--------------------
|
||||
|
||||
Unlock storage nodes in order to bring them into service:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
for STORAGE in storage-0 storage-1; do
|
||||
system host-unlock $STORAGE
|
||||
done
|
||||
|
||||
The storage nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the
|
||||
host machine.
|
||||
|
||||
-------------------------------------------------
|
||||
Install Rook application manifest and helm-charts
|
||||
-------------------------------------------------
|
||||
|
||||
On hosts storage-0 and storage-1:
|
||||
|
||||
#. Erase the GPT header of disk /dev/sdb on each storage host:
|
||||
|
||||
::
|
||||
|
||||
$ system host-disk-wipe -s --confirm storage-0 /dev/sdb
|
||||
$ system host-disk-wipe -s --confirm storage-1 /dev/sdb
|
||||
|
||||
#. Wait for the "rook-ceph-apps" application to reach the uploaded state:
|
||||
|
||||
::
|
||||
|
||||
$ source /etc/platform/openrc
|
||||
$ system application-list
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| application | version | manifest name | manifest file | status | progress |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
|
||||
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
|
||||
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
|
||||
#. Create or edit a values.yaml file for rook-ceph-apps that lists the storage hosts and their OSD devices:
|
||||
|
||||
::
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
nodes:
|
||||
- name: storage-0
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
- name: storage-1
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
|
||||
#. Update the rook-ceph-apps override values with this file:
|
||||
|
||||
::
|
||||
|
||||
system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
|
||||
|
||||
#. Apply the rook-ceph-apps application.
|
||||
|
||||
::
|
||||
|
||||
system application-apply rook-ceph-apps
|
||||
|
||||
#. Wait for the OSD pods to be ready:
|
||||
|
||||
::
|
||||
|
||||
kubectl get pods -n kube-system
|
||||
rook-ceph-mgr-a-ddffc8fbb-zkvln 1/1 Running 0 66s
|
||||
rook-ceph-mon-a-c67fdb6c8-tlbvk 1/1 Running 0 2m11s
|
||||
rook-ceph-mon-b-76969d8685-wcq62 1/1 Running 0 2m2s
|
||||
rook-ceph-mon-c-5bc47c6cb9-vm4j8 1/1 Running 0 97s
|
||||
rook-ceph-operator-6fc8cfb68b-bb57z 1/1 Running 1 7m9s
|
||||
rook-ceph-osd-0-689b6f65b-2nvcx 1/1 Running 0 12s
|
||||
rook-ceph-osd-1-7bfd69fdf9-vjqmp 1/1 Running 0 4s
|
||||
rook-ceph-osd-prepare-rook-storage-0-hf28p 0/1 Completed 0 50s
|
||||
rook-ceph-osd-prepare-rook-storage-1-r6lsd 0/1 Completed 0 50s
|
||||
rook-ceph-tools-84c7fff88c-x5trx 1/1 Running 0 6m11s
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
@ -1,317 +0,0 @@
|
||||
.. _index-install-r6-distcloud-46f4880ec78b:
|
||||
|
||||
===================================
|
||||
Distributed Cloud Installation R6.0
|
||||
===================================
|
||||
|
||||
This section describes how to install and configure the StarlingX distributed
|
||||
cloud deployment.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
Distributed cloud configuration supports an edge computing solution by
|
||||
providing central management and orchestration for a geographically
|
||||
distributed network of StarlingX Kubernetes edge systems/clusters.
|
||||
|
||||
The StarlingX distributed cloud implements the OpenStack Edge Computing
|
||||
Group's MVP `Edge Reference Architecture
|
||||
<https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures>`_,
|
||||
specifically the "Distributed Control Plane" scenario.
|
||||
|
||||
The StarlingX distributed cloud deployment is designed to meet the needs of
|
||||
edge-based data centers with centralized orchestration and independent control
|
||||
planes, and in which Network Function Cloudification (NFC) worker resources
|
||||
are localized for maximum responsiveness. The architecture features:
|
||||
|
||||
- Centralized orchestration of edge cloud control planes.
|
||||
- Full synchronized control planes at edge clouds (that is, Kubernetes cluster
|
||||
master and nodes), with greater benefits for local services, such as:
|
||||
|
||||
- Reduced network latency.
|
||||
- Operational availability, even if northbound connectivity
|
||||
to the central cloud is lost.
|
||||
|
||||
The system supports a scalable number of StarlingX Kubernetes edge
|
||||
systems/clusters, which are centrally managed and synchronized over L3
|
||||
networks from a central cloud. Each edge system is also highly scalable, from
|
||||
a single node StarlingX Kubernetes deployment to a full standard cloud
|
||||
configuration with controller, worker and storage nodes.
|
||||
|
||||
------------------------------
|
||||
Distributed cloud architecture
|
||||
------------------------------
|
||||
|
||||
A distributed cloud system consists of a central cloud, and one or more
|
||||
subclouds connected to the SystemController region central cloud over L3
|
||||
networks, as shown in Figure 1.
|
||||
|
||||
- **Central cloud**
|
||||
|
||||
The central cloud provides a *RegionOne* region for managing the physical
|
||||
platform of the central cloud and the *SystemController* region for managing
|
||||
and orchestrating over the subclouds.
|
||||
|
||||
- **RegionOne**
|
||||
|
||||
In the Horizon GUI, RegionOne is the name of the access mode, or region,
|
||||
used to manage the nodes in the central cloud.
|
||||
|
||||
- **SystemController**
|
||||
|
||||
In the Horizon GUI, SystemController is the name of the access mode, or
|
||||
region, used to manage the subclouds.
|
||||
|
||||
You can use the System Controller to add subclouds, synchronize select
|
||||
configuration data across all subclouds and monitor subcloud operations
|
||||
and alarms. System software updates for the subclouds are also centrally
|
||||
managed and applied from the System Controller.
|
||||
|
||||
DNS, NTP, and other select configuration settings are centrally managed
|
||||
at the System Controller and pushed to the subclouds in parallel to
|
||||
maintain synchronization across the distributed cloud.
|
||||
|
||||
- **Subclouds**
|
||||
|
||||
The subclouds are StarlingX Kubernetes edge systems/clusters used to host
|
||||
containerized applications. Any type of StarlingX Kubernetes configuration,
|
||||
(including simplex, duplex, or standard with or without storage nodes), can
|
||||
be used for a subcloud.
|
||||
|
||||
The two edge clouds shown in Figure 1 are subclouds.
|
||||
|
||||
Alarms raised at the subclouds are sent to the System Controller for
|
||||
central reporting.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-distributed-cloud.png
|
||||
:scale: 45%
|
||||
:alt: Distributed cloud deployment configuration
|
||||
|
||||
*Figure 1: Distributed cloud deployment configuration*
|
||||
|
||||
|
||||
--------------------
|
||||
Network requirements
|
||||
--------------------
|
||||
|
||||
Subclouds are connected to the System Controller through both the OAM and the
|
||||
Management interfaces. Because each subcloud is on a separate L3 subnet, the
|
||||
OAM, Management and PXE boot L2 networks are local to the subclouds. They are
|
||||
not connected via L2 to the central cloud, they are only connected via L3
|
||||
routing. The settings required to connect a subcloud to the System Controller
|
||||
are specified when a subcloud is defined. A gateway router is required to
|
||||
complete the L3 connections, which will provide IP routing between the
|
||||
subcloud Management and OAM IP subnet and the System Controller Management and
|
||||
OAM IP subnet, respectively. The System Controller bootstraps the subclouds via
|
||||
the OAM network, and manages them via the management network. For more
|
||||
information, see the `Install a Subcloud`_ section later in this guide.
|
||||
|
||||
.. note::
|
||||
|
||||
All messaging between System Controllers and Subclouds uses the ``admin``
|
||||
REST API service endpoints which, in this distributed cloud environment,
|
||||
are all configured for secure HTTPS. Certificates for these HTTPS
|
||||
connections are managed internally by StarlingX.
|
||||
|
||||
---------------------------------------
|
||||
Install and provision the central cloud
|
||||
---------------------------------------
|
||||
|
||||
Installing the central cloud is similar to installing a standard
|
||||
StarlingX Kubernetes system. The central cloud supports either an AIO-duplex
|
||||
deployment configuration or a standard with dedicated storage nodes deployment
|
||||
configuration.
|
||||
|
||||
To configure controller-0 as a distributed cloud central controller, you must
|
||||
set certain system parameters during the initial bootstrapping of
|
||||
controller-0. Set the system parameter *distributed_cloud_role* to
|
||||
*systemcontroller* in the Ansible bootstrap override file. Also, set the
|
||||
management network IP address range to exclude IP addresses reserved for
|
||||
gateway routers providing routing to the subclouds' management subnets.
|
||||
|
||||
Procedure:
|
||||
|
||||
- Follow the StarlingX R6.0 installation procedures with the extra step noted below:
|
||||
|
||||
- AIO-duplex:
|
||||
`Bare metal All-in-one Duplex Installation R6.0 <https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/aio_duplex.html>`_
|
||||
|
||||
- Standard with dedicated storage nodes:
|
||||
`Bare metal Standard with Dedicated Storage Installation R6.0 <https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/dedicated_storage.html>`_
|
||||
|
||||
- For the step "Bootstrap system on controller-0", add the following
|
||||
parameters to the Ansible bootstrap override file.
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
distributed_cloud_role: systemcontroller
|
||||
management_start_address: <X.Y.Z.2>
|
||||
management_end_address: <X.Y.Z.50>
|
||||
|
||||
------------------
|
||||
Install a subcloud
|
||||
------------------
|
||||
|
||||
At the subcloud location:
|
||||
|
||||
#. Physically install and cable all subcloud servers.
|
||||
|
||||
#. Physically install the top of rack switch and configure it for the
|
||||
required networks.
|
||||
|
||||
#. Physically install the gateway routers which will provide IP routing
|
||||
between the subcloud OAM and Management subnets and the System Controller
|
||||
OAM and management subnets.
|
||||
|
||||
#. On the server designated for controller-0, install the StarlingX
|
||||
Kubernetes software from USB or a PXE Boot server.
|
||||
|
||||
#. Establish an L3 connection to the System Controller by enabling the OAM
|
||||
interface (with OAM IP/subnet) on the subcloud controller using the
|
||||
``config_management`` script. This step is for subcloud ansible bootstrap
|
||||
preparation.
|
||||
|
||||
.. note:: This step should **not** use an interface that uses the MGMT
|
||||
IP/subnet because the MGMT IP subnet will get moved to the loopback
|
||||
address by the Ansible bootstrap playbook during installation.
|
||||
|
||||
Be prepared to provide the following information:
|
||||
|
||||
- Subcloud OAM interface name (for example, enp0s3).
|
||||
- Subcloud OAM interface address, in CIDR format (for example, 10.10.10.12/24).
|
||||
|
||||
.. note:: This must match the *external_oam_floating_address* supplied in
|
||||
the subcloud's ansible bootstrap override file.
|
||||
|
||||
- Subcloud gateway address on the OAM network
|
||||
(for example, 10.10.10.1). A default value is shown.
|
||||
- System Controller OAM subnet (for example, 10.10.10.0/24).
|
||||
|
||||
.. note:: To exit without completing the script, use ``CTRL+C``. Allow a few minutes for
|
||||
the script to finish.
|
||||
|
||||
.. note:: The ``config_management`` script in the code snippet below configures
   the OAM interface, address, and gateway.
|
||||
|
||||
.. code:: sh
|
||||
|
||||
$ sudo config_management
|
||||
Enabling interfaces... DONE
|
||||
Waiting 120 seconds for LLDP neighbor discovery... Retrieving neighbor details... DONE
|
||||
Available interfaces:
|
||||
local interface remote port
|
||||
--------------- ----------
|
||||
enp0s3 08:00:27:c4:6c:7a
|
||||
enp0s8 08:00:27:86:7a:13
|
||||
enp0s9 unknown
|
||||
|
||||
Enter management interface name: enp0s3
|
||||
Enter management address CIDR: 10.10.10.12/24
|
||||
Enter management gateway address [10.10.10.1]:
|
||||
Enter System Controller subnet: 10.10.10.0/24
|
||||
Disabling non-management interfaces... DONE
|
||||
Configuring management interface... DONE
|
||||
RTNETLINK answers: File exists
|
||||
Adding route to System Controller... DONE
|
||||
|
||||
At the System Controller:
|
||||
|
||||
#. Create a ``bootstrap-values.yml`` override file for the subcloud. For
|
||||
example:
|
||||
|
||||
.. code:: yaml
|
||||
|
||||
system_mode: duplex
|
||||
name: "subcloud1"
|
||||
description: "Ottawa Site"
|
||||
location: "YOW"
|
||||
|
||||
management_subnet: 192.168.101.0/24
|
||||
management_start_address: 192.168.101.2
|
||||
management_end_address: 192.168.101.50
|
||||
management_gateway_address: 192.168.101.1
|
||||
|
||||
external_oam_subnet: 10.10.10.0/24
|
||||
external_oam_gateway_address: 10.10.10.1
|
||||
external_oam_floating_address: 10.10.10.12
|
||||
|
||||
systemcontroller_gateway_address: 192.168.204.101
|
||||
|
||||
.. important:: The ``management_*`` entries in the override file are required
   for all installation types, including AIO-Simplex.
|
||||
|
||||
.. important:: The ``management_subnet`` must not overlap with that of any other subcloud.
|
||||
|
||||
.. note:: The ``systemcontroller_gateway_address`` is the address of the
   central cloud's management network gateway.
|
||||
|
||||
#. Add the subcloud using the CLI command below:
|
||||
|
||||
.. code:: sh
|
||||
|
||||
dcmanager subcloud add --bootstrap-address <ip_address> \
--bootstrap-values <config_file>
|
||||
|
||||
Where:
|
||||
|
||||
- *<ip_address>* is the OAM interface address set earlier on the subcloud.
|
||||
- *<config_file>* is the Ansible override configuration file, ``bootstrap-values.yml``,
|
||||
created earlier in step 1.
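
   For example, with the illustrative values used earlier in this guide (the
   subcloud OAM address 10.10.10.12 and the ``bootstrap-values.yml`` file):

   .. code:: sh

      dcmanager subcloud add --bootstrap-address 10.10.10.12 \
      --bootstrap-values bootstrap-values.yml
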
|
||||
|
||||
You will be prompted for the Linux password of the subcloud. This command
|
||||
will take 5-10 minutes to complete. You can monitor the progress of the
|
||||
subcloud bootstrap through logs:
|
||||
|
||||
.. code:: sh
|
||||
|
||||
tail -f /var/log/dcmanager/<subcloud name>_bootstrap_<time stamp>.log
|
||||
|
||||
#. Confirm that the subcloud was deployed successfully:
|
||||
|
||||
.. code:: sh
|
||||
|
||||
dcmanager subcloud list
|
||||
|
||||
+----+-----------+------------+--------------+---------------+---------+
|
||||
| id | name | management | availability | deploy status | sync |
|
||||
+----+-----------+------------+--------------+---------------+---------+
|
||||
| 1 | subcloud1 | unmanaged | offline | complete | unknown |
|
||||
+----+-----------+------------+--------------+---------------+---------+
|
||||
|
||||
#. Continue provisioning the subcloud system as required using the StarlingX
|
||||
R6.0 Installation procedures and starting from the 'Configure controller-0'
|
||||
step.
|
||||
|
||||
- For AIO-Simplex:
|
||||
`Bare metal All-in-one Simplex Installation R6.0 <https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/aio_simplex.html>`_
|
||||
|
||||
- For AIO-Duplex:
|
||||
`Bare metal All-in-one Duplex Installation R6.0 <https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/aio_duplex.html>`_
|
||||
|
||||
- For Standard with controller storage:
|
||||
`Bare metal Standard with Controller Storage Installation R6.0 <https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/controller_storage.html>`_
|
||||
|
||||
- For Standard with dedicated storage nodes:
|
||||
`Bare metal Standard with Dedicated Storage Installation R6.0 <https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/dedicated_storage.html>`_
|
||||
|
||||
On the active controller for each subcloud:
|
||||
|
||||
#. Add a route from the subcloud to the controller management network to enable
|
||||
the subcloud to go online. For each host in the subcloud:
|
||||
|
||||
.. code:: sh
|
||||
|
||||
system host-route-add <host id> <mgmt.interface> \
|
||||
<system controller mgmt.subnet> <prefix> <subcloud mgmt.gateway ip>
|
||||
|
||||
For example:
|
||||
|
||||
.. code:: sh
|
||||
|
||||
system host-route-add 1 enp0s8 192.168.204.0 24 192.168.101.1
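
   To confirm the route was added on each host, you can list the configured
   routes (an illustrative check; output columns vary by release):

   .. code:: sh

      system host-route-list 1
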
|
||||
|
@ -1,146 +0,0 @@
|
||||
.. _index-install-r6-8966076f0e81:
|
||||
|
||||
===========================
|
||||
StarlingX R6.0 Installation
|
||||
===========================
|
||||
|
||||
StarlingX provides a pre-defined set of standard :doc:`deployment
|
||||
configurations </introduction/deploy_options>`. Most deployment options may be
|
||||
installed in a virtual environment or on bare metal.
|
||||
|
||||
-----------------------------------------------------
|
||||
Install StarlingX Kubernetes in a virtual environment
|
||||
-----------------------------------------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
virtual/aio_simplex
|
||||
virtual/aio_duplex
|
||||
virtual/controller_storage
|
||||
virtual/dedicated_storage
|
||||
virtual/rook_storage
|
||||
|
||||
.. toctree::
|
||||
:hidden:
|
||||
|
||||
virtual/config_virtualbox_netwk
|
||||
virtual/install_virtualbox
|
||||
|
||||
------------------------------------------
|
||||
Install StarlingX Kubernetes on bare metal
|
||||
------------------------------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
bare_metal/aio_simplex
|
||||
bare_metal/aio_duplex
|
||||
bare_metal/controller_storage
|
||||
bare_metal/dedicated_storage
|
||||
bare_metal/ironic
|
||||
bare_metal/rook_storage
|
||||
|
||||
**********
|
||||
Appendixes
|
||||
**********
|
||||
|
||||
|
||||
.. _use-private-docker-registry-r6:
|
||||
|
||||
Use a private Docker registry
|
||||
*****************************
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
bare_metal/bootstrapping-from-a-private-docker-registry
|
||||
|
||||
Set up a Simple DNS Server in Lab
|
||||
*********************************
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
setup-simple-dns-server-in-lab
|
||||
|
||||
Install controller-0 from a PXE boot server
|
||||
*******************************************
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
bare_metal/configuring-a-pxe-boot-server
|
||||
bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration
|
||||
|
||||
|
||||
Add and reinstall a host
|
||||
************************
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
bare_metal/adding-hosts-using-the-host-add-command
|
||||
bare_metal/delete-hosts-using-the-host-delete-command-1729d2e3153b
|
||||
|
||||
|
||||
Add hosts in bulk
|
||||
,,,,,,,,,,,,,,,,,
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
bare_metal/adding-hosts-in-bulk
|
||||
bare_metal/bulk-host-xml-file-format
|
||||
|
||||
|
||||
Reinstall a system or a host
|
||||
,,,,,,,,,,,,,,,,,,,,,,,,,,,,
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
bare_metal/reinstalling-a-system-or-a-host
|
||||
bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file
|
||||
bare_metal/exporting-host-configurations
|
||||
|
||||
.. toctree::
|
||||
:hidden:
|
||||
|
||||
ansible_bootstrap_configs
|
||||
|
||||
-------------------------------------------------
|
||||
Install StarlingX Distributed Cloud on bare metal
|
||||
-------------------------------------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
distributed_cloud/index-install-r6-distcloud-46f4880ec78b
|
||||
|
||||
-----------------
|
||||
Access Kubernetes
|
||||
-----------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
kubernetes_access
|
||||
|
||||
---------------------------
|
||||
Install StarlingX OpenStack
|
||||
---------------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
openstack/index-install-r6-os-adc44604968c
|
||||
|
||||
--------------------------
|
||||
Access StarlingX OpenStack
|
||||
--------------------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
openstack/access
|
@ -1,190 +0,0 @@
|
||||
.. _kubernetes_access_r6:
|
||||
|
||||
================================
|
||||
Access StarlingX Kubernetes R6.0
|
||||
================================
|
||||
|
||||
This section describes how to use local/remote CLIs, GUIs, and/or REST APIs to
|
||||
access and manage StarlingX Kubernetes and hosted containerized applications.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
----------
|
||||
Local CLIs
|
||||
----------
|
||||
|
||||
To access the StarlingX and Kubernetes commands on controller-0, follow these
|
||||
steps:
|
||||
|
||||
#. Log in to controller-0 via the console or SSH with sysadmin/<sysadmin-password>
   credentials.
|
||||
|
||||
#. Acquire Keystone admin and Kubernetes admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
*********************************************
|
||||
StarlingX system and host management commands
|
||||
*********************************************
|
||||
|
||||
Access StarlingX system and host management commands using the :command:`system`
|
||||
command. For example:
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
Use the :command:`system help` command for the full list of options.
|
||||
|
||||
***********************************
|
||||
StarlingX fault management commands
|
||||
***********************************
|
||||
|
||||
Access StarlingX fault management commands using the :command:`fm` command, for
|
||||
example:
|
||||
|
||||
::
|
||||
|
||||
fm alarm-list
|
||||
|
||||
*******************
|
||||
Kubernetes commands
|
||||
*******************
|
||||
|
||||
Access Kubernetes commands using the :command:`kubectl` command, for example:
|
||||
|
||||
::
|
||||
|
||||
kubectl get nodes
|
||||
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
controller-0 Ready master 5d19h v1.13.5
|
||||
|
||||
See https://kubernetes.io/docs/reference/kubectl/overview/ for details.
|
||||
|
||||
-----------
|
||||
Remote CLIs
|
||||
-----------
|
||||
|
||||
Documentation coming soon.
|
||||
|
||||
---
|
||||
GUI
|
||||
---
|
||||
|
||||
.. note::
|
||||
|
||||
For a virtual installation, run the browser on the host machine.
|
||||
|
||||
*********************
|
||||
StarlingX Horizon GUI
|
||||
*********************
|
||||
|
||||
Access the StarlingX Horizon GUI with the following steps:
|
||||
|
||||
#. Enter the OAM floating IP address in your browser:
|
||||
``\http://<oam-floating-ip-address>:8080``.
|
||||
|
||||
Discover your OAM floating IP address with the :command:`system oam-show`
|
||||
command.
|
||||
|
||||
#. Log in to Horizon with admin/<sysadmin-password> credentials.
|
||||
|
||||
********************
|
||||
Kubernetes dashboard
|
||||
********************
|
||||
|
||||
The Kubernetes dashboard is not installed by default.
|
||||
|
||||
To install the Kubernetes dashboard, execute the following steps on
|
||||
controller-0:
|
||||
|
||||
#. Use the kubernetes-dashboard helm chart from the stable helm repository with
|
||||
the override values shown below:
|
||||
|
||||
::
|
||||
|
||||
cat <<EOF > dashboard-values.yaml
|
||||
service:
|
||||
type: NodePort
|
||||
nodePort: 32000
|
||||
|
||||
rbac:
|
||||
create: true
|
||||
clusterAdminRole: true
|
||||
|
||||
serviceAccount:
|
||||
create: true
|
||||
name: kubernetes-dashboard
|
||||
EOF
|
||||
|
||||
helm repo update
|
||||
|
||||
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
|
||||
|
||||
helm install dashboard kubernetes-dashboard/kubernetes-dashboard -f dashboard-values.yaml
|
||||
|
||||
#. Create an ``admin-user`` service account with ``cluster-admin`` privileges,
|
||||
and display its token for logging into the Kubernetes dashboard.
|
||||
|
||||
::
|
||||
|
||||
cat <<EOF > admin-login.yaml
|
||||
apiVersion: v1
|
||||
kind: ServiceAccount
|
||||
metadata:
|
||||
name: admin-user
|
||||
namespace: kube-system
|
||||
---
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
kind: ClusterRoleBinding
|
||||
metadata:
|
||||
name: admin-user
|
||||
roleRef:
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
kind: ClusterRole
|
||||
name: cluster-admin
|
||||
subjects:
|
||||
- kind: ServiceAccount
|
||||
name: admin-user
|
||||
namespace: kube-system
|
||||
EOF
|
||||
|
||||
kubectl apply -f admin-login.yaml
|
||||
|
||||
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
|
||||
|
||||
|
||||
Access the Kubernetes dashboard GUI with the following steps:
|
||||
|
||||
#. Enter the OAM floating IP address in your browser:
|
||||
``\https://<oam-floating-ip-address>:32000``.
|
||||
|
||||
Discover your OAM floating IP address with the :command:`system oam-show`
|
||||
command.
|
||||
|
||||
#. Log in to the Kubernetes dashboard using the ``admin-user`` token.
|
||||
|
||||
---------
|
||||
REST APIs
|
||||
---------
|
||||
|
||||
List the StarlingX platform-related public REST API endpoints using the
|
||||
following command:
|
||||
|
||||
::
|
||||
|
||||
openstack endpoint list | grep public
|
||||
|
||||
Use these URLs as the prefix for the URL target of StarlingX Platform Services'
|
||||
REST API messages.
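
As a minimal sketch of using one of these endpoints (the token handling and
placeholder URL below are illustrative, not a prescribed workflow), pass a
Keystone token in the ``X-Auth-Token`` header of the request:

::

   # Obtain a Keystone token, then issue a GET against a public endpoint
   # reported by 'openstack endpoint list'.
   TOKEN=$(openstack token issue -f value -c id)
   curl -s -H "X-Auth-Token: ${TOKEN}" <public-endpoint-url>
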
|
@ -1,105 +0,0 @@
|
||||
.. _convert-worker-nodes-0007b1532308:
|
||||
|
||||
====================
|
||||
Convert Worker Nodes
|
||||
====================
|
||||
|
||||
.. rubric:: |context|
|
||||
|
||||
In a hybrid (Kubernetes and OpenStack) cluster scenario you may need to convert
|
||||
worker nodes to/from ``openstack-compute-nodes``.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
#. Convert a k8s-only worker into an OpenStack compute.
|
||||
|
||||
#. Lock the worker host:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-lock <host>
|
||||
|
||||
#. Add the ``openstack-compute-node`` taint.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
kubectl taint nodes <kubernetes-node-name> openstack-compute-node:NoSchedule
|
||||
|
||||
#. Assign OpenStack labels:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-assign <host> --overwrite openstack-compute-node=enabled avs=enabled sriov=enabled
|
||||
|
||||
#. Allocate vswitch huge pages:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-memory-modify -1G 1 -f vswitch <host> 0
|
||||
system host-memory-modify -1G 1 -f vswitch <host> 1
|
||||
|
||||
#. Change the class of the data network interface:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-if-modify -c data <host> <if_name_or_uuid>
|
||||
|
||||
.. note::
|
||||
|
||||
If the data network interface does not exist yet, refer to the |prod-os|
documentation on creating it.
|
||||
|
||||
#. Change Kubernetes CPU Manager Policy to allow |VMs| to use application
|
||||
cores:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-remove <host> kube-cpu-mgr-policy
|
||||
|
||||
#. Unlock the worker host:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-unlock <host>
|
||||
|
||||
#. Convert an OpenStack compute into a k8s-only worker.
|
||||
|
||||
#. Lock the worker host:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-lock <host>
|
||||
|
||||
#. Remove OpenStack labels:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-remove <host> openstack-compute-node avs sriov
|
||||
|
||||
.. note::
|
||||
|
||||
The labels must be removed; changing their values is not sufficient.
|
||||
|
||||
#. Deallocate vswitch huge pages:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-memory-modify -1G 0 -f vswitch <host> 0
|
||||
system host-memory-modify -1G 0 -f vswitch <host> 1
|
||||
|
||||
#. Change the class of the data network interface:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-if-modify -c none <host> <if_name_or_uuid>
|
||||
|
||||
.. note::
|
||||
|
||||
This change is needed to avoid raising a permanent alarm for the
|
||||
interface without the need to delete it.
|
||||
|
||||
#. Unlock the worker host:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-unlock <host>
|
@ -1,49 +0,0 @@
|
||||
.. _hybrid-cluster-c7a3134b6f2a:
|
||||
|
||||
==============
|
||||
Hybrid Cluster
|
||||
==============
|
||||
|
||||
A Hybrid Cluster occurs when the hosts with a worker function (|AIO|
|
||||
controllers and worker nodes) are split between two groups, one running
|
||||
|prod-os| for hosting |VM| payloads and the other for hosting containerized
|
||||
payloads.
|
||||
|
||||
The host labels are used to define each worker function on the Hybrid Cluster
|
||||
setup. For example, a standard configuration (2 controllers and 2 computes) can
|
||||
be split into (2 controllers, 1 openstack-compute and 1 kubernetes-worker).
|
||||
|
||||
.. only:: partner
|
||||
|
||||
.. include:: /_includes/hybrid-cluster.rest
|
||||
:start-after: begin-prepare-cloud-platform
|
||||
:end-before: end-prepare-cloud-platform
|
||||
|
||||
-----------
|
||||
Limitations
|
||||
-----------
|
||||
|
||||
- Worker function on |AIO| controllers MUST both be either
|
||||
Kubernetes or OpenStack.
|
||||
|
||||
- Hybrid Cluster does not apply to |AIO-SX| or |AIO-DX| setups.
|
||||
|
||||
- A worker must have only one function: it is either an OpenStack compute or a
  k8s-only worker, never both at the same time.
|
||||
|
||||
- The ``sriov`` and ``sriovdp`` labels cannot coexist on the same host,
|
||||
in order to prevent the |SRIOV| device plugin from conflicting with the
|
||||
OpenStack |SRIOV| driver.
|
||||
|
||||
- No host will assign |VMs| and application containers to application cores
|
||||
at the same time.
|
||||
|
||||
- Standard Controllers cannot have ``openstack-compute-node`` label;
|
||||
only |AIO| Controllers can have ``openstack-compute-node`` label.
|
||||
|
||||
- Taints must be added to OpenStack compute hosts (i.e. worker nodes or
|
||||
|AIO|-Controller nodes with the ``openstack-compute-node`` label) to
|
||||
prevent end users' hosted containerized workloads/pods from being scheduled on
|
||||
OpenStack compute hosts.
|
||||
|
||||
|
@ -1,29 +0,0 @@
|
||||
.. _index-install-r6-os-adc44604968c:
|
||||
|
||||
===================
|
||||
StarlingX OpenStack
|
||||
===================
|
||||
|
||||
This section describes the steps to install and access StarlingX OpenStack.
|
||||
Other than the OpenStack-specific configurations required in the underlying
|
||||
StarlingX Kubernetes infrastructure (described in the installation steps for
|
||||
StarlingX Kubernetes), the installation of containerized OpenStack for
|
||||
StarlingX is independent of deployment configuration.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
install
|
||||
uninstall_delete
|
||||
|
||||
--------------
|
||||
Hybrid Cluster
|
||||
--------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
hybrid-cluster-c7a3134b6f2a
|
||||
convert-worker-nodes-0007b1532308
|
||||
|
||||
|
@ -1,134 +0,0 @@
|
||||
===========================
|
||||
Install StarlingX OpenStack
|
||||
===========================
|
||||
|
||||
These instructions assume that you have completed the following
|
||||
OpenStack-specific configuration tasks that are required by the underlying
|
||||
StarlingX Kubernetes platform:
|
||||
|
||||
* All nodes have been labeled appropriately for their OpenStack role(s).
|
||||
* The vSwitch type has been configured.
|
||||
* The nova-local volume group has been configured on any host running the
  compute function.
|
||||
|
||||
--------------------------------------------
|
||||
Install application manifest and helm-charts
|
||||
--------------------------------------------
|
||||
|
||||
#. Modify the size of the docker_lv filesystem. By default, the size of the
|
||||
docker_lv filesystem is 30G, which is not enough for |prefix|-openstack
|
||||
installation. Use the ``host-fs-modify`` CLI to increase the filesystem size.
|
||||
|
||||
The syntax is:
|
||||
|
||||
::
|
||||
|
||||
system host-fs-modify <hostname or id> <fs name=size>
|
||||
|
||||
|
||||
Where:
|
||||
|
||||
* ``hostname or id`` is the host on which the file system will be resized.
|
||||
* ``fs name`` is the file system name.
|
||||
* ``size`` is an integer indicating the file system size in Gigabytes.
|
||||
|
||||
For example:
|
||||
|
||||
::
|
||||
|
||||
system host-fs-modify controller-0 docker=60
|
||||
|
||||
#. Get the latest StarlingX OpenStack application (|prefix|-openstack) manifest and
|
||||
helm charts. Use one of the following options:
|
||||
|
||||
* Private StarlingX build. See :ref:`Build-stx-openstack-app` for details.
|
||||
* Public download from
|
||||
`CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.
|
||||
|
||||
After you select a release, helm charts are located in ``centos/outputs/helm-charts``.
|
||||
|
||||
#. Load the |prefix|-openstack application's package into StarlingX. The tarball
|
||||
package contains |prefix|-openstack's manifest and
|
||||
|prefix|-openstack's set of helm charts. For example:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system application-upload |prefix|-openstack-<version>-centos-stable-versioned.tgz
|
||||
|
||||
This will:
|
||||
|
||||
* Load the manifest and helm charts.
|
||||
* Internally manage helm chart override values for each chart.
|
||||
* Automatically generate system helm chart overrides for each chart based on
|
||||
the current state of the underlying StarlingX Kubernetes platform and the
|
||||
recommended StarlingX configuration of OpenStack services.
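
   To confirm the upload completed (application status ``uploaded``), you can
   check, for example:

   .. parsed-literal::

      system application-list
      system application-show |prefix|-openstack
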
|
||||
|
||||
|
||||
#. OPTIONAL: In order to enable secure HTTPS connectivity for OpenStack REST
|
||||
APIs and OpenStack Horizon, install an HTTPS Certificate for OpenStack.
|
||||
|
||||
#. Obtain an Intermediate or Root CA-signed certificate and key from a
|
||||
trusted Intermediate or Root CA. The OpenStack certificate should be
|
||||
created with a wildcard SAN, e.g.
|
||||
|
||||
.. code-block::
|
||||
|
||||
X509v3 extensions:
|
||||
X509v3 Subject Alternative Name:
|
||||
DNS:*.my-wro.my-company.com
|
||||
|
||||
#. Put the PEM-encoded versions of the OpenStack certificate and key in a
|
||||
single file (e.g. ``openstack-cert-key.pem``), and put the certificate of
|
||||
the Root CA in a separate file (e.g. ``openstack-ca-cert.pem``), and copy
|
||||
the files to the controller host.
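
      For example, from the machine holding the files (an illustrative ``scp``
      invocation; adjust file paths and the OAM address to your environment):

      .. code-block:: none

         scp openstack-cert-key.pem openstack-ca-cert.pem sysadmin@<oam-floating-ip-address>:~
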
|
||||
|
||||
#. Install/update this certificate as the OpenStack REST API / Horizon
|
||||
certificate:
|
||||
|
||||
.. code-block::
|
||||
|
||||
~(keystone_admin)]$ system certificate-install -m ssl_ca openstack-ca-cert.pem
|
||||
~(keystone_admin)]$ system certificate-install -m openstack_ca openstack-ca-cert.pem
|
||||
~(keystone_admin)]$ system certificate-install -m openstack openstack-cert-key.pem
|
||||
|
||||
The above commands make the appropriate overrides to the OpenStack
|
||||
Helm Charts to enable HTTPS on the OpenStack Services REST API
|
||||
endpoints.
|
||||
|
||||
#. OPTIONAL: Configure the domain name.
|
||||
|
||||
For details, see :ref:`update-the-domain-name`.
|
||||
|
||||
#. Apply the |prefix|-openstack application in order to bring StarlingX OpenStack into
|
||||
service. If your environment is preconfigured with a proxy server, then
|
||||
make sure HTTPS proxy is set before applying |prefix|-openstack.
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system application-apply |prefix|-openstack
|
||||
|
||||
.. note::
|
||||
|
||||
To set the HTTPS proxy at bootstrap time, refer to
|
||||
`Ansible Bootstrap Configurations <https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#docker-proxy>`_.
|
||||
|
||||
To set the HTTPS proxy after installation, refer to
|
||||
`Docker Proxy Configuration <https://docs.starlingx.io/configuration/docker_proxy_config.html>`_.
|
||||
|
||||
#. Wait for the activation of |prefix|-openstack to complete.
|
||||
|
||||
This can take 5-10 minutes depending on the performance of your host machine.
|
||||
|
||||
Monitor progress with the command:
|
||||
|
||||
::
|
||||
|
||||
watch -n 5 system application-list
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
Your OpenStack cloud is now up and running.
|
||||
|
||||
See :doc:`access` for details on how to access StarlingX OpenStack.
|
@ -1,48 +0,0 @@
|
||||
|
||||
.. _uninstall_delete-r6:
|
||||
|
||||
===================
|
||||
Uninstall OpenStack
|
||||
===================
|
||||
|
||||
This section provides commands for uninstalling and deleting the
|
||||
|prod| OpenStack application.
|
||||
|
||||
.. warning::
|
||||
|
||||
Uninstalling the OpenStack application will terminate all OpenStack services.
|
||||
|
||||
------------------------------
|
||||
Remove all OpenStack resources
|
||||
------------------------------
|
||||
|
||||
In order to ensure that all resources are properly released, use the OpenStack
|
||||
|CLI| to remove all resources created in the OpenStack environment, as
sketched in the example after this list. This includes:
|
||||
|
||||
- Terminating/Deleting all servers/instances/|VMs|
|
||||
- Removing all volumes, volume backups, volume snapshots
|
||||
- Removing all Glance images
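
As an illustrative sketch (resource names are placeholders), the standard
OpenStack CLI covers these cleanup steps:

::

   openstack server delete <server>
   openstack volume snapshot delete <snapshot>
   openstack volume backup delete <backup>
   openstack volume delete <volume>
   openstack image delete <image>
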
|
||||
|
||||
-----------------------------
|
||||
Bring down OpenStack services
|
||||
-----------------------------
|
||||
|
||||
Use the system CLI to uninstall the OpenStack application:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system application-remove |prefix|-openstack
|
||||
system application-list
|
||||
|
||||
---------------------------------------
|
||||
Delete OpenStack application definition
|
||||
---------------------------------------
|
||||
|
||||
Use the system CLI to delete the OpenStack application definition:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system application-delete |prefix|-openstack
|
||||
system application-list
|
||||
|
@ -1,99 +0,0 @@
|
||||
.. _setup-simple-dns-server-in-lab-r6:
|
||||
|
||||
=====================================
|
||||
Set up a Simple DNS Server in the Lab
|
||||
=====================================
|
||||
|
||||
While installing or using |prod|, you may require a |DNS| server that you can add
|
||||
entries to for name resolution.
|
||||
|
||||
If you don't have access to such a DNS server, here is an example procedure for
|
||||
standing up a simple Bind server on an Ubuntu 20.04 server.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
#. Run the following commands to install.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ sudo apt update
|
||||
$ sudo apt install bind9
|
||||
|
||||
#. Use the following commands for a basic setup.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ sudo ufw allow Bind9
|
||||
|
||||
$ sudo vi /etc/bind/named.conf.options
|
||||
…
|
||||
dnssec-validation no;
|
||||
|
||||
listen-on {
|
||||
10.10.10.0/24; # this ubuntu server's address is 10.10.10.9/24
|
||||
};
|
||||
|
||||
allow-query { any; };
|
||||
|
||||
# If this DNS Server can't find name, forward to …
|
||||
forwarders {
|
||||
8.8.8.8;
|
||||
8.8.4.4;
|
||||
};
|
||||
|
||||
…
|
||||
|
||||
$ sudo named-checkconf
|
||||
|
||||
$ sudo systemctl restart bind9
|
||||
|
||||
# Test
|
||||
$ nslookup ubuntu.com 10.10.10.9
|
||||
|
||||
#. Add a domain, e.g. mydomain.com.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
$ sudo vi /etc/bind/named.conf.local
|
||||
…
|
||||
zone "mydomain.com" {
|
||||
type master;
|
||||
file "/etc/bind/db.mydomain.com";
|
||||
};
|
||||
|
||||
$ sudo systemctl reload bind9
|
||||
|
||||
$ sudo cp /etc/bind/db.local /etc/bind/db.mydomain.com
|
||||
|
||||
# Edit db.mydomain.com … where HOSTNAME is the hostname of the DNS bind server
|
||||
$ sudo vi /etc/bind/db.mydomain.com
|
||||
;
|
||||
;
|
||||
;
|
||||
$TTL 604800
|
||||
@ IN SOA HOSTNAME. admin.HOSTNAME. (
|
||||
2 ; Serial
|
||||
604800 ; Refresh
|
||||
86400 ; Retry
|
||||
2419200 ; Expire
|
||||
604800 ) ; Negative Cache TTL
|
||||
;
|
||||
@ IN NS HOSTNAME.
|
||||
|
||||
@ IN A 10.10.10.9
|
||||
|
||||
wrcp IN A 10.10.10.2
|
||||
horizon.wrcp IN A 10.10.10.2
|
||||
|
||||
registry IN A 10.10.10.10
|
||||
|
||||
|
||||
$ sudo rndc reload
|
||||
$ sudo systemctl reload bind9
|
||||
$ sudo systemctl restart bind9
|
||||
$ sudo systemctl status bind9
|
||||
|
||||
# test
|
||||
$ nslookup mydomain.com 10.10.10.9
|
||||
$ nslookup wrcp.mydomain.com 10.10.10.9
|
||||
$ nslookup registry.mydomain.com 10.10.10.9
|
@ -1,21 +0,0 @@
|
||||
===========================================
|
||||
Virtual All-in-one Duplex Installation R6.0
|
||||
===========================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_aio_duplex.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
aio_duplex_environ
|
||||
aio_duplex_install_kubernetes
|
@ -1,57 +0,0 @@
|
||||
============================
|
||||
Prepare Host and Environment
|
||||
============================
|
||||
|
||||
This section describes how to prepare the physical host and virtual environment
|
||||
for a **StarlingX R6.0 virtual All-in-one Duplex** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
------------------------------------
|
||||
Physical host requirements and setup
|
||||
------------------------------------
|
||||
|
||||
.. include:: physical_host_req.txt
|
||||
|
||||
---------------------------------------
|
||||
Prepare virtual environment and servers
|
||||
---------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
The following commands for host, virtual environment setup, and host
|
||||
power-on use KVM / virsh for virtual machine and VM management
|
||||
technology. For an alternative virtualization environment, see:
|
||||
:doc:`Install StarlingX in VirtualBox <install_virtualbox>`.
|
||||
|
||||
#. Prepare virtual environment.
|
||||
|
||||
Set up the virtual platform networks for virtual deployment:
|
||||
|
||||
::
|
||||
|
||||
bash setup_network.sh
|
||||
|
||||
#. Prepare virtual servers.
|
||||
|
||||
Create the XML definitions for the virtual servers required by this
|
||||
configuration option. This will create the XML virtual server definition for:
|
||||
|
||||
* duplex-controller-0
|
||||
* duplex-controller-1
|
||||
|
||||
The following command will start/virtually power on:
|
||||
|
||||
* The 'duplex-controller-0' virtual server
|
||||
* The X-based graphical virt-manager application
|
||||
|
||||
::
|
||||
|
||||
bash setup_configuration.sh -c duplex -i ./bootimage.iso
|
||||
|
||||
If no X-server is present, errors will occur and the X-based GUI for the
virt-manager application will not start. The virt-manager GUI is not
absolutely required, and you can safely ignore the errors and continue.
|
||||
|
@ -1,590 +0,0 @@
|
||||
==============================================
|
||||
Install StarlingX Kubernetes on Virtual AIO-DX
|
||||
==============================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R6.0 virtual All-in-one Duplex** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
In the last step of :doc:`aio_duplex_environ`, the controller-0 virtual server 'duplex-controller-0' was started by the :command:`setup_configuration.sh` command.
|
||||
|
||||
On the host, attach to the console of virtual controller-0 and select the appropriate
|
||||
installer menu options to start the non-interactive install of
|
||||
StarlingX software on controller-0.
|
||||
|
||||
.. note::
|
||||
|
||||
When entering the console, it is very easy to miss the first installer menu
|
||||
selection. Use ESC to navigate to previous menus, to ensure you are at the
|
||||
first installer menu.
|
||||
|
||||
::
|
||||
|
||||
virsh console duplex-controller-0
|
||||
|
||||
Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'All-in-one Controller Configuration'
|
||||
#. Second menu: Select 'Serial Console'
|
||||
|
||||
Wait for the non-interactive install of software to complete and for the server
|
||||
to reboot. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
|
||||
When logging in for the first time, you will be forced to change the password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. External connectivity is required to run the Ansible bootstrap playbook.
|
||||
|
||||
::
|
||||
|
||||
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
|
||||
export DEFAULT_OAM_GATEWAY=10.10.10.1
|
||||
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
|
||||
sudo ip link set up dev enp7s1
|
||||
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
|
||||
configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
``sysadmin home directory ($HOME)``
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
|
||||
|
||||
.. include:: /shared/_includes/ansible_install_time_only.txt
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
|
||||
the configurable values as desired (use the commented instructions in
|
||||
the file).
|
||||
|
||||
or
|
||||
|
||||
* Create the minimal user configuration override file as shown in the example
|
||||
below:
|
||||
|
||||
::
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
system_mode: duplex
|
||||
|
||||
dns_servers:
|
||||
- 8.8.8.8
|
||||
- 8.8.4.4
|
||||
|
||||
external_oam_subnet: 10.10.10.0/24
|
||||
external_oam_gateway_address: 10.10.10.1
|
||||
external_oam_floating_address: 10.10.10.2
|
||||
external_oam_node_0_address: 10.10.10.3
|
||||
external_oam_node_1_address: 10.10.10.4
|
||||
|
||||
admin_username: admin
|
||||
admin_password: <admin-password>
|
||||
ansible_become_pass: <sysadmin-password>
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
# docker_http_proxy: http://my.proxy.com:1080
|
||||
# docker_https_proxy: https://my.proxy.com:1443
|
||||
# docker_no_proxy:
|
||||
# - 1.2.3.4
|
||||
|
||||
EOF
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
|
||||
firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
|
||||
for details about Docker proxy settings.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the OAM and MGMT interfaces of controller-0 and specify the
|
||||
attached networks:
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=enp7s1
|
||||
MGMT_IF=enp7s2
|
||||
system host-if-modify controller-0 lo -c none
|
||||
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
|
||||
for UUID in $IFNET_UUIDS; do
|
||||
system interface-network-remove ${UUID}
|
||||
done
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
system host-if-modify controller-0 $MGMT_IF -c platform
|
||||
system interface-network-assign controller-0 $MGMT_IF mgmt
|
||||
system interface-network-assign controller-0 $MGMT_IF cluster-host
|
||||
|
||||
#. Configure NTP servers for network time synchronization:
|
||||
|
||||
.. note::
|
||||
|
||||
In a virtual environment, this can sometimes cause Ceph clock skew alarms.
|
||||
Also, the virtual instance's clock is synchronized with the host clock,
|
||||
so it is not absolutely required to configure NTP in this step.
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
|
||||
**************************************************************
|
||||
Optionally, initialize a Ceph-based Persistent Storage Backend
|
||||
**************************************************************
|
||||
|
||||
.. important::
|
||||
|
||||
A persistent storage backend is required if your application requires
|
||||
Persistent Volume Claims (PVCs). The StarlingX OpenStack application
|
||||
(|prefix|-openstack) requires PVCs, therefore if you plan on using the
|
||||
|prefix|-openstack application, then you must configure a persistent storage
|
||||
backend.
|
||||
|
||||
There are two options for a persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
|
||||
|
||||
The Rook container-based Ceph backend is installed after both
|
||||
AIO-Controllers are configured and unlocked.
|
||||
|
||||
For host-based Ceph,
|
||||
|
||||
#. Initialize with add ceph backend:
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph --confirmed
|
||||
|
||||
#. Add an OSD on controller-0 for host-based Ceph:
|
||||
|
||||
::
|
||||
|
||||
system host-disk-list controller-0
|
||||
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
|
||||
system host-stor-list controller-0
|
||||
|
||||
See :ref:`configure-ceph-osds-on-a-host <configure-ceph-osds-on-a-host>` for
|
||||
additional info on configuring the Ceph storage backend such as supporting
|
||||
SSD-backed journals, multiple storage tiers, and so on.
|
||||
|
||||
For Rook container-based Ceph:
|
||||
|
||||
#. Initialize with add ceph-rook backend:
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph-rook --confirmed
|
||||
|
||||
#. Assign Rook host labels to controller-0 in support of installing the
|
||||
rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||
|
||||
#. Configure data interfaces for controller-0.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
1G Huge Pages are not supported in the virtual environment and there is no
|
||||
virtual NIC supporting SRIOV. For that reason, data interfaces are not
|
||||
applicable in the virtual environment for the Kubernetes-only scenario.
|
||||
|
||||
For OpenStack only:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=eth1000
|
||||
DATA1IF=eth1001
|
||||
export NODE=controller-0
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
|
||||
#. If required, and not already done as part of bootstrap, configure Docker to
|
||||
use a proxy server.
|
||||
|
||||
#. List Docker proxy parameters:
|
||||
|
||||
::
|
||||
|
||||
system service-parameter-list platform docker
|
||||
|
||||
#. Refer to :ref:`docker_proxy_config` for
|
||||
details about Docker proxy settings.
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. include:: aio_simplex_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
|
||||
:end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
Unlock virtual controller-0 to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
-------------------------------------
|
||||
Install software on controller-1 node
|
||||
-------------------------------------
|
||||
|
||||
#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'. It will
|
||||
automatically attempt to network boot over the management network:
|
||||
|
||||
::
|
||||
|
||||
virsh start duplex-controller-1
|
||||
|
||||
#. Attach to the console of virtual controller-1:
|
||||
|
||||
::
|
||||
|
||||
virsh console duplex-controller-1
|
||||
|
||||
As the controller-1 VM boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of virtual controller-0, list hosts to see the newly discovered
|
||||
controller-1 host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. On virtual controller-0, using the host id, set the personality of this host
|
||||
to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
#. Wait for the software installation on controller-1 to complete, controller-1 to
|
||||
reboot, and controller-1 to show as locked/disabled/online in 'system host-list'.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Configure the OAM and MGMT interfaces of controller-1 and specify the
|
||||
attached networks. Note that the MGMT interface is partially set up
|
||||
automatically by the network install procedure.
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=enp7s1
|
||||
system host-if-modify controller-1 $OAM_IF -c platform
|
||||
system interface-network-assign controller-1 $OAM_IF oam
|
||||
system interface-network-assign controller-1 mgmt0 cluster-host
|
||||
|
||||
#. Configure data interfaces for controller-1.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
1G Huge Pages are not supported in the virtual environment and there is no
|
||||
virtual NIC supporting SRIOV. For that reason, data interfaces are not
|
||||
applicable in the virtual environment for the Kubernetes-only scenario.
|
||||
|
||||
For OpenStack only:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=eth1000
|
||||
DATA1IF=eth1001
|
||||
export NODE=controller-1
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
|
||||
*************************************************************************************
|
||||
Optionally, configure host-specific details for Ceph-based Persistent Storage Backend
|
||||
*************************************************************************************
|
||||
|
||||
For host-based Ceph:
|
||||
|
||||
#. Add an OSD on controller-1 for host-based Ceph:
|
||||
|
||||
::
|
||||
|
||||
system host-disk-list controller-1
|
||||
system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
|
||||
system host-stor-list controller-1
|
||||
|
||||
For Rook container-based Ceph:
|
||||
|
||||
#. Assign Rook host labels to controller-1 in support of installing the
|
||||
rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-1 ceph-mgr-placement=enabled
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 openstack-control-plane=enabled
|
||||
system host-label-assign controller-1 openstack-compute-node=enabled
|
||||
system host-label-assign controller-1 openvswitch=enabled
|
||||
|
||||
.. note::
|
||||
|
||||
If you have a |NIC| that supports |SRIOV|, then you can enable it by
|
||||
using the following:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-assign controller-1 sriov=enabled
|
||||
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
|
||||
which is needed for |prefix|-openstack nova ephemeral disks:
|
||||
|
||||
::
|
||||
|
||||
export NODE=controller-1
|
||||
|
||||
echo ">>> Getting root disk info"
|
||||
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
|
||||
|
||||
echo ">>>> Configuring nova-local"
|
||||
NOVA_SIZE=34
|
||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
|
||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
Unlock virtual controller-1 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-1
|
||||
|
||||
Controller-1 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
|
||||
--------------------------------------------------------------------------
|
||||
Optionally, finish configuration of Ceph-based Persistent Storage Backend
|
||||
--------------------------------------------------------------------------
|
||||
|
||||
For host-based Ceph: Nothing else is required.
|
||||
|
||||
For Rook container-based Ceph:
|
||||
|
||||
On **virtual** controller-0 and controller-1:
|
||||
|
||||
#. Wait for the ``rook-ceph-apps`` application to be uploaded:
|
||||
|
||||
::
|
||||
|
||||
$ source /etc/platform/openrc
|
||||
$ system application-list
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| application | version | manifest name | manifest file | status | progress |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
|
||||
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
|
||||
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
|
||||
#. Configure Rook to use /dev/sdb on controller-0 and controller-1 as a Ceph OSD:
|
||||
|
||||
::
|
||||
|
||||
$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
|
||||
$ system host-disk-wipe -s --confirm controller-1 /dev/sdb
|
||||
|
||||
Create a ``values.yaml`` file for rook-ceph-apps that lists the target disks:

::
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
nodes:
|
||||
- name: controller-0
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
- name: controller-1
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
|
||||
Then update the rook-ceph helm overrides with this file:

::
|
||||
system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
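
   Optionally, verify that the overrides were stored (an illustrative check):

   ::

      system helm-override-show rook-ceph-apps rook-ceph kube-system
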
|
||||
|
||||
#. Apply the rook-ceph-apps application.
|
||||
|
||||
::
|
||||
|
||||
system application-apply rook-ceph-apps
|
||||
|
||||
#. Wait for the OSD pods to be ready.
|
||||
|
||||
::
|
||||
|
||||
kubectl get pods -n kube-system
|
||||
rook-ceph-crashcollector-controller-0-f984688ff-jsr8t 1/1 Running 0 4m9s
|
||||
rook-ceph-crashcollector-controller-1-7f9b6f55b6-699bb 1/1 Running 0 2m5s
|
||||
rook-ceph-mgr-a-7f9d588c5b-49cbg 1/1 Running 0 3m5s
|
||||
rook-ceph-mon-a-75bcbd8664-pvq99 1/1 Running 0 4m27s
|
||||
rook-ceph-mon-b-86c67658b4-f4snf 1/1 Running 0 4m10s
|
||||
rook-ceph-mon-c-7f48b58dfb-4nx2n 1/1 Running 0 3m30s
|
||||
rook-ceph-operator-77b64588c5-bhfg7 1/1 Running 0 7m6s
|
||||
rook-ceph-osd-0-6949657cf7-dkfp2 1/1 Running 0 2m6s
|
||||
rook-ceph-osd-1-5d4b58cf69-kdg82 1/1 Running 0 2m4s
|
||||
rook-ceph-osd-prepare-controller-0-wcvsn 0/1 Completed 0 2m27s
|
||||
rook-ceph-osd-prepare-controller-1-98h76 0/1 Completed 0 2m26s
|
||||
rook-ceph-tools-5778d7f6c-2h8s8 1/1 Running 0 5m55s
|
||||
rook-discover-xc22t 1/1 Running 0 6m2s
|
||||
rook-discover-xndld 1/1 Running 0 6m2s
|
||||
storage-init-rook-ceph-provisioner-t868q 0/1 Completed 0 108s
|
||||
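Once the OSD pods are running, overall cluster health can also be checked from the Rook toolbox. A quick sketch, assuming the toolbox deployment is named ``rook-ceph-tools`` as shown in the listing above:

::

kubectl -n kube-system exec -it deploy/rook-ceph-tools -- ceph status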
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
@ -1,55 +0,0 @@
|
||||
============================
|
||||
Prepare Host and Environment
|
||||
============================
|
||||
|
||||
This section describes how to prepare the physical host and virtual environment
|
||||
for a **StarlingX R6.0 virtual All-in-one Simplex** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
------------------------------------
|
||||
Physical host requirements and setup
|
||||
------------------------------------
|
||||
|
||||
.. include:: physical_host_req.txt
|
||||
|
||||
---------------------------------------
|
||||
Prepare virtual environment and servers
|
||||
---------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
The following commands for host, virtual environment setup, and host
|
||||
power-on use KVM / virsh for virtual machine and VM management
|
||||
technology. For an alternative virtualization environment, see:
|
||||
:doc:`Install StarlingX in VirtualBox <install_virtualbox>`.
|
||||
|
||||
#. Prepare virtual environment.
|
||||
|
||||
Set up the virtual platform networks for virtual deployment:
|
||||
|
||||
::
|
||||
|
||||
bash setup_network.sh
|
||||
|
||||
#. Prepare virtual servers.
|
||||
|
||||
Create the XML definitions for the virtual servers required by this
|
||||
configuration option. This will create the XML virtual server definition for:
|
||||
|
||||
* simplex-controller-0
|
||||
|
||||
The following command will start/virtually power on:
|
||||
|
||||
* The 'simplex-controller-0' virtual server
|
||||
* The X-based graphical virt-manager application
|
||||
|
||||
::
|
||||
|
||||
bash setup_configuration.sh -c simplex -i ./bootimage.iso
|
||||
|
||||
If there is no X-server present, errors will occur and the X-based GUI for the
|
||||
virt-manager application will not start. The virt-manager GUI is not absolutely
|
||||
required and you can safely ignore errors and continue.
|
@ -1,426 +0,0 @@
|
||||
==============================================
|
||||
Install StarlingX Kubernetes on Virtual AIO-SX
|
||||
==============================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R6.0 virtual All-in-one Simplex** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
In the last step of :doc:`aio_simplex_environ`, the controller-0 virtual server 'simplex-controller-0'
|
||||
was started by the :command:`setup_configuration.sh` command.
|
||||
|
||||
On the host, attach to the console of virtual controller-0 and select the
|
||||
appropriate installer menu options to start the non-interactive install of
|
||||
StarlingX software on controller-0.
|
||||
|
||||
.. note::
|
||||
|
||||
When entering the console, it is very easy to miss the first installer menu
|
||||
selection. Use ESC to navigate to previous menus, to ensure you are at the
|
||||
first installer menu.
|
||||
|
||||
::
|
||||
|
||||
virsh console simplex-controller-0
|
||||
|
||||
Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'All-in-one Controller Configuration'
|
||||
#. Second menu: Select 'Serial Console'
|
||||
|
||||
Wait for the non-interactive install of software to complete and for the server
|
||||
to reboot. This can take 5-10 minutes, depending on the performance of the host
|
||||
machine.
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
|
||||
When logging in for the first time, you will be forced to change the password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. External connectivity is required to run the Ansible bootstrap playbook.
|
||||
|
||||
::
|
||||
|
||||
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
|
||||
export DEFAULT_OAM_GATEWAY=10.10.10.1
|
||||
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
|
||||
sudo ip link set up dev enp7s1
|
||||
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
|
||||
configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
``sysadmin home directory ($HOME)``
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
|
||||
|
||||
.. include:: /shared/_includes/ansible_install_time_only.txt
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
|
||||
the configurable values as desired (use the commented instructions in
|
||||
the file).
|
||||
|
||||
or
|
||||
|
||||
* Create the minimal user configuration override file as shown in the example
|
||||
below:
|
||||
|
||||
::
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
system_mode: simplex
|
||||
|
||||
dns_servers:
|
||||
- 8.8.8.8
|
||||
- 8.8.4.4
|
||||
|
||||
external_oam_subnet: 10.10.10.0/24
|
||||
external_oam_gateway_address: 10.10.10.1
|
||||
external_oam_floating_address: 10.10.10.2
|
||||
|
||||
admin_username: admin
|
||||
admin_password: <admin-password>
|
||||
ansible_become_pass: <sysadmin-password>
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
# docker_http_proxy: http://my.proxy.com:1080
|
||||
# docker_https_proxy: https://my.proxy.com:1443
|
||||
# docker_no_proxy:
|
||||
# - 1.2.3.4
|
||||
|
||||
EOF
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
|
||||
firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
|
||||
for details about Docker proxy settings.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for the Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the OAM interface of controller-0 and specify the attached network
|
||||
as "oam". Use the OAM port name, for example eth0, that is applicable to your
|
||||
deployment environment:
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=enp7s1
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
|
||||
#. Configure NTP servers for network time synchronization:
|
||||
|
||||
.. note::
|
||||
|
||||
In a virtual environment, this can sometimes cause Ceph clock skew alarms.
|
||||
Also, the virtual instance's clock is synchronized with the host clock,
|
||||
so it is not absolutely required to configure NTP in this step.
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
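If you want to confirm the NTP servers that were just configured, they can be displayed afterwards. A quick check, assuming the :command:`system ntp-show` subcommand is available in your release:

::

system ntp-show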
|
||||
**************************************************************
|
||||
Optionally, initialize a Ceph-based Persistent Storage Backend
|
||||
**************************************************************
|
||||
|
||||
.. important::
|
||||
|
||||
A persistent storage backend is required if your application requires
|
||||
Persistent Volume Claims (PVCs). The StarlingX OpenStack application
|
||||
(|prefix|-openstack) requires PVCs, therefore if you plan on using the
|
||||
|prefix|-openstack application, then you must configure a persistent storage
|
||||
backend.
|
||||
|
||||
There are two options for persistent storage backend:
|
||||
1) the host-based Ceph solution and
|
||||
2) the Rook container-based Ceph solution.
|
||||
|
||||
The Rook container-based Ceph backend is installed after both
|
||||
AIO-Controllers are configured and unlocked.
|
||||
|
||||
For host-based Ceph,
|
||||
|
||||
#. Initialize with add ceph backend:
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph --confirmed
|
||||
|
||||
#. Add an OSD on controller-0 for host-based Ceph:
|
||||
|
||||
::
|
||||
|
||||
system host-disk-list controller-0
|
||||
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
|
||||
system host-stor-list controller-0
|
||||
|
||||
See :ref:`configure-ceph-osds-on-a-host <configure-ceph-osds-on-a-host>` for
|
||||
additional info on configuring the Ceph storage backend such as supporting
|
||||
SSD-backed journals, multiple storage tiers, and so on.
|
||||
|
||||
For Rook container-based Ceph:
|
||||
|
||||
#. Initialize with add ceph-rook backend:
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph-rook --confirmed
|
||||
|
||||
#. Assign Rook host labels to controller-0 in support of installing the
|
||||
rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||
|
||||
#. Configure data interfaces for controller-0.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
1G Huge Pages are not supported in the virtual environment and there is no
|
||||
virtual NIC supporting SRIOV. For that reason, data interfaces are not
|
||||
applicable in the virtual environment for the Kubernetes-only scenario.
|
||||
|
||||
For OpenStack only:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=eth1000
|
||||
DATA1IF=eth1001
|
||||
export NODE=controller-0
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
|
||||
#. If required, and not already done as part of bootstrap, configure Docker to
|
||||
use a proxy server.
|
||||
|
||||
#. List Docker proxy parameters:
|
||||
|
||||
::
|
||||
|
||||
system service-parameter-list platform docker
|
||||
|
||||
#. Refer to :ref:`docker_proxy_config` for
|
||||
details about Docker proxy settings.
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. incl-config-controller-0-openstack-specific-aio-simplex-start:
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later.
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system host-label-assign controller-0 openstack-control-plane=enabled
|
||||
system host-label-assign controller-0 openstack-compute-node=enabled
|
||||
system host-label-assign controller-0 |vswitch-label|
|
||||
|
||||
.. note::
|
||||
|
||||
If you have a |NIC| that supports |SRIOV|, then you can enable it by
|
||||
using the following:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-assign controller-0 sriov=enabled
|
||||
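The label assignments can be reviewed at any time. A quick check, assuming the :command:`system host-label-list` subcommand is available in your release:

::

system host-label-list controller-0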
|
||||
#. **For OpenStack only:** A vSwitch is required.
|
||||
|
||||
The default vSwitch is containerized |OVS| that is packaged with the
|
||||
|prefix|-openstack manifest/helm-charts. StarlingX provides the option to use
|
||||
|OVS-DPDK| on the host; however, in the virtual environment |OVS-DPDK| is
|
||||
NOT supported, only |OVS| is supported. Therefore, simply use the default
|
||||
|OVS| vSwitch here.
|
||||
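If you want to confirm which vSwitch type is currently configured, the system-level setting can be inspected. This is only a sketch and assumes the ``vswitch_type`` field is reported by :command:`system show` on your release:

::

system show | grep vswitch_type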
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
|
||||
which is needed for |prefix|-openstack nova ephemeral disks.
|
||||
|
||||
::
|
||||
|
||||
export NODE=controller-0
|
||||
|
||||
echo ">>> Getting root disk info"
|
||||
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
|
||||
|
||||
echo ">>>> Configuring nova-local"
|
||||
NOVA_SIZE=34
|
||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
|
||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||
sleep 2
|
||||
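To verify that the nova-local volume group and its physical volume were created, a quick check can be run using the same ``$NODE`` variable as above (assuming the :command:`system host-lvg-list` and :command:`system host-pv-list` subcommands are available in your release):

::

system host-lvg-list ${NODE}
system host-pv-list ${NODE}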
|
||||
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
Unlock virtual controller-0 to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot to apply configuration changes and come into service.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
--------------------------------------------------------------------------
|
||||
Optionally, finish configuration of Ceph-based Persistent Storage Backend
|
||||
--------------------------------------------------------------------------
|
||||
|
||||
For host-based Ceph: Nothing else is required.
|
||||
|
||||
For Rook container-based Ceph:
|
||||
|
||||
On **virtual** controller-0:
|
||||
|
||||
#. Wait for the ``rook-ceph-apps`` application to be uploaded
|
||||
|
||||
::
|
||||
|
||||
$ source /etc/platform/openrc
|
||||
$ system application-list
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| application | version | manifest name | manifest file | status | progress |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
|
||||
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
|
||||
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
|
||||
#. Configure Rook to use /dev/sdb on controller-0 as a Ceph OSD
|
||||
|
||||
::
|
||||
|
||||
$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
|
||||
|
||||
|
||||
Create a ``values.yaml`` file for rook-ceph-apps with the following content.
|
||||
::
|
||||
|
||||
cluster:
  storage:
    nodes:
    - name: controller-0
      devices:
      - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
|
||||
Update the helm chart overrides with the new values:

::
|
||||
|
||||
system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
|
||||
|
||||
#. Apply the rook-ceph-apps application.
|
||||
|
||||
::
|
||||
|
||||
system application-apply rook-ceph-apps
|
||||
|
||||
#. Wait for the |OSDs| pods to be ready.
|
||||
|
||||
::
|
||||
|
||||
kubectl get pods -n kube-system
|
||||
rook-ceph-crashcollector-controller-0-764c7f9c8-bh5c7   1/1     Running     0          62m
rook-ceph-mgr-a-69df96f57-9l28p                         1/1     Running     0          63m
rook-ceph-mon-a-55fff49dcf-ljfnx                        1/1     Running     0          63m
rook-ceph-operator-77b64588c5-nlsf2                     1/1     Running     0          66m
rook-ceph-osd-0-7d5785889f-4rgmb                        1/1     Running     0          62m
rook-ceph-osd-prepare-controller-0-cmwt5                0/1     Completed   0          2m14s
rook-ceph-tools-5778d7f6c-22tms                         1/1     Running     0          64m
rook-discover-kmv6c                                     1/1     Running     0          65m
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
@ -1,21 +0,0 @@
|
||||
==========================================================
|
||||
Virtual Standard with Controller Storage Installation R6.0
|
||||
==========================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_controller_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
controller_storage_environ
|
||||
controller_storage_install_kubernetes
|
@ -1,59 +0,0 @@
|
||||
============================
|
||||
Prepare Host and Environment
|
||||
============================
|
||||
|
||||
This section describes how to prepare the physical host and virtual environment
|
||||
for a **StarlingX R6.0 virtual Standard with Controller Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
------------------------------------
|
||||
Physical host requirements and setup
|
||||
------------------------------------
|
||||
|
||||
.. include:: physical_host_req.txt
|
||||
|
||||
---------------------------------------
|
||||
Prepare virtual environment and servers
|
||||
---------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
The following commands for host, virtual environment setup, and host
|
||||
power-on use KVM / virsh for virtual machine and VM management
|
||||
technology. For an alternative virtualization environment, see:
|
||||
:doc:`Install StarlingX in VirtualBox <install_virtualbox>`.
|
||||
|
||||
#. Prepare virtual environment.
|
||||
|
||||
Set up virtual platform networks for virtual deployment:
|
||||
|
||||
::
|
||||
|
||||
bash setup_network.sh
|
||||
|
||||
#. Prepare virtual servers.
|
||||
|
||||
Create the XML definitions for the virtual servers required by this
|
||||
configuration option. This will create the XML virtual server definition for:
|
||||
|
||||
* controllerstorage-controller-0
|
||||
* controllerstorage-controller-1
|
||||
* controllerstorage-worker-0
|
||||
* controllerstorage-worker-1
|
||||
|
||||
The following command will start/virtually power on:
|
||||
|
||||
* The 'controllerstorage-controller-0' virtual server
|
||||
* The X-based graphical virt-manager application
|
||||
|
||||
::
|
||||
|
||||
bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso
|
||||
|
||||
If there is no X-server present, errors will occur and the X-based GUI for the
|
||||
virt-manager application will not start. The virt-manager GUI is not absolutely
|
||||
required and you can safely ignore errors and continue.
|
@ -1,609 +0,0 @@
|
||||
========================================================================
|
||||
Install StarlingX Kubernetes on Virtual Standard with Controller Storage
|
||||
========================================================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R6.0 virtual Standard with Controller Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
In the last step of :doc:`controller_storage_environ`, the controller-0 virtual
|
||||
server 'controllerstorage-controller-0' was started by the
|
||||
:command:`setup_configuration.sh` command.
|
||||
|
||||
On the host, attach to the console of virtual controller-0 and select the appropriate
|
||||
installer menu options to start the non-interactive install of
|
||||
StarlingX software on controller-0.
|
||||
|
||||
.. note::
|
||||
|
||||
When entering the console, it is very easy to miss the first installer menu
|
||||
selection. Use ESC to navigate to previous menus, to ensure you are at the
|
||||
first installer menu.
|
||||
|
||||
::
|
||||
|
||||
virsh console controllerstorage-controller-0
|
||||
|
||||
Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'Standard Controller Configuration'
|
||||
#. Second menu: Select 'Serial Console'
|
||||
|
||||
Wait for the non-interactive install of software to complete and for the server
|
||||
to reboot. This can take 5-10 minutes depending on the performance of the host
|
||||
machine.
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. incl-bootstrap-controller-0-virt-controller-storage-start:
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
|
||||
When logging in for the first time, you will be forced to change the password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. External connectivity is required to run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
|
||||
export DEFAULT_OAM_GATEWAY=10.10.10.1
|
||||
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
|
||||
sudo ip link set up dev enp7s1
|
||||
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
|
||||
configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
``sysadmin home directory ($HOME)``
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
|
||||
|
||||
.. include:: /shared/_includes/ansible_install_time_only.txt
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
|
||||
the configurable values as desired (use the commented instructions in
|
||||
the file).
|
||||
|
||||
or
|
||||
|
||||
* Create the minimal user configuration override file as shown in the example
|
||||
below:
|
||||
|
||||
::
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
system_mode: duplex
|
||||
|
||||
dns_servers:
|
||||
- 8.8.8.8
|
||||
- 8.8.4.4
|
||||
|
||||
external_oam_subnet: 10.10.10.0/24
|
||||
external_oam_gateway_address: 10.10.10.1
|
||||
external_oam_floating_address: 10.10.10.2
|
||||
external_oam_node_0_address: 10.10.10.3
|
||||
external_oam_node_1_address: 10.10.10.4
|
||||
|
||||
admin_username: admin
|
||||
admin_password: <admin-password>
|
||||
ansible_become_pass: <sysadmin-password>
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
# docker_http_proxy: http://my.proxy.com:1080
|
||||
# docker_https_proxy: https://my.proxy.com:1443
|
||||
# docker_no_proxy:
|
||||
# - 1.2.3.4
|
||||
|
||||
EOF
|
||||
|
||||
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs_r6>`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
|
||||
firewall, etc. Refer to :ref:`Docker Proxy Configuration <docker_proxy_config>`
|
||||
for details about Docker proxy settings.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for the Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
.. incl-bootstrap-controller-0-virt-controller-storage-end:
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
.. incl-config-controller-0-virt-controller-storage-start:
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the OAM and MGMT interfaces of controller-0 and specify the
|
||||
attached networks:
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=enp7s1
|
||||
MGMT_IF=enp7s2
|
||||
system host-if-modify controller-0 lo -c none
|
||||
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
|
||||
for UUID in $IFNET_UUIDS; do
|
||||
system interface-network-remove ${UUID}
|
||||
done
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
system host-if-modify controller-0 $MGMT_IF -c platform
|
||||
system interface-network-assign controller-0 $MGMT_IF mgmt
|
||||
system interface-network-assign controller-0 $MGMT_IF cluster-host
|
||||
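To confirm the assignments before moving on, the interface-network associations can be listed; a quick check:

::

system interface-network-list controller-0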
|
||||
#. Configure NTP servers for network time synchronization:
|
||||
|
||||
.. note::
|
||||
|
||||
In a virtual environment, this can sometimes cause Ceph clock skew alarms.
|
||||
Also, the virtual instance clock is synchronized with the host clock,
|
||||
so it is not absolutely required to configure NTP here.
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
|
||||
#. Configure the Ceph storage backend:
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if your application requires
|
||||
persistent storage.
|
||||
|
||||
If you want to install the StarlingX OpenStack application
|
||||
(|prefix|-openstack), this step is mandatory.
|
||||
|
||||
::
|
||||
|
||||
system storage-backend-add ceph --confirmed
|
||||
|
||||
#. If required, and not already done as part of bootstrap, configure Docker to
|
||||
use a proxy server.
|
||||
|
||||
#. List Docker proxy parameters:
|
||||
|
||||
::
|
||||
|
||||
system service-parameter-list platform docker
|
||||
|
||||
#. Refer to :ref:`docker_proxy_config` for
|
||||
details about Docker proxy settings.
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 openstack-control-plane=enabled
|
||||
|
||||
#. **For OpenStack only:** A vSwitch is required.
|
||||
|
||||
The default vSwitch is containerized OVS that is packaged with the
|
||||
|prefix|-openstack manifest/helm-charts. StarlingX provides the option to use
|
||||
OVS-DPDK on the host; however, in the virtual environment OVS-DPDK is NOT
|
||||
supported, only OVS is supported. Therefore, simply use the default OVS
|
||||
vSwitch here.
|
||||
|
||||
.. incl-config-controller-0-virt-controller-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
Unlock virtual controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
-------------------------------------------------
|
||||
Install software on controller-1 and worker nodes
|
||||
-------------------------------------------------
|
||||
|
||||
#. On the host, power on the controller-1 virtual server,
|
||||
'controllerstorage-controller-1'. It will automatically attempt to network
|
||||
boot over the management network:
|
||||
|
||||
::
|
||||
|
||||
virsh start controllerstorage-controller-1
|
||||
|
||||
#. Attach to the console of virtual controller-1:
|
||||
|
||||
::
|
||||
|
||||
virsh console controllerstorage-controller-1
|
||||
|
||||
As controller-1 VM boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of virtual controller-0, list hosts to see the newly discovered
|
||||
controller-1 host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. On virtual controller-0, using the host id, set the personality of this host
|
||||
to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates the install of software on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting on the previous step to complete, start up and set the personality
|
||||
for 'controllerstorage-worker-0' and 'controllerstorage-worker-1'. Set the
|
||||
personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, start 'controllerstorage-worker-0' from the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start controllerstorage-worker-0
|
||||
|
||||
Wait for new host (hostname=None) to be discovered by checking
|
||||
'system host-list' on virtual controller-0, and then set its personality and hostname:
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for 'controllerstorage-worker-1'. On the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start controllerstorage-worker-1
|
||||
|
||||
And wait for new host (hostname=None) to be discovered by checking
|
||||
'system host-list' on virtual controller-0, and then set its personality and hostname:
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=worker hostname=worker-1
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. Note::
|
||||
|
||||
A node with Edgeworker personality is also available. See
|
||||
:ref:`deploy-edgeworker-nodes` for details.
|
||||
|
||||
#. Wait for the software installation on controller-1, worker-0, and worker-1 to
|
||||
complete, for all virtual servers to reboot, and for all to show as
|
||||
locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
| 3 | worker-0 | worker | locked | disabled | online |
|
||||
| 4 | worker-1 | worker | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. incl-config-controller-1-virt-controller-storage-start:
|
||||
|
||||
Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
|
||||
attached networks. Note that the MGMT interface is partially set up by the
|
||||
network install procedure.
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=enp7s1
|
||||
system host-if-modify controller-1 $OAM_IF -c platform
|
||||
system interface-network-assign controller-1 $OAM_IF oam
|
||||
system interface-network-assign controller-1 mgmt0 cluster-host
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
|
||||
of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 openstack-control-plane=enabled
|
||||
|
||||
.. incl-config-controller-1-virt-controller-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. incl-unlock-controller-1-virt-controller-storage-start:
|
||||
|
||||
Unlock virtual controller-1 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-1
|
||||
|
||||
Controller-1 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
.. incl-unlock-controller-1-virt-controller-storage-end:
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Add the third Ceph monitor to a worker node:
|
||||
|
||||
(The first two Ceph monitors are automatically assigned to controller-0 and
|
||||
controller-1.)
|
||||
|
||||
::
|
||||
|
||||
system ceph-mon-add worker-0
|
||||
|
||||
#. Wait for the worker node monitor to complete configuration:
|
||||
|
||||
::
|
||||
|
||||
system ceph-mon-list
|
||||
+--------------------------------------+-------+--------------+------------+------+
|
||||
| uuid | ceph_ | hostname | state | task |
|
||||
| | mon_g | | | |
|
||||
| | ib | | | |
|
||||
+--------------------------------------+-------+--------------+------------+------+
|
||||
| 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None |
|
||||
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None |
|
||||
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | worker-0 | configured | None |
|
||||
+--------------------------------------+-------+--------------+------------+------+
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes.
|
||||
|
||||
Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Configure data interfaces for worker nodes.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
1G Huge Pages are not supported in the virtual environment and there is no
|
||||
virtual NIC supporting SRIOV. For that reason, data interfaces are not
|
||||
applicable in the virtual environment for the Kubernetes-only scenario.
|
||||
|
||||
For OpenStack only:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=eth1000
|
||||
DATA1IF=eth1001
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
|
||||
# Configure the datanetworks in sysinv prior to referencing them
|
||||
# in the 'system host-if-modify' command.
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring interface for: $NODE"
|
||||
set -ex
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
set +ex
|
||||
done
|
||||
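When the loop completes, the data network assignments can be reviewed per node. This is a sketch and assumes the :command:`system interface-datanetwork-list` subcommand is available in your release:

::

for NODE in worker-0 worker-1; do
system interface-datanetwork-list ${NODE}
done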
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
|
||||
system host-label-assign $NODE openvswitch=enabled
|
||||
done
|
||||
|
||||
.. note::
|
||||
|
||||
If you have a |NIC| that supports |SRIOV|, then you can enable it by
|
||||
using the following:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
system host-label-assign controller-0 sriov=enabled
|
||||
|
||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
|
||||
which is needed for |prefix|-openstack nova ephemeral disks:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring Nova local for: $NODE"
|
||||
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||
PARTITION_SIZE=10
|
||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||
done
|
||||
|
||||
-------------------
|
||||
Unlock worker nodes
|
||||
-------------------
|
||||
|
||||
.. incl-unlock-compute-nodes-virt-controller-storage-start:
|
||||
|
||||
Unlock virtual worker nodes to bring them into service:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
done
|
||||
|
||||
The worker nodes will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
.. incl-unlock-compute-nodes-virt-controller-storage-end:
|
||||
|
||||
----------------------------
|
||||
Add Ceph OSDs to controllers
|
||||
----------------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Add OSDs to controller-0:
|
||||
|
||||
.. important::
|
||||
|
||||
This step requires a configured Ceph storage backend
|
||||
|
||||
::
|
||||
|
||||
HOST=controller-0
|
||||
DISKS=$(system host-disk-list ${HOST})
|
||||
TIERS=$(system storage-tier-list ceph_cluster)
|
||||
OSDs="/dev/sdb"
|
||||
for OSD in $OSDs; do
|
||||
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
|
||||
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
|
||||
done
|
||||
|
||||
system host-stor-list $HOST
|
||||
|
||||
#. Add OSDs to controller-1:
|
||||
|
||||
.. important::
|
||||
|
||||
This step requires a configured Ceph storage backend
|
||||
|
||||
::
|
||||
|
||||
HOST=controller-1
|
||||
DISKS=$(system host-disk-list ${HOST})
|
||||
TIERS=$(system storage-tier-list ceph_cluster)
|
||||
OSDs="/dev/sdb"
|
||||
for OSD in $OSDs; do
|
||||
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
|
||||
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
|
||||
done
|
||||
|
||||
system host-stor-list $HOST
|
||||
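After the OSDs on both controllers leave the ``configuring`` state, overall Ceph health can be verified from the active controller. A minimal sketch, assuming the ``ceph`` client is available in the controller shell:

::

ceph -s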
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
@ -1,21 +0,0 @@
|
||||
=========================================================
|
||||
Virtual Standard with Dedicated Storage Installation R6.0
|
||||
=========================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_dedicated_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
dedicated_storage_environ
|
||||
dedicated_storage_install_kubernetes
|
@ -1,61 +0,0 @@
|
||||
============================
|
||||
Prepare Host and Environment
|
||||
============================
|
||||
|
||||
This section describes how to prepare the physical host and virtual environment
|
||||
for a **StarlingX R6.0 virtual Standard with Dedicated Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
------------------------------------
|
||||
Physical host requirements and setup
|
||||
------------------------------------
|
||||
|
||||
.. include:: physical_host_req.txt
|
||||
|
||||
---------------------------------------
|
||||
Prepare virtual environment and servers
|
||||
---------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
The following commands for host, virtual environment setup, and host
|
||||
power-on use KVM / virsh for virtual machine and VM management
|
||||
technology. For an alternative virtualization environment, see:
|
||||
:doc:`Install StarlingX in VirtualBox <install_virtualbox>`.
|
||||
|
||||
#. Prepare virtual environment.
|
||||
|
||||
Set up virtual platform networks for virtual deployment:
|
||||
|
||||
::
|
||||
|
||||
bash setup_network.sh
|
||||
|
||||
#. Prepare virtual servers.
|
||||
|
||||
Create the XML definitions for the virtual servers required by this
|
||||
configuration option. This will create the XML virtual server definition for:
|
||||
|
||||
* dedicatedstorage-controller-0
|
||||
* dedicatedstorage-controller-1
|
||||
* dedicatedstorage-storage-0
|
||||
* dedicatedstorage-storage-1
|
||||
* dedicatedstorage-worker-0
|
||||
* dedicatedstorage-worker-1
|
||||
|
||||
The following command will start/virtually power on:
|
||||
|
||||
* The 'dedicatedstorage-controller-0' virtual server
|
||||
* The X-based graphical virt-manager application
|
||||
|
||||
::
|
||||
|
||||
bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso
|
||||
|
||||
If there is no X-server present, errors will occur and the X-based GUI for the
|
||||
virt-manager application will not start. The virt-manager GUI is not absolutely
|
||||
required and you can safely ignore errors and continue.
|
@ -1,403 +0,0 @@
|
||||
=======================================================================
|
||||
Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage
|
||||
=======================================================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R6.0 virtual Standard with Dedicated Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
In the last step of :doc:`dedicated_storage_environ`, the controller-0 virtual
|
||||
server 'dedicatedstorage-controller-0' was started by the
|
||||
:command:`setup_configuration.sh` command.
|
||||
|
||||
On the host, attach to the console of virtual controller-0 and select the appropriate
|
||||
installer menu options to start the non-interactive install of
|
||||
StarlingX software on controller-0.
|
||||
|
||||
.. note::
|
||||
|
||||
When entering the console, it is very easy to miss the first installer menu
|
||||
selection. Use ESC to navigate to previous menus, to ensure you are at the
|
||||
first installer menu.
|
||||
|
||||
::
|
||||
|
||||
virsh console dedicatedstorage-controller-0
|
||||
|
||||
Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'Standard Controller Configuration'
|
||||
#. Second menu: Select 'Serial Console'
|
||||
|
||||
Wait for the non-interactive install of software to complete and for the server
|
||||
to reboot. This can take 5-10 minutes depending on the performance of the host
|
||||
machine.
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
|
||||
:end-before: incl-bootstrap-controller-0-virt-controller-storage-end:
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-0-virt-controller-storage-start:
|
||||
:end-before: incl-config-controller-0-virt-controller-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
.. important::
|
||||
|
||||
Make sure the Ceph storage backend is configured. If it is
|
||||
not configured, you will not be able to configure storage
|
||||
nodes.
|
||||
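A quick way to confirm this is to list the configured storage backends and check that the ceph backend reports a configured state; a minimal check:

::

system storage-backend-list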
|
||||
Unlock virtual controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
-----------------------------------------------------------------
|
||||
Install software on controller-1, storage nodes, and worker nodes
|
||||
-----------------------------------------------------------------
|
||||
|
||||
#. On the host, power on the controller-1 virtual server,
|
||||
'dedicatedstorage-controller-1'. It will automatically attempt to network
|
||||
boot over the management network:
|
||||
|
||||
::
|
||||
|
||||
virsh start dedicatedstorage-controller-1
|
||||
|
||||
#. Attach to the console of virtual controller-1:
|
||||
|
||||
::
|
||||
|
||||
virsh console dedicatedstorage-controller-1
|
||||
|
||||
#. As controller-1 VM boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of controller-0, list hosts to see the newly discovered
|
||||
controller-1 host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. Using the host id, set the personality of this host to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates software installation on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting on the previous step to complete, start up and set the personality
|
||||
for 'dedicatedstorage-storage-0' and 'dedicatedstorage-storage-1'. Set the
|
||||
personality to 'storage' and assign a unique hostname for each.
|
||||
|
||||
For example, start 'dedicatedstorage-storage-0' from the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start dedicatedstorage-storage-0
|
||||
|
||||
Wait for new host (hostname=None) to be discovered by checking
|
||||
'system host-list' on virtual controller-0, and then set its personality:
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=storage
|
||||
|
||||
Repeat for 'dedicatedstorage-storage-1'. On the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start dedicatedstorage-storage-1
|
||||
|
||||
And wait for new host (hostname=None) to be discovered by checking
|
||||
'system host-list' on virtual controller-0, and then set its personality:
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=storage
|
||||
|
||||
This initiates software installation on storage-0 and storage-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting on the previous step to complete, start up and set the personality
|
||||
for 'dedicatedstorage-worker-0' and 'dedicatedstorage-worker-1'. Set the
|
||||
personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, start 'dedicatedstorage-worker-0' from the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start dedicatedstorage-worker-0
|
||||
|
||||
Wait for new host (hostname=None) to be discovered by checking
|
||||
'system host-list' on virtual controller-0, and then set its personality and hostname:
|
||||
|
||||
::
|
||||
|
||||
system host-update 5 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for 'dedicatedstorage-worker-1'. On the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start dedicatedstorage-worker-1
|
||||
|
||||
And wait for new host (hostname=None) to be discovered by checking
|
||||
'system host-list' on virtual controller-0, and then set its personality and hostname:
|
||||
|
||||
::
|
||||
|
||||
system host-update 6 personality=worker hostname=worker-1
|
||||
|
||||
This initiates software installation on worker-0 and worker-1.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. Note::
|
||||
|
||||
A node with Edgeworker personality is also available. See
|
||||
:ref:`deploy-edgeworker-nodes` for details.
|
||||
|
||||
#. Wait for the software installation on controller-1, storage-0, storage-1,
|
||||
worker-0, and worker-1 to complete, for all virtual servers to reboot, and for all
|
||||
to show as locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
| 3 | storage-0 | storage | locked | disabled | online |
|
||||
| 4 | storage-1 | storage | locked | disabled | online |
|
||||
| 5 | worker-0 | worker | locked | disabled | online |
|
||||
| 6 | worker-1 | worker | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-1-virt-controller-storage-start:
|
||||
:end-before: incl-config-controller-1-virt-controller-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-unlock-controller-1-virt-controller-storage-start:
|
||||
:end-before: incl-unlock-controller-1-virt-controller-storage-end:
|
||||
|
||||
-----------------------
|
||||
Configure storage nodes
|
||||
-----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the storage nodes.
|
||||
|
||||
Note that the MGMT interfaces are partially set up by the network install procedure.
|
||||
|
||||
::
|
||||
|
||||
for NODE in storage-0 storage-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Add OSDs to storage-0:
|
||||
|
||||
::
|
||||
|
||||
HOST=storage-0
|
||||
DISKS=$(system host-disk-list ${HOST})
|
||||
TIERS=$(system storage-tier-list ceph_cluster)
|
||||
OSDs="/dev/sdb"
|
||||
for OSD in $OSDs; do
|
||||
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
|
||||
done
|
||||
|
||||
system host-stor-list $HOST
|
||||
|
||||
#. Add OSDs to storage-1:
|
||||
|
||||
::
|
||||
|
||||
HOST=storage-1
|
||||
DISKS=$(system host-disk-list ${HOST})
|
||||
TIERS=$(system storage-tier-list ceph_cluster)
|
||||
OSDs="/dev/sdb"
|
||||
for OSD in $OSDs; do
|
||||
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
|
||||
done
|
||||
|
||||
system host-stor-list $HOST
|
||||
|
||||
--------------------
|
||||
Unlock storage nodes
|
||||
--------------------
|
||||
|
||||
Unlock virtual storage nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
|
||||
for STORAGE in storage-0 storage-1; do
|
||||
system host-unlock $STORAGE
|
||||
done
|
||||
|
||||
The storage nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the host machine.
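If you want to monitor the recovery from the command line, a simple poll such as the following can be used (a minimal sketch; it assumes admin credentials have been sourced on controller-0 and that the ``watch`` utility is available):

::

watch -n 30 "system host-list | grep storage"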
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes.
|
||||
|
||||
Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Configure data interfaces for worker nodes.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
1G Huge Pages are not supported in the virtual environment and there is no
|
||||
virtual NIC supporting SRIOV. For that reason, data interfaces are not
|
||||
applicable in the virtual environment for the Kubernetes-only scenario.
|
||||
|
||||
For OpenStack only:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=eth1000
|
||||
DATA1IF=eth1001
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
|
||||
Configure the datanetworks in sysinv, prior to referencing them in the
|
||||
:command:`system host-if-modify` command.
|
||||
|
||||
::
|
||||
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring interface for: $NODE"
|
||||
set -ex
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
set +ex
|
||||
done
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
|
||||
system host-label-assign $NODE |vswitch-label|
|
||||
system host-label-assign $NODE sriov=enabled
|
||||
done
|
||||
|
||||
#. **For OpenStack only:** Set up a disk partition for the nova-local volume group,
|
||||
which is needed for |prefix|-openstack nova ephemeral disks:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring Nova local for: $NODE"
|
||||
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||
PARTITION_SIZE=10
|
||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||
done
|
||||
|
||||
-------------------
|
||||
Unlock worker nodes
|
||||
-------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
|
||||
:end-before: incl-unlock-compute-nodes-virt-controller-storage-end:
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
@ -1,366 +0,0 @@
|
||||
===============================
|
||||
Install StarlingX in VirtualBox
|
||||
===============================
|
||||
|
||||
This guide describes how to run StarlingX in a set of VirtualBox :abbr:`VMs
|
||||
(Virtual Machines)`, which is an alternative to the default StarlingX
|
||||
instructions using libvirt.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-------------
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
* A Windows or Linux computer for running VirtualBox.
|
||||
* VirtualBox is installed on your computer. The latest verified version is
|
||||
5.2.22. Download from: http://www.virtualbox.org/wiki/Downloads
|
||||
* VirtualBox Extension Pack is installed.
|
||||
To boot worker nodes via the controller, you must install the
|
||||
VirtualBox Extension Pack to add support for PXE boot of Intel cards. Download
|
||||
the extension pack from: https://www.virtualbox.org/wiki/Downloads
|
||||
|
||||
.. note::
|
||||
|
||||
A set of scripts for deploying VirtualBox VMs can be found in the
|
||||
`STX tools repository
|
||||
<https://opendev.org/starlingx/tools/src/branch/master/deployment/virtualbox>`_,
|
||||
however, the scripts may not be updated to the latest StarlingX
|
||||
recommendations.
|
||||
|
||||
---------------------------------------------------
|
||||
Create VMs for controller, worker and storage hosts
|
||||
---------------------------------------------------
|
||||
|
||||
For each StarlingX host, configure a VirtualBox VM with the following settings.
|
||||
|
||||
.. note::
|
||||
|
||||
The different settings for controller, worker, and storage nodes are
|
||||
embedded in the particular sections below.
|
||||
|
||||
***************************
|
||||
OS type and memory settings
|
||||
***************************
|
||||
|
||||
* Type: Linux
|
||||
|
||||
* Version: Other Linux (64-bit)
|
||||
|
||||
* Memory size:
|
||||
|
||||
* Controller node: 16384 MB
|
||||
* Worker node: 8192 MB
|
||||
* Storage node: 4096 MB
|
||||
* All-in-one node: 20480 MB
|
||||
|
||||
****************
|
||||
Disk(s) settings
|
||||
****************
|
||||
|
||||
Use the default disk controller and default disk format (for example IDE/vdi)
|
||||
for VirtualBox VMs.
|
||||
|
||||
* Minimum disk size requirements:
|
||||
|
||||
* Controller nodes (minimum of 2 disks required):
|
||||
|
||||
* Disk 1: 240GB disk
|
||||
* Disk 2: 10GB disk (Note: Use 30GB if you are planning to work on the
|
||||
analytics.)
|
||||
|
||||
* Worker nodes: 80GB root disk (Note: Use 100GB if you are installing
|
||||
a StarlingX AIO node.)
|
||||
|
||||
* When the node is configured for local storage, this will provide ~12GB of
|
||||
local storage space for disk allocation to VM instances.
|
||||
* Additional disks can be added to the node to extend the local storage
|
||||
but are not required (see the command-line example after this list).
|
||||
|
||||
* Storage nodes (minimum of 2 disks required):
|
||||
|
||||
* 80GB disk for rootfs.
|
||||
* 10GB disk (or larger) for each OSD. The size depends on how many VMs you
|
||||
plan to run.
|
||||
|
||||
* In the Storage tree, select the empty CD-ROM drive, click ``+`` to choose a
CD/DVD ISO, and browse to the ISO location. Attach this ISO only to the first
controller node. The second controller node and worker nodes will network boot
from the first controller node.
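Additional disks can also be created and attached from the command line rather than through the GUI. The sketch below is illustrative only; the file name, disk size, and storage controller name (``IDE``) are assumptions and should be adjusted to match your VM configuration (on Windows, prefix the commands with the full path to ``VBoxManage.exe``):

::

VBoxManage createmedium disk --filename worker-0-disk2.vdi --size 10240
VBoxManage storageattach worker-0 --storagectl IDE --port 1 --device 0 --type hdd --medium worker-0-disk2.vdi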
|
||||
|
||||
***************
|
||||
System settings
|
||||
***************
|
||||
|
||||
* System->Motherboard:
|
||||
|
||||
* Boot Order: Enable the Network option. Order should be: Floppy, CD/DVD,
|
||||
Hard Disk, Network.
|
||||
|
||||
* System->Processors:
|
||||
|
||||
* Controller node: 4 CPU
|
||||
* Worker node: 3 CPU
|
||||
|
||||
.. note::
|
||||
|
||||
This will allow only a single instance to be launched. More processors
|
||||
are required to launch more instances. If more than 4 CPUs are
|
||||
allocated, you must limit vswitch to a single CPU before unlocking your
|
||||
worker node, otherwise your worker node will **reboot in a loop**
|
||||
(vswitch will fail to start, the in-service tests will detect that a critical service
|
||||
failed to start and reboot the node). Use the following command to limit
|
||||
vswitch:
|
||||
|
||||
::
|
||||
|
||||
system host-cpu-modify worker-0 -f vswitch -p0 1
|
||||
|
||||
* Storage node: 1 CPU
|
||||
|
||||
****************
|
||||
Network settings
|
||||
****************
|
||||
|
||||
The OAM network has the following options:
|
||||
|
||||
* Host Only Network - **Strongly Recommended.** This option
|
||||
requires the router VM to forward packets from the controllers to the external
|
||||
network. Follow the instructions at :doc:`Install VM as a router <config_virtualbox_netwk>`
|
||||
to set it up. Create one network adapter for external OAM. The IP addresses
|
||||
in the example below match the default configuration.
|
||||
|
||||
* VirtualBox: File -> Preferences -> Network -> Host-only Networks. Click
|
||||
``+`` to add Ethernet Adapter.
|
||||
|
||||
* Windows: This creates a ``VirtualBox Host-only Adapter`` and prompts
|
||||
with the Admin dialog box. Click ``Accept`` to create an interface.
|
||||
* Linux: This creates a ``vboxnet<x>`` per interface.
|
||||
|
||||
* External OAM: IPv4 Address: 10.10.10.254, IPv4 Network Mask: 255.255.255.0,
|
||||
DHCP Server: unchecked.
|
||||
|
||||
* NAT Network - This option provides external network access to the controller
|
||||
VMs. Follow the instructions at :doc:`Add NAT Network in VirtualBox <config_virtualbox_netwk>`.
|
||||
|
||||
Adapter settings for the different node types are as follows:
|
||||
|
||||
* Controller nodes:
|
||||
|
||||
* Adapter 1 setting depends on your choice for the OAM network above. It can
|
||||
be either of the following:
|
||||
|
||||
* Adapter 1: Host-Only Adapter (VirtualBox Host-Only Ethernet Adapter 1),
|
||||
Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Deny
|
||||
* Adapter 1: NAT Network; Name: NatNetwork
|
||||
|
||||
* Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT
|
||||
Desktop, Advanced: Promiscuous Mode: Allow All
|
||||
|
||||
* Worker nodes:
|
||||
|
||||
* Adapter 1:
|
||||
|
||||
Internal Network, Name: intnet-unused; Advanced: Intel
|
||||
PRO/1000MT Desktop, Promiscuous Mode: Allow All
|
||||
|
||||
* Adapter 2: Internal Network, Name: intnet-management; Advanced: Intel
|
||||
PRO/1000MT Desktop, Promiscuous Mode: Allow All
|
||||
* Adapter 3: Internal Network, Name: intnet-data1; Advanced:
|
||||
Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
|
||||
|
||||
* Windows: If you have a separate Ubuntu VM for Linux work, then add
|
||||
another interface to your Ubuntu VM and add it to the same
|
||||
intnet-data1 internal network.
|
||||
* Linux: If you want to access the VM instances directly, create a new
|
||||
``Host-only`` network called ``vboxnet<x>`` similar to the external OAM
|
||||
one above. Ensure DHCP Server is unchecked, and that the IP address is
|
||||
on a network unrelated to the rest of the addresses we're configuring.
|
||||
(The default will often be fine.) Now attach adapter-3 to the new
|
||||
Host-only network.
|
||||
* Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized
|
||||
Network (virtio-net), Promiscuous Mode: Allow All
|
||||
|
||||
Additional adapters can be added via command line, for :abbr:`LAG (Link
|
||||
Aggregation Group)` purposes. For example:
|
||||
|
||||
::
|
||||
|
||||
"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic5 intnet --nictype5 virtio --intnet5 intnet-data1 --nicpromisc5 allow-all
|
||||
"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic6 intnet --nictype6 virtio --intnet6 intnet-data2 --nicpromisc6 allow-all
|
||||
"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic7 intnet --nictype7 82540EM --intnet7 intnet-infra --nicpromisc7 allow-all
|
||||
|
||||
* Storage nodes:
|
||||
|
||||
* Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel
|
||||
PRO/1000MT Desktop, Promiscuous Mode: Allow All
|
||||
* Adapter 2: Internal Network, Name: intnet-management; Advanced:
|
||||
Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All
|
||||
|
||||
* Set the boot priority for interface 2 (eth1) on ALL VMs (controller, worker
|
||||
and storage):
|
||||
|
||||
::
|
||||
|
||||
# First list the VMs
|
||||
bwensley@yow-bwensley-lx:~$ VBoxManage list vms
|
||||
"YOW-BWENSLEY-VM" {f6d4df83-bee5-4471-9497-5a229ead8750}
|
||||
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
|
||||
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
|
||||
"worker-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
|
||||
"worker-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
|
||||
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}
|
||||
# Then set the priority for interface 2. Do this for ALL VMs.
|
||||
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
|
||||
bwensley@yow-bwensley-lx:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1
|
||||
# OR do them all with a foreach loop in linux
|
||||
bwensley@yow-bwensley-lx:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done
|
||||
# NOTE: On Windows, you need to specify the full path to the VBoxManage executable - for example:
|
||||
"\Program Files\Oracle\VirtualBox\VBoxManage.exe"
|
||||
|
||||
* Alternative method for debugging:
|
||||
|
||||
* Turn on the VM and press F12 for the boot menu.
|
||||
* Press ``L`` for LAN boot.
|
||||
* Press CTRL+B for the iPXE CLI (this has a short timeout).
|
||||
* The autoboot command opens a link with each interface sequentially
|
||||
and tests for netboot.
|
||||
|
||||
|
||||
********************
|
||||
Serial port settings
|
||||
********************
|
||||
|
||||
To use serial ports, you must select Serial Console during initial boot using
|
||||
one of the following methods:
|
||||
|
||||
* Windows: Select ``Enable Serial Port``, port mode to ``Host Pipe``. Select
|
||||
``Create Pipe`` (or deselect ``Connect to existing pipe/socket``). Enter
|
||||
a Port/File Path in the form ``\\.\pipe\controller-0`` or
|
||||
``\\.\pipe\worker-1``. Later, you can use this in PuTTY to connect to the
|
||||
console. Choose speed of 9600 or 38400.
|
||||
|
||||
* Linux: Select ``Enable Serial Port`` and set the port mode to ``Host Pipe``.
|
||||
Select ``Create Pipe`` (or deselect ``Connect to existing pipe/socket``).
|
||||
Enter a Port/File Path in the form ``/tmp/controller_serial``. Later, you can
|
||||
use this with ``socat`` as shown in this example:
|
||||
|
||||
::
|
||||
|
||||
socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0
|
||||
|
||||
***********
|
||||
Other notes
|
||||
***********
|
||||
|
||||
If you're using a Dell PowerEdge R720 system, it's important to execute the
|
||||
command below to avoid any kernel panic issues:
|
||||
|
||||
::
|
||||
|
||||
VBoxManage setextradata global VBoxInternal/CPUM/EnableHVP 1
|
||||
|
||||
|
||||
----------------------------------------
|
||||
Start controller VM and allow it to boot
|
||||
----------------------------------------
|
||||
|
||||
Console usage:
|
||||
|
||||
#. To use a serial console: Select ``Serial Controller Node Install``, then
|
||||
follow the instructions above in the ``Serial Port`` section to connect to
|
||||
it.
|
||||
#. To use a graphical console: Select ``Graphics Text Controller Node
|
||||
Install`` and continue using the Virtual Box console.
|
||||
|
||||
For details on how to specify installation parameters such as rootfs device
|
||||
and console port, see :ref:`config_install_parms_r6`.
|
||||
|
||||
Follow the :ref:`StarlingX Installation and Deployment Guides <index-install-e083ca818006>`
|
||||
to continue.
|
||||
|
||||
* Ensure that boot priority on all VMs is changed using the commands in the "Set
|
||||
the boot priority" step above.
|
||||
* In AIO-DX and standard configurations, additional
hosts must be booted from controller-0 (rather than from the ``bootimage.iso`` file).
|
||||
* In VirtualBox, press F12 immediately when the VM starts to select a different
|
||||
boot option. Select the ``lan`` option to force a network boot.
|
||||
|
||||
.. _config_install_parms_r6:
|
||||
|
||||
------------------------------------
|
||||
Configurable installation parameters
|
||||
------------------------------------
|
||||
|
||||
StarlingX allows you to specify certain configuration parameters during
|
||||
installation:
|
||||
|
||||
* Boot device: This is the device that is to be used for the boot partition. In
|
||||
most cases, this must be ``sda``, which is the default, unless the BIOS
|
||||
supports using a different disk for the boot partition. This is specified with
|
||||
the ``boot_device`` option.
|
||||
|
||||
* Rootfs device: This is the device that is to be used for the rootfs and
|
||||
various platform partitions. The default is ``sda``. This is specified with
|
||||
the ``rootfs_device`` option.
|
||||
|
||||
* Install output: Text mode vs graphical. The default is ``text``. This is
|
||||
specified with the ``install_output`` option.
|
||||
|
||||
* Console: This is the console specification, allowing the user to specify the
|
||||
console port and/or baud. The default value is ``ttyS0,115200``. This is
|
||||
specified with the ``console`` option.
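For example, several of these parameters can be combined in a single host update (the host id and values below are purely illustrative; the individual options are also demonstrated in the sections that follow):

::

system host-update 3 personality=worker hostname=worker-0 boot_device=sda rootfs_device=sdb install_output=text console=ttyS0,115200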
|
||||
|
||||
*********************************
|
||||
Install controller-0 from ISO/USB
|
||||
*********************************
|
||||
|
||||
The initial boot menu for controller-0 is built-in, so modification of the
|
||||
installation parameters requires direct modification of the boot command line.
|
||||
This is done by scrolling to the boot option you want (for example, Serial
|
||||
Controller Node Install vs Graphics Controller Node Install), and hitting the
|
||||
tab key to allow command line modification. The example below shows how to
|
||||
modify the ``rootfs_device`` specification.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/install_virtualbox_configparms.png
|
||||
:scale: 100%
|
||||
:alt: Install controller-0
|
||||
|
||||
|
||||
************************************
|
||||
Install nodes from active controller
|
||||
************************************
|
||||
|
||||
The installation parameters are part of the system inventory host details for
|
||||
each node, and can be specified when the host is added or updated. These
|
||||
parameters can be set as part of a host-add, host-bulk-add, or host-update command, or
|
||||
via the GUI when editing a host.
|
||||
|
||||
For example, if you prefer to see the graphical installation, you can enter the
|
||||
following command when setting the personality of a newly discovered host:
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller install_output=graphical console=
|
||||
|
||||
If you don’t set up a serial console, but prefer the text installation, you
|
||||
can clear out the default console setting with the command:
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller install_output=text console=
|
||||
|
||||
If you’d prefer to install to the second disk on your node, use the command:
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=compute hostname=compute-0 rootfs_device=sdb
|
||||
|
||||
Alternatively, these values can be set from the GUI via the ``Edit Host``
|
||||
option.
|
||||
|
||||
.. figure:: /deploy_install_guides/r6_release/figures/install_virtualbox_guiscreen.png
|
||||
:scale: 100%
|
||||
:alt: Install controller-0
|
@ -1,21 +0,0 @@
|
||||
====================================================
|
||||
Virtual Standard with Rook Storage Installation R6.0
|
||||
====================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_rook_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
rook_storage_environ
|
||||
rook_storage_install_kubernetes
|
@ -1,61 +0,0 @@
|
||||
============================
|
||||
Prepare Host and Environment
|
||||
============================
|
||||
|
||||
This section describes how to prepare the physical host and virtual environment
|
||||
for a **StarlingX R6.0 virtual Standard with Rook Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
------------------------------------
|
||||
Physical host requirements and setup
|
||||
------------------------------------
|
||||
|
||||
.. include:: physical_host_req.txt
|
||||
|
||||
---------------------------------------
|
||||
Prepare virtual environment and servers
|
||||
---------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
The following commands for host, virtual environment setup, and host
|
||||
power-on use KVM / virsh for virtual machine and VM management
|
||||
technology. For an alternative virtualization environment, see:
|
||||
:doc:`Install StarlingX in VirtualBox <install_virtualbox>`.
|
||||
|
||||
#. Prepare virtual environment.
|
||||
|
||||
Set up virtual platform networks for virtual deployment:
|
||||
|
||||
::
|
||||
|
||||
bash setup_network.sh
|
||||
|
||||
#. Prepare virtual servers.
|
||||
|
||||
Create the XML definitions for the virtual servers required by this
|
||||
configuration option. This will create the XML virtual server definition for:
|
||||
|
||||
* rookstorage-controller-0
|
||||
* rookstorage-controller-1
|
||||
* rookstorage-worker-0
|
||||
* rookstorage-worker-1
|
||||
* rookstorage-worker-2
|
||||
* rookstorage-worker-3
|
||||
|
||||
The following command will start/virtually power on:
|
||||
|
||||
* The 'rookstorage-controller-0' virtual server
|
||||
* The X-based graphical virt-manager application
|
||||
|
||||
::
|
||||
|
||||
export WORKER_NODES_NUMBER=4 ; bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso
|
||||
|
||||
If no X-server is present, errors will occur and the X-based GUI for the
|
||||
virt-manager application will not start. The virt-manager GUI is not absolutely
|
||||
required and you can safely ignore errors and continue.
|
@ -1,547 +0,0 @@
|
||||
==================================================================
|
||||
Install StarlingX Kubernetes on Virtual Standard with Rook Storage
|
||||
==================================================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R6.0 virtual Standard with Rook Storage** deployment configuration,
|
||||
deploying a Rook Ceph cluster in place of the default native Ceph cluster.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
In the last step of :doc:`rook_storage_environ`, the controller-0 virtual
|
||||
server 'rookstorage-controller-0' was started by the
|
||||
:command:`setup_configuration.sh` command.
|
||||
|
||||
On the host, attach to the console of virtual controller-0 and select the appropriate
|
||||
installer menu options to start the non-interactive install of
|
||||
StarlingX software on controller-0.
|
||||
|
||||
.. note::
|
||||
|
||||
When entering the console, it is very easy to miss the first installer menu
|
||||
selection. Use ESC to navigate to previous menus, to ensure you are at the
|
||||
first installer menu.
|
||||
|
||||
::
|
||||
|
||||
virsh console rookstorage-controller-0
|
||||
|
||||
Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'Standard Controller Configuration'
|
||||
#. Second menu: Select 'Serial Console'
|
||||
|
||||
Wait for the non-interactive install of software to complete and for the server
|
||||
to reboot. This can take 5-10 minutes depending on the performance of the host
|
||||
machine.
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
|
||||
:end-before: incl-bootstrap-controller-0-virt-controller-storage-end:
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Acquire admin credentials:
|
||||
|
||||
::
|
||||
|
||||
source /etc/platform/openrc
|
||||
|
||||
#. Configure the OAM and MGMT interfaces of controller-0 and specify the
|
||||
attached networks:
|
||||
|
||||
::
|
||||
|
||||
OAM_IF=enp7s1
|
||||
MGMT_IF=enp7s2
|
||||
system host-if-modify controller-0 lo -c none
|
||||
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
|
||||
for UUID in $IFNET_UUIDS; do
|
||||
system interface-network-remove ${UUID}
|
||||
done
|
||||
system host-if-modify controller-0 $OAM_IF -c platform
|
||||
system interface-network-assign controller-0 $OAM_IF oam
|
||||
system host-if-modify controller-0 $MGMT_IF -c platform
|
||||
system interface-network-assign controller-0 $MGMT_IF mgmt
|
||||
system interface-network-assign controller-0 $MGMT_IF cluster-host
|
||||
|
||||
#. Configure NTP servers for network time synchronization:
|
||||
|
||||
.. note::
|
||||
|
||||
In a virtual environment, this can sometimes cause Ceph clock skew alarms.
|
||||
Also, the virtual instance clock is synchronized with the host clock,
|
||||
so it is not absolutely required to configure NTP here.
|
||||
|
||||
::
|
||||
|
||||
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
|
||||
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 openstack-control-plane=enabled
|
||||
|
||||
#. **For OpenStack only:** A vSwitch is required.
|
||||
|
||||
The default vSwitch is containerized OVS that is packaged with the
|
||||
|prefix|-openstack manifest/helm-charts. StarlingX provides the option to use
|
||||
OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT
|
||||
supported, only OVS is supported. Therefore, simply use the default OVS
|
||||
vSwitch here.
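If you want to make this choice explicit, the vSwitch type can be set to the containerized OVS with the command below (a sketch only; ``none`` selects the containerized OVS delivered with the application and is already the default, so running it is optional):

::

system modify --vswitch_type none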
|
||||
|
||||
********************************
|
||||
Rook-specific host configuration
|
||||
********************************
|
||||
|
||||
.. important::
|
||||
|
||||
**This step is required only if the StarlingX Rook application will be
|
||||
installed.**
|
||||
|
||||
**For Rook only:** Assign Rook host labels to controller-0 in support of
|
||||
installing the rook-ceph-apps manifest/helm-charts later and add ceph-rook
|
||||
as storage backend:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-0 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-0 ceph-mgr-placement=enabled
|
||||
system storage-backend-add ceph-rook --confirmed
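You can optionally confirm that the ceph-rook backend was added (a simple verification step; no further configuration is implied):

::

system storage-backend-list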
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
Unlock virtual controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
-----------------------------------------------------------------
|
||||
Install software on controller-1 and worker nodes
|
||||
-----------------------------------------------------------------
|
||||
|
||||
#. On the host, power on the controller-1 virtual server,
|
||||
'rookstorage-controller-1'. It will automatically attempt to network
|
||||
boot over the management network:
|
||||
|
||||
::
|
||||
|
||||
virsh start rookstorage-controller-1
|
||||
|
||||
#. Attach to the console of virtual controller-1:
|
||||
|
||||
::
|
||||
|
||||
virsh console rookstorage-controller-1
|
||||
|
||||
#. As controller-1 VM boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of controller-0, list hosts to see newly discovered
|
||||
controller-1 host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. Using the host id, set the personality of this host to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates software installation on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting on the previous step to complete, start up and set the personality
|
||||
for 'rookstorage-worker-0' and 'rookstorage-worker-1'. Set the
|
||||
personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, start 'rookstorage-worker-0' from the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start rookstorage-worker-0
|
||||
|
||||
Wait for new host (hostname=None) to be discovered by checking
|
||||
‘system host-list’ on virtual controller-0:
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=worker hostname=rook-storage-0
|
||||
|
||||
Repeat for 'rookstorage-worker-1'. On the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start rookstorage-worker-1
|
||||
|
||||
And wait for new host (hostname=None) to be discovered by checking
|
||||
‘system host-list’ on virtual controller-0:
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=worker hostname=rook-storage-1
|
||||
|
||||
This initiates software installation on rook-storage-0 and rook-storage-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting on the previous step to complete, start up and set the personality
|
||||
for 'rookstorage-worker-2' and 'rookstorage-worker-3'. Set the
|
||||
personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, start 'rookstorage-worker-2' from the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start rookstorage-worker-2
|
||||
|
||||
Wait for new host (hostname=None) to be discovered by checking
|
||||
‘system host-list’ on virtual controller-0:
|
||||
|
||||
::
|
||||
|
||||
system host-update 5 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for 'rookstorage-worker-3'. On the host:
|
||||
|
||||
::
|
||||
|
||||
virsh start rookstorage-worker-3
|
||||
|
||||
And wait for new host (hostname=None) to be discovered by checking
|
||||
‘system host-list’ on virtual controller-0:
|
||||
|
||||
::
|
||||
|
||||
system host-update 6 personality=worker hostname=worker-1
|
||||
|
||||
This initiates software installation on worker-0 and worker-1.
|
||||
|
||||
.. only:: starlingx
|
||||
|
||||
.. Note::
|
||||
|
||||
A node with Edgeworker personality is also available. See
|
||||
:ref:`deploy-edgeworker-nodes` for details.
|
||||
|
||||
#. Wait for the software installation on controller-1, rook-storage-0, rook-storage-1,
|
||||
worker-0, and worker-1 to complete, for all virtual servers to reboot, and for all
|
||||
to show as locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+----------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+----------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | unlocked | enabled | available |
|
||||
| 3 | rook-storage-0 | worker | locked | disabled | offline |
|
||||
| 4 | rook-storage-1 | worker | locked | disabled | offline |
|
||||
| 5 | worker-0 | worker | locked | disabled | offline |
|
||||
| 6 | worker-1 | worker | locked | disabled | offline |
|
||||
+----+----------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-1-virt-controller-storage-start:
|
||||
:end-before: incl-config-controller-1-virt-controller-storage-end:
|
||||
|
||||
********************************
|
||||
Rook-specific host configuration
|
||||
********************************
|
||||
|
||||
.. important::
|
||||
|
||||
**This step is required only if the StarlingX Rook application will be
|
||||
installed.**
|
||||
|
||||
**For Rook only:** Assign Rook host labels to controller-1 in
|
||||
support of installing the rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign controller-1 ceph-mon-placement=enabled
|
||||
system host-label-assign controller-1 ceph-mgr-placement=enabled
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-unlock-controller-1-virt-controller-storage-start:
|
||||
:end-before: incl-unlock-controller-1-virt-controller-storage-end:
|
||||
|
||||
-----------------------
|
||||
Configure storage nodes
|
||||
-----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the storage nodes.
|
||||
|
||||
Note that the MGMT interfaces are partially set up by the network install procedure.
|
||||
|
||||
::
|
||||
|
||||
for NODE in rook-storage-0 rook-storage-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. **For Rook only:** Assign Rook host labels to rook-storage-0 in
|
||||
support of installing the rook-ceph-apps manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
system host-label-assign rook-storage-0 ceph-mon-placement=enabled
|
||||
|
||||
--------------------
|
||||
Unlock storage nodes
|
||||
--------------------
|
||||
|
||||
Unlock virtual storage nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
|
||||
for STORAGE in rook-storage-0 rook-storage-1; do
|
||||
system host-unlock $STORAGE
|
||||
done
|
||||
|
||||
The storage nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes.
|
||||
|
||||
Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Configure data interfaces for worker nodes.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
1G Huge Pages are not supported in the virtual environment and there is no
|
||||
virtual NIC supporting SRIOV. For that reason, data interfaces are not
|
||||
applicable in the virtual environment for the Kubernetes-only scenario.
|
||||
|
||||
For OpenStack only:
|
||||
|
||||
::
|
||||
|
||||
DATA0IF=eth1000
|
||||
DATA1IF=eth1001
|
||||
PHYSNET0='physnet0'
|
||||
PHYSNET1='physnet1'
|
||||
SPL=/tmp/tmp-system-port-list
|
||||
SPIL=/tmp/tmp-system-host-if-list
|
||||
|
||||
Configure the datanetworks in sysinv, prior to referencing them in the
|
||||
:command:`system host-if-modify` command.
|
||||
|
||||
::
|
||||
|
||||
system datanetwork-add ${PHYSNET0} vlan
|
||||
system datanetwork-add ${PHYSNET1} vlan
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring interface for: $NODE"
|
||||
set -ex
|
||||
system host-port-list ${NODE} --nowrap > ${SPL}
|
||||
system host-if-list -a ${NODE} --nowrap > ${SPIL}
|
||||
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
|
||||
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
|
||||
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
|
||||
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
|
||||
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
|
||||
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
|
||||
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
|
||||
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
|
||||
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
|
||||
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
|
||||
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
|
||||
set +ex
|
||||
done
|
||||
|
||||
*************************************
|
||||
OpenStack-specific host configuration
|
||||
*************************************
|
||||
|
||||
.. important::
|
||||
|
||||
This step is required only if the StarlingX OpenStack application
|
||||
(|prefix|-openstack) will be installed.
|
||||
|
||||
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
|
||||
support of installing the |prefix|-openstack manifest/helm-charts later:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign $NODE openstack-compute-node=enabled
|
||||
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
|
||||
system host-label-assign $NODE openvswitch=enabled
|
||||
system host-label-assign $NODE sriov=enabled
|
||||
done
|
||||
|
||||
#. **For OpenStack only:** Set up a disk partition for the nova-local volume group,
|
||||
which is needed for |prefix|-openstack nova ephemeral disks:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
echo "Configuring Nova local for: $NODE"
|
||||
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
|
||||
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
|
||||
PARTITION_SIZE=10
|
||||
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
|
||||
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
|
||||
system host-lvg-add ${NODE} nova-local
|
||||
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
|
||||
done
|
||||
|
||||
-------------------
|
||||
Unlock worker nodes
|
||||
-------------------
|
||||
|
||||
Unlock virtual worker nodes to bring them into service:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-unlock $NODE
|
||||
done
|
||||
|
||||
The worker nodes will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
-------------------------------------------------
|
||||
Install Rook application manifest and helm-charts
|
||||
-------------------------------------------------
|
||||
|
||||
On virtual controller-0 (targeting the rook-storage-0 and rook-storage-1 hosts):
|
||||
|
||||
#. Erase the GPT header of disk /dev/sdb on each Rook storage host:
|
||||
|
||||
::
|
||||
|
||||
$ system host-disk-wipe -s --confirm rook-storage-0 /dev/sdb
|
||||
$ system host-disk-wipe -s --confirm rook-storage-1 /dev/sdb
|
||||
|
||||
#. Wait for the "rook-ceph-apps" application to reach the uploaded state:
|
||||
|
||||
::
|
||||
|
||||
$ source /etc/platform/openrc
|
||||
$ system application-list
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| application | version | manifest name | manifest file | status | progress |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
|
||||
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
|
||||
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
|
||||
+---------------------+---------+-------------------------------+---------------+----------+-----------+
|
||||
|
||||
#. Create a values.yaml file containing the following overrides for rook-ceph-apps:
|
||||
|
||||
::
|
||||
|
||||
cluster:
|
||||
storage:
|
||||
nodes:
|
||||
- name: rook-storage-0
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
- name: rook-storage-1
|
||||
devices:
|
||||
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
|
||||
|
||||
#. Update the rook-ceph-apps overrides with this file:
|
||||
|
||||
::
|
||||
|
||||
system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
|
||||
|
||||
#. Apply the rook-ceph-apps application.
|
||||
|
||||
::
|
||||
|
||||
system application-apply rook-ceph-apps
|
||||
|
||||
#. Wait for the OSD pods to become ready:
|
||||
|
||||
::
|
||||
|
||||
kubectl get pods -n kube-system
|
||||
rook-ceph-mgr-a-ddffc8fbb-zkvln 1/1 Running 0 66s
|
||||
rook-ceph-mon-a-c67fdb6c8-tlbvk 1/1 Running 0 2m11s
|
||||
rook-ceph-mon-b-76969d8685-wcq62 1/1 Running 0 2m2s
|
||||
rook-ceph-mon-c-5bc47c6cb9-vm4j8 1/1 Running 0 97s
|
||||
rook-ceph-operator-6fc8cfb68b-bb57z 1/1 Running 1 7m9s
|
||||
rook-ceph-osd-0-689b6f65b-2nvcx 1/1 Running 0 12s
|
||||
rook-ceph-osd-1-7bfd69fdf9-vjqmp 1/1 Running 0 4s
|
||||
rook-ceph-osd-prepare-rook-storage-0-hf28p 0/1 Completed 0 50s
|
||||
rook-ceph-osd-prepare-rook-storage-1-r6lsd 0/1 Completed 0 50s
|
||||
rook-ceph-tools-84c7fff88c-x5trx 1/1 Running 0 6m11s
|
||||
|
||||
----------
|
||||
Next steps
|
||||
----------
|
||||
|
||||
.. include:: /_includes/kubernetes_install_next.txt
|
@ -1,60 +0,0 @@
|
||||
|
||||
.. jow1442253584837
|
||||
.. _accessing-pxe-boot-server-files-for-a-custom-configuration-r7:
|
||||
|
||||
=======================================================
|
||||
Access PXE Boot Server Files for a Custom Configuration
|
||||
=======================================================
|
||||
|
||||
If you prefer, you can create a custom |PXE| boot configuration using the
|
||||
installation files provided with |prod|.
|
||||
|
||||
.. rubric:: |context|
|
||||
|
||||
You can use the setup script included with the ISO image to copy the boot
|
||||
configuration files and distribution content to a working directory. You can
|
||||
use the contents of the working directory to construct a |PXE| boot environment
|
||||
according to your own requirements or preferences.
|
||||
|
||||
For more information about using a |PXE| boot server, see :ref:`Configure a
|
||||
PXE Boot Server <configuring-a-pxe-boot-server-r7>`.
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
.. _accessing-pxe-boot-server-files-for-a-custom-configuration-steps-www-gcz-3t-r7:
|
||||
|
||||
#. Copy the ISO image from the source \(product DVD, USB device, or
|
||||
|dnload-loc|\) to a temporary location on the |PXE| boot server.
|
||||
|
||||
This example assumes that the copied image file is
|
||||
/tmp/TS-host-installer-1.0.iso.
|
||||
|
||||
#. Mount the ISO image and make it executable.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso
|
||||
$ mount -o remount,exec,dev /media/iso
|
||||
|
||||
#. Create and populate a working directory.
|
||||
|
||||
Use a command of the following form:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ /media/iso/pxeboot_setup.sh -u http://<ip-addr>/<symlink> -w <working-dir>
|
||||
|
||||
where:
|
||||
|
||||
**ip-addr**
|
||||
is the Apache listening address.
|
||||
|
||||
**symlink**
|
||||
is a name for a symbolic link to be created under the Apache document
|
||||
root directory, pointing to the directory specified by <working-dir>.
|
||||
|
||||
**working-dir**
|
||||
is the path to the working directory.
|
||||
|
||||
#. Copy the required files from the working directory to your custom |PXE|
|
||||
boot server directory.
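For example, with purely illustrative values (the Apache address, symbolic link name, working directory, and destination TFTP root below are placeholders and must be replaced with your own):

.. code-block:: none

$ /media/iso/pxeboot_setup.sh -u http://192.168.100.100/BIOS-client -w /export/pxeboot-work
$ rsync -av /export/pxeboot-work/ /export/pxeboot/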
|
@ -1,26 +0,0 @@
|
||||
==============================================
|
||||
Bare metal All-in-one Duplex Installation R7.0
|
||||
==============================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_aio_duplex.txt
|
||||
|
||||
The bare metal AIO-DX deployment configuration may be extended with up to four
|
||||
worker nodes (not shown in the diagram). Installation instructions for
|
||||
these additional nodes are described in :doc:`aio_duplex_extend`.
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
aio_duplex_hardware
|
||||
aio_duplex_install_kubernetes
|
||||
aio_duplex_extend
|
@ -1,69 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R7.0 bare metal All-in-one Duplex** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum Requirement | All-in-one Controller Node |
|
||||
+=========================+===========================================================+
|
||||
| Number of servers | 2 |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
|
||||
| | 8 cores/socket |
|
||||
| | |
|
||||
| | or |
|
||||
| | |
|
||||
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
|
||||
| | (low-power/low-cost option) |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum memory | 64 GB |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see :ref:`nvme_config`) |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
|
||||
| | - Recommended, but not required: 1 or more SSDs or NVMe |
|
||||
| | drives for Ceph journals (min. 1024 MiB per OSD journal)|
|
||||
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|
||||
| | for VM local ephemeral storage |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum network ports | - Mgmt/Cluster: 1x10GE |
|
||||
| | - OAM: 1x1GE |
|
||||
| | - Data: 1 or more x 10GE |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the following diagram.
|
||||
|
||||
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-duplex.png
|
||||
:scale: 50%
|
||||
:alt: All-in-one Duplex deployment configuration
|
||||
|
||||
*All-in-one Duplex deployment configuration*
|
@ -1,21 +0,0 @@
|
||||
===============================================
|
||||
Bare metal All-in-one Simplex Installation R7.0
|
||||
===============================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_aio_simplex.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
aio_simplex_hardware
|
||||
aio_simplex_install_kubernetes
|
@ -1,71 +0,0 @@
|
||||
.. _aio_simplex_hardware_r7:
|
||||
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R7.0 bare metal All-in-one Simplex** deployment configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum Requirement | All-in-one Controller Node |
|
||||
+=========================+===========================================================+
|
||||
| Number of servers | 1 |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
|
||||
| | 8 cores/socket |
|
||||
| | |
|
||||
| | or |
|
||||
| | |
|
||||
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
|
||||
| | (low-power/low-cost option) |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum memory | 64 GB |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see :ref:`nvme_config`) |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
|
||||
| | - Recommended, but not required: 1 or more SSDs or NVMe |
|
||||
| | drives for Ceph journals (min. 1024 MiB per OSD |
|
||||
| | journal) |
|
||||
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K |
|
||||
| | RPM) for VM local ephemeral storage |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| Minimum network ports | - OAM: 1x1GE |
|
||||
| | - Data: 1 or more x 10GE |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+-------------------------+-----------------------------------------------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the diagram above.
|
||||
|
||||
.. .. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-simplex.png
|
||||
.. :scale: 50%
|
||||
.. :alt: All-in-one Simplex deployment configuration
|
||||
|
||||
.. *All-in-one Simplex deployment configuration*
|
@ -1,209 +0,0 @@
|
||||
|
||||
.. jow1440534908675
|
||||
|
||||
.. _configuring-a-pxe-boot-server-r7:
|
||||
|
||||
|
||||
|
||||
===========================
|
||||
Configure a PXE Boot Server
|
||||
===========================
|
||||
|
||||
You can optionally set up a |PXE| Boot Server to support **controller-0**
|
||||
initialization.
|
||||
|
||||
.. rubric:: |context|
|
||||
|
||||
|prod| includes a setup script to simplify configuring a |PXE| boot server. If
|
||||
you prefer, you can manually apply a custom configuration; for more
|
||||
information, see :ref:`Access PXE Boot Server Files for a Custom Configuration
|
||||
<accessing-pxe-boot-server-files-for-a-custom-configuration-r7>`.
|
||||
|
||||
The |prod| setup script accepts a path to the root TFTP directory as a
|
||||
parameter, and copies all required files for BIOS and |UEFI| clients into this
|
||||
directory.
|
||||
|
||||
The |PXE| boot server serves a boot loader file to the requesting client from a
|
||||
specified path on the server. The path depends on whether the client uses BIOS
|
||||
or |UEFI|. The appropriate path is selected by conditional logic in the |DHCP|
|
||||
configuration file.
|
||||
|
||||
The boot loader runs on the client, and reads boot parameters, including the
|
||||
location of the kernel and initial ramdisk image files, from a boot file
|
||||
contained on the server. To find the boot file, the boot loader searches a
|
||||
known directory on the server. This search directory can contain more than one
|
||||
entry, supporting the use of separate boot files for different clients.
|
||||
|
||||
The file names and locations depend on the BIOS or |UEFI| implementation.
|
||||
|
||||
.. _configuring-a-pxe-boot-server-table-mgq-xlh-2cb-r7:
|
||||
|
||||
.. table:: Table 1. |PXE| boot server file locations for BIOS and |UEFI| implementations
|
||||
:widths: auto
|
||||
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| Resource | BIOS | UEFI |
|
||||
+==========================================+========================+===============================+
|
||||
| **boot loader** | ./pxelinux.0 | ./EFI/grubx64.efi |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| **boot file search directory** | ./pxelinux.cfg | ./ or ./EFI |
|
||||
| | | |
|
||||
| | | \(system-dependent\) |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| **boot file** and path | ./pxelinux.cfg/default | ./grub.cfg and ./EFI/grub.cfg |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
| \(./ indicates the root TFTP directory\) |
|
||||
+------------------------------------------+------------------------+-------------------------------+
|
||||
|
||||
.. rubric:: |prereq|
|
||||
|
||||
Use a Linux workstation as the |PXE| Boot server.
|
||||
|
||||
|
||||
.. _configuring-a-pxe-boot-server-ul-mrz-jlj-dt-r7:
|
||||
|
||||
- On the workstation, install the packages required to support |DHCP|, TFTP,
|
||||
  and Apache; an example sketch follows this list.
|
||||
|
||||
- Configure |DHCP|, TFTP, and Apache according to your system requirements.
|
||||
For details, refer to the documentation included with the packages.
|
||||
|
||||
- Additionally, configure |DHCP| to support both BIOS and |UEFI| client
|
||||
architectures. For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
option arch code 93 = unsigned integer 16; # ref RFC4578
|
||||
# ...
|
||||
subnet 192.168.1.0 netmask 255.255.255.0 {
|
||||
if option arch = 00:07 {
|
||||
filename "EFI/grubx64.efi";
|
||||
# NOTE: substitute the full tftp-boot-dir specified in the setup script
|
||||
}
|
||||
else {
|
||||
filename "pxelinux.0";
|
||||
}
|
||||
# ...
|
||||
}
|
||||
|
||||
|
||||
- Start the |DHCP|, TFTP, and Apache services.
|
||||
|
||||
- Connect the |PXE| boot server to the |prod| management or |PXE| boot
|
||||
network.
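
On an Ubuntu workstation, for example, the preparation above might look like
the following sketch. The package names, service names, and configuration
path are Ubuntu defaults and are assumptions only; they differ on other
distributions.

.. code-block:: none

   # Install DHCP, TFTP, and HTTP server packages (Ubuntu package names)
   sudo apt-get install -y isc-dhcp-server tftpd-hpa apache2

   # After editing the DHCP configuration, optionally validate it
   sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf

   # Start and enable the services
   sudo systemctl enable --now isc-dhcp-server tftpd-hpa apache2
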
|
||||
|
||||
|
||||
.. rubric:: |proc|
|
||||
|
||||
|
||||
.. _configuring-a-pxe-boot-server-steps-qfb-kyh-2cb-r7:
|
||||
|
||||
#. Copy the ISO image from the source \(product DVD, USB device, or
|
||||
   |dnload-loc|\) to a temporary location on the |PXE| boot server.
|
||||
|
||||
This example assumes that the copied image file is
|
||||
   ``/tmp/TS-host-installer-1.0.iso``.
|
||||
|
||||
#. Mount the ISO image and make it executable.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso
|
||||
$ mount -o remount,exec,dev /media/iso
|
||||
|
||||
#. Set up the |PXE| boot configuration.
|
||||
|
||||
.. important::
|
||||
|
||||
|PXE| configuration steps differ for |prod| |deb-eval-release|
|
||||
evaluation on the Debian distribution. See the :ref:`Debian Technology
|
||||
Preview <deb-grub-deltas>` |PXE| configuration procedure for details.
|
||||
|
||||
The ISO image includes a setup script, which you can run to complete the
|
||||
configuration.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ /media/iso/pxeboot_setup.sh -u http://<ip-addr>/<symlink> \
|
||||
-t <tftp-boot-dir>
|
||||
|
||||
where
|
||||
|
||||
``ip-addr``
|
||||
is the Apache listening address.
|
||||
|
||||
``symlink``
|
||||
is the name of a user-created symbolic link under the Apache document
|
||||
root directory, pointing to the directory specified by <tftp-boot-dir>.
|
||||
|
||||
``tftp-boot-dir``
|
||||
is the path from which the boot loader is served \(the TFTP root
|
||||
directory\).
|
||||
|
||||
The script creates the directory specified by <tftp-boot-dir>.
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ /media/iso/pxeboot_setup.sh -u http://192.168.100.100/BIOS-client -t /export/pxeboot
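
   The ``BIOS-client`` symbolic link used in this example must exist under
   the Apache document root and point at the TFTP boot directory. A minimal
   sketch, assuming the default Apache document root ``/var/www/html`` and
   that Apache is configured to follow symbolic links:

   .. code-block:: none

      $ sudo ln -s /export/pxeboot /var/www/html/BIOS-client
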
|
||||
|
||||
#. To serve a specific boot file to a specific controller, assign a special
|
||||
name to the file.
|
||||
|
||||
The boot loader searches for a file name that uses a string based on the
|
||||
client interface |MAC| address. The string uses lower case, substitutes
|
||||
dashes for colons, and includes the prefix "01-".
|
||||
|
||||
|
||||
- For a BIOS client, use the |MAC| address string as the file name:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd <tftp-boot-dir>/pxelinux.cfg/
|
||||
$ cp pxeboot.cfg <mac-address-string>
|
||||
|
||||
where:
|
||||
|
||||
``<tftp-boot-dir>``
|
||||
is the path from which the boot loader is served.
|
||||
|
||||
``<mac-address-string>``
|
||||
is a lower-case string formed from the |MAC| address of the client
|
||||
|PXE| boot interface, using dashes instead of colons, and prefixed
|
||||
by "01-".
|
||||
|
||||
         For example, to represent the |MAC| address ``08:00:27:d1:63:c9``,
|
||||
use the string ``01-08-00-27-d1-63-c9`` in the file name.
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd /export/pxeboot/pxelinux.cfg/
|
||||
$ cp pxeboot.cfg 01-08-00-27-d1-63-c9
|
||||
|
||||
If the boot loader does not find a file named using this convention, it
|
||||
      looks for a file named ``default``.
|
||||
|
||||
- For a |UEFI| client, use the |MAC| address string prefixed by
|
||||
"grub.cfg-". To ensure the file is found, copy it to both search
|
||||
directories used by the |UEFI| convention.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd <tftp-boot-dir>
|
||||
$ cp grub.cfg grub.cfg-<mac-address-string>
|
||||
$ cp grub.cfg ./EFI/grub.cfg-<mac-address-string>
|
||||
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd /export/pxeboot
|
||||
$ cp grub.cfg grub.cfg-01-08-00-27-d1-63-c9
|
||||
$ cp grub.cfg ./EFI/grub.cfg-01-08-00-27-d1-63-c9
|
||||
|
||||
.. note::
|
||||
Alternatively, you can use symlinks in the search directories to
|
||||
ensure the file is found.
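
         For example, from the root TFTP directory, a sketch equivalent to
         the copy commands above:

         .. code-block:: none

            $ cd /export/pxeboot
            $ ln -s grub.cfg grub.cfg-01-08-00-27-d1-63-c9
            $ ln -s ../grub.cfg ./EFI/grub.cfg-01-08-00-27-d1-63-c9
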
|
@ -1,22 +0,0 @@
|
||||
=============================================================
|
||||
Bare metal Standard with Controller Storage Installation R7.0
|
||||
=============================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_controller_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
controller_storage_hardware
|
||||
controller_storage_install_kubernetes
|
@ -1,67 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R7.0 bare metal Standard with Controller Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum Requirement | Controller Node | Worker Node |
|
||||
+=========================+=============================+=============================+
|
||||
| Number of servers | 2 | 2-10 |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
|
||||
| | 8 cores/socket |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum memory | 64 GB | 32 GB |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see | 120 GB (Minimum 10k RPM) |
|
||||
| | :ref:`nvme_config`) | |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Additional disks | - 1 or more 500 GB (min. | - For OpenStack, recommend |
|
||||
| | 10K RPM) for Ceph OSD | 1 or more 500 GB (min. |
|
||||
| | - Recommended, but not | 10K RPM) for VM local |
|
||||
| | required: 1 or more SSDs | ephemeral storage |
|
||||
| | or NVMe drives for Ceph | |
|
||||
| | journals (min. 1024 MiB | |
|
||||
| | per OSD journal) | |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| Minimum network ports | - Mgmt/Cluster: 1x10GE | - Mgmt/Cluster: 1x10GE |
|
||||
| | - OAM: 1x1GE | - Data: 1 or more x 10GE |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+-------------------------+-----------------------------+-----------------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the following diagram.
|
||||
|
||||
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-controller-storage.png
|
||||
:scale: 50%
|
||||
:alt: Controller storage deployment configuration
|
||||
|
||||
*Controller storage deployment configuration*
|
@ -1,22 +0,0 @@
|
||||
|
||||
============================================================
|
||||
Bare metal Standard with Dedicated Storage Installation R7.0
|
||||
============================================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_dedicated_storage.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
dedicated_storage_hardware
|
||||
dedicated_storage_install_kubernetes
|
@ -1,72 +0,0 @@
|
||||
=====================
|
||||
Hardware Requirements
|
||||
=====================
|
||||
|
||||
This section describes the hardware requirements and server preparation for a
|
||||
**StarlingX R7.0 bare metal Standard with Dedicated Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
||||
|
||||
The recommended minimum hardware requirements for bare metal servers for various
|
||||
host types are:
|
||||
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum Requirement | Controller Node | Storage Node | Worker Node |
|
||||
+=====================+===========================+=======================+=======================+
|
||||
| Number of servers | 2 | 2-9 | 2-100 |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket |
|
||||
| class | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum memory | 64 GB | 64 GB | 32 GB |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Primary disk | 500 GB SSD or NVMe (see | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|
||||
| | :ref:`nvme_config`) | | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Additional disks | None | - 1 or more 500 GB | - For OpenStack, |
|
||||
| | | (min. 10K RPM) for | recommend 1 or more |
|
||||
| | | Ceph OSD | 500 GB (min. 10K |
|
||||
| | | - Recommended, but | RPM) for VM |
|
||||
| | | not required: 1 or | ephemeral storage |
|
||||
| | | more SSDs or NVMe | |
|
||||
| | | drives for Ceph | |
|
||||
| | | journals (min. 1024 | |
|
||||
| | | MiB per OSD | |
|
||||
| | | journal) | |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: |
|
||||
| ports | 1x10GE | 1x10GE | 1x10GE |
|
||||
| | - OAM: 1x1GE | | - Data: 1 or more |
|
||||
| | | | x 10GE |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
| BIOS settings | - Hyper-Threading technology enabled |
|
||||
| | - Virtualization technology enabled |
|
||||
| | - VT for directed I/O enabled |
|
||||
| | - CPU power and performance policy set to performance |
|
||||
| | - CPU C state control disabled |
|
||||
| | - Plug & play BMC detection disabled |
|
||||
+---------------------+---------------------------+-----------------------+-----------------------+
|
||||
|
||||
--------------------------
|
||||
Prepare bare metal servers
|
||||
--------------------------
|
||||
|
||||
.. include:: prep_servers.txt
|
||||
|
||||
* Cabled for networking
|
||||
|
||||
* Far-end switch ports should be properly configured to realize the networking
|
||||
shown in the following diagram.
|
||||
|
||||
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-deployment-options-dedicated-storage.png
|
||||
:scale: 50%
|
||||
:alt: Standard with dedicated storage
|
||||
|
||||
*Standard with dedicated storage*
|
@ -1,12 +0,0 @@
|
||||
Prior to starting the StarlingX installation, the bare metal servers must be in
|
||||
the following condition:
|
||||
|
||||
* Physically installed
|
||||
|
||||
* Cabled for power
|
||||
|
||||
* All disks wiped
|
||||
|
||||
* Configured to boot from either the network or USB storage (if present)
|
||||
|
||||
* Powered off
|
Before Width: | Height: | Size: 13 KiB |
Before Width: | Height: | Size: 23 KiB |
Before Width: | Height: | Size: 358 KiB |
Before Width: | Height: | Size: 437 KiB |
Before Width: | Height: | Size: 96 KiB |
Before Width: | Height: | Size: 109 KiB |
Before Width: | Height: | Size: 313 KiB |
Before Width: | Height: | Size: 100 KiB |
Before Width: | Height: | Size: 101 KiB |
Before Width: | Height: | Size: 127 KiB |
Before Width: | Height: | Size: 70 KiB |
@ -1,357 +0,0 @@
|
||||
==========================
|
||||
Access StarlingX OpenStack
|
||||
==========================
|
||||
|
||||
Use local/remote CLIs, GUIs and/or REST APIs to access and manage |prod|
|
||||
OpenStack and hosted virtualized applications.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
---------
|
||||
Local CLI
|
||||
---------
|
||||
|
||||
Access OpenStack using the local CLI with one of the following methods.
|
||||
|
||||
**Method 1**
|
||||
|
||||
You can use this method on either controller, active or standby.
|
||||
|
||||
#. Log in to the desired controller via the console or SSH using the
|
||||
   sysadmin/<sysadmin-password> credentials.
|
||||
|
||||
**Do not** use ``source /etc/platform/openrc``.
|
||||
|
||||
#. Set the CLI context to the |prod-os| Cloud Application and set up
|
||||
OpenStack admin credentials:
|
||||
|
||||
::
|
||||
|
||||
sudo su -
|
||||
mkdir -p /etc/openstack
|
||||
tee /etc/openstack/clouds.yaml << EOF
|
||||
clouds:
|
||||
openstack_helm:
|
||||
region_name: RegionOne
|
||||
identity_api_version: 3
|
||||
endpoint_type: internalURL
|
||||
auth:
|
||||
username: 'admin'
|
||||
password: '<sysadmin-password>'
|
||||
project_name: 'admin'
|
||||
project_domain_name: 'default'
|
||||
user_domain_name: 'default'
|
||||
auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
|
||||
EOF
|
||||
exit
|
||||
|
||||
export OS_CLOUD=openstack_helm
|
||||
|
||||
**Method 2**
|
||||
|
||||
Use this method to access |prod| Kubernetes commands and |prod-os|
|
||||
commands in the same shell. You can only use this method on the active
|
||||
controller.
|
||||
|
||||
#. Log in to the active controller via the console or SSH using the
|
||||
   sysadmin/<sysadmin-password> credentials.
|
||||
|
||||
#. Set the CLI context to the |prod-os| Cloud Application and set up
|
||||
OpenStack admin credentials:
|
||||
|
||||
::
|
||||
|
||||
sed '/export OS_AUTH_URL/c\export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3' /etc/platform/openrc > ~/openrc.os
|
||||
source ./openrc.os
|
||||
|
||||
.. note::
|
||||
|
||||
To switch between |prod| Kubernetes/Platform credentials and |prod-os|
|
||||
credentials, use ``source /etc/platform/openrc`` or ``source
|
||||
./openrc.os`` respectively.
|
||||
|
||||
|
||||
**********************
|
||||
OpenStack CLI commands
|
||||
**********************
|
||||
|
||||
Access OpenStack CLI commands for the |prod-os| cloud application
|
||||
using the :command:`openstack` command. For example:
|
||||
|
||||
::
|
||||
|
||||
controller-0:~$ export OS_CLOUD=openstack_helm
|
||||
controller-0:~$ openstack flavor list
|
||||
controller-0:~$ openstack image list
|
||||
|
||||
.. note::
|
||||
|
||||
If you are using Method 2 described above, use these commands:
|
||||
|
||||
::
|
||||
|
||||
controller-0:~$ source ./openrc.os
|
||||
controller-0:~$ openstack flavor list
|
||||
controller-0:~$ openstack image list
|
||||
|
||||
The figures below show a typical successful run.
|
||||
|
||||
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-access-openstack-flavorlist.png
|
||||
:alt: starlingx-access-openstack-flavorlist
|
||||
:scale: 50%
|
||||
|
||||
Figure 1: |prod-os| Flavorlist
|
||||
|
||||
|
||||
.. figure:: /deploy_install_guides/r7_release/figures/starlingx-access-openstack-command.png
|
||||
:alt: starlingx-access-openstack-command
|
||||
:scale: 50%
|
||||
|
||||
Figure 2: |prod-os| Commands
|
||||
|
||||
------------------------------
|
||||
Configure Helm endpoint domain
|
||||
------------------------------
|
||||
|
||||
Containerized OpenStack services in |prod| are deployed behind an ingress
|
||||
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
|
||||
The ingress controller routes packets to the specific OpenStack service, such as
|
||||
the Cinder service, or the Neutron service, by parsing the |FQDN| in the packet.
|
||||
For example, ``neutron.openstack.svc.cluster.local`` is for the Neutron service,
|
||||
``cinder‐api.openstack.svc.cluster.local`` is for the Cinder service.
|
||||
|
||||
This routing requires that access to OpenStack REST APIs must be via a |FQDN|
|
||||
or by using a remote OpenStack CLI that uses the REST APIs. You cannot access
|
||||
OpenStack REST APIs using an IP address.
|
||||
|
||||
FQDNs (such as ``cinder-api.openstack.svc.cluster.local``) must be in a DNS
|
||||
server that is publicly accessible.
|
||||
|
||||
.. note::
|
||||
|
||||
   There is a way to wild-card a set of FQDNs to the same IP address in a DNS
|
||||
server configuration so that you don’t need to update the DNS server every
|
||||
time an OpenStack service is added. Check your particular DNS server for
|
||||
details on how to wild-card a set of FQDNs.
|
||||
|
||||
In a “real” deployment, that is, not a lab scenario, you cannot use the default
|
||||
``openstack.svc.cluster.local`` domain name externally. You must set a unique
|
||||
domain name for your |prod| system. |prod| provides the
|
||||
:command:`system service-parameter-add` command to configure and set the
|
||||
OpenStack domain name:
|
||||
|
||||
::
|
||||
|
||||
system service-parameter-add openstack helm endpoint_domain=<domain_name>
|
||||
|
||||
``<domain_name>`` should be a fully qualified domain name that you own, such that
|
||||
you can configure the DNS Server that owns ``<domain_name>`` with the OpenStack
|
||||
service names underneath the domain.
|
||||
|
||||
For example:
|
||||
|
||||
.. parsed-literal::
|
||||
|
||||
system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
|
||||
system application-apply |prefix|-openstack
|
||||
|
||||
This command updates the Helm charts of all OpenStack services and restarts them.
|
||||
For example, it would change ``cinder-api.openstack.svc.cluster.local`` to
|
||||
``cinder-api.my-starlingx-domain.my-company.com``, and so on for all OpenStack
|
||||
services.
|
||||
|
||||
.. note::
|
||||
|
||||
This command also changes the containerized OpenStack Horizon to listen on
|
||||
``horizon.my-starlingx-domain.my-company.com:80`` instead of the initial
|
||||
   ``<oam-floating-ip>:31000``.
|
||||
|
||||
You must configure ``{ *.my-starlingx-domain.my-company.com --> oam-floating-ip-address }``
|
||||
in the external DNS server that owns ``my-company.com``.
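
For example, with a BIND-style zone file for ``my-company.com``, this
wildcard mapping can be expressed as a single record. This is only a sketch;
the exact syntax depends on your DNS server.

::

   ; Maps every OpenStack service FQDN to the OAM floating IP address
   *.my-starlingx-domain   IN   A   <oam-floating-ip-address>
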
|
||||
|
||||
---------------------------
|
||||
Configure HTTPS Certificate
|
||||
---------------------------
|
||||
|
||||
The HTTPS certificate must be valid for the domain configured for |prod-os|.
|
||||
|
||||
|
||||
#. Enable HTTPS for |prod|, see :ref:`Enable HTTPS Access for StarlingX REST
|
||||
and Web Server Endpoints
|
||||
<enable-https-access-for-starlingx-rest-and-web-server-endpoints>`.
|
||||
|
||||
.. note::
|
||||
|
||||
      IF AND ONLY IF the |prod-os| application is currently APPLIED when you do
|
||||
this, a |prod-os| application (re-)apply is internally triggered and
|
||||
      fails because you have not set up the |prod-os| certificate yet.
|
||||
|
||||
#. Set the |prod-os| domain and configure your external DNS server, see
|
||||
:ref:`Update the Domain Name <update-the-domain-name>`.
|
||||
|
||||
#. Configure the |prod-os| certificate and configure |prod-os| services to use
|
||||
it, see :ref:`Install REST API and Horizon Certificate
|
||||
<install-rest-api-and-horizon-certificate>`.
|
||||
|
||||
#. Open port 443 in |prod| firewall, see :ref:`Modify Firewall Options
|
||||
<security-firewall-options>`.
|
||||
|
||||
----------
|
||||
Remote CLI
|
||||
----------
|
||||
|
||||
Documentation coming soon.
|
||||
|
||||
---
|
||||
GUI
|
||||
---
|
||||
|
||||
Access the |prod| containerized OpenStack Horizon GUI in your browser at the
|
||||
following address:
|
||||
|
||||
::
|
||||
|
||||
http://<oam-floating-ip-address>:31000
|
||||
|
||||
Log in to the Containerized OpenStack Horizon GUI using the admin/<sysadmin-password> credentials.
|
||||
|
||||
---------
|
||||
REST APIs
|
||||
---------
|
||||
|
||||
This section provides an overview of accessing REST APIs with examples of
|
||||
`curl`-based REST API commands.
|
||||
|
||||
****************
|
||||
Public endpoints
|
||||
****************
|
||||
|
||||
Use the `Local CLI`_ to display OpenStack public REST API endpoints. For example:
|
||||
|
||||
::
|
||||
|
||||
openstack endpoint list
|
||||
|
||||
The public endpoints will look like:
|
||||
|
||||
* `\http://keystone.openstack.svc.cluster.local:80/v3`
|
||||
* `\http://nova.openstack.svc.cluster.local:80/v2.1/%(tenant_id)s`
|
||||
* `\http://neutron.openstack.svc.cluster.local:80/`
|
||||
* `etc.`
|
||||
|
||||
If you have set a unique domain name, then the public endpoints will look like:
|
||||
|
||||
* `\http://keystone.my-starlingx-domain.my-company.com:80/v3`
|
||||
* `\http://nova.my-starlingx-domain.my-company.com:80/v2.1/%(tenant_id)s`
|
||||
* `\http://neutron.my-starlingx-domain.my-company.com:80/`
|
||||
* `etc.`
|
||||
|
||||
Documentation for the OpenStack REST APIs is available at
|
||||
`OpenStack API Documentation <https://docs.openstack.org/api-quick-start/index.html>`_.
|
||||
|
||||
***********
|
||||
Get a token
|
||||
***********
|
||||
|
||||
The following command will request the Keystone token:
|
||||
|
||||
::
|
||||
|
||||
curl -i -H "Content-Type: application/json" -d
|
||||
'{ "auth": {
|
||||
"identity": {
|
||||
"methods": ["password"],
|
||||
"password": {
|
||||
"user": {
|
||||
"name": "admin",
|
||||
"domain": { "id": "default" },
|
||||
"password": "St8rlingX*"
|
||||
}
|
||||
}
|
||||
},
|
||||
"scope": {
|
||||
"project": {
|
||||
"name": "admin",
|
||||
"domain": { "id": "default" }
|
||||
}
|
||||
}
|
||||
}
|
||||
}' http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens
|
||||
|
||||
The token will be returned in the "X-Subject-Token" header field of the response:
|
||||
|
||||
::
|
||||
|
||||
HTTP/1.1 201 CREATED
|
||||
Date: Wed, 02 Oct 2019 18:27:38 GMT
|
||||
Content-Type: application/json
|
||||
Content-Length: 8128
|
||||
Connection: keep-alive
|
||||
X-Subject-Token: gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S-RdJhg0fnqWuMGyMUhYUUJSossuUIitrvu2VXYXDNPbnaGzFveOoXxYTPlM6Fgo1aCl6wW85zzuXqT6AsxoCn95OMFhj_HHeYNPTkcyjbuW-HH_rJfhuUXt85iytZ_YAQQUfSXM7N3zAk7Pg
|
||||
Vary: X-Auth-Token
|
||||
x-openstack-request-id: req-d1bbe060-32f0-4cf1-ba1d-7b38c56b79fb
|
||||
|
||||
{"token": {"is_domain": false,
|
||||
|
||||
...
|
||||
|
||||
You can set an environment variable to hold the token value from the response.
|
||||
For example:
|
||||
|
||||
::
|
||||
|
||||
TOKEN=gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S
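
If you prefer to capture the token programmatically instead of copying it by
hand, a sketch like the following works with the request shown earlier. It
assumes the JSON request body has been saved to a hypothetical file named
``token-request.json``.

::

   # Extract the X-Subject-Token header value into the TOKEN variable
   TOKEN=$(curl -si -H "Content-Type: application/json" \
       -d @token-request.json \
       http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens \
       | awk -F': ' '/^X-Subject-Token/ {print $2}' | tr -d '\r')
   echo $TOKEN
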
|
||||
|
||||
*****************
|
||||
List Nova flavors
|
||||
*****************
|
||||
|
||||
The following command will request a list of all Nova flavors:
|
||||
|
||||
::
|
||||
|
||||
curl -i http://nova.openstack.svc.cluster.local:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool
|
||||
|
||||
The list will be returned in the response. For example:
|
||||
|
||||
::
|
||||
|
||||
% Total % Received % Xferd Average Speed Time Time Time Current
|
||||
Dload Upload Total Spent Left Speed
|
||||
100 2529 100 2529 0 0 24187 0 --:--:-- --:--:-- --:--:-- 24317
|
||||
{
|
||||
"flavors": [
|
||||
{
|
||||
"id": "04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
|
||||
"links": [
|
||||
{
|
||||
"href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
|
||||
"rel": "self"
|
||||
},
|
||||
{
|
||||
"href": "http://nova.openstack.svc.cluster.local/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
|
||||
"rel": "bookmark"
|
||||
}
|
||||
],
|
||||
"name": "m1.tiny"
|
||||
},
|
||||
{
|
||||
"id": "14c725b1-1658-48ec-90e6-05048d269e89",
|
||||
"links": [
|
||||
{
|
||||
"href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
|
||||
"rel": "self"
|
||||
},
|
||||
{
|
||||
"href": "http://nova.openstack.svc.cluster.local/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
|
||||
"rel": "bookmark"
|
||||
}
|
||||
],
|
||||
"name": "medium.dpdk"
|
||||
},
|
||||
{
|
||||
|
||||
...
|
||||
|
@ -1,21 +0,0 @@
|
||||
============================================
|
||||
Virtual All-in-one Simplex Installation R7.0
|
||||
============================================
|
||||
|
||||
--------
|
||||
Overview
|
||||
--------
|
||||
|
||||
.. include:: /shared/_includes/desc_aio_simplex.txt
|
||||
|
||||
.. include:: /shared/_includes/ipv6_note.txt
|
||||
|
||||
------------
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
aio_simplex_environ
|
||||
aio_simplex_install_kubernetes
|
@ -1,161 +0,0 @@
|
||||
===================================
|
||||
Configure VirtualBox Network Access
|
||||
===================================
|
||||
|
||||
This guide describes two alternatives for providing external network access
|
||||
to the controller :abbr:`VMs (Virtual Machines)` for VirtualBox:
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
----------------------
|
||||
Install VM as a router
|
||||
----------------------
|
||||
|
||||
|
||||
A router can be used to act as a gateway to allow your other VirtualBox VMs
|
||||
(for example, controllers) access to the external internet. The router needs to
|
||||
be able to forward traffic from the OAM network to the internet.
|
||||
|
||||
In VirtualBox, create a new Linux VM to act as your router. This example uses
|
||||
Ubuntu. For ease of use, we recommend downloading the Ubuntu 18.04 Desktop
|
||||
version or higher.
|
||||
|
||||
**Installation tip**
|
||||
|
||||
Before you install the Ubuntu 18.04 Desktop version in VirtualBox 5.2,
|
||||
configure the VM using Edit Settings as follows:
|
||||
|
||||
#. Go to Display and move the "Video memory" slider all the way to the right.
|
||||
Then tick the "Acceleration" checkbox "Enable 3D Acceleration".
|
||||
#. Go to General/Advanced and set "Shared Clipboard" and "Drag'n Drop" to
|
||||
Bidirectional.
|
||||
#. Go to User Interface/Devices and select "Devices/Insert Guest Additions CD
|
||||
image" from the drop down. Restart your VM.
|
||||
|
||||
The network configuration for this VM must include:
|
||||
|
||||
* NAT interface to allow installation and access to the external internet.
|
||||
* Host-only Adapter connected to the same network as the OAM interfaces on
|
||||
your controllers.
|
||||
|
||||
Once the router VM has been installed, enable forwarding. In Ubuntu, do the
|
||||
following steps:
|
||||
|
||||
::
|
||||
|
||||
# Edit sysctl.conf and uncomment the following line:
|
||||
# net.ipv4.ip_forward=1
|
||||
sudo vim /etc/sysctl.conf
|
||||
# Activate the change
|
||||
sudo sysctl -p
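
If you prefer not to open an editor, the same change can be made
non-interactively. This is a sketch that assumes the stock Ubuntu
``/etc/sysctl.conf`` with the ``net.ipv4.ip_forward`` line commented out:

::

   # Uncomment/set net.ipv4.ip_forward=1 and activate the change
   sudo sed -i 's/^#\?net.ipv4.ip_forward=.*/net.ipv4.ip_forward=1/' /etc/sysctl.conf
   sudo sysctl -p
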
|
||||
|
||||
Then add the gateway IP address to the interface connected to the OAM host-only
|
||||
network:
|
||||
|
||||
::
|
||||
|
||||
# Assuming that enp0s8 is connected to the OAM host only network:
|
||||
cat > /etc/netplan/99_config.yaml << EOF
|
||||
network:
|
||||
version: 2
|
||||
renderer: networkd
|
||||
ethernets:
|
||||
enp0s8:
|
||||
addresses:
|
||||
- 10.10.10.1/24
|
||||
EOF
|
||||
sudo netplan apply
|
||||
|
||||
    # If netplan is not installed on your router, you can use this command instead of the above.
|
||||
ip addr add 10.10.10.1/24 dev enp0s8
|
||||
|
||||
Finally, set up iptables to forward packets from the host-only network to the
|
||||
NAT network:
|
||||
|
||||
::
|
||||
|
||||
# This assumes the NAT is on enp0s3 and the host only network is on enp0s8
|
||||
sudo iptables -t nat -A POSTROUTING --out-interface enp0s3 -j MASQUERADE
|
||||
sudo iptables -A FORWARD --in-interface enp0s8 -j ACCEPT
|
||||
sudo apt-get install iptables-persistent
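
If you add or change rules after installing ``iptables-persistent``, you can
save the current rule set so it is restored on reboot. This assumes the
``netfilter-persistent`` service provided by that package:

::

   # Persist the current iptables rule set across reboots
   sudo netfilter-persistent save
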
|
||||
|
||||
|
||||
-----------------------------
|
||||
Add NAT Network in VirtualBox
|
||||
-----------------------------
|
||||
|
||||
#. Select File -> Preferences menu.
|
||||
#. Choose Network; the ``NAT Networks`` tab should be selected.
|
||||
#. Click the plus icon to add a network; this creates a network named
|
||||
NatNetwork.
|
||||
#. Edit the NatNetwork (gear or screwdriver icon).
|
||||
|
||||
* Network CIDR: 10.10.10.0/24 (to match OAM network specified in
|
||||
ansible bootstrap overrides file)
|
||||
* Disable ``Supports DHCP``
|
||||
* Enable ``Supports IPv6``
|
||||
* Select ``Port Forwarding`` and add any rules you desire. Here are some
|
||||
examples where 10.10.10.2 is the StarlingX OAM Floating IP address and
|
||||
10.10.10.3/.4 are the IP addresses of the two controller units:
|
||||
|
||||
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| Name | Protocol | Host IP | Host Port | Guest IP | Guest Port |
|
||||
+=========================+===========+=========+===========+============+=============+
|
||||
| controller-0-ssh | TCP | | 3022 | 10.10.10.3 | 22 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| controller-1-ssh | TCP | | 3122 | 10.10.10.4 | 22 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| controller-ssh | TCP | | 22 | 10.10.10.2 | 22 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| platform-horizon-http | TCP | | 8080 | 10.10.10.2 | 8080 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| platform-horizon-https | TCP | | 8443 | 10.10.10.2 | 8443 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| openstack-horizon-http | TCP | | 80 | 10.10.10.2 | 80 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
| openstack-horizon-https | TCP | | 443 | 10.10.10.2 | 443 |
|
||||
+-------------------------+-----------+---------+-----------+------------+-------------+
|
||||
|
||||
~~~~~~~~~~~~~
|
||||
Access the VM
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Once your VM is running, use your PC's host address and the forwarded port to
|
||||
access the VM.
|
||||
|
||||
Instead of these commands:
|
||||
|
||||
::
|
||||
|
||||
# ssh to controller-0
|
||||
ssh wrsroot@10.10.10.3
|
||||
# scp file to controller-0
|
||||
scp <filename> wrsroot@10.10.10.3:~
|
||||
|
||||
Enter these commands instead:
|
||||
|
||||
::
|
||||
|
||||
# ssh to controller-0
|
||||
ssh -p 3022 wrsroot@<PC hostname or IP>
|
||||
# scp file to controller-0
|
||||
scp -P 3022 <filename> wrsroot@<PC hostname or IP>:~
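
Optionally, to avoid typing the forwarded port each time, add a host alias to
``~/.ssh/config`` on your PC. The alias name below is only an example:

::

   # Hypothetical ~/.ssh/config entry for controller-0 behind the NAT network
   Host stx-controller-0
       HostName <PC hostname or IP>
       Port 3022
       User wrsroot

With this entry in place, ``ssh stx-controller-0`` and ``scp <filename>
stx-controller-0:~`` reach controller-0 through the forwarded port.
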
|
||||
|
||||
|
||||
To access your VM console from Horizon, you can update the VNC proxy address
|
||||
using service parameters. The worker nodes will require a reboot following
|
||||
this change, therefore it is best to perform this operation before unlocking
|
||||
the worker nodes.
|
||||
|
||||
|
||||
::
|
||||
|
||||
# update vnc proxy setting to use NatNetwork host name
|
||||
system service-parameter-add nova vnc vncproxy_host=<hostname or IP> --personality controller --resource nova::compute::vncproxy_host # aio
|
||||
system service-parameter-add nova vnc vncproxy_host=<hostname or IP> --personality compute --resource nova::compute::vncproxy_host # standard
|
||||
system service-parameter-apply nova
|
||||
|
||||
|
@ -1,72 +0,0 @@
|
||||
The following sections describe system requirements and host setup for a
|
||||
workstation hosting virtual machine(s) where StarlingX will be deployed.
|
||||
|
||||
*********************
|
||||
Hardware requirements
|
||||
*********************
|
||||
|
||||
The host system should have at least:
|
||||
|
||||
* **Processor:** x86_64 (the only supported architecture) with hardware
|
||||
  virtualization extensions enabled in the BIOS
|
||||
|
||||
* **Cores:** 8
|
||||
|
||||
* **Memory:** 32GB RAM
|
||||
|
||||
* **Hard Disk:** 500GB HDD
|
||||
|
||||
* **Network:** One network adapter with active Internet connection
|
||||
|
||||
*********************
|
||||
Software requirements
|
||||
*********************
|
||||
|
||||
The host system should have at least:
|
||||
|
||||
* A workstation computer with Ubuntu 16.04 LTS 64-bit
|
||||
|
||||
All other required packages will be installed by scripts in the StarlingX tools repository.
|
||||
|
||||
**********
|
||||
Host setup
|
||||
**********
|
||||
|
||||
Set up the host with the following steps:
|
||||
|
||||
#. Update OS:
|
||||
|
||||
::
|
||||
|
||||
apt-get update
|
||||
|
||||
#. Clone the StarlingX tools repository:
|
||||
|
||||
::
|
||||
|
||||
apt-get install -y git
|
||||
cd $HOME
|
||||
git clone https://opendev.org/starlingx/tools.git
|
||||
|
||||
#. Install required packages:
|
||||
|
||||
::
|
||||
|
||||
cd $HOME/tools/deployment/libvirt/
|
||||
bash install_packages.sh
|
||||
apt install -y apparmor-profiles
|
||||
apt-get install -y ufw
|
||||
ufw disable
|
||||
ufw status
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
      On Ubuntu 16.04, if the apparmor-profiles modules were installed as shown in
|
||||
      the example above, you must reboot the server to complete the
|
||||
      installation of those modules.
|
||||
|
||||
|
||||
#. Get the StarlingX ISO from the
|
||||
`CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.
|
||||
Alternately, you can use an ISO from a private StarlingX build.
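
   For example, a download from the mirror might look like the following; the
   path is only a placeholder, so browse the mirror to locate the
   ``bootimage.iso`` for the release you want:

   ::

      wget http://mirror.starlingx.cengn.ca/mirror/starlingx/<path-to-release>/bootimage.iso
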
|
@ -59,109 +59,6 @@ Install Software on Controller-0
|
||||
:end-before: incl-install-software-controller-0-aio-end
|
||||
|
||||
|
||||
|
||||
.. .. figure:: /shared/figures/deploy_install_guides/starlingx-deployment-options-duplex.png
|
||||
.. :width: 800
|
||||
|
||||
.. All-in-one Duplex deployment configuration
|
||||
|
||||
.. _installation-prereqs-duplex:
|
||||
|
||||
.. --------------------------
|
||||
.. Installation Prerequisites
|
||||
.. --------------------------
|
||||
|
||||
.. .. include:: /_includes/installation-prereqs.rest
|
||||
.. :start-after: begin-install-prereqs
|
||||
.. :end-before: end-install-prereqs
|
||||
|
||||
.. --------------------------------
|
||||
.. Prepare Servers for Installation
|
||||
.. --------------------------------
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-min-hw-reqs-common-intro
|
||||
.. :end-before: end-min-hw-reqs-common-intro
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-min-hw-reqs-dx
|
||||
.. :end-before: end-min-hw-reqs-dx
|
||||
|
||||
.. The following requirements must be met if :ref:`extending a Duplex
|
||||
.. installation <extend-dx-with-workers>`.
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-worker-hw-reqs
|
||||
.. :end-before: end-worker-hw-reqs
|
||||
|
||||
.. --------------------------------
|
||||
.. Install Software on Controller-0
|
||||
.. --------------------------------
|
||||
|
||||
.. .. include:: /_includes/installing-software-on-controller-0.rest
|
||||
.. :start-after: begin-install-ctl-0
|
||||
.. :end-before: end-install-ctl-0
|
||||
|
||||
.. .. only:: starlingx
|
||||
|
||||
.. .. contents:: |minitoc|
|
||||
.. :local:
|
||||
.. :depth: 1
|
||||
|
||||
.. .. --------
|
||||
.. .. Overview
|
||||
.. .. --------
|
||||
|
||||
.. .. .. include:: /shared/_includes/installation-prereqs.rest
|
||||
.. .. :start-after: begin-install-prereqs-dx
|
||||
.. .. :end-before: end-install-prereqs-dx
|
||||
|
||||
.. ---------------------
|
||||
.. Hardware Requirements
|
||||
.. ---------------------
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-min-hw-reqs-common-intro
|
||||
.. :end-before: end-min-hw-reqs-common-intro
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-min-hw-reqs-dx
|
||||
.. :end-before: end-min-hw-reqs-dx
|
||||
|
||||
.. The following requirements must be met if :ref:`extending a Duplex
|
||||
.. installation <extend-dx-with-workers>`.
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-worker-hw-reqs
|
||||
.. :end-before: end-worker-hw-reqs
|
||||
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: start-prepare-servers-common
|
||||
.. :end-before: end-prepare-servers-common
|
||||
|
||||
.. This section describes the steps to install the StarlingX Kubernetes
|
||||
.. platform on a **StarlingX R7.0 All-in-one Duplex** deployment
|
||||
.. configuration.
|
||||
|
||||
.. .. contents::
|
||||
.. :local:
|
||||
.. :depth: 1
|
||||
|
||||
.. ---------------------
|
||||
.. Create a bootable USB
|
||||
.. ---------------------
|
||||
|
||||
.. Refer to :ref:`Bootable USB <bootable_usb>` for instructions on how
|
||||
.. to create a bootable USB with the StarlingX ISO on your system.
|
||||
|
||||
.. --------------------------------
|
||||
.. Install software on controller-0
|
||||
.. --------------------------------
|
||||
|
||||
.. .. include:: /shared/_includes/inc-install-software-on-controller.rest
|
||||
.. :start-after: incl-install-software-controller-0-aio-start
|
||||
.. :end-before: incl-install-software-controller-0-aio-end
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
@ -26,6 +26,8 @@ Overview
|
||||
|
||||
.. _installation-prereqs:
|
||||
|
||||
.. _aio_simplex_hardware_r7:
|
||||
|
||||
-----------------------------
|
||||
Minimum hardware requirements
|
||||
-----------------------------
|
@ -58,68 +58,6 @@ Install Software on Controller-0
|
||||
:start-after: incl-install-software-controller-0-standard-start
|
||||
:end-before: incl-install-software-controller-0-standard-end
|
||||
|
||||
.. --------------------------------
|
||||
.. Install Software on Controller-0
|
||||
.. --------------------------------
|
||||
|
||||
.. .. include:: /_includes/installing-software-on-controller-0.rest
|
||||
.. :start-after: begin-install-ctl-0
|
||||
.. :end-before: end-install-ctl-0
|
||||
|
||||
|
||||
.. .. only:: starlingx
|
||||
..
|
||||
.. .. --------
|
||||
.. .. Overview
|
||||
.. .. --------
|
||||
..
|
||||
.. .. .. include:: /shared/_includes/installation-prereqs.rest
|
||||
.. .. :start-after: begin-install-prereqs-ded
|
||||
.. .. :end-before: end-install-prereqs-ded
|
||||
..
|
||||
.. ---------------------
|
||||
.. Hardware Requirements
|
||||
.. ---------------------
|
||||
..
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-min-hw-reqs-common-intro
|
||||
.. :end-before: end-min-hw-reqs-common-intro
|
||||
..
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-min-hw-reqs-sx
|
||||
.. :end-before: end-min-hw-reqs-sx
|
||||
..
|
||||
.. The following requirements must be met for worker nodes.
|
||||
..
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-worker-hw-reqs
|
||||
.. :end-before: end-worker-hw-reqs
|
||||
..
|
||||
.. The following requirements must be met for storage nodes.
|
||||
..
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: begin-storage-hw-reqs
|
||||
.. :end-before: end-storage-hw-reqs
|
||||
..
|
||||
.. .. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
|
||||
.. :start-after: start-prepare-servers-common
|
||||
.. :end-before: end-prepare-servers-common
|
||||
..
|
||||
.. -------------------
|
||||
.. Create bootable USB
|
||||
.. -------------------
|
||||
..
|
||||
.. Refer to :ref:`Bootable USB <bootable_usb>` for instructions on how to
|
||||
.. create a bootable USB with the StarlingX ISO on your system.
|
||||
..
|
||||
.. --------------------------------
|
||||
.. Install software on controller-0
|
||||
.. --------------------------------
|
||||
..
|
||||
.. .. include:: /shared/_includes/inc-install-software-on-controller.rest
|
||||
.. :start-after: incl-install-software-controller-0-standard-start
|
||||
.. :end-before: incl-install-software-controller-0-standard-end
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|