Merge "R4 updates to landing page and install guides"

@@ -8,28 +8,28 @@ Each guide provides instruction on a specific StarlingX configuration

.. _latest_release:

-----------------------
Latest release (stable)
-----------------------
------------------------
Supported release (R4.0)
------------------------

StarlingX R3.0 is the latest officially released version of StarlingX.

.. toctree::
   :maxdepth: 1

   r3_release/index

---------------------
Upcoming R4.0 release
---------------------

StarlingX R4.0 is the forthcoming version of StarlingX under development.
StarlingX R4.0 is the most recent supported release of StarlingX.

.. toctree::
   :maxdepth: 1

   r4_release/index

-------------------------
Upcoming release (latest)
-------------------------

StarlingX R5.0 is under development.

.. toctree::
   :maxdepth: 1

   r5_release/index

-----------------
Archived releases

@@ -38,6 +38,7 @@ Archived releases

.. toctree::
   :maxdepth: 1

   r3_release/index
   r2_release/index
   r1_release/index

@@ -53,24 +54,25 @@ Archived releases


.. Making a new release
.. 1. Archive the previous 'latest' release.
   Move the toctree link from the Latest release section into the Archived
.. 1. Archive the previous 'supported' release.
   Move the toctree link from the Supported release section into the Archived
   releases toctree.
.. 2. Make the previous 'upcoming' release the new 'latest.'
   Move the toctree link from the Upcoming release section into the Latest
   release. Update narrative text for the Latest release section to use the
.. 2. Make the previous 'upcoming' release the new 'supported'.
   Move the toctree link from the Upcoming release section into the Supported
   release. Update intro text for the Supported release section to use the
   latest version.
.. 3. Add new 'upcoming' release.
   If the new upcoming release docs arent ready, remove toctree from Upcoming
   section and just leave narrative text. Update text for the upcoming release
   version. Once the new upcoming docs are ready, add them in the toctree here.
.. 3. Add new 'upcoming' release, aka 'Latest' on the version button.
   If new upcoming release docs aren't ready, remove toctree from Upcoming
   section and just leave intro text. Update text for the upcoming
   release version. Once the new upcoming docs are ready, add them in the
   toctree here.

.. Adding new release docs
.. 1. Make sure the most recent release versioned docs are complete for that
   release.
.. 2. Make a copy of the most recent release folder e.g. 'r2_release.' Rename the
   folder for the new release e.g. 'r3_release'.
.. 3. Search and replace all references to previous release number with the new
   release number. For example replace all 'R2.0' with 'R3.0.' Also search and
   replease any links that may have a specific release in the path.
.. 4. Link new version on this page (the index page).
.. Adding new release docs
.. 1. Make sure the most recent release versioned docs are complete for that
   release.
.. 2. Make a copy of the most recent release folder e.g. 'r4_release.' Rename
   the folder for the new release e.g. 'r5_release'.
.. 3. Search and replace all references to previous release number with the new
   release number. For example replace all 'R4.0' with 'R5.0.' Also search
   and replace any links that may have a specific release number in the path.
.. 4. Link new version on this page (the index page).

@@ -8,11 +8,11 @@ StarlingX R1.0 Installation

since the R1.0 release. Due to these changes, the R1.0 installation
instructions may not work as described.

Installation of the the :ref:`latest_release` is recommended.
Installation of the current :ref:`latest_release` is recommended.

This is the installation guide for the StarlingX R1.0 release. If this is not the
installation guide you want to use, see the
:doc:`available installation guides </deploy_install_guides/index>`.
This is the installation guide for the StarlingX R1.0 release. If this is not
the installation guide you want to use, see the :doc:`available installation
guides </deploy_install_guides/index>`.

------------
Introduction

@@ -8,11 +8,11 @@ StarlingX R2.0 Installation

since the R2.0 release. Due to these changes, the R2.0 installation
instructions may not work as described.

Installation of the the :ref:`latest_release` is recommended.
Installation of the current :ref:`latest_release` is recommended.

StarlingX provides a pre-defined set of standard
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment options may
be installed in a virtual environment or on bare metal.
StarlingX provides a pre-defined set of standard :doc:`deployment configurations
</introduction/deploy_options>`. Most deployment options may be installed in a
virtual environment or on bare metal.

-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment

@@ -0,0 +1,420 @@

================================
Ansible Bootstrap Configurations
================================

This section describes Ansible bootstrap configuration options.

.. contents::
   :local:
   :depth: 1


.. _install-time-only-params-r5:

----------------------------
Install-time-only parameters
----------------------------

Some Ansible bootstrap parameters cannot be changed, or are very difficult to
change, after installation is complete.

Review the set of install-time-only parameters before installation and confirm
that your values for these parameters are correct for the desired installation.

.. note::

   If you notice an incorrect install-time-only parameter value *before you
   unlock controller-0 for the first time*, you can re-run the Ansible
   bootstrap playbook with updated override values and the updated values will
   take effect.
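
   For example, if you specified the wrong ``external_oam_floating_address``,
   you can edit your override file and simply run the playbook again. This is
   a sketch; the ``$HOME/localhost.yml`` file name follows the convention used
   in the installation guides:

   ::

       # fix the incorrect value in the bootstrap overrides
       vi $HOME/localhost.yml

       # re-run the bootstrap playbook so the corrected values take effect
       ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml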

****************************
Install-time-only parameters
****************************

**System Properties**

* ``system_mode``
* ``distributed_cloud_role``

**Network Properties**

* ``pxeboot_subnet``
* ``pxeboot_start_address``
* ``pxeboot_end_address``
* ``management_subnet``
* ``management_start_address``
* ``management_end_address``
* ``cluster_host_subnet``
* ``cluster_host_start_address``
* ``cluster_host_end_address``
* ``cluster_pod_subnet``
* ``cluster_pod_start_address``
* ``cluster_pod_end_address``
* ``cluster_service_subnet``
* ``cluster_service_start_address``
* ``cluster_service_end_address``
* ``management_multicast_subnet``
* ``management_multicast_start_address``
* ``management_multicast_end_address``

**Docker Proxies**

* ``docker_http_proxy``
* ``docker_https_proxy``
* ``docker_no_proxy``

**Docker Registry Overrides**

* ``docker_registries``

  * ``k8s.gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``quay.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.elastic.co``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``defaults``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

**Certificates**

* ``k8s_root_ca_cert``
* ``k8s_root_ca_key``

**Kubernetes Parameters**

* ``apiserver_oidc``

----
IPv6
----

If you are using IPv6, provide IPv6 configuration overrides for the Ansible
bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be
updated to IPv6 addressing.

Example IPv6 override values are shown below:

::

    dns_servers:
      - 2001:4860:4860::8888
      - 2001:4860:4860::8844
    pxeboot_subnet: 169.254.202.0/24
    management_subnet: 2001:db8:2::/64
    cluster_host_subnet: 2001:db8:3::/64
    cluster_pod_subnet: 2001:db8:4::/64
    cluster_service_subnet: 2001:db8:4::/112
    external_oam_subnet: 2001:db8:1::/64
    external_oam_gateway_address: 2001:db8::1
    external_oam_floating_address: 2001:db8::2
    external_oam_node_0_address: 2001:db8::3
    external_oam_node_1_address: 2001:db8::4
    management_multicast_subnet: ff08::1:1:0/124

.. note::

   The `external_oam_node_0_address` and `external_oam_node_1_address`
   parameters are not required for the AIO-SX installation.

----------------
Private registry
----------------

To bootstrap StarlingX you must pull container images for multiple system
services. By default these container images are pulled from public registries:
k8s.gcr.io, gcr.io, quay.io, and docker.io.

It may be required (or desired) to copy the container images to a private
registry and pull the images from the private registry (instead of the public
registries) as part of the StarlingX bootstrap. For example, a private registry
would be required if a StarlingX system was deployed in an air-gapped network
environment.

Use the `docker_registries` structure in the bootstrap overrides file to specify
alternate registries for the public registries from which container images are
pulled. These alternate registries are used during the bootstrapping of
controller-0, and on :command:`system application-apply` of application packages.

The `docker_registries` structure is a map of public registries and the
alternate registry values for each public registry. For each public registry,
the key is the fully scoped registry name of that public registry (for example
"k8s.gcr.io") and the value specifies the alternate registry URL and
username/password (if authenticated).

url
   The fully scoped registry name (and optionally namespace/) for the alternate
   registry location from which the images associated with this public registry
   should now be pulled.

   Valid formats for the `url` value are:

   * Domain. For example:

     ::

         example.domain

   * Domain with port. For example:

     ::

         example.domain:5000

   * IPv4 address. For example:

     ::

         1.2.3.4

   * IPv4 address with port. For example:

     ::

         1.2.3.4:5000

   * IPv6 address. For example:

     ::

         FD01::0100

   * IPv6 address with port. For example:

     ::

         [FD01::0100]:5000

username
   The username for logging into the alternate registry, if authenticated.

password
   The password for logging into the alternate registry, if authenticated.


Additional configuration options in the `docker_registries` structure are:

defaults
   A special public registry key which defines common values to be applied to
   all overrideable public registries. If only the `defaults` registry
   is defined, it will apply `url`, `username`, and `password` for all
   registries.

   If values under specific registries are defined, they will override the
   values defined in the defaults registry.

   .. note::

      The `defaults` key was formerly called `unified`. It was renamed
      in StarlingX R3.0 and updated semantics were applied.

      This change affects anyone with a StarlingX installation prior to R3.0
      that specifies alternate Docker registries using the `unified` key.

secure
   Specifies whether the registry supports HTTPS (secure) or HTTP (not secure).
   Applies to all alternate registries. A boolean value. The default value is
   True (secure, HTTPS).

   .. note::

      The ``secure`` parameter was formerly called ``is_secure_registry``. It
      was renamed in StarlingX R3.0.
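
   For example, to use alternate registries that are only reachable over HTTP,
   a minimal sketch (the registry URL below is illustrative, not a default):

   ::

       docker_registries:
         defaults:
           url: my.registry.io:5000
           secure: False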

If an alternate registry is specified to be secure (using HTTPS), but the
certificate used by the registry is not signed by a well-known Certificate
Authority (CA), the :command:`docker pull` of images from this registry will
fail. Use the `ssl_ca_cert` override to specify the public certificate of the
CA that signed the alternate registry’s certificate. This will add the CA as a
trusted CA to the StarlingX system.

ssl_ca_cert
   The `ssl_ca_cert` value is the absolute path of the certificate file. The
   certificate must be in PEM format and the file may contain a single CA
   certificate or multiple CA certificates in a bundle.

The following example will apply `url`, `username`, and `password` to all
registries.

::

    docker_registries:
      defaults:
        url: my.registry.io
        username: myreguser
        password: myregP@ssw0rd

The next example applies `username` and `password` from the defaults registry
to all public registries. `url` is different for each public registry. It
additionally specifies an alternate CA certificate.

::

    docker_registries:
      k8s.gcr.io:
        url: my.k8sregistry.io
      gcr.io:
        url: my.gcrregistry.io
      quay.io:
        url: my.quayregistry.io
      docker.io:
        url: my.dockerregistry.io
      defaults:
        url: my.registry.io
        username: myreguser
        password: myregP@ssw0rd

    ssl_ca_cert: /path/to/ssl_ca_cert_file

------------
Docker proxy
------------

If the StarlingX OAM interface or network is behind an HTTP/HTTPS proxy,
relative to the Docker registries used by StarlingX or applications running on
StarlingX, then Docker within StarlingX must be configured to use these
HTTP/HTTPS proxies.

Use the following configuration overrides to configure your Docker proxy
settings.

docker_http_proxy
   Specify the HTTP proxy URL to use. For example:

   ::

       docker_http_proxy: http://my.proxy.com:1080

docker_https_proxy
   Specify the HTTPS proxy URL to use. For example:

   ::

       docker_https_proxy: https://my.proxy.com:1443

docker_no_proxy
   A no-proxy address list can be provided for registries not on the other side
   of the proxies. This list will be added to the default no-proxy list derived
   from localhost, loopback, management, and OAM floating addresses at run time.
   Each address in the no-proxy list must neither contain a wildcard nor have
   subnet format. For example:

   ::

       docker_no_proxy:
         - 1.2.3.4
         - 5.6.7.8

.. _k8s-root-ca-cert-key-r5:

--------------------------------------
Kubernetes root CA certificate and key
--------------------------------------

By default the Kubernetes Root CA Certificate and Key are auto-generated and
result in the use of self-signed certificates for the Kubernetes API server. In
the case where self-signed certificates are not acceptable, use the bootstrap
override values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the
certificate and key for the Kubernetes root CA.

k8s_root_ca_cert
   Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert`
   value is the absolute path of the certificate file. The certificate must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_key`. The playbook will not proceed if only one value is provided.

k8s_root_ca_key
   Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key`
   value is the absolute path of the key file. The key must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_cert`. The playbook will not proceed if only one value is provided.
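
For example, a minimal override sketch (the file paths are illustrative
assumptions, not defaults):

::

    k8s_root_ca_cert: /home/sysadmin/k8s_root_ca.crt
    k8s_root_ca_key: /home/sysadmin/k8s_root_ca.key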

.. important::

   The default expiry for the generated Kubernetes root CA certificate is 10
   years. Replacing the root CA certificate is an involved process, so a custom
   certificate's expiry should be as long as possible. We recommend ensuring the
   root CA certificate has an expiry of at least 5-10 years.

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans
   Specifies a list of Subject Alternative Name entries that will be added to
   the Kubernetes API server certificate. Each entry in the list must be an IP
   address or domain name. For example:

   ::

       apiserver_cert_sans:
         - hostname.domain
         - 198.51.100.75

StarlingX automatically updates this parameter to include IP records for the OAM
floating IP and both OAM unit IP addresses.

----------------------------------------------------
OpenID Connect authentication for Kubernetes cluster
----------------------------------------------------

The Kubernetes cluster can be configured to use an external OpenID Connect
:abbr:`IDP (identity provider)`, such as Azure Active Directory, Salesforce, or
Google, for Kubernetes API authentication.

By default, OpenID Connect authentication is disabled. To enable OpenID Connect,
use the following configuration values in the Ansible bootstrap overrides file
to specify the IDP for OpenID Connect:

::

    apiserver_oidc:
      client_id:
      issuer_url:
      username_claim:
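
For illustration, a filled-in sketch: the client ID, issuer URL, and claim name
below are hypothetical values for a generic OIDC provider, not StarlingX
defaults:

::

    apiserver_oidc:
      client_id: stx-oidc-client-app
      issuer_url: https://my-oidc-provider.example.com/dex
      username_claim: email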

When the three required fields of the `apiserver_oidc` parameter are defined,
OpenID Connect is considered active. The values will be used to configure the
Kubernetes cluster to use the specified external OpenID Connect IDP for
Kubernetes API authentication.

In addition, you will need to configure the external OpenID Connect IDP and any
required OpenID client application according to the specific IDP's documentation.

If you are not configuring OpenID Connect, all of these values should be absent
from the configuration file.

.. note::

   Default authentication via service account tokens is always supported,
   even when OpenID Connect authentication is configured.

@@ -0,0 +1,7 @@

.. important::

   Some Ansible bootstrap parameters cannot be changed or are very difficult to
   change after installation is complete.

   Review the set of install-time-only parameters before installation and
   confirm that your values for these parameters are correct for the desired
   installation.

   Refer to :ref:`Ansible install-time-only parameters
   <install-time-only-params-r5>` for details.

@@ -0,0 +1,26 @@

==============================================
Bare metal All-in-one Duplex Installation R5.0
==============================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

The bare metal AIO-DX deployment configuration may be extended with up to four
worker nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_hardware
   aio_duplex_install_kubernetes
   aio_duplex_extend

@@ -0,0 +1,192 @@

=================================
Extend Capacity with Worker Nodes
=================================

This section describes the steps to extend capacity with worker nodes on a
**StarlingX R5.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on worker nodes
--------------------------------

#. Power on the worker node servers and force them to network boot with the
   appropriate BIOS boot options for your particular server.

#. As the worker nodes boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   worker node hosts (hostname=None):

   ::

       system host-list
       +----+--------------+-------------+----------------+-------------+--------------+
       | id | hostname     | personality | administrative | operational | availability |
       +----+--------------+-------------+----------------+-------------+--------------+
       | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
       | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
       | 3  | None         | None        | locked         | disabled    | offline      |
       | 4  | None         | None        | locked         | disabled    | offline      |
       +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of each host to 'worker':

   ::

       system host-update 3 personality=worker hostname=worker-0
       system host-update 4 personality=worker hostname=worker-1

   This initiates the install of software on the worker nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. Wait for the install of software on the worker nodes to complete, for the
   worker nodes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

       system host-list
       +----+--------------+-------------+----------------+-------------+--------------+
       | id | hostname     | personality | administrative | operational | availability |
       +----+--------------+-------------+----------------+-------------+--------------+
       | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
       | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
       | 3  | worker-0     | worker      | locked         | disabled    | online       |
       | 4  | worker-1     | worker      | locked         | disabled    | online       |
       +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure worker nodes
----------------------

#. Assign the cluster-host network to the MGMT interface for the worker nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

       for NODE in worker-0 worker-1; do
          system interface-network-assign $NODE mgmt0 cluster-host
       done

#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

         for NODE in worker-0 worker-1; do
            system host-label-assign $NODE sriovdp=enabled
         done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

         for NODE in worker-0 worker-1; do
            system host-memory-modify $NODE 0 -1G 100
            system host-memory-modify $NODE 1 -1G 100
         done

   For both Kubernetes and OpenStack:

   ::

       DATA0IF=<DATA-0-PORT>
       DATA1IF=<DATA-1-PORT>
       PHYSNET0='physnet0'
       PHYSNET1='physnet1'
       SPL=/tmp/tmp-system-port-list
       SPIL=/tmp/tmp-system-host-if-list

       # Configure the datanetworks in sysinv, prior to referencing them
       # in the 'system host-if-modify' command.
       system datanetwork-add ${PHYSNET0} vlan
       system datanetwork-add ${PHYSNET1} vlan

       for NODE in worker-0 worker-1; do
         echo "Configuring interface for: $NODE"
         set -ex
         system host-port-list ${NODE} --nowrap > ${SPL}
         system host-if-list -a ${NODE} --nowrap > ${SPIL}
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
       done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

       for NODE in worker-0 worker-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
       done

#. **For OpenStack only:** Set up the disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

       for NODE in worker-0 worker-1; do
         echo "Configuring Nova local for: $NODE"
         ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${NODE} nova-local
         system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
       done


-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

::

    for NODE in worker-0 worker-1; do
      system host-unlock $NODE
    done

The worker nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

@@ -0,0 +1,58 @@

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 2                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         |   or                                                      |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)|
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|                         |   for VM local ephemeral storage                          |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE                                    |
|                         | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -0,0 +1,480 @@

=================================================
Install StarlingX Kubernetes on Bare Metal AIO-DX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-aio-simplex-start:
   :end-before: incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

       Login: sysadmin
       Password:
       Changing password for sysadmin.
       (current) UNIX Password: sysadmin
       New Password:
       (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

       sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
       sudo ip link set up dev <PORT>
       sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
       ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.
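
      For example, a minimal sketch of this method, using the file paths listed
      above:

      ::

          cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml $HOME/localhost.yml
          vi $HOME/localhost.yml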

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration, as shown in the example below. Use the OAM IP SUBNET and
      IP ADDRESSing applicable to your deployment environment.

      ::

          cd ~
          cat <<EOF > localhost.yml
          system_mode: duplex

          dns_servers:
            - 8.8.8.8
            - 8.8.4.4

          external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
          external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
          external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
          external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
          external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

          admin_username: admin
          admin_password: <admin-password>
          ansible_become_pass: <sysadmin-password>

          # Add these lines to configure Docker to use a proxy server
          # docker_http_proxy: http://my.proxy.com:1080
          # docker_https_proxy: https://my.proxy.com:1443
          # docker_no_proxy:
          #   - 1.2.3.4

          EOF

   Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall, etc. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

       ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

       source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment.

   ::

       OAM_IF=<OAM-PORT>
       MGMT_IF=<MGMT-PORT>
       system host-if-modify controller-0 lo -c none
       IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
       for UUID in $IFNET_UUIDS; do
           system interface-network-remove ${UUID}
       done
       system host-if-modify controller-0 $OAM_IF -c platform
       system interface-network-assign controller-0 $OAM_IF oam
       system host-if-modify controller-0 $MGMT_IF -c platform
       system interface-network-assign controller-0 $MGMT_IF mgmt
       system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   ::

       system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure the Ceph storage backend:

   .. important::

      This step is required only if your application requires
      persistent storage.

      **If you want to install the StarlingX OpenStack application
      (stx-openstack), this step is mandatory.**

   ::

       system storage-backend-add ceph --confirmed

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

         system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

         system host-memory-modify controller-0 0 -1G 100
         system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

       DATA0IF=<DATA-0-PORT>
       DATA1IF=<DATA-1-PORT>
       export NODE=controller-0
       PHYSNET0='physnet0'
       PHYSNET1='physnet1'
       SPL=/tmp/tmp-system-port-list
       SPIL=/tmp/tmp-system-host-if-list
       system host-port-list ${NODE} --nowrap > ${SPL}
       system host-if-list -a ${NODE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
       DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
       DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

       system datanetwork-add ${PHYSNET0} vlan
       system datanetwork-add ${PHYSNET1} vlan

       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the `sdb` disk:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

       echo ">>> Add OSDs to primary tier"
       system host-disk-list controller-0
       system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
       system host-stor-list controller-0

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

          system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.

*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-unlock-controller-0-aio-simplex-start:
   :end-before: incl-unlock-controller-0-aio-simplex-end:

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

       system host-list
       +----+--------------+-------------+----------------+-------------+--------------+
       | id | hostname     | personality | administrative | operational | availability |
       +----+--------------+-------------+----------------+-------------+--------------+
       | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
       | 2  | None         | None        | locked         | disabled    | offline      |
       +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

       system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, for
   controller-1 to reboot, and for controller-1 to show as
   locked/disabled/online in 'system host-list'.

   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

       system host-list
       +----+--------------+-------------+----------------+-------------+--------------+
       | id | hostname     | personality | administrative | operational | availability |
       +----+--------------+-------------+----------------+-------------+--------------+
       | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
       | 2  | controller-1 | controller  | locked         | disabled    | online       |
       +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment:

   (Note that the MGMT interface is partially set up automatically by the
   network install procedure.)

   ::

       OAM_IF=<OAM-PORT>
       MGMT_IF=<MGMT-PORT>
       system host-if-modify controller-1 $OAM_IF -c platform
       system interface-network-assign controller-1 $OAM_IF oam
       system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

         system host-label-assign controller-1 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

         system host-memory-modify controller-1 0 -1G 100
         system host-memory-modify controller-1 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

       DATA0IF=<DATA-0-PORT>
       DATA1IF=<DATA-1-PORT>
       export NODE=controller-1
       PHYSNET0='physnet0'
       PHYSNET1='physnet1'
       SPL=/tmp/tmp-system-port-list
       SPIL=/tmp/tmp-system-host-if-list
       system host-port-list ${NODE} --nowrap > ${SPL}
       system host-if-list -a ${NODE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
       DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
       DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

       system datanetwork-add ${PHYSNET0} vlan
       system datanetwork-add ${PHYSNET1} vlan

       system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

       echo ">>> Add OSDs to primary tier"
       system host-disk-list controller-1
       system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
       system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

       system host-label-assign controller-1 openstack-control-plane=enabled
       system host-label-assign controller-1 openstack-compute-node=enabled
       system host-label-assign controller-1 openvswitch=enabled
       system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

       export NODE=controller-1

       echo ">>> Getting root disk info"
       ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

       echo ">>>> Configuring nova-local"
       NOVA_SIZE=34
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${NODE} nova-local
       system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
       sleep 2

-------------------
Unlock controller-1
-------------------

Unlock controller-1 in order to bring it into service:

::

    system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,21 @@

===============================================
Bare metal All-in-one Simplex Installation R5.0
===============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_hardware
   aio_simplex_install_kubernetes

@@ -0,0 +1,58 @@

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 1                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         |   or                                                      |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD         |
|                         |   journal)                                                |
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K     |
|                         |   RPM) for VM local ephemeral storage                     |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt
@ -0,0 +1,413 @@

=================================================
Install StarlingX Kubernetes on Bare Metal AIO-SX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-aio-simplex-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'All-in-one Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending
      on your terminal access to the console port

#. Wait for the non-interactive software install to complete and the server to
   reboot. This can take 5-10 minutes, depending on the performance of the
   server.

.. incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Login using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the
   password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook.
   The StarlingX boot image will DHCP out all interfaces, so the server may
   already have an IP address and external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for
   Ansible configuration are:

   ``/etc/ansible/hosts``
     The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
     The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
     The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
     The default location where Ansible looks for and imports user
     configuration override files for hosts. For example:
     ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your
      overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.
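
      For example, a minimal sketch of this approach (the paths are the
      defaults listed above; the editor is your choice, ``vi`` is shown only
      as an illustration):

      ::

         cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml $HOME/localhost.yml
         vi $HOME/localhost.yml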

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration, as shown in the example below. Use the OAM IP SUBNET and
      IP ADDRESSing applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete. This can take 5-10
   minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached
   network as "oam". Use the OAM port name that is applicable to your
   deployment environment, for example eth0:

   ::

      OAM_IF=<OAM-PORT>
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
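
   To confirm the assignment, you can list the networks attached to the
   controller-0 interfaces (a verification sketch using a command that appears
   elsewhere in this guide):

   ::

      system interface-network-list controller-0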

#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
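
   You can check the configured NTP servers afterwards; ``system ntp-show`` is
   assumed here as the query command:

   ::

      system ntp-show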

#. Configure the Ceph storage backend.

   .. important::

      This step is required only if your application requires persistent
      storage.

      **If you want to install the StarlingX OpenStack application
      (stx-openstack), this step is mandatory.**

   ::

      system storage-backend-add ceph --confirmed
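
   To confirm that the backend was added and to track its state (it should
   eventually report ``configured``), you can query the storage backends:

   ::

      system storage-backend-list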

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV
      network attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        system host-memory-modify controller-0 0 -1G 100
        system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export NODE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      # Dump the port and interface inventories for this node
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      # Resolve the PCI address, port UUID, port name, and interface UUID
      # of each data port
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      # Create the data networks before referencing them below
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      # Configure the interfaces as data class and attach each one to its
      # data network
      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
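
   As a quick sanity check (a sketch using commands already shown in this
   guide), you can list the data networks and re-inspect the interfaces of
   controller-0:

   ::

      system datanetwork-list
      system host-if-list -a controller-0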

#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the `sdb` disk:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List the Docker proxy parameters:

      ::

         system service-parameter-list platform docker
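
      If a proxy is needed and was not set at bootstrap, it can be added as a
      service parameter. A hedged sketch (the parameter names mirror the
      ``docker_http_proxy``/``docker_https_proxy`` bootstrap overrides; verify
      the exact names and any apply step in the Docker proxy guide referenced
      below):

      ::

         system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
         system service-parameter-add platform docker https_proxy=https://my.proxy.com:1443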

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for details
      about Docker proxy settings.

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled
      system host-label-assign controller-0 openstack-compute-node=enabled
      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK (OVS with the Data Plane
   Development Kit, which is supported only on bare metal hardware) should be
   used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch
     function.

   To deploy the default containerized OVS:

   ::

      system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses the
   containerized OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK, run the following commands:

   ::

      system modify --vswitch_type ovs-dpdk
      system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and
   2 vSwitch cores for compute-labeled worker nodes.

   When using OVS-DPDK, configure the vSwitch memory per NUMA node with the
   following command:

   ::

      system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -f vswitch -1G 1 worker-0 0

   VMs created in an OVS-DPDK environment must be configured to use huge pages
   to enable networking, and must use a flavor with the property
   ``hw:mem_page_size=large``.
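
   For example, a sketch of setting this property on an existing flavor with
   the standard OpenStack client (the flavor name is illustrative):

   ::

      openstack flavor set m1.small --property hw:mem_page_size=large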

   Configure the huge pages for VMs in an OVS-DPDK environment with the
   following command:

   ::

      system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -1G 10 worker-0 0

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all compute-labeled worker nodes (and/or AIO
      controllers) to apply the change.

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      export NODE=controller-0

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      # Extract the partition UUID from the tabular command output
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2
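
   To verify the resulting volume group (``system host-lvg-list`` is assumed
   here as the query command):

   ::

      system host-lvg-list controller-0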

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-aio-simplex-start:

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-controller-0-aio-simplex-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,22 @@

=============================================================
Bare metal Standard with Controller Storage Installation R5.0
=============================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   controller_storage_hardware
   controller_storage_install_kubernetes

@ -0,0 +1,56 @@

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for
various host types are:

+-------------------------+-----------------------------+-----------------------------+
| Minimum Requirement     | Controller Node             | Worker Node                 |
+=========================+=============================+=============================+
| Number of servers       | 2                           | 2-10                        |
+-------------------------+-----------------------------+-----------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
+-------------------------+-----------------------------+-----------------------------+
| Minimum memory          | 64 GB                       | 32 GB                       |
+-------------------------+-----------------------------+-----------------------------+
| Primary disk            | 500 GB SSD or NVMe (see     | 120 GB (min. 10k RPM)       |
|                         | :doc:`../../nvme_config`)   |                             |
+-------------------------+-----------------------------+-----------------------------+
| Additional disks        | - 1 or more 500 GB (min.    | - For OpenStack, recommend  |
|                         |   10K RPM) for Ceph OSD     |   1 or more 500 GB (min.    |
|                         | - Recommended, but not      |   10K RPM) for VM local     |
|                         |   required: 1 or more SSDs  |   ephemeral storage         |
|                         |   or NVMe drives for Ceph   |                             |
|                         |   journals (min. 1024 MiB   |                             |
|                         |   per OSD journal)          |                             |
+-------------------------+-----------------------------+-----------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE      | - Mgmt/Cluster: 1x10GE      |
|                         | - OAM: 1x1GE                | - Data: 1 or more x 10GE    |
+-------------------------+-----------------------------+-----------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------+-----------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@ -0,0 +1,655 @@

===========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Controller Storage
===========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-------------------
Create bootable USB
-------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-standard-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'Standard Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending
      on your terminal access to the console port

#. Wait for the non-interactive software install to complete and the server to
   reboot. This can take 5-10 minutes, depending on the performance of the
   server.

.. incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-sys-controller-0-standard-start:

#. Login using the username / password of "sysadmin" / "sysadmin".

   When logging in for the first time, you will be forced to change the
   password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook.
   The StarlingX boot image will DHCP out all interfaces, so the server may
   already have an IP address and external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for
   Ansible configuration are:

   ``/etc/ansible/hosts``
     The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
     The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
     The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
     The default location where Ansible looks for and imports user
     configuration override files for hosts. For example:
     ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your
      overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration, as shown in the example below. Use the OAM IP SUBNET and
      IP ADDRESSing applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete. This can take 5-10
   minutes, depending on the performance of the host machine.

.. incl-bootstrap-sys-controller-0-standard-end:


----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-storage-start:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment.

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      # Move the platform networks off the loopback interface used at bootstrap
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      # Attach the OAM, MGMT, and cluster-host networks to the physical ports
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host
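
   To confirm the assignments, you can list the networks attached to the
   controller-0 interfaces (the same query command used at the top of the
   block above):

   ::

      system interface-network-list controller-0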

#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure the Ceph storage backend.

   .. important::

      This step is required only if your application requires persistent
      storage.

      **If you want to install the StarlingX OpenStack application
      (stx-openstack), this step is mandatory.**

   ::

      system storage-backend-add ceph --confirmed

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List the Docker proxy parameters:

      ::

         system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for details
      about Docker proxy settings.

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK (OVS with the Data Plane
   Development Kit, which is supported only on bare metal hardware) should be
   used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch
     function.

   To deploy the default containerized OVS:

   ::

      system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses the
   containerized OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK, run the following commands:

   ::

      system modify --vswitch_type ovs-dpdk
      system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and
   2 vSwitch cores for compute-labeled worker nodes.

   When using OVS-DPDK, configure the vSwitch memory per NUMA node with the
   following command:

   ::

      system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -f vswitch -1G 1 worker-0 0

   VMs created in an OVS-DPDK environment must be configured to use huge pages
   to enable networking, and must use a flavor with the property
   ``hw:mem_page_size=large``.

   Configure the huge pages for VMs in an OVS-DPDK environment with the
   following command:

   ::

      system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -1G 10 worker-0 0

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all compute-labeled worker nodes (and/or AIO
      controllers) to apply the change.

.. incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

-------------------------------------------------
Install software on controller-1 and worker nodes
-------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the software install on controller-1. This can take 5-10
   minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the worker nodes.
   Set the personality to 'worker' and assign a unique hostname for each.

   For example, power on worker-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

      system host-update 3 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

      system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, worker-0, and worker-1
   to complete, for all servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | worker-0     | worker      | locked         | disabled    | online       |
      | 4  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-start:

Configure the OAM and MGMT interfaces of controller-1 and specify the attached
networks. Use the OAM and MGMT port names, for example eth0, that are
applicable to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)

::

   OAM_IF=<OAM-PORT>
   MGMT_IF=<MGMT-PORT>
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 $MGMT_IF cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the stx-openstack manifest and helm-charts later.

::

   system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-start:

Unlock controller-1 in order to bring it into service:

::

   system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-controller-1-end:

----------------------
Configure worker nodes
----------------------

#. Add the third Ceph monitor to a worker node:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

      system ceph-mon-add worker-0

#. Wait for the worker node monitor to complete configuration:

   ::

      system ceph-mon-list
      +--------------------------------------+-------+--------------+------------+------+
      | uuid                                 | ceph_ | hostname     | state      | task |
      |                                      | mon_g |              |            |      |
      |                                      | ib    |              |            |      |
      +--------------------------------------+-------+--------------+------------+------+
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
      +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the worker nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done
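
   To confirm the assignments (a verification sketch using a command that
   appears earlier in this guide):

   ::

      for NODE in worker-0 worker-1; do
         system interface-network-list $NODE
      done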

#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV
      network attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for NODE in worker-0 worker-1; do
           system host-label-assign ${NODE} sriovdp=enabled
        done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        for NODE in worker-0 worker-1; do
           system host-memory-modify ${NODE} 0 -1G 100
           system host-memory-modify ${NODE} 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the data networks in sysinv before referencing them
      # in the 'system host-if-modify' commands below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for NODE in worker-0 worker-1; do
        echo "Configuring interface for: $NODE"
        set -ex
        system host-port-list ${NODE} --nowrap > ${SPL}
        system host-if-list -a ${NODE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        # Extract the partition UUID from the tabular command output
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${NODE} nova-local
        system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done

-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

::

   for NODE in worker-0 worker-1; do
      system host-unlock $NODE
   done

The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------------------------
Add Ceph OSDs to controllers
----------------------------

#. Add OSDs to controller-0. The following example adds OSDs to the `sdb`
   disk:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      HOST=controller-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # Wait for the OSD to leave the 'configuring' state
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to controller-1. The following example adds OSDs to the `sdb`
   disk:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      HOST=controller-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # Wait for the OSD to leave the 'configuring' state
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,22 @@

============================================================
Bare metal Standard with Dedicated Storage Installation R5.0
============================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   dedicated_storage_hardware
   dedicated_storage_install_kubernetes

@ -0,0 +1,61 @@

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for
various host types are:

+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum Requirement | Controller Node           | Storage Node          | Worker Node           |
+=====================+===========================+=======================+=======================+
| Number of servers   | 2                         | 2-9                   | 2-100                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum processor   | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket         |
| class               |                                                                            |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum memory      | 64 GB                     | 64 GB                 | 32 GB                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Primary disk        | 500 GB SSD or NVMe (see   | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|                     | :doc:`../../nvme_config`) |                       |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Additional disks    | None                      | - 1 or more 500 GB    | - For OpenStack,      |
|                     |                           |   (min. 10K RPM) for  |   recommend 1 or more |
|                     |                           |   Ceph OSD            |   500 GB (min. 10K    |
|                     |                           | - Recommended, but    |   RPM) for VM         |
|                     |                           |   not required: 1 or  |   ephemeral storage   |
|                     |                           |   more SSDs or NVMe   |                       |
|                     |                           |   drives for Ceph     |                       |
|                     |                           |   journals (min. 1024 |                       |
|                     |                           |   MiB per OSD         |                       |
|                     |                           |   journal)            |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum network     | - Mgmt/Cluster:           | - Mgmt/Cluster:       | - Mgmt/Cluster:       |
| ports               |   1x10GE                  |   1x10GE              |   1x10GE              |
|                     | - OAM: 1x1GE              |                       | - Data: 1 or more     |
|                     |                           |                       |   x 10GE              |
+---------------------+---------------------------+-----------------------+-----------------------+
| BIOS settings       | - Hyper-Threading technology enabled                                       |
|                     | - Virtualization technology enabled                                        |
|                     | - VT for directed I/O enabled                                              |
|                     | - CPU power and performance policy set to performance                      |
|                     | - CPU C state control disabled                                             |
|                     | - Plug & play BMC detection disabled                                       |
+---------------------+---------------------------+-----------------------+-----------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@ -0,0 +1,367 @@
|
||||
==========================================================================
|
||||
Install StarlingX Kubernetes on Bare Metal Standard with Dedicated Storage
|
||||
==========================================================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R5.0 bare metal Standard with Dedicated Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
-------------------
|
||||
Create bootable USB
|
||||
-------------------
|
||||
|
||||
Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
|
||||
create a bootable USB with the StarlingX ISO on your system.
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-install-software-controller-0-standard-start:
|
||||
:end-before: incl-install-software-controller-0-standard-end:
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-bootstrap-sys-controller-0-standard-start:
|
||||
:end-before: incl-bootstrap-sys-controller-0-standard-end:
|
||||
|
||||
----------------------
|
||||
Configure controller-0
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-0-storage-start:
|
||||
:end-before: incl-config-controller-0-storage-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-0
|
||||
-------------------
|
||||
|
||||
.. important::
|
||||
|
||||
Make sure the Ceph storage backend is configured. If it is
|
||||
not configured, you will not be able to configure storage
|
||||
nodes.
|
||||
|
||||
Unlock controller-0 in order to bring it into service:
|
||||
|
||||
::
|
||||
|
||||
system host-unlock controller-0
|
||||
|
||||
Controller-0 will reboot in order to apply configuration changes and come into
|
||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
-----------------------------------------------------------------
|
||||
Install software on controller-1, storage nodes, and worker nodes
|
||||
-----------------------------------------------------------------
|
||||
|
||||
#. Power on the controller-1 server and force it to network boot with the
|
||||
appropriate BIOS boot options for your particular server.
|
||||
|
||||
#. As controller-1 boots, a message appears on its console instructing you to
|
||||
configure the personality of the node.
|
||||
|
||||
#. On the console of controller-0, list hosts to see newly discovered controller-1
|
||||
host (hostname=None):
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | None | None | locked | disabled | offline |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
#. Using the host id, set the personality of this host to 'controller':
|
||||
|
||||
::
|
||||
|
||||
system host-update 2 personality=controller
|
||||
|
||||
This initiates the install of software on controller-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting for the previous step to complete, power on the storage-0 and
|
||||
storage-1 servers. Set the personality to 'storage' and assign a unique
|
||||
hostname for each.
|
||||
|
||||
For example, power on storage-0 and wait for the new host (hostname=None) to
|
||||
be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 3 personality=storage
|
||||
|
||||
Repeat for storage-1. Power on storage-1 and wait for the new host
|
||||
(hostname=None) to be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 4 personality=storage
|
||||
|
||||
This initiates the software installation on storage-0 and storage-1.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
#. While waiting for the previous step to complete, power on the worker nodes.
|
||||
Set the personality to 'worker' and assign a unique hostname for each.
|
||||
|
||||
For example, power on worker-0 and wait for the new host (hostname=None) to
|
||||
be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 5 personality=worker hostname=worker-0
|
||||
|
||||
Repeat for worker-1. Power on worker-1 and wait for the new host
|
||||
(hostname=None) to be discovered by checking 'system host-list':
|
||||
|
||||
::
|
||||
|
||||
system host-update 6 personality=worker hostname=worker-1
|
||||
|
||||
This initiates the install of software on worker-0 and worker-1.
|
||||
|
||||
#. Wait for the software installation on controller-1, storage-0, storage-1,
|
||||
worker-0, and worker-1 to complete, for all servers to reboot, and for all to
|
||||
show as locked/disabled/online in 'system host-list'.
|
||||
|
||||
::
|
||||
|
||||
system host-list
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| id | hostname | personality | administrative | operational | availability |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
| 1 | controller-0 | controller | unlocked | enabled | available |
|
||||
| 2 | controller-1 | controller | locked | disabled | online |
|
||||
| 3 | storage-0 | storage | locked | disabled | online |
|
||||
| 4 | storage-1 | storage | locked | disabled | online |
|
||||
| 5 | worker-0 | worker | locked | disabled | online |
|
||||
| 6 | worker-1 | worker | locked | disabled | online |
|
||||
+----+--------------+-------------+----------------+-------------+--------------+
|
||||
|
||||
----------------------
|
||||
Configure controller-1
|
||||
----------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-config-controller-1-start:
|
||||
:end-before: incl-config-controller-1-end:
|
||||
|
||||
-------------------
|
||||
Unlock controller-1
|
||||
-------------------
|
||||
|
||||
.. include:: controller_storage_install_kubernetes.rst
|
||||
:start-after: incl-unlock-controller-1-start:
|
||||
:end-before: incl-unlock-controller-1-end:
|
||||
|
||||
-----------------------
|
||||
Configure storage nodes
|
||||
-----------------------
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the storage nodes:
|
||||
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
::
|
||||
|
||||
for NODE in storage-0 storage-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Add OSDs to storage-0. The following example adds OSDs to the `sdb` disk:
|
||||
|
||||
::
|
||||
|
||||
HOST=storage-0
|
||||
DISKS=$(system host-disk-list ${HOST})
|
||||
TIERS=$(system storage-tier-list ceph_cluster)
|
||||
OSDs="/dev/sdb"
|
||||
for OSD in $OSDs; do
|
||||
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
|
||||
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
|
||||
done
|
||||
|
||||
system host-stor-list $HOST
|
||||
|
||||
#. Add OSDs to storage-1. The following example adds OSDs to the `sdb` disk:
|
||||
|
||||
::
|
||||
|
||||
HOST=storage-1
|
||||
DISKS=$(system host-disk-list ${HOST})
|
||||
TIERS=$(system storage-tier-list ceph_cluster)
|
||||
OSDs="/dev/sdb"
|
||||
for OSD in $OSDs; do
|
||||
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
|
||||
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
|
||||
done
|
||||
|
||||
system host-stor-list $HOST
|
||||
|
||||
--------------------
|
||||
Unlock storage nodes
|
||||
--------------------
|
||||
|
||||
Unlock storage nodes in order to bring them into service:
|
||||
|
||||
::
|
||||
|
||||
for STORAGE in storage-0 storage-1; do
|
||||
system host-unlock $STORAGE
|
||||
done
|
||||
|
||||
The storage nodes will reboot in order to apply configuration changes and come
|
||||
into service. This can take 5-10 minutes, depending on the performance of the
|
||||
host machine.
|
||||
|
||||
----------------------
|
||||
Configure worker nodes
|
||||
----------------------
|
||||
|
||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
|
||||
|
||||
(Note that the MGMT interfaces are partially set up automatically by the
|
||||
network install procedure.)
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system interface-network-assign $NODE mgmt0 cluster-host
|
||||
done
|
||||
|
||||
#. Configure data interfaces for worker nodes. Use the DATA port names, for
|
||||
example eth0, that are applicable to your deployment environment.
|
||||
|
||||
.. important::
|
||||
|
||||
This step is **required** for OpenStack.
|
||||
|
||||
This step is optional for Kubernetes: Do this step if using SRIOV network
|
||||
attachments in hosted application containers.
|
||||
|
||||
For Kubernetes SRIOV network attachments:
|
||||
|
||||
* Configure SRIOV device plug in:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-label-assign ${NODE} sriovdp=enabled
|
||||
done
|
||||
|
||||
* If planning on running DPDK in containers on this host, configure the number
|
||||
of 1G Huge pages required on both NUMA nodes:
|
||||
|
||||
::
|
||||
|
||||
for NODE in worker-0 worker-1; do
|
||||
system host-memory-modify ${NODE} 0 -1G 100
|
||||
system host-memory-modify ${NODE} 1 -1G 100
|
||||
done
|
||||
|
||||
   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for NODE in worker-0 worker-1; do
         echo "Configuring interface for: $NODE"
         set -ex
         system host-port-list ${NODE} --nowrap > ${SPL}
         system host-if-list -a ${NODE} --nowrap > ${SPIL}
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in worker-0 worker-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      for NODE in worker-0 worker-1; do
         echo "Configuring Nova local for: $NODE"
         ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${NODE} nova-local
         system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done
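   To sanity-check the result, you can list the volume group and physical
   volume on each worker; a minimal sketch:

   ::

      for NODE in worker-0 worker-1; do
         system host-lvg-list ${NODE}
         system host-pv-list ${NODE}
      done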
-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

::

   for NODE in worker-0 worker-1; do
      system host-unlock $NODE
   done

The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt
@ -0,0 +1,72 @@
====================================
Bare metal Standard with Ironic R5.0
====================================

--------
Overview
--------

Ironic is an OpenStack project that provisions bare metal machines. For
information about the Ironic project, see
`Ironic Documentation <https://docs.openstack.org/ironic>`__.

End user applications can be deployed on bare metal servers (instead of
virtual machines) by configuring OpenStack Ironic and deploying a pool of one
or more bare metal servers.

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-ironic.png
   :scale: 50%
   :alt: Standard with Ironic deployment configuration

   *Figure 1: Standard with Ironic deployment configuration*

Bare metal servers must be connected to:

* IPMI for OpenStack Ironic control
* ironic-provisioning-net tenant network via their untagged physical interface,
  which supports PXE booting

As part of configuring OpenStack Ironic in StarlingX:

* An ironic-provisioning-net tenant network must be identified as the boot
  network for bare metal nodes.
* An additional untagged physical interface must be configured on controller
  nodes and connected to the ironic-provisioning-net tenant network. The
  OpenStack Ironic tftpboot server will PXE boot the bare metal servers over
  this interface.

.. note::

   Bare metal servers are NOT:

   * Running any OpenStack / StarlingX software; they are running end user
     applications (for example, Glance images).
   * To be connected to the internal management network.

------------
Installation
------------

StarlingX currently supports only a bare metal installation of Ironic with a
standard configuration, either:

* :doc:`controller_storage`

* :doc:`dedicated_storage`

This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute-labeled worker node, with the
StarlingX OpenStack application (stx-openstack) applied.

.. toctree::
   :maxdepth: 1

   ironic_hardware
   ironic_install
@ -0,0 +1,51 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R5.0 bare metal Ironic** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

* One or more bare metal hosts to serve as Ironic nodes, which will host the
  tenant instances.

* BMC support on bare metal hosts, and controller node connectivity to the BMC
  IP addresses of the bare metal hosts.

For controller nodes:

* An additional NIC port on both controller nodes for connecting to the
  ironic-provisioning-net.

For worker nodes:

* If using a flat data network for the Ironic provisioning network, an
  additional NIC port on one of the worker nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network
  and simply add the new data network to an existing interface on the worker
  node.

* Additional switch ports / configuration for new ports on controller, worker,
  and Ironic nodes, for connectivity to the Ironic provisioning network.

-----------------------------------
BMC configuration of Ironic node(s)
-----------------------------------

Enable the BMC and assign it a static IP address, username, and password in the
BIOS settings. For example, set:

IP address
   10.10.10.126

username
   root

password
   test123
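Before proceeding, it is worth confirming that the controllers can reach the
BMC; a minimal sketch using ``ipmitool`` (assuming it is available on the
controller, with the example credentials above):

::

   # Query the chassis power state over IPMI-over-LAN
   ipmitool -I lanplus -H 10.10.10.126 -U root -P test123 chassis status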
@ -0,0 +1,392 @@
================================
Install Ironic on StarlingX R5.0
================================

This section describes the steps to install Ironic on a standard configuration,
either:

* **StarlingX R5.0 bare metal Standard with Controller Storage** deployment
  configuration

* **StarlingX R5.0 bare metal Standard with Dedicated Storage** deployment
  configuration

.. contents::
   :local:
   :depth: 1

---------------------
Enable Ironic service
---------------------

This section describes the pre-configuration required to enable the Ironic
service. All the commands in this section are for the StarlingX platform.

First acquire administrative privileges:

::

   source /etc/platform/openrc

********************************
Download Ironic deployment image
********************************

The Ironic service requires a deployment image (kernel and ramdisk) which is
used to clean Ironic nodes and install the end user's image. The cleaning done
by the deployment image wipes the disks and tests connectivity to the Ironic
conductor on the controller nodes via the Ironic Python Agent (IPA).

The latest Ironic deployment image (**Ironic-kernel** and **Ironic-ramdisk**)
can be found here:

* `Ironic-kernel ipa-centos8-master.kernel
  <https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.kernel>`__
* `Ironic-ramdisk ipa-centos8.initramfs
  <https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.initramfs>`__
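For example, you might download both files to the controller with ``wget``
(a sketch using the URLs listed above):

::

   wget https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.kernel
   wget https://tarballs.openstack.org/ironic-python-agent-builder/dib/files/ipa-centos8-master.initramfs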
*******************************************************
Configure Ironic network on deployed standard StarlingX
*******************************************************

#. Add an address pool for the Ironic network. This example uses `ironic-pool`:

   ::

      system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24

#. Add the Ironic platform network. This example uses `ironic-net`:

   ::

      system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false

#. Add the Ironic tenant network. This example uses `ironic-data`:

   .. note::

      The tenant network is not the same as the platform network described in
      the previous step. You can specify any name for the tenant network other
      than 'ironic'. If the name 'ironic' is used, a user override must be
      generated to indicate the tenant network name.

      Refer to section `Generate user Helm overrides`_ for details.

   ::

      system datanetwork-add ironic-data flat

#. Configure the new interfaces (for Ironic) on controller nodes and assign
   them to the platform network. The host must be locked. This example uses the
   platform network `ironic-net` that was named in a previous step.
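   If a controller is still unlocked, lock it before modifying its interfaces;
   a minimal sketch for controller-0 (the active controller cannot lock
   itself, so swact away from it first if necessary; the same applies to
   controller-1):

   ::

      # Only needed if controller-0 is currently the active controller
      system host-swact controller-0

      system host-lock controller-0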
   These new interfaces to the controllers are used to connect to the Ironic
   provisioning network:

   **controller-0**

   ::

      system interface-network-assign controller-0 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
         --ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-0 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-0

   **controller-1**

   ::

      system interface-network-assign controller-1 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
         --ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-1 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute-labeled
   worker nodes and assign it to the Ironic data network. This example uses the
   data network `ironic-data` that was named in a previous step.

   ::

      system interface-datanetwork-assign worker-0 eno1 ironic-data
      system host-if-modify -n ironicdata -c data worker-0 eno1
****************************
Generate user Helm overrides
****************************

Ironic Helm Charts are included in the stx-openstack application. By default,
Ironic is disabled.

To enable Ironic, update the following Ironic Helm Chart attributes:

::

   system helm-override-update stx-openstack ironic openstack \
      --set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \
      --set network.pxe.neutron_subnet_gateway=10.10.20.1 \
      --set network.pxe.neutron_provider_network=ironic-data

:command:`network.pxe.neutron_subnet_alloc_start` sets the first IP that
Neutron's DHCP will allocate for Ironic node provisioning, reserving the
earlier addresses in the subnet for the platform.

If the data network name for Ironic is changed, modify the
:command:`network.pxe.neutron_provider_network` value in the command above
accordingly:

::

   --set network.pxe.neutron_provider_network=ironic-data
*******************************
Apply stx-openstack application
*******************************

Re-apply the stx-openstack application to apply the changes to Ironic:

::

   system helm-chart-attribute-modify stx-openstack ironic openstack \
      --enabled true

   system application-apply stx-openstack
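The re-apply can take some time; you can check its progress with a minimal
query (the status field returns to 'applied' when done):

::

   system application-show stx-openstack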
--------------------
Start an Ironic node
--------------------

All the commands in this section are for the OpenStack application with
administrative privileges.

From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

::

   mkdir -p /etc/openstack

   tee /etc/openstack/clouds.yaml << EOF
   clouds:
     openstack_helm:
       region_name: RegionOne
       identity_api_version: 3
       endpoint_type: internalURL
       auth:
         username: 'admin'
         password: 'Li69nux*'
         project_name: 'admin'
         project_domain_name: 'default'
         user_domain_name: 'default'
         auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
   EOF

   export OS_CLOUD=openstack_helm
********************
Create Glance images
********************

#. Create the **ironic-kernel** image:

   ::

      openstack image create \
         --file ~/coreos_production_pxe-stable-stein.vmlinuz \
         --disk-format aki \
         --container-format aki \
         --public \
         ironic-kernel

#. Create the **ironic-ramdisk** image:

   ::

      openstack image create \
         --file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \
         --disk-format ari \
         --container-format ari \
         --public \
         ironic-ramdisk

#. Create the end user application image (for example, CentOS):

   ::

      openstack image create \
         --file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \
         --disk-format qcow2 \
         --container-format bare \
         --public \
         centos
*********************
Create an Ironic node
*********************

#. Create a node:

   ::

      openstack baremetal node create --driver ipmi --name ironic-test0

#. Add IPMI information:

   ::

      openstack baremetal node set \
         --driver-info ipmi_address=10.10.10.126 \
         --driver-info ipmi_username=root \
         --driver-info ipmi_password=test123 \
         --driver-info ipmi_terminal_port=623 ironic-test0

#. Set the `ironic-kernel` and `ironic-ramdisk` image driver information on
   this bare metal node:

   ::

      openstack baremetal node set \
         --driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \
         --driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \
         ironic-test0

#. Set resource properties on this bare metal node based on actual Ironic node
   capacities:

   ::

      openstack baremetal node set \
         --property cpus=4 \
         --property cpu_arch=x86_64 \
         --property capabilities="boot_option:local" \
         --property memory_mb=65536 \
         --property local_gb=400 \
         --resource-class bm ironic-test0

#. Add the pxe_template location:

   ::

      openstack baremetal node set --driver-info \
         pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \
         ironic-test0

#. Create a port to identify the specific port used by the Ironic node.
   Substitute **a4:bf:01:2b:3b:c8** with the MAC address of the Ironic node
   port that connects to the Ironic network:

   ::

      openstack baremetal port create \
         --node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \
         --pxe-enabled true a4:bf:01:2b:3b:c8

#. Change the node state to `manage`:

   ::

      openstack baremetal node manage ironic-test0

#. Make the node available for deployment:

   ::

      openstack baremetal node provide ironic-test0

#. Wait for the ironic-test0 provision state to become `available`:

   ::

      openstack baremetal node show ironic-test0
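   Cleaning can take some time; instead of re-running the show command by
   hand, you can poll the provision state in a loop, a sketch:

   ::

      # Poll every 30 seconds until the node reports 'available'
      while [ "$(openstack baremetal node show ironic-test0 -f value -c provision_state)" != "available" ]; do
         sleep 30
      done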
---------------------------------
Deploy an instance on Ironic node
---------------------------------

All the commands in this section are for the OpenStack application, but this
time with *tenant* specific privileges.

#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

   ::

      mkdir -p /etc/openstack

      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'joeuser'
            password: 'mypasswrd'
            project_name: 'intel'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF

      export OS_CLOUD=openstack_helm

#. Create a flavor.

   Set the resource CUSTOM_BM corresponding to **--resource-class bm**:

   ::

      openstack flavor create --ram 4096 --vcpus 4 --disk 400 \
         --property resources:CUSTOM_BM=1 \
         --property resources:VCPU=0 \
         --property resources:MEMORY_MB=0 \
         --property resources:DISK_GB=0 \
         --property capabilities:boot_option='local' \
         bm-flavor

   See `Adding scheduling information
   <https://docs.openstack.org/ironic/latest/install/enrollment.html#adding-scheduling-information>`__
   and `Configure Nova flavors
   <https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__
   for more information.

#. Enable the compute service.

   List the compute services:

   ::

      openstack compute service list

   Set compute service properties:

   ::

      openstack compute service set --enable controller-0 nova-compute

#. Create an instance.

   .. note::

      The :command:`keypair create` command is optional. It is not required to
      enable a bare metal instance.

   ::

      openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

   Create two new servers, one bare metal and one virtual:

   ::

      openstack server create --image centos --flavor bm-flavor \
         --network baremetal --key-name mykey bm

      openstack server create --image centos --flavor m1.small \
         --network baremetal --key-name mykey vm
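   To verify that both instances were created and that the bare metal node was
   claimed, a minimal check:

   ::

      # 'bm' should be scheduled on the Ironic node, 'vm' on a regular hypervisor
      openstack server list
      openstack baremetal node list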
@ -0,0 +1,17 @@
Prior to starting the StarlingX installation, the bare metal servers must be in
the following condition:

* Physically installed

* Cabled for power

* Cabled for networking

  * Far-end switch ports should be properly configured to realize the
    networking shown in Figure 1.

* All disks wiped

  * This ensures that servers will boot from either the network or USB storage
    (if present).

* Powered off
@ -0,0 +1,29 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers with each server providing all three cloud functions
(controller, worker, and storage).

An AIO-DX configuration provides the following benefits:

* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple containers or virtual machines on a
  single pair of physical servers
* High availability (HA) services run on the controller function across two
  physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two servers
* Containers or virtual machines scheduled on both worker functions
* Protection against overall server hardware fault, where

  * All controller HA services go active on the remaining healthy server
  * All virtual machines are recovered on the remaining healthy server

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-duplex.png
   :scale: 50%
   :alt: All-in-one Duplex deployment configuration

   *Figure 1: All-in-one Duplex deployment configuration*
@ -0,0 +1,24 @@
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, worker, and storage) on a single server with the
following benefits:

* Requires only a small amount of cloud processing and storage power
* Application consolidation using multiple containers or virtual machines on a
  single physical server
* A storage backend solution using a single-node CEPH deployment

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-simplex.png
   :scale: 50%
   :alt: All-in-one Simplex deployment configuration

   *Figure 1: All-in-one Simplex deployment configuration*

An AIO-SX deployment gives no protection against an overall server hardware
fault. Hardware component protection can be enabled with, for example, a
hardware RAID or 2x port LAG in the deployment.
@ -0,0 +1,28 @@
The Standard with Controller Storage deployment option provides two high
availability (HA) controller nodes and a pool of up to 10 worker nodes.

A Standard with Controller Storage configuration provides the following benefits:

* A pool of up to 10 worker nodes
* High availability (HA) services run across the controller nodes in either
  active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
  controller servers
* Protection against overall controller and worker node failure, where

  * On overall controller node failure, all controller HA services go active on
    the remaining healthy controller node
  * On overall worker node failure, virtual machines and containers are
    recovered on the remaining healthy worker nodes

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
   :scale: 50%
   :alt: Standard with Controller Storage deployment configuration

   *Figure 1: Standard with Controller Storage deployment configuration*
@ -0,0 +1,24 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, worker, and storage nodes.

A Standard with Dedicated Storage configuration provides the following benefits:

* A pool of up to 100 worker nodes
* A 2x node high availability (HA) controller cluster with HA services running
  across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a 2x to 9x node HA CEPH storage cluster
  that supports a replication factor of two or three
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage
  nodes

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
   :scale: 50%
   :alt: Standard with Dedicated Storage deployment configuration

   *Figure 1: Standard with Dedicated Storage deployment configuration*
@ -0,0 +1,313 @@
===================================
Distributed Cloud Installation R5.0
===================================

This section describes how to install and configure the StarlingX distributed
cloud deployment.

.. contents::
   :local:
   :depth: 1

--------
Overview
--------

Distributed cloud configuration supports an edge computing solution by
providing central management and orchestration for a geographically
distributed network of StarlingX Kubernetes edge systems/clusters.

The StarlingX distributed cloud implements the OpenStack Edge Computing
Group's MVP `Edge Reference Architecture
<https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures>`_,
specifically the "Distributed Control Plane" scenario.

The StarlingX distributed cloud deployment is designed to meet the needs of
edge-based data centers with centralized orchestration and independent control
planes, and in which Network Function Cloudification (NFC) worker resources
are localized for maximum responsiveness. The architecture features:

- Centralized orchestration of edge cloud control planes.
- Fully synchronized control planes at edge clouds (that is, Kubernetes cluster
  master and nodes), with greater benefits for local services, such as:

  - Reduced network latency.
  - Operational availability, even if northbound connectivity
    to the central cloud is lost.

The system supports a scalable number of StarlingX Kubernetes edge
systems/clusters, which are centrally managed and synchronized over L3
networks from a central cloud. Each edge system is also highly scalable, from
a single node StarlingX Kubernetes deployment to a full standard cloud
configuration with controller, worker and storage nodes.

------------------------------
Distributed cloud architecture
------------------------------

A distributed cloud system consists of a central cloud, and one or more
subclouds connected to the SystemController region central cloud over L3
networks, as shown in Figure 1.

- **Central cloud**

  The central cloud provides a *RegionOne* region for managing the physical
  platform of the central cloud and the *SystemController* region for managing
  and orchestrating over the subclouds.

  - **RegionOne**

    In the Horizon GUI, RegionOne is the name of the access mode, or region,
    used to manage the nodes in the central cloud.

  - **SystemController**

    In the Horizon GUI, SystemController is the name of the access mode, or
    region, used to manage the subclouds.

    You can use the SystemController to add subclouds, synchronize select
    configuration data across all subclouds and monitor subcloud operations
    and alarms. System software updates for the subclouds are also centrally
    managed and applied from the SystemController.

    DNS, NTP, and other select configuration settings are centrally managed
    at the SystemController and pushed to the subclouds in parallel to
    maintain synchronization across the distributed cloud.

- **Subclouds**

  The subclouds are StarlingX Kubernetes edge systems/clusters used to host
  containerized applications. Any type of StarlingX Kubernetes configuration
  (including simplex, duplex, or standard with or without storage nodes) can
  be used for a subcloud. The two edge clouds shown in Figure 1 are subclouds.

  Alarms raised at the subclouds are sent to the SystemController for
  central reporting.

.. figure:: ../figures/starlingx-deployment-options-distributed-cloud.png
   :scale: 45%
   :alt: Distributed cloud deployment configuration

   *Figure 1: Distributed cloud deployment configuration*

--------------------
Network requirements
--------------------

Subclouds are connected to the SystemController through both the OAM and the
Management interfaces. Because each subcloud is on a separate L3 subnet, the
OAM, Management and PXE boot L2 networks are local to the subclouds. They are
not connected via L2 to the central cloud; they are only connected via L3
routing. The settings required to connect a subcloud to the SystemController
are specified when a subcloud is defined. A gateway router is required to
complete the L3 connections, which will provide IP routing between the
subcloud Management and OAM IP subnet and the SystemController Management and
OAM IP subnet, respectively. The SystemController bootstraps the subclouds via
the OAM network, and manages them via the management network. For more
information, see the `Install a subcloud`_ section later in this guide.

.. note::

   All messaging between SystemControllers and subclouds uses the ``admin``
   REST API service endpoints which, in this distributed cloud environment,
   are all configured for secure HTTPS. Certificates for these HTTPS
   connections are managed internally by StarlingX.

---------------------------------------
Install and provision the central cloud
---------------------------------------

Installing the central cloud is similar to installing a standard
StarlingX Kubernetes system. The central cloud supports either an AIO-duplex
deployment configuration or a standard with dedicated storage nodes deployment
configuration.

To configure controller-0 as a distributed cloud central controller, you must
set certain system parameters during the initial bootstrapping of
controller-0. Set the system parameter *distributed_cloud_role* to
*systemcontroller* in the Ansible bootstrap override file. Also, set the
management network IP address range to exclude IP addresses reserved for
gateway routers providing routing to the subclouds' management subnets.

.. note:: Worker hosts and data networks are not used in the
   central cloud.

Procedure:

- Follow the StarlingX R5.0 installation procedures with the extra step noted below:

  - AIO-duplex:
    `Bare metal All-in-one Duplex Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_duplex.html>`_

  - Standard with dedicated storage nodes:
    `Bare metal Standard with Dedicated Storage Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage.html>`_

- For the step "Bootstrap system on controller-0", add the following
  parameters to the Ansible bootstrap override file.

  .. code:: yaml

     distributed_cloud_role: systemcontroller
     management_start_address: <X.Y.Z.2>
     management_end_address: <X.Y.Z.50>

------------------
Install a subcloud
------------------

At the subcloud location:

1. Physically install and cable all subcloud servers.
2. Physically install the top of rack switch and configure it for the
   required networks.
3. Physically install the gateway routers which will provide IP routing
   between the subcloud OAM and Management subnets and the SystemController
   OAM and management subnets.
4. On the server designated for controller-0, install the StarlingX
   Kubernetes software from USB or a PXE Boot server.

5. Establish an L3 connection to the SystemController by enabling the OAM
   interface (with OAM IP/subnet) on the subcloud controller using the
   ``config_management`` script. This step is for subcloud Ansible bootstrap
   preparation.

   .. note:: This step should **not** use an interface that uses the MGMT
      IP/subnet, because the MGMT IP subnet will get moved to the loopback
      address by the Ansible bootstrap playbook during installation.

   Be prepared to provide the following information:

   - Subcloud OAM interface name (for example, enp0s3).
   - Subcloud OAM interface address, in CIDR format (for example, 10.10.10.12/24).

     .. note:: This must match the *external_oam_floating_address* supplied in
        the subcloud's Ansible bootstrap override file.

   - Subcloud gateway address on the OAM network
     (for example, 10.10.10.1). A default value is shown.
   - System Controller OAM subnet (for example, 10.10.10.0/24).

   .. note:: To exit without completing the script, use ``CTRL+C``. Allow a few
      minutes for the script to finish.

   .. note:: The ``config_management`` script in the snippet below configures
      the OAM interface/address/gateway, even though its prompts refer to the
      "management" interface.

   .. code:: sh

      $ sudo config_management
      Enabling interfaces... DONE
      Waiting 120 seconds for LLDP neighbor discovery... Retrieving neighbor details... DONE
      Available interfaces:
      local interface   remote port
      ---------------   ----------
      enp0s3            08:00:27:c4:6c:7a
      enp0s8            08:00:27:86:7a:13
      enp0s9            unknown

      Enter management interface name: enp0s3
      Enter management address CIDR: 10.10.10.12/24
      Enter management gateway address [10.10.10.1]:
      Enter System Controller subnet: 10.10.10.0/24
      Disabling non-management interfaces... DONE
      Configuring management interface... DONE
      RTNETLINK answers: File exists
      Adding route to System Controller... DONE

At the System Controller:

1. Create a ``bootstrap-values.yml`` override file for the subcloud. For
   example:

   .. code:: yaml

      system_mode: duplex
      name: "subcloud1"
      description: "Ottawa Site"
      location: "YOW"

      management_subnet: 192.168.101.0/24
      management_start_address: 192.168.101.2
      management_end_address: 192.168.101.50
      management_gateway_address: 192.168.101.1

      external_oam_subnet: 10.10.10.0/24
      external_oam_gateway_address: 10.10.10.1
      external_oam_floating_address: 10.10.10.12

      systemcontroller_gateway_address: 192.168.204.101

   .. important:: The `management_*` entries in the override file are required
      for all installation types, including AIO-Simplex.

   .. important:: The `management_subnet` must not overlap with that of any
      other subcloud.

   .. note:: The `systemcontroller_gateway_address` is the address of the
      central cloud's management network gateway.

2. Add the subcloud using the CLI command below:

   .. code:: sh

      dcmanager subcloud add --bootstrap-address <ip_address> \
         --bootstrap-values <config-file>

   Where:

   - *<ip_address>* is the OAM interface address set earlier on the subcloud.
   - *<config_file>* is the Ansible override configuration file,
     ``bootstrap-values.yml``, created earlier in step 1.

   You will be prompted for the Linux password of the subcloud. This command
   will take 5-10 minutes to complete. You can monitor the progress of the
   subcloud bootstrap through logs:

   .. code:: sh

      tail -f /var/log/dcmanager/<subcloud name>_bootstrap_<time stamp>.log

3. Confirm that the subcloud was deployed successfully:

   .. code:: sh

      dcmanager subcloud list

      +----+-----------+------------+--------------+---------------+---------+
      | id | name      | management | availability | deploy status | sync    |
      +----+-----------+------------+--------------+---------------+---------+
      |  1 | subcloud1 | unmanaged  | offline      | complete      | unknown |
      +----+-----------+------------+--------------+---------------+---------+

4. Continue provisioning the subcloud system as required using the StarlingX
   R5.0 installation procedures and starting from the 'Configure controller-0'
   step.

   - For AIO-Simplex:
     `Bare metal All-in-one Simplex Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_simplex.html>`_

   - For AIO-Duplex:
     `Bare metal All-in-one Duplex Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/aio_duplex.html>`_

   - For Standard with controller storage:
     `Bare metal Standard with Controller Storage Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/controller_storage.html>`_

   - For Standard with dedicated storage nodes:
     `Bare metal Standard with Dedicated Storage Installation R5.0 <https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage.html>`_

On the active controller for each subcloud:

#. Add a route from the subcloud to the controller management network to enable
   the subcloud to go online. For each host in the subcloud:

   .. code:: sh

      system host-route-add <host id> <mgmt.interface> \
         <system controller mgmt.subnet> <prefix> <subcloud mgmt.gateway ip>

   For example:

   .. code:: sh

      system host-route-add 1 enp0s8 192.168.204.0 24 192.168.101.1
doc/source/deploy_install_guides/r5_release/index.rst
@ -0,0 +1,71 @@
===========================
StarlingX R5.0 Installation
===========================

StarlingX provides a pre-defined set of standard
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment
options may be installed in a virtual environment or on bare metal.

-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment
-----------------------------------------------------

.. toctree::
   :maxdepth: 1

   virtual/aio_simplex
   virtual/aio_duplex
   virtual/controller_storage
   virtual/dedicated_storage

.. toctree::
   :hidden:

   virtual/config_virtualbox_netwk
   virtual/install_virtualbox

------------------------------------------
Install StarlingX Kubernetes on bare metal
------------------------------------------

.. toctree::
   :maxdepth: 1

   bare_metal/aio_simplex
   bare_metal/aio_duplex
   bare_metal/controller_storage
   bare_metal/dedicated_storage
   bare_metal/ironic

.. toctree::
   :hidden:

   ansible_bootstrap_configs

-------------------------------------------------
Install StarlingX Distributed Cloud on bare metal
-------------------------------------------------

.. toctree::
   :maxdepth: 1

   distributed_cloud/index

-----------------
Access Kubernetes
-----------------

.. toctree::
   :maxdepth: 1

   kubernetes_access

--------------------------
Access StarlingX OpenStack
--------------------------

.. toctree::
   :maxdepth: 1

   openstack/index
doc/source/deploy_install_guides/r5_release/ipv6_note.txt
@ -0,0 +1,14 @@
.. note::

   By default, StarlingX uses IPv4. To use StarlingX with IPv6:

   * The entire infrastructure and cluster configuration must be IPv6, with the
     exception of the PXE boot network.

   * Not all external servers are reachable via IPv6 addresses (for example,
     Docker registries). Depending on your infrastructure, it may be necessary
     to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6.

   * Refer to the :doc:`/../developer_resources/stx_ipv6_deployment` guide
     for details on how to deploy a NAT64/DNS64 gateway to use StarlingX
     with IPv6.
@ -0,0 +1,188 @@
================================
Access StarlingX Kubernetes R5.0
================================

This section describes how to use local/remote CLIs, GUIs, and/or REST APIs to
access and manage StarlingX Kubernetes and hosted containerized applications.

.. contents::
   :local:
   :depth: 1

----------
Local CLIs
----------

To access the StarlingX and Kubernetes commands on controller-0, follow these
steps:

#. Log in to controller-0 via the console or SSH with a
   sysadmin/<sysadmin-password>.

#. Acquire Keystone admin and Kubernetes admin credentials:

   ::

      source /etc/platform/openrc

*********************************************
StarlingX system and host management commands
*********************************************

Access StarlingX system and host management commands using the :command:`system`
command. For example:

::

   system host-list

   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+

Use the :command:`system help` command for the full list of options.

***********************************
StarlingX fault management commands
***********************************

Access StarlingX fault management commands using the :command:`fm` command.
For example:

::

   fm alarm-list

*******************
Kubernetes commands
*******************

Access Kubernetes commands using the :command:`kubectl` command. For example:

::

   kubectl get nodes

   NAME           STATUS   ROLES    AGE     VERSION
   controller-0   Ready    master   5d19h   v1.13.5

See https://kubernetes.io/docs/reference/kubectl/overview/ for details.

-----------
Remote CLIs
-----------

Documentation coming soon.

---
GUI
---

.. note::

   For a virtual installation, run the browser on the host machine.

*********************
StarlingX Horizon GUI
*********************

Access the StarlingX Horizon GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   ``http://<oam-floating-ip-address>:8080``.

   Discover your OAM floating IP address with the :command:`system oam-show`
   command.

#. Log in to Horizon with an admin/<sysadmin-password>.

********************
Kubernetes dashboard
********************

The Kubernetes dashboard is not installed by default.

To install the Kubernetes dashboard, execute the following steps on
controller-0:

#. Use the kubernetes-dashboard helm chart from its helm repository with the
   override values shown below:

   ::

      cat <<EOF > dashboard-values.yaml
      service:
        type: NodePort
        nodePort: 30000

      rbac:
        create: true
        clusterAdminRole: true

      serviceAccount:
        create: true
        name: kubernetes-dashboard
      EOF

      helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

      helm repo update

      helm install dashboard kubernetes-dashboard/kubernetes-dashboard -f dashboard-values.yaml

#. Create an ``admin-user`` service account with ``cluster-admin`` privileges,
   and display its token for logging in to the Kubernetes dashboard:

   ::

      cat <<EOF > admin-login.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
      EOF

      kubectl apply -f admin-login.yaml

      kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Access the Kubernetes dashboard GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   ``https://<oam-floating-ip-address>:30000``.

   Discover your OAM floating IP address with the :command:`system oam-show`
   command.

#. Log in to the Kubernetes dashboard using the ``admin-user`` token.

---------
REST APIs
---------

List the StarlingX platform-related public REST API endpoints using the
following command:

::

   openstack endpoint list | grep public

Use these URLs as the prefix for the URL target of StarlingX platform services'
REST API messages.
@ -0,0 +1,7 @@
Your Kubernetes cluster is now up and running.

For instructions on how to access StarlingX Kubernetes see
:doc:`../kubernetes_access`.

For instructions on how to install and access StarlingX OpenStack see
:doc:`../openstack/index`.
doc/source/deploy_install_guides/r5_release/openstack/access.rst
@ -0,0 +1,329 @@
==========================
Access StarlingX OpenStack
==========================

Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications.

.. contents::
   :local:
   :depth: 1

------------------------------
Configure helm endpoint domain
------------------------------

Containerized OpenStack services in StarlingX are deployed behind an ingress
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
The ingress controller routes packets to the specific OpenStack service, such
as the Cinder service or the Neutron service, by parsing the FQDN in the
packet. For example, `neutron.openstack.svc.cluster.local` is for the Neutron
service, `cinder-api.openstack.svc.cluster.local` is for the Cinder service.

This routing requires that access to OpenStack REST APIs must be via a FQDN
or by using a remote OpenStack CLI that uses the REST APIs. You cannot access
OpenStack REST APIs using an IP address.

FQDNs (such as `cinder-api.openstack.svc.cluster.local`) must be in a DNS
server that is publicly accessible.

.. note::

   There is a way to wildcard a set of FQDNs to the same IP address in a DNS
   server configuration so that you don't need to update the DNS server every
   time an OpenStack service is added. Check your particular DNS server for
   details on how to wildcard a set of FQDNs.

In a "real" deployment, that is, not a lab scenario, you cannot use the default
`openstack.svc.cluster.local` domain name externally. You must set a unique
domain name for your StarlingX system. StarlingX provides the
:command:`system service-parameter-add` command to configure and set the
OpenStack domain name:

::

   system service-parameter-add openstack helm endpoint_domain=<domain_name>

`<domain_name>` should be a fully qualified domain name that you own, such that
you can configure the DNS server that owns `<domain_name>` with the OpenStack
service names underneath the domain.

For example:

::

   system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
   system application-apply stx-openstack

This command updates the helm charts of all OpenStack services and restarts
them. For example, it would change `cinder-api.openstack.svc.cluster.local` to
`cinder-api.my-starlingx-domain.my-company.com`, and so on for all OpenStack
services.

.. note::

   This command also changes the containerized OpenStack Horizon to listen on
   `horizon.my-starlingx-domain.my-company.com:80` instead of the initial
   `<oam-floating-ip>:31000`.

You must configure `{ '*.my-starlingx-domain.my-company.com': --> oam-floating-ip-address }`
in the external DNS server that owns `my-company.com`.
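For example, in a BIND-style zone file a single wildcard record can cover all
of the OpenStack service FQDNs (illustrative only; the exact syntax varies by
DNS server):

::

   ; Resolve every OpenStack service FQDN to the OAM floating IP
   *.my-starlingx-domain.my-company.com.  IN  A  <oam-floating-ip-address>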
---------
|
||||
Local CLI
|
||||
---------
|
||||
|
||||
Access OpenStack using the local CLI with one of the following methods.
|
||||
|
||||
**Method 1**
|
||||
|
||||
You can use this method on either controller, active or standby.
|
||||
|
||||
#. Log in to the desired controller via the console or SSH with a
|
||||
sysadmin/<sysadmin-password>.
|
||||
|
||||
**Do not** use ``source /etc/platform/openrc``.
|
||||
|
||||
#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up
|
||||
OpenStack admin credentials:
|
||||
|
||||
::
|
||||
|
||||
sudo su -
|
||||
mkdir -p /etc/openstack
|
||||
tee /etc/openstack/clouds.yaml << EOF
|
||||
clouds:
|
||||
openstack_helm:
|
||||
region_name: RegionOne
|
||||
identity_api_version: 3
|
||||
endpoint_type: internalURL
|
||||
auth:
|
||||
username: 'admin'
|
||||
password: '<sysadmin-password>'
|
||||
project_name: 'admin'
|
||||
project_domain_name: 'default'
|
||||
user_domain_name: 'default'
|
||||
auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
|
||||
EOF
|
||||
exit
|
||||
|
||||
export OS_CLOUD=openstack_helm
|
||||
|
||||
**Method 2**
|
||||
|
||||
Use this method to access StarlingX Kubernetes commands and StarlingX OpenStack
|
||||
commands in the same shell. You can only use this method on the active
|
||||
controller.
|
||||
|
||||
#. Log in to the active controller via the console or SSH with a
|
||||
sysadmin/<sysadmin-password>.
|
||||
|
||||
#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up
|
||||
OpenStack admin credentials:
|
||||
|
||||
::
|
||||
|
||||
sed '/export OS_AUTH_URL/c\export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3' /etc/platform/openrc > ~/openrc.os
|
||||
source ./openrc.os
|
||||
|
||||
.. note::
|
||||
|
||||
To switch between StarlingX Kubernetes/Platform credentials and StarlingX
|
||||
OpenStack credentials, use ``source /etc/platform/openrc`` or
|
||||
``source ./openrc.os`` respectively.
|
||||
|
||||
|
||||
**********************
|
||||
OpenStack CLI commands
|
||||
**********************
|
||||
|
||||
Access OpenStack CLI commands for the StarlingX OpenStack cloud application
|
||||
using the :command:`openstack` command. For example:
|
||||
|
||||
::
|
||||
|
||||
controller-0:~$ export OS_CLOUD=openstack_helm
|
||||
controller-0:~$ openstack flavor list
|
||||
controller-0:~$ openstack image list
|
||||
|
||||
.. note::
|
||||
|
||||
If you are using Method 2 described above, use these commands:
|
||||
|
||||
::
|
||||
|
||||
controller-0:~$ source ./openrc.os
|
||||
controller-0:~$ openstack flavor list
|
||||
controller-0:~$ openstack image list
|
||||
|
||||
The image below shows a typical successful run.
|
||||
|
||||
.. figure:: ../figures/starlingx-access-openstack-flavorlist.png
|
||||
:alt: starlingx-access-openstack-flavorlist
|
||||
:scale: 50%
|
||||
|
||||
*Figure 1: StarlingX OpenStack Flavorlist*
|
||||
|
||||
|
||||
.. figure:: ../figures/starlingx-access-openstack-command.png
|
||||
:alt: starlingx-access-openstack-command
|
||||
:scale: 50%
|
||||
|
||||
*Figure 2: StarlingX OpenStack Commands*
|
||||
|
||||
----------
|
||||
Remote CLI
|
||||
----------
|
||||
|
||||
Documentation coming soon.
|
||||
|
||||
---
|
||||
GUI
|
||||
---
|
||||
|
||||
Access the StarlingX containerized OpenStack Horizon GUI in your browser at the
|
||||
following address:
|
||||
|
||||
::
|
||||
|
||||
http://<oam-floating-ip-address>:31000
|
||||
|
||||
Log in to the Containerized OpenStack Horizon GUI with an admin/<sysadmin-password>.
|
||||
|
||||
---------
|
||||
REST APIs
|
||||
---------
|
||||
|
||||
This section provides an overview of accessing REST APIs with examples of
|
||||
`curl`-based REST API commands.
|
||||
|
||||
****************
|
||||
Public endpoints
|
||||
****************
|
||||
|
||||
Use the `Local CLI`_ to display OpenStack public REST API endpoints. For example:
|
||||
|
||||
::
|
||||
|
||||
openstack endpoint list

The public endpoints will look like:

* `\http://keystone.openstack.svc.cluster.local:80/v3`
* `\http://nova.openstack.svc.cluster.local:80/v2.1/%(tenant_id)s`
* `\http://neutron.openstack.svc.cluster.local:80/`
* `etc.`

If you have set a unique domain name, then the public endpoints will look like:

* `\http://keystone.my-starlingx-domain.my-company.com:80/v3`
* `\http://nova.my-starlingx-domain.my-company.com:80/v2.1/%(tenant_id)s`
* `\http://neutron.my-starlingx-domain.my-company.com:80/`
* `etc.`

Documentation for the OpenStack REST APIs is available at
`OpenStack API Documentation <https://docs.openstack.org/api-quick-start/index.html>`_.

***********
Get a token
***********

The following command requests a Keystone token:

::

   curl -i -H "Content-Type: application/json" -d \
   '{ "auth": {
        "identity": {
          "methods": ["password"],
          "password": {
            "user": {
              "name": "admin",
              "domain": { "id": "default" },
              "password": "St8rlingX*"
            }
          }
        },
        "scope": {
          "project": {
            "name": "admin",
            "domain": { "id": "default" }
          }
        }
      }
   }' http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens

The token will be returned in the "X-Subject-Token" header field of the response:

::

   HTTP/1.1 201 CREATED
   Date: Wed, 02 Oct 2019 18:27:38 GMT
   Content-Type: application/json
   Content-Length: 8128
   Connection: keep-alive
   X-Subject-Token: gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S-RdJhg0fnqWuMGyMUhYUUJSossuUIitrvu2VXYXDNPbnaGzFveOoXxYTPlM6Fgo1aCl6wW85zzuXqT6AsxoCn95OMFhj_HHeYNPTkcyjbuW-HH_rJfhuUXt85iytZ_YAQQUfSXM7N3zAk7Pg
   Vary: X-Auth-Token
   x-openstack-request-id: req-d1bbe060-32f0-4cf1-ba1d-7b38c56b79fb

   {"token": {"is_domain": false,

   ...

You can set an environment variable to hold the token value from the response.
For example:

::

   TOKEN=gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S
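
Rather than copying the token by hand, you can capture it directly from the
response headers. This is a minimal sketch, assuming the JSON request body
shown above has been saved to a hypothetical file named token-request.json:

::

   # Extract the X-Subject-Token header value into TOKEN
   TOKEN=$(curl -si -H "Content-Type: application/json" \
        -d @token-request.json \
        http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens \
        | awk '/^X-Subject-Token:/ {print $2}' | tr -d '\r')
   echo ${TOKEN}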

*****************
List Nova flavors
*****************

The following command requests a list of all Nova flavors:

::

   curl -i http://nova.openstack.svc.cluster.local:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool

The list will be returned in the response. For example:

::

     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                    Dload  Upload   Total   Spent    Left  Speed
   100  2529  100  2529    0     0  24187      0 --:--:-- --:--:-- --:--:-- 24317
   {
       "flavors": [
           {
               "id": "04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "bookmark"
                   }
               ],
               "name": "m1.tiny"
           },
           {
               "id": "14c725b1-1658-48ec-90e6-05048d269e89",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "bookmark"
                   }
               ],
               "name": "medium.dpdk"
           },
           {

   ...
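
If the ``jq`` utility happens to be available (it is not necessarily installed
by default), the same response can be reduced to just the flavor names. A
minimal sketch:

::

   curl -s http://nova.openstack.svc.cluster.local:80/v2.1/flavors \
        -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" \
        | jq -r '.flavors[].name'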

@ -0,0 +1,16 @@
===================
StarlingX OpenStack
===================

This section describes the steps to install and access StarlingX OpenStack.
Other than the OpenStack-specific configurations required in the underlying
StarlingX Kubernetes infrastructure (described in the installation steps for
StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX
is independent of deployment configuration.

.. toctree::
   :maxdepth: 2

   install
   access
   uninstall_delete
@ -0,0 +1,75 @@
===========================
Install StarlingX OpenStack
===========================

These instructions assume that you have completed the following
OpenStack-specific configuration tasks that are required by the underlying
StarlingX Kubernetes platform:

* All nodes have been labeled appropriately for their OpenStack role(s).
* The vSwitch type has been configured.
* The nova-local volume group has been configured on any host that will run
  the compute function.

--------------------------------------------
Install application manifest and helm-charts
--------------------------------------------

#. Get the latest StarlingX OpenStack application (stx-openstack) manifest and
   helm charts. Use one of the following options:

   * Private StarlingX build. See :ref:`Build-stx-openstack-app` for details.
   * Public download from
     `CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.

   After you select a release, helm charts are located in ``centos/outputs/helm-charts``.

#. Load the stx-openstack application's package into StarlingX. The tarball
   package contains stx-openstack's Airship Armada manifest and stx-openstack's
   set of helm charts. For example:

   ::

      system application-upload stx-openstack-<version>-centos-stable-latest.tgz

   This will:

   * Load the Armada manifest and helm charts.
   * Internally manage helm chart override values for each chart.
   * Automatically generate system helm chart overrides for each chart based on
     the current state of the underlying StarlingX Kubernetes platform and the
     recommended StarlingX configuration of OpenStack services.

#. Apply the stx-openstack application in order to bring StarlingX OpenStack
   into service. If your environment is preconfigured with a proxy server,
   make sure the HTTPS proxy is set before applying stx-openstack.

   ::

      system application-apply stx-openstack

   .. note::

      To set the HTTPS proxy at bootstrap time, refer to
      `Ansible Bootstrap Configurations <https://docs.starlingx.io/deploy_install_guides/r5_release/ansible_bootstrap_configs.html#docker-proxy>`_.

      To set the HTTPS proxy after installation, refer to
      `Docker Proxy Configuration <https://docs.starlingx.io/configuration/docker_proxy_config.html>`_.

#. Wait for the activation of stx-openstack to complete.

   This can take 5-10 minutes, depending on the performance of your host machine.

   Monitor progress with the command:

   ::

      watch -n 5 system application-list
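
   For an unattended wait, for example in a scripted install, a simple polling
   loop can replace the interactive watch. This is only a sketch, and it
   assumes the application status is reported as 'applied' in the
   ``system application-list`` output:

   ::

      # Poll every 30 seconds until stx-openstack reports 'applied'
      until system application-list | grep stx-openstack | grep -q applied; do
          sleep 30
      done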

----------
Next steps
----------

Your OpenStack cloud is now up and running.

See :doc:`access` for details on how to access StarlingX OpenStack.
@ -0,0 +1,33 @@
=============================
Uninstall StarlingX OpenStack
=============================

This section provides additional commands for uninstalling and deleting the
StarlingX OpenStack application.

.. warning::

   Uninstalling the OpenStack application will terminate all OpenStack services.

-----------------------------
Bring down OpenStack services
-----------------------------

Use the system CLI to uninstall the OpenStack application:

::

   system application-remove stx-openstack
   system application-list
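
The remove operation is asynchronous; once it completes, the application is
reported in the 'uploaded' state. As with the install, you can monitor
progress with:

::

   watch -n 5 system application-list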

---------------------------------------
Delete OpenStack application definition
---------------------------------------

Use the system CLI to delete the OpenStack application definition:

::

   system application-delete stx-openstack
   system application-list
@ -0,0 +1,21 @@
===========================================
Virtual All-in-one Duplex Installation R5.0
===========================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_environ
   aio_duplex_install_kubernetes
@ -0,0 +1,57 @@
============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R5.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

.. note::

   The following commands for host and virtual environment setup, and for host
   power-on, use KVM and virsh for virtual machine management. For an
   alternative virtualization environment, see:
   :doc:`Install StarlingX in VirtualBox <install_virtualbox>`.

#. Prepare virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * duplex-controller-0
   * duplex-controller-1

   The following command will start/virtually power on:

   * The 'duplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c duplex -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not
   required, so you can safely ignore the errors and continue.
@ -0,0 +1,467 @@
==============================================
Install StarlingX Kubernetes on Virtual AIO-DX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_duplex_environ`, the controller-0 virtual server
'duplex-controller-0' was started by the :command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus and ensure you are at the
   first installer menu.

::

   virsh console duplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook.

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
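
   Before running the playbook, it is worth confirming that the interface and
   default route are actually up. A minimal check, reusing the OAM interface
   name and the DNS server address from this guide's examples:

   ::

      # Verify the OAM address and default route, then test external access
      ip addr show dev enp7s1
      ip route show default
      ping -c 3 8.8.8.8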

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        # - 1.2.3.4

        EOF
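
   Optionally, confirm that the override file is valid YAML before running the
   playbook. A minimal sketch, assuming Python with PyYAML is available on the
   controller (typically the case, since Ansible itself depends on it):

   ::

      python -c "import yaml; yaml.safe_load(open('localhost.yml')); print('localhost.yml: OK')"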

   Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall, etc. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host
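
   To confirm the assignments took effect, list the interfaces and their
   attached networks (both commands are already used in this guide):

   ::

      system host-if-list controller-0
      system interface-network-list controller-0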

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure the Ceph storage backend:

   .. important::

      This step is required only if your application requires
      persistent storage.

      **If you want to install the StarlingX OpenStack application
      (stx-openstack), this step is mandatory.**

   ::

      system storage-backend-add ceph --confirmed

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G Huge Pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export NODE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
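
   To verify the result, list the data networks and the configured interfaces.
   ``system datanetwork-list`` is assumed to be available in this release; the
   second command is used earlier in this step:

   ::

      system datanetwork-list
      system host-if-list -a ${NODE}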

#. Add an OSD on controller-0 for Ceph:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

         system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.


*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'.
   It will automatically attempt to network boot over the management network:

   ::

      virsh start duplex-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console duplex-controller-1

   As controller-1 VM boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, controller-1 to
   reboot, and controller-1 to show as locked/disabled/online in 'system host-list'.
   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

On virtual controller-0:

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Note that the MGMT interface is partially set up
   automatically by the network install procedure.

   ::

      OAM_IF=enp7s1
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G Huge Pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export NODE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-1
      system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
      system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled
      system host-label-assign controller-1 openstack-compute-node=enabled
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      export NODE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
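
   To confirm the volume group and physical volume were created, list them
   (``system host-lvg-list`` and ``system host-pv-list`` are assumed to be
   available in this release):

   ::

      system host-lvg-list ${NODE}
      system host-pv-list ${NODE}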

-------------------
Unlock controller-1
-------------------

Unlock virtual controller-1 in order to bring it into service:

::

   system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt
@ -0,0 +1,21 @@
============================================
Virtual All-in-one Simplex Installation R5.0
============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_environ
   aio_simplex_install_kubernetes
@ -0,0 +1,55 @@
============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R5.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

.. note::

   The following commands for host and virtual environment setup, and for host
   power-on, use KVM and virsh for virtual machine management. For an
   alternative virtualization environment, see:
   :doc:`Install StarlingX in VirtualBox <install_virtualbox>`.

#. Prepare virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * simplex-controller-0

   The following command will start/virtually power on:

   * The 'simplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c simplex -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not
   required, so you can safely ignore the errors and continue.
@ -0,0 +1,324 @@
==============================================
Install StarlingX Kubernetes on Virtual AIO-SX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_simplex_environ`, the controller-0 virtual server
'simplex-controller-0' was started by the :command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus and ensure you are at the
   first installer menu.

::

   virsh console simplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook.

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        # - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall, etc. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

      OAM_IF=enp7s1
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure the Ceph storage backend:

   .. important::

      This step is required only if your application requires
      persistent storage.

      **If you want to install the StarlingX OpenStack application
      (stx-openstack), this step is mandatory.**

   ::

      system storage-backend-add ceph --confirmed

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G Huge Pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export NODE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

         system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled
      system host-label-assign controller-0 openstack-compute-node=enabled
      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, in the virtual environment OVS-DPDK is NOT
   supported, only OVS is supported. Therefore, simply use the default OVS
   vSwitch here.

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      export NODE=controller-0

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt
@ -0,0 +1,161 @@
===================================
Configure VirtualBox Network Access
===================================

This guide describes two alternatives for providing external network access
to the controller :abbr:`VMs (Virtual Machines)` for VirtualBox:

.. contents::
   :local:
   :depth: 1

----------------------
Install VM as a router
----------------------

A router can be used to act as a gateway to allow your other VirtualBox VMs
(for example, controllers) access to the external internet. The router needs to
be able to forward traffic from the OAM network to the internet.

In VirtualBox, create a new Linux VM to act as your router. This example uses
Ubuntu. For ease of use, we recommend downloading the Ubuntu 18.04 Desktop
version or higher.

**Installation tip**

Before you install the Ubuntu 18.04 Desktop version in VirtualBox 5.2,
configure the VM using Edit Settings as follows:

#. Go to Display and move the "Video memory" slider all the way to the right.
   Then tick the "Acceleration" checkbox "Enable 3D Acceleration".
#. Go to General/Advanced and set "Shared Clipboard" and "Drag'n Drop" to
   Bidirectional.
#. Go to User Interface/Devices and select "Devices/Insert Guest Additions CD
   image" from the drop-down. Restart your VM.

The network configuration for this VM must include:

* A NAT interface to allow installation and access to the external internet.
* A host-only adapter connected to the same network as the OAM interfaces on
  your controllers.

Once the router VM has been installed, enable forwarding. In Ubuntu, perform
the following steps:

::

   # Edit sysctl.conf and uncomment the following line:
   # net.ipv4.ip_forward=1
   sudo vim /etc/sysctl.conf
   # Activate the change
   sudo sysctl -p

Then add the gateway IP address to the interface connected to the OAM host-only
network:

::

   # Assuming that enp0s8 is connected to the OAM host-only network:
   sudo tee /etc/netplan/99_config.yaml > /dev/null << EOF
   network:
     version: 2
     renderer: networkd
     ethernets:
       enp0s8:
         addresses:
           - 10.10.10.1/24
   EOF
   sudo netplan apply

   # If netplan is not installed on your router, you can use this command
   # instead of the above:
   sudo ip addr add 10.10.10.1/24 dev enp0s8

Finally, set up iptables to forward packets from the host-only network to the
NAT network:

::

   # This assumes the NAT is on enp0s3 and the host-only network is on enp0s8
   sudo iptables -t nat -A POSTROUTING --out-interface enp0s3 -j MASQUERADE
   sudo iptables -A FORWARD --in-interface enp0s8 -j ACCEPT
   sudo apt-get install iptables-persistent
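
To confirm that forwarding and the NAT rule are in place, inspect the sysctl
value and the iptables counters (a quick sanity check; the interface names
match the assumptions above):

::

   sysctl net.ipv4.ip_forward
   sudo iptables -t nat -L POSTROUTING -n -v
   sudo iptables -L FORWARD -n -v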

-----------------------------
Add NAT Network in VirtualBox
-----------------------------

#. Select the File -> Preferences menu.
#. Choose Network. The ``NAT Networks`` tab should be selected.
#. Click the plus icon to add a network; this adds a network named
   NatNetwork.
#. Edit the NatNetwork (gear or screwdriver icon).

   * Network CIDR: 10.10.10.0/24 (to match the OAM network specified in the
     Ansible bootstrap overrides file)
   * Disable ``Supports DHCP``
   * Enable ``Supports IPv6``
   * Select ``Port Forwarding`` and add any rules you desire. Here are some
     examples where 10.10.10.2 is the StarlingX OAM Floating IP address and
     10.10.10.3/.4 are the IP addresses of the two controller units:

+-------------------------+-----------+---------+-----------+------------+-------------+
| Name                    | Protocol  | Host IP | Host Port | Guest IP   | Guest Port  |
+=========================+===========+=========+===========+============+=============+
| controller-0-ssh        | TCP       |         | 3022      | 10.10.10.3 | 22          |
+-------------------------+-----------+---------+-----------+------------+-------------+
| controller-1-ssh        | TCP       |         | 3122      | 10.10.10.4 | 22          |
+-------------------------+-----------+---------+-----------+------------+-------------+
| controller-ssh          | TCP       |         | 22        | 10.10.10.2 | 22          |
+-------------------------+-----------+---------+-----------+------------+-------------+
| platform-horizon-http   | TCP       |         | 8080      | 10.10.10.2 | 8080        |
+-------------------------+-----------+---------+-----------+------------+-------------+
| platform-horizon-https  | TCP       |         | 8443      | 10.10.10.2 | 8443        |
+-------------------------+-----------+---------+-----------+------------+-------------+
| openstack-horizon-http  | TCP       |         | 80        | 10.10.10.2 | 80          |
+-------------------------+-----------+---------+-----------+------------+-------------+
| openstack-horizon-https | TCP       |         | 443       | 10.10.10.2 | 443         |
+-------------------------+-----------+---------+-----------+------------+-------------+

~~~~~~~~~~~~~
Access the VM
~~~~~~~~~~~~~

Once your VM is running, use your PC's host address and the forwarded port to
access the VM.

Instead of these commands:

::

   # ssh to controller-0
   ssh wrsroot@10.10.10.3
   # scp file to controller-0
   scp <filename> wrsroot@10.10.10.3:~

Enter these commands instead:

::

   # ssh to controller-0
   ssh -p 3022 wrsroot@<PC hostname or IP>
   # scp file to controller-0
   scp -P 3022 <filename> wrsroot@<PC hostname or IP>:~

To access your VM console from Horizon, you can update the VNC proxy address
using service parameters. The worker nodes will require a reboot following
this change; therefore, it is best to perform this operation before unlocking
the worker nodes.

::

   # Update the VNC proxy setting to use the NatNetwork host name
   system service-parameter-add nova vnc vncproxy_host=<hostname or IP> --personality controller --resource nova::compute::vncproxy_host  # aio
   system service-parameter-add nova vnc vncproxy_host=<hostname or IP> --personality compute --resource nova::compute::vncproxy_host  # standard
   system service-parameter-apply nova
@ -0,0 +1,21 @@
==========================================================
Virtual Standard with Controller Storage Installation R5.0
==========================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   controller_storage_environ
   controller_storage_install_kubernetes
@ -0,0 +1,59 @@
============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R5.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

.. note::

   The following commands for host and virtual environment setup, and for host
   power-on, use KVM and virsh for virtual machine management. For an
   alternative virtualization environment, see:
   :doc:`Install StarlingX in VirtualBox <install_virtualbox>`.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * controllerstorage-controller-0
   * controllerstorage-controller-1
   * controllerstorage-worker-0
   * controllerstorage-worker-1

   The following command will start/virtually power on:

   * The 'controllerstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not
   required, so you can safely ignore the errors and continue.
@ -0,0 +1,593 @@
|
||||
========================================================================
|
||||
Install StarlingX Kubernetes on Virtual Standard with Controller Storage
|
||||
========================================================================
|
||||
|
||||
This section describes the steps to install the StarlingX Kubernetes platform
|
||||
on a **StarlingX R5.0 virtual Standard with Controller Storage** deployment
|
||||
configuration.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
:depth: 1
|
||||
|
||||
--------------------------------
|
||||
Install software on controller-0
|
||||
--------------------------------
|
||||
|
||||
In the last step of :doc:`controller_storage_environ`, the controller-0 virtual
|
||||
server 'controllerstorage-controller-0' was started by the
|
||||
:command:`setup_configuration.sh` command.
|
||||
|
||||
On the host, attach to the console of virtual controller-0 and select the appropriate
|
||||
installer menu options to start the non-interactive install of
|
||||
StarlingX software on controller-0.
|
||||
|
||||
.. note::
|
||||
|
||||
When entering the console, it is very easy to miss the first installer menu
|
||||
selection. Use ESC to navigate to previous menus, to ensure you are at the
|
||||
first installer menu.
|
||||
|
||||
::
|
||||
|
||||
virsh console controllerstorage-controller-0
|
||||
|
||||
Make the following menu selections in the installer:
|
||||
|
||||
#. First menu: Select 'Standard Controller Configuration'
|
||||
#. Second menu: Select 'Serial Console'
|
||||
|
||||
Wait for the non-interactive install of software to complete and for the server
|
||||
to reboot. This can take 5-10 minutes depending on the performance of the host
|
||||
machine.
|
||||
|
||||
--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
.. incl-bootstrap-controller-0-virt-controller-storage-start:
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
|
||||
When logging in for the first time, you will be forced to change the password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. External connectivity is required to run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
|
||||
export DEFAULT_OAM_GATEWAY=10.10.10.1
|
||||
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
|
||||
sudo ip link set up dev enp7s1
|
||||
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
|
||||
configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
``sysadmin home directory ($HOME)``
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
|
||||
|
||||
.. include:: ../ansible_install_time_only.txt
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
|
||||
the configurable values as desired (use the commented instructions in
|
||||
the file).
|
||||
|
||||
or
|
||||
|
||||
* Create the minimal user configuration override file as shown in the example
|
||||
below:
|
||||
|
||||
::
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
system_mode: duplex
|
||||
|
||||
dns_servers:
|
||||
- 8.8.8.8
|
||||
- 8.8.4.4
|
||||
|
||||
external_oam_subnet: 10.10.10.0/24
|
||||
external_oam_gateway_address: 10.10.10.1
|
||||
external_oam_floating_address: 10.10.10.2
|
||||
external_oam_node_0_address: 10.10.10.3
|
||||
external_oam_node_1_address: 10.10.10.4
|
||||
|
||||
admin_username: admin
|
||||
admin_password: <admin-password>
|
||||
ansible_become_pass: <sysadmin-password>
|
||||
|
||||
# Add these lines to configure Docker to use a proxy server
|
||||
# docker_http_proxy: http://my.proxy.com:1080
|
||||
# docker_https_proxy: https://my.proxy.com:1443
|
||||
# docker_no_proxy:
|
||||
# - 1.2.3.4
|
||||
|
||||
EOF
|
||||
|
||||
Refer to :doc:`/deploy_install_guides/r5_release/ansible_bootstrap_configs`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
|
||||
firewall, etc. Refer to :doc:`/../../configuration/docker_proxy_config` for
|
||||
details about Docker proxy settings.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
|
||||
|
||||
Wait for Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
|
||||
.. incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

     OAM_IF=enp7s1
     MGMT_IF=enp7s2
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host
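
   To confirm the resulting assignments, you can optionally re-run the same
   listing command used above:

   ::

     system interface-network-list controller-0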

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew
      alarms. Also, the virtual instance clock is synchronized with the host
      clock, so it is not absolutely required to configure NTP here.

   ::

     system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure the Ceph storage backend:

   .. important::

      This step is required only if your application requires persistent
      storage.

      **If you want to install the StarlingX OpenStack application
      (stx-openstack), this step is mandatory.**

   ::

     system storage-backend-add ceph --confirmed
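
   You can optionally watch the backend move from ``configuring`` to
   ``configured`` with the companion listing command (a sketch; the command
   name is assumed from the same ``system`` CLI family):

   ::

     system storage-backend-list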

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

        system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for details
      about Docker proxy settings.
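
   As a sketch of what adding a proxy at runtime can look like (the
   ``service-parameter-add`` syntax and the parameter name are assumptions
   mirroring the bootstrap overrides shown earlier; confirm both against the
   Docker proxy configuration guide):

   ::

     # Assumed syntax: system service-parameter-add <service> <section> <name>=<value>
     system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
     system service-parameter-list platform docker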

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is the containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS is. Therefore, use the default OVS vSwitch here.

.. incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

-------------------------------------------------
Install software on controller-1 and worker nodes
-------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'controllerstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

     virsh start controllerstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

     virsh console controllerstorage-controller-1

   As the controller-1 VM boots, a message appears on its console instructing
   you to configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly
   discovered controller-1 host (hostname=None):

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | None         | None        | locked         | disabled    | offline      |
     +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this
   host to 'controller':

   ::

     system host-update 2 personality=controller

   This initiates the install of software on controller-1. This can take 5-10
   minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'controllerstorage-worker-0' and
   'controllerstorage-worker-1'. Set the personality to 'worker' and assign a
   unique hostname to each.

   For example, start 'controllerstorage-worker-0' from the host:

   ::

     virsh start controllerstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then:

   ::

     system host-update 3 personality=worker hostname=worker-0

   Repeat for 'controllerstorage-worker-1'. On the host:

   ::

     virsh start controllerstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then:

   ::

     system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, worker-0, and worker-1
   to complete, for all virtual servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | worker-0     | worker      | locked         | disabled    | online       |
     | 4  | worker-1     | worker      | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-virt-controller-storage-start:

Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
attached networks. Note that the MGMT interface is partially set up by the
network install procedure.

::

  OAM_IF=enp7s1
  system host-if-modify controller-1 $OAM_IF -c platform
  system interface-network-assign controller-1 $OAM_IF oam
  system interface-network-assign controller-1 mgmt0 cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the stx-openstack manifest/helm-charts later:

::

  system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-virt-controller-storage-start:

Unlock virtual controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-controller-1-virt-controller-storage-end:

----------------------
Configure worker nodes
----------------------

On virtual controller-0:

#. Add the third Ceph monitor to a worker node:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

     system ceph-mon-add worker-0

#. Wait for the worker node monitor to complete configuration:

   ::

     system ceph-mon-list
     +--------------------------------------+-------+--------------+------------+------+
     | uuid                                 | ceph_ | hostname     | state      | task |
     |                                      | mon_g |              |            |      |
     |                                      | ib    |              |            |      |
     +--------------------------------------+-------+--------------+------------+------+
     | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
     | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
     | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
     +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the worker nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

     for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Configure data interfaces for worker nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G huge pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

     DATA0IF=eth1000
     DATA1IF=eth1001
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list

     # Configure the datanetworks in sysinv, prior to referencing them
     # in the 'system host-if-modify' command.
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for NODE in worker-0 worker-1; do
         echo "Configuring interface for: $NODE"
         set -ex
         system host-port-list ${NODE} --nowrap > ${SPL}
         system host-if-list -a ${NODE} --nowrap > ${SPIL}
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
     done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     for NODE in worker-0 worker-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

     for NODE in worker-0 worker-1; do
         echo "Configuring Nova local for: $NODE"
         ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${NODE} nova-local
         system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done

-------------------
Unlock worker nodes
-------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual worker nodes to bring them into service:

::

  for NODE in worker-0 worker-1; do
      system host-unlock $NODE
  done

The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:

----------------------------
Add Ceph OSDs to controllers
----------------------------

On virtual controller-0:

#. Add OSDs to controller-0:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

     HOST=controller-0
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

#. Add OSDs to controller-1:

   .. important::

      This step requires a configured Ceph storage backend.

   ::

     HOST=controller-1
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,21 @@

=========================================================
Virtual Standard with Dedicated Storage Installation R5.0
=========================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   dedicated_storage_environ
   dedicated_storage_install_kubernetes

@ -0,0 +1,61 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual
environment for a **StarlingX R5.0 virtual Standard with Dedicated Storage**
deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

.. note::

   The following commands for host setup, virtual environment setup, and host
   power-on use KVM / virsh for virtual machine management. For an alternative
   virtualization environment, see:
   :doc:`Install StarlingX in VirtualBox <install_virtualbox>`.

#. Prepare the virtual environment.

   Set up virtual platform networks for the virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions
   for:

   * dedicatedstorage-controller-0
   * dedicatedstorage-controller-1
   * dedicatedstorage-storage-0
   * dedicatedstorage-storage-1
   * dedicatedstorage-worker-0
   * dedicatedstorage-worker-1

   The following command will start/virtually power on:

   * The 'dedicatedstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso

   If there is no X-server present, errors will occur and the X-based GUI for
   the virt-manager application will not start. The virt-manager GUI is not
   absolutely required; you can safely ignore the errors and continue.

@ -0,0 +1,395 @@

=======================================================================
Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage
=======================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R5.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`dedicated_storage_environ`, the controller-0 virtual
server 'dedicatedstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

  virsh console dedicatedstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'

Wait for the non-interactive install of software to complete and for the
server to reboot. This can take 5-10 minutes, depending on the performance of
the host machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
   :end-before: incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-virt-controller-storage-start:
   :end-before: incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

.. important::

   Make sure the Ceph storage backend is configured. If it is not configured,
   you will not be able to configure storage nodes.

Unlock virtual controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

-----------------------------------------------------------------
Install software on controller-1, storage nodes, and worker nodes
-----------------------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'dedicatedstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

     virsh start dedicatedstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

     virsh console dedicatedstorage-controller-1

   As the controller-1 VM boots, a message appears on its console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | None         | None        | locked         | disabled    | offline      |
     +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

     system host-update 2 personality=controller

   This initiates software installation on controller-1. This can take 5-10
   minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'dedicatedstorage-storage-0' and
   'dedicatedstorage-storage-1'. Set the personality to 'storage' and assign a
   unique hostname to each.

   For example, start 'dedicatedstorage-storage-0' from the host:

   ::

     virsh start dedicatedstorage-storage-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then:

   ::

     system host-update 3 personality=storage

   Repeat for 'dedicatedstorage-storage-1'. On the host:

   ::

     virsh start dedicatedstorage-storage-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then:

   ::

     system host-update 4 personality=storage

   This initiates software installation on storage-0 and storage-1. This can
   take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'dedicatedstorage-worker-0' and
   'dedicatedstorage-worker-1'. Set the personality to 'worker' and assign a
   unique hostname to each.

   For example, start 'dedicatedstorage-worker-0' from the host:

   ::

     virsh start dedicatedstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then:

   ::

     system host-update 5 personality=worker hostname=worker-0

   Repeat for 'dedicatedstorage-worker-1'. On the host:

   ::

     virsh start dedicatedstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then:

   ::

     system host-update 6 personality=worker hostname=worker-1

   This initiates software installation on worker-0 and worker-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   worker-0, and worker-1 to complete, for all virtual servers to reboot, and
   for all to show as locked/disabled/online in 'system host-list'.

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | storage-0    | storage     | locked         | disabled    | online       |
     | 4  | storage-1    | storage     | locked         | disabled    | online       |
     | 5  | worker-0     | worker      | locked         | disabled    | online       |
     | 6  | worker-1     | worker      | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-virt-controller-storage-start:
   :end-before: incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-virt-controller-storage-start:
   :end-before: incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure storage nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the storage
   nodes.

   Note that the MGMT interfaces are partially set up by the network install
   procedure.

   ::

     for NODE in storage-0 storage-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Add OSDs to storage-0:

   ::

     HOST=storage-0
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
     done

     system host-stor-list $HOST

#. Add OSDs to storage-1:

   ::

     HOST=storage-1
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
     done

     system host-stor-list $HOST

--------------------
Unlock storage nodes
--------------------

Unlock virtual storage nodes in order to bring them into service:

::

  for STORAGE in storage-0 storage-1; do
      system host-unlock $STORAGE
  done

The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------------------
Configure worker nodes
----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the worker nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

     for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Configure data interfaces for worker nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G huge pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

     DATA0IF=eth1000
     DATA1IF=eth1001
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list

   Configure the datanetworks in sysinv, prior to referencing them in the
   :command:`system host-if-modify` command:

   ::

     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for NODE in worker-0 worker-1; do
         echo "Configuring interface for: $NODE"
         set -ex
         system host-port-list ${NODE} --nowrap > ${SPL}
         system host-if-list -a ${NODE} --nowrap > ${SPIL}
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
     done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     for NODE in worker-0 worker-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

     for NODE in worker-0 worker-1; do
         echo "Configuring Nova local for: $NODE"
         ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${NODE} nova-local
         system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done

-------------------
Unlock worker nodes
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
   :end-before: incl-unlock-compute-nodes-virt-controller-storage-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,366 @@

===============================
Install StarlingX in VirtualBox
===============================

This guide describes how to run StarlingX in a set of VirtualBox :abbr:`VMs
(Virtual Machines)`, which is an alternative to the default StarlingX
instructions using libvirt.

.. contents::
   :local:
   :depth: 1

-------------
Prerequisites
-------------

* A Windows or Linux computer for running VirtualBox.
* VirtualBox is installed on your computer. The latest verified version is
  5.2.22. Download from: http://www.virtualbox.org/wiki/Downloads
* VirtualBox Extension Pack is installed. To boot worker nodes via the
  controller, you must install the VirtualBox Extension Pack to add support
  for PXE boot of Intel cards. Download the extension pack from:
  https://www.virtualbox.org/wiki/Downloads

.. note::

   A set of scripts for deploying VirtualBox VMs can be found in the
   `STX tools repository
   <https://opendev.org/starlingx/tools/src/branch/master/deployment/virtualbox>`_;
   however, the scripts may not be updated to the latest StarlingX
   recommendations.

---------------------------------------------------
Create VMs for controller, worker and storage hosts
---------------------------------------------------

For each StarlingX host, configure a VirtualBox VM with the following
settings.

.. note::

   The different settings for controller, worker, and storage nodes are
   embedded in the particular sections below.

***************************
OS type and memory settings
***************************

* Type: Linux

* Version: Other Linux (64-bit)

* Memory size:

  * Controller node: 16384 MB
  * Worker node: 8192 MB
  * Storage node: 4096 MB
  * All-in-one node: 20480 MB
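
These settings can also be applied from the command line. The following is a
sketch, not part of the original guide; the VM name ``controller-0`` is
illustrative:

::

  # Create and register a VM of OS type "Other Linux (64-bit)"
  VBoxManage createvm --name controller-0 --ostype Linux_64 --register
  # Controller-node memory size (16384 MB)
  VBoxManage modifyvm controller-0 --memory 16384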

****************
Disk(s) settings
****************

Use the default disk controller and default disk format (for example,
IDE/vdi) for VirtualBox VMs.

* Minimum disk size requirements:

  * Controller nodes:

    * Disk 1: 201GB disk
    * Disk 2: 10GB disk (Note: Use 30GB if you are planning to work on the
      analytics.)

  * Worker nodes: 80GB root disk (Note: Use 100GB if you are installing a
    StarlingX AIO node.)

    * When the node is configured for local storage, this will provide ~12GB
      of local storage space for disk allocation to VM instances.
    * Additional disks can be added to the node to extend the local storage,
      but are not required.

  * Storage nodes (minimum of 2 disks required):

    * 80GB disk for rootfs.
    * 10GB disk (or larger) for each OSD. The size depends on how many VMs
      you plan to run.

* In the storage tree, you will see an empty CD-ROM. Click the CD-ROM, click
  ``+`` to choose a CD/DVD iso, and browse to the ISO location. Use this ISO
  only for the first controller node. The second controller node and worker
  nodes will network boot from the first controller node.
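
The equivalent disk creation and attachment from the command line looks
roughly like this (a sketch; file names are illustrative, and the sizes in MB
follow the 201GB/10GB controller figures above):

::

  # Create the two controller disks (sizes are in MB)
  VBoxManage createmedium disk --filename controller-0-disk1.vdi --size 205824
  VBoxManage createmedium disk --filename controller-0-disk2.vdi --size 10240
  # Attach them to the default IDE controller
  VBoxManage storagectl controller-0 --name IDE --add ide
  VBoxManage storageattach controller-0 --storagectl IDE --port 0 --device 0 --type hdd --medium controller-0-disk1.vdi
  VBoxManage storageattach controller-0 --storagectl IDE --port 0 --device 1 --type hdd --medium controller-0-disk2.vdi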

***************
System settings
***************

* System->Motherboard:

  * Boot Order: Enable the Network option. The order should be: Floppy,
    CD/DVD, Hard Disk, Network.

* System->Processors:

  * Controller node: 4 CPU
  * Worker node: 3 CPU

    .. note::

       This will allow only a single instance to be launched. More processors
       are required to launch more instances. If more than 4 CPUs are
       allocated, you must limit vswitch to a single CPU before unlocking
       your worker node; otherwise your worker node will **reboot in a loop**
       (vswitch will fail to start, in-test will detect that a critical
       service failed to start, and reboot the node). Use the following
       command to limit vswitch:

       ::

         system host-cpu-modify worker-0 -f vswitch -p0 1

  * Storage node: 1 CPU
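
Scripted, the motherboard and processor settings above reduce to something
like this (again with an illustrative VM name):

::

  # Boot order: Floppy, CD/DVD, Hard Disk, Network
  VBoxManage modifyvm controller-0 --boot1 floppy --boot2 dvd --boot3 disk --boot4 net
  # 4 CPUs for a controller node
  VBoxManage modifyvm controller-0 --cpus 4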

****************
Network settings
****************

The OAM network has the following options:

* Host Only Network - **Strongly recommended.** This option requires the
  router VM to forward packets from the controllers to the external network.
  Follow the instructions at
  :doc:`Install VM as a router <config_virtualbox_netwk>` to set it up.
  Create one network adapter for external OAM. The IP addresses in the
  example below match the default configuration.

  * VirtualBox: File -> Preferences -> Network -> Host-only Networks. Click
    ``+`` to add an Ethernet adapter.

    * Windows: This creates a ``VirtualBox Host-only Adapter`` and prompts
      with the Admin dialog box. Click ``Accept`` to create an interface.
    * Linux: This creates a ``vboxnet<x>`` per interface.

  * External OAM: IPv4 Address: 10.10.10.254, IPv4 Network Mask:
    255.255.255.0, DHCP Server: unchecked.

* NAT Network - This option provides external network access to the
  controller VMs. Follow the instructions at
  :doc:`Add NAT Network in VirtualBox <config_virtualbox_netwk>`.
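
On a Linux host, either option can also be prepared from the command line.
This is a hedged sketch using the default OAM addressing above; the host-only
interface name ``vboxnet0`` is whatever VirtualBox reports on creation:

::

  # Host-only option: create the interface and give it the external OAM address
  VBoxManage hostonlyif create
  VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.10.10.254 --netmask 255.255.255.0
  # NAT Network option (subnet shown is illustrative)
  VBoxManage natnetwork add --netname NatNetwork --network "10.10.10.0/24" --enable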

Adapter settings for the different node types are as follows:

* Controller nodes:

  * Adapter 1 setting depends on your choice for the OAM network above. It
    can be either of the following:

    * Adapter 1: Host-Only Adapter; VirtualBox Host-Only Ethernet Adapter 1;
      Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Deny
    * Adapter 1: NAT Network; Name: NatNetwork

  * Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT
    Desktop, Advanced: Promiscuous Mode: Allow All

* Worker nodes:

  * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel
    PRO/1000MT Desktop, Promiscuous Mode: Allow All
  * Adapter 2: Internal Network, Name: intnet-management; Advanced: Intel
    PRO/1000MT Desktop, Promiscuous Mode: Allow All
  * Adapter 3: Internal Network, Name: intnet-data1; Advanced:
    Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All

    * Windows: If you have a separate Ubuntu VM for Linux work, then add
      another interface to your Ubuntu VM and add it to the same
      intnet-data1 internal network.
    * Linux: If you want to access the VM instances directly, create a new
      ``Host-only`` network called ``vboxnet<x>`` similar to the external OAM
      one above. Ensure DHCP Server is unchecked, and that the IP address is
      on a network unrelated to the rest of the addresses we're configuring.
      (The default will often be fine.) Now attach adapter 3 to the new
      Host-only network.

  * Adapter 4: Internal Network, Name: intnet-data2; Advanced:
    Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All

  Additional adapters can be added via the command line, for :abbr:`LAG (Link
  Aggregation Group)` purposes. For example:

  ::

    "\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic5 intnet --nictype5 virtio --intnet5 intnet-data1 --nicpromisc5 allow-all
    "\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic6 intnet --nictype6 virtio --intnet6 intnet-data2 --nicpromisc6 allow-all
    "\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic7 intnet --nictype7 82540EM --intnet7 intnet-infra --nicpromisc7 allow-all

* Storage nodes:

  * Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel
    PRO/1000MT Desktop, Promiscuous Mode: Allow All
  * Adapter 2: Internal Network, Name: intnet-management; Advanced:
    Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All

* Set the boot priority for interface 2 (eth1) on ALL VMs (controller,
  worker, and storage):

  ::

    # First list the VMs
    bwensley@yow-bwensley-lx:~$ VBoxManage list vms
    "YOW-BWENSLEY-VM" {f6d4df83-bee5-4471-9497-5a229ead8750}
    "controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
    "controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
    "worker-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
    "worker-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
    "storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}

    # Then set the priority for interface 2. Do this for ALL VMs.
    # Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
    bwensley@yow-bwensley-lx:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1

    # Or do them all with a for loop on Linux
    bwensley@yow-bwensley-lx:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done

    # NOTE: On Windows, you need to specify the full path to the VBoxManage
    # executable, for example: "\Program Files\Oracle\VirtualBox\VBoxManage.exe"

* Alternative method for debugging:

  * Turn on the VM and press F12 for the boot menu.
  * Press ``L`` for LAN boot.
  * Press CTL+B for the iPXE CLI (this has a short timeout).
  * The autoboot command opens a link with each interface sequentially and
    tests for netboot.

********************
Serial port settings
********************

To use serial ports, you must select Serial Console during the initial boot,
using one of the following methods:

* Windows: Select ``Enable Serial Port`` and set the port mode to ``Host
  Pipe``. Select ``Create Pipe`` (or deselect ``Connect to existing
  pipe/socket``). Enter a Port/File Path in the form
  ``\\.\pipe\controller-0`` or ``\\.\pipe\worker-1``. Later, you can use this
  in PuTTY to connect to the console. Choose a speed of 9600 or 38400.

* Linux: Select ``Enable Serial Port`` and set the port mode to ``Host
  Pipe``. Select ``Create Pipe`` (or deselect ``Connect to existing
  pipe/socket``). Enter a Port/File Path in the form
  ``/tmp/controller_serial``. Later, you can use this with ``socat`` as shown
  in this example:

  ::

    socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0

***********
Other notes
***********

If you're using a Dell PowerEdge R720 system, it's important to execute the
command below to avoid any kernel panic issues:

::

  VBoxManage setextradata VBoxInternal/CPUM/EnableHVP 1

----------------------------------------
Start controller VM and allow it to boot
----------------------------------------

Console usage:

#. To use a serial console: Select ``Serial Controller Node Install``, then
   follow the instructions above in the ``Serial port settings`` section to
   connect to it.
#. To use a graphical console: Select ``Graphics Text Controller Node
   Install`` and continue using the VirtualBox console.

For details on how to specify installation parameters such as the rootfs
device and console port, see :ref:`config_install_parms_r5`.

Follow the :doc:`StarlingX Installation and Deployment Guides
</deploy_install_guides/index>` to continue.

* Ensure that the boot priority on all VMs is changed using the commands in
  the "Set the boot priority" step above.
* In AIO-DX and standard configurations, additional hosts must be booted
  using controller-0 (rather than the ``bootimage.iso`` file).
* In VirtualBox, press F12 immediately when the VM starts to select a
  different boot option. Select the ``lan`` option to force a network boot.

.. _config_install_parms_r5:

------------------------------------
Configurable installation parameters
------------------------------------

StarlingX allows you to specify certain configuration parameters during
installation:

* Boot device: This is the device to be used for the boot partition. In most
  cases, this must be ``sda``, which is the default, unless the BIOS supports
  using a different disk for the boot partition. This is specified with the
  ``boot_device`` option.

* Rootfs device: This is the device to be used for the rootfs and various
  platform partitions. The default is ``sda``. This is specified with the
  ``rootfs_device`` option.

* Install output: Text mode vs. graphical. The default is ``text``. This is
  specified with the ``install_output`` option.

* Console: This is the console specification, allowing the user to specify
  the console port and/or baud rate. The default value is ``ttyS0,115200``.
  This is specified with the ``console`` option.

*********************************
Install controller-0 from ISO/USB
*********************************

The initial boot menu for controller-0 is built in, so modifying the
installation parameters requires direct modification of the boot command
line. This is done by scrolling to the boot option you want (for example,
Serial Controller Node Install vs. Graphics Controller Node Install) and
pressing the Tab key to allow command line modification. The example below
shows how to modify the ``rootfs_device`` specification.
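
For instance, appending the following to the boot command line would install
to the second disk and use a slower serial console (the values are
illustrative, built from the options listed above):

::

  rootfs_device=sdb console=ttyS0,38400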

.. figure:: ../figures/install_virtualbox_configparms.png
   :scale: 100%
   :alt: Install controller-0

************************************
Install nodes from active controller
************************************

The installation parameters are part of the system inventory host details for
each node and can be specified when the host is added or updated. These
parameters can be set as part of a host-add or host-bulk-add, a host-update,
or via the GUI when editing a host.

For example, if you prefer to see the graphical installation, you can enter
the following command when setting the personality of a newly discovered
host:

::

  system host-update 2 personality=controller install_output=graphical console=

If you don't set up a serial console, but prefer the text installation, you
can clear out the default console setting with the command:

::

  system host-update 2 personality=controller install_output=text console=

If you'd prefer to install to the second disk on your node, use the command:

::

  system host-update 3 personality=worker hostname=worker-0 rootfs_device=sdb

Alternatively, these values can be set from the GUI via the ``Edit Host``
option.

.. figure:: ../figures/install_virtualbox_guiscreen.png
   :scale: 100%
   :alt: Install controller-0

@ -0,0 +1,72 @@

The following sections describe the system requirements and host setup for a
workstation hosting the virtual machine(s) where StarlingX will be deployed.

*********************
Hardware requirements
*********************

The host system should have at least:

* **Processor:** x86_64 is the only supported architecture, with hardware
  virtualization extensions enabled in the BIOS

* **Cores:** 8

* **Memory:** 32GB RAM

* **Hard disk:** 500GB HDD

* **Network:** One network adapter with an active Internet connection

*********************
Software requirements
*********************

The host system should have at least:

* A workstation computer with Ubuntu 16.04 LTS 64-bit

All other required packages will be installed by scripts in the StarlingX
tools repository.

**********
Host setup
**********

Set up the host with the following steps:

#. Update the OS:

   ::

     apt-get update

#. Clone the StarlingX tools repository:

   ::

     apt-get install -y git
     cd $HOME
     git clone https://opendev.org/starlingx/tools.git

#. Install the required packages:

   ::

     cd $HOME/tools/deployment/libvirt/
     bash install_packages.sh
     apt install -y apparmor-profiles
     apt-get install -y ufw
     ufw disable
     ufw status

   .. note::

      On Ubuntu 16.04, if the apparmor-profile modules were installed as
      shown above, you must reboot the server to fully install them.

#. Get the StarlingX ISO from the `CENGN StarlingX mirror
   <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_. Alternately, you
   can use an ISO from a private StarlingX build.
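
   For example, after copying the ISO's URL from the mirror, a download could
   look like this (the URL below is a placeholder, not a real path; the
   ``./bootimage.iso`` name matches what the setup scripts expect):

   ::

     wget -O bootimage.iso <ISO-URL-from-CENGN-mirror>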

@ -5,28 +5,32 @@ Welcome to the StarlingX Documentation

StarlingX is a fully integrated edge cloud software stack that provides
everything needed to deploy an edge cloud on one, two, or up to 100 servers.

**The most recent supported release of StarlingX is StarlingX R4.0.**

* See the :ref:`release-notes`.
* Download the `R4.0 StarlingX ISO image
  <http://mirror.starlingx.cengn.ca/mirror/starlingx/release/>`_ from the
  CENGN StarlingX mirror.

For more information about the StarlingX project, refer to the
`StarlingX wiki <https://wiki.openstack.org/wiki/StarlingX>`_.

.. Note::

   Community contributions to the documentation are welcome! If you see an
   empty topic and want to contribute, refer to the linked story in the topic
   for details.

   See our :doc:`Contributor guide </contributor/index>` for more
   information.

-----------
Get started
-----------

To get started fast with StarlingX, try the
:doc:`Virtual All-in-one Simplex installation
</deploy_install_guides/r4_release/virtual/aio_simplex>`, which requires a
minimum of hardware and configuration.

Learn more about StarlingX: