Initial setup for R3 docs
- Copy R2 docs into R3 (as base) and update release number
- Remove unused labels from R2 (resolve conflict w/ copy)
- Add note to users re R3 release status and contributing (on the R3
  install guides landing page)

Change-Id: Ib3d3138e93a20d11e89e7eae7bc2885c8c0b90c8
Signed-off-by: Kristal Dale <kristal.dale@intel.com>

parent 3f9e215b30
commit 5ce4e885ea
@@ -21,7 +21,12 @@ StarlingX R2.0 is the latest officially released version of StarlingX.

Upcoming R3.0 release
---------------------

StarlingX R3.0 is the forthcoming version of StarlingX under development.

.. toctree::
   :maxdepth: 1

   r3_release/index

-----------------
Archived releases

@@ -42,57 +47,25 @@ Archived releases

   bootable_usb
.. Steps you must take when a new release of the deployment and installation
   guides occurs:

.. 1. Archive the "current" release:

      1. Rename the "current" folder to the release number, e.g. "r1_release".
      2. Go into the renamed folder (i.e. the old "current" folder) and update all links in the *.rst
         files to use the new path (e.g. :doc:`Libvirt/QEMU </installation_guide/current/installation_libvirt_qemu>`
         becomes
         :doc:`Libvirt/QEMU </installation_guide/<rX_release>/installation_libvirt_qemu>`).
      3. You might want to change your working directory to /<Year_Month> and use Git to grep for
         the "current" string (i.e. 'git grep "current" *'). For each applicable occurrence, make
         the call whether or not to convert the string to the actual archived string "<Year_Month>".
         Be sure to scrub all files for the "current" string in both the "installation_guide"
         and "developer_guide" folders downward.

   2. Add the new "current" release:

      1. Rename the existing "upcoming" folders to "current". This assumes that "upcoming"
         represented the under-development release that just officially released.
      2. Go into your new folder (i.e. the old "upcoming" folder) and update all links in the *.rst
         files to use the new path (e.g. :doc:`Libvirt/QEMU </installation_guide/latest/installation_libvirt_qemu>`
         becomes
         :doc:`Libvirt/QEMU </installation_guide/current/installation_libvirt_qemu>`).
      3. Again, scrub all files as per step 1.3 above.
      4. Because the "upcoming" release is now available, make sure to update these pages:

         - index
         - installation guide
         - developer guide
         - release notes

   3. Create a new "upcoming" release, which contains the installation and developer guides
      under development:

      1. Copy your "current" folders and rename them "upcoming".
      2. Make sure the new files have the correct version in the page title and intro
         sentence (e.g. '2019.10.rc1 Installation Guide').
      3. Make sure all files in the new "upcoming" link to the correct versions of supporting
         docs. You do this through the doc link, so that it resolves to the top of the page
         (e.g. :doc:`/installation_guide/latest/index`).
      4. Make sure the new release index is labeled with the correct version name
         (e.g. ``.. _index-2019-05:``).
      5. Add the archived version to the toctree on this page. You want all possible versions
         to build.
      6. Since you are adding a new version ("upcoming") *before* it is available
         (e.g. to begin work on new docs), make sure page text still directs users to the
         "current" release and not to the under-development version of the manuals.

.. Making a new release

.. 1. Archive the previous 'latest' release.
   Move the toctree link from the Latest release section into the Archived
   releases toctree.

.. 2. Make the previous 'upcoming' release the new 'latest.'
   Move the toctree link from the Upcoming release section into the Latest
   release. Update narrative text for the Latest release section to use the
   latest version.

.. 3. Add new 'upcoming' release.
   If the new upcoming release docs aren't ready, remove the toctree from the Upcoming
   section and just leave narrative text. Update text for the upcoming release
   version. Once the new upcoming docs are ready, add them in the toctree here.

.. Adding new release docs

.. 1. Make sure the most recent release versioned docs are complete for that
   release.

.. 2. Make a copy of the most recent release folder, e.g. 'r2_release'. Rename the
   folder for the new release, e.g. 'r3_release'.

.. 3. Search and replace all references to the previous release number with the new
   release number. For example, replace all 'R2.0' with 'R3.0'. Also search and
   replace any links that may have a specific release in the path.

.. 4. Link the new version on this page (the index page).
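.. For example, a minimal shell sketch of steps 2 and 3 above (hypothetical
   folder names; review the replacements manually before committing):

      cp -r r2_release r3_release
      # Replace the previous release number in the copied files.
      grep -rl 'R2.0' r3_release | xargs sed -i 's/R2\.0/R3\.0/g'
      # Find remaining release-specific paths to fix by hand.
      git grep -n 'r2_release' -- r3_release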
@@ -9,8 +9,6 @@ Ansible bootstrap scenarios.
   :local:
   :depth: 1

.. _ansible_bootstrap_ipv6:

----
IPv6
----
@@ -1,6 +1,4 @@
.. _bm_standard_dedicated_r2:

============================================================
Bare metal Standard with Dedicated Storage Installation R2.0
============================================================
@@ -0,0 +1,246 @@
================================
Ansible Bootstrap Configurations
================================

This section describes additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios.

.. contents::
   :local:
   :depth: 1

----
IPv6
----

If you are using IPv6, provide IPv6 configuration overrides for the Ansible
bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be
updated to IPv6 addressing.

Example IPv6 override values are shown below:

::

  dns_servers:
    - 2001:4860:4860::8888
    - 2001:4860:4860::8844
  pxeboot_subnet: 169.254.202.0/24
  management_subnet: 2001:db8:2::/64
  cluster_host_subnet: 2001:db8:3::/64
  cluster_pod_subnet: 2001:db8:4::/64
  cluster_service_subnet: 2001:db8:4::/112
  external_oam_subnet: 2001:db8:1::/64
  external_oam_gateway_address: 2001:db8::1
  external_oam_floating_address: 2001:db8::2
  external_oam_node_0_address: 2001:db8::3
  external_oam_node_1_address: 2001:db8::4
  management_multicast_subnet: ff08::1:1:0/124

.. note::

   The `external_oam_node_0_address` and `external_oam_node_1_address` parameters
   are not required for the AIO-SX installation.
----------------
Private registry
----------------

Bootstrapping StarlingX requires pulling container images for multiple system
services. By default, these container images are pulled from the public registries
k8s.gcr.io, gcr.io, quay.io, and docker.io.

It may be required (or desired) to copy the container images to a private
registry and pull the images from the private registry (instead of the public
registries) as part of the StarlingX bootstrap. For example, a private registry
would be required if a StarlingX system was deployed in an air-gapped network
environment.

Use the `docker_registries` structure in the bootstrap overrides file to specify
alternate registries for the public registries from which container images are
pulled. These alternate registries are used during the bootstrapping of
controller-0, and on :command:`system application-apply` of application packages.

The `docker_registries` structure is a map of public registries and the
alternate registry values for each public registry. For each public registry, the
key is the fully scoped registry name of a public registry (for example "k8s.gcr.io")
and the value holds the alternate registry URL and username/password (if authenticated).

url
  The fully scoped registry name (and optionally namespace/) for the alternate
  registry location where the images associated with this public registry
  should now be pulled from.

  Valid formats for the `url` value are:

  * Domain. For example:

    ::

      example.domain

  * Domain with port. For example:

    ::

      example.domain:5000

  * IPv4 address. For example:

    ::

      1.2.3.4

  * IPv4 address with port. For example:

    ::

      1.2.3.4:5000

  * IPv6 address. For example:

    ::

      FD01::0100

  * IPv6 address with port. For example:

    ::

      [FD01::0100]:5000

username
  The username for logging into the alternate registry, if authenticated.

password
  The password for logging into the alternate registry, if authenticated.
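For example, a sketch of per-registry overrides (hypothetical registry
locations and credentials):

::

  docker_registries:
    k8s.gcr.io:
      url: my.registry.io/k8s.gcr.io
      username: myreguser
      password: myregP@ssw0rd
    docker.io:
      url: my.other.registry.io/docker.io
      username: myreguser
      password: myregP@ssw0rd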
Additional configuration options in the `docker_registries` structure are:

unified
  A special public registry key which, if defined, specifies that images
  from all public registries should be retrieved from this single source.
  Alternate registry values, if specified, are ignored. The `unified` key
  supports the same set of alternate registry values: `url`, `username`, and
  `password`.

is_secure_registry
  Specifies whether the registries support HTTPS (secure) or HTTP (not secure).
  Applies to all alternate registries. A boolean value. The default value is
  True (secure, HTTPS).

If an alternate registry is specified to be secure (using HTTPS), the certificate
used by the registry may not be signed by a well-known Certificate Authority (CA).
This causes the :command:`docker pull` of images from this registry to fail.
Use the `ssl_ca_cert` override to specify the public certificate of the CA that
signed the alternate registry's certificate. This adds the CA as a trusted
CA to the StarlingX system.

ssl_ca_cert
  The `ssl_ca_cert` value is the absolute path of the certificate file. The
  certificate must be in PEM format and the file may contain a single CA
  certificate or multiple CA certificates in a bundle.
The following example specifies a single alternate registry from which to
bootstrap StarlingX, where the images of the public registries have been
copied to the single alternate registry. It additionally defines an alternate
registry certificate:

::

  docker_registries:
    k8s.gcr.io:
      url:
    gcr.io:
      url:
    quay.io:
      url:
    docker.io:
      url:
    unified:
      url: my.registry.io
      username: myreguser
      password: myregP@ssw0rd
    is_secure_registry: True

  ssl_ca_cert: /path/to/ssl_ca_cert_file
------------
Docker proxy
------------

If the StarlingX OAM interface or network is behind an HTTP/HTTPS proxy, relative
to the Docker registries used by StarlingX or applications running on StarlingX,
then Docker within StarlingX must be configured to use these HTTP/HTTPS proxies.

Use the following configuration overrides to configure your Docker proxy settings.

docker_http_proxy
  Specify the HTTP proxy URL to use. For example:

  ::

    docker_http_proxy: http://my.proxy.com:1080

docker_https_proxy
  Specify the HTTPS proxy URL to use. For example:

  ::

    docker_https_proxy: https://my.proxy.com:1443

docker_no_proxy
  A no-proxy address list can be provided for registries not on the other side
  of the proxies. This list will be added to the default no-proxy list derived
  from localhost, loopback, management, and OAM floating addresses at run time.
  Addresses in the no-proxy list must not use wildcards or subnet notation.
  For example:

  ::

    docker_no_proxy:
      - 1.2.3.4
      - 5.6.7.8
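For example, a combined sketch of these three overrides as they would appear
together in the bootstrap overrides file (hypothetical proxy and address
values):

::

  docker_http_proxy: http://my.proxy.com:1080
  docker_https_proxy: https://my.proxy.com:1443
  docker_no_proxy:
    - 1.2.3.4
    - 5.6.7.8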
-------------------------------
K8S Root CA Certificate and Key
-------------------------------

By default, the K8S Root CA Certificate and Key are auto-generated and result in
the use of self-signed certificates for the Kubernetes API server. In the case
where self-signed certificates are not acceptable, use the bootstrap override
values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the certificate and
key for the Kubernetes root CA.

k8s_root_ca_cert
  Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert`
  value is the absolute path of the certificate file. The certificate must be
  in PEM format and the value must be provided as part of a pair with
  `k8s_root_ca_key`. The playbook will not proceed if only one value is provided.

k8s_root_ca_key
  Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key`
  value is the absolute path of the key file. The key must be
  in PEM format and the value must be provided as part of a pair with
  `k8s_root_ca_cert`. The playbook will not proceed if only one value is provided.
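For example, a minimal sketch (hypothetical file paths) providing both values
in the overrides file:

::

  k8s_root_ca_cert: /path/to/k8s_root_ca_cert_file
  k8s_root_ca_key: /path/to/k8s_root_ca_key_file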
.. important::

   The default expiry for the generated Kubernetes root CA certificate is 10
   years. Replacing the root CA certificate is an involved process, so the custom
   certificate expiry should be as long as possible. We recommend ensuring the root
   CA certificate has an expiry of at least 5-10 years.

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans
  Specifies a list of Subject Alternative Name entries that will be added to the
  Kubernetes API server certificate. Each entry in the list must be an IP address
  or domain name. For example:

  ::

    apiserver_cert_sans:
      - hostname.domain
      - 198.51.100.75

StarlingX automatically updates this parameter to include IP records for the OAM
floating IP and both OAM unit IP addresses.
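As a sketch of how you might verify the resulting SAN list after bootstrap
(assuming the standard kubeadm certificate path on the controller):

::

  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'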
@@ -0,0 +1,26 @@
==============================================
Bare metal All-in-one Duplex Installation R3.0
==============================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

The bare metal AIO-DX deployment configuration may be extended with up to four
worker/compute nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   aio_duplex_hardware
   aio_duplex_install_kubernetes
   aio_duplex_extend
@@ -0,0 +1,192 @@
================================================
Extend Capacity with Worker and/or Compute Nodes
================================================

This section describes the steps to extend capacity with worker and/or compute
nodes on a **StarlingX R3.0 bare metal All-in-one Duplex** deployment
configuration.

.. contents::
   :local:
   :depth: 1

---------------------------------
Install software on compute nodes
---------------------------------

#. Power on the compute servers and force them to network boot with the
   appropriate BIOS boot options for your particular server.

#. As the compute servers boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   compute hosts (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | None         | None        | locked         | disabled    | offline      |
      | 4  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+
#. Using the host ids, set the personality of these hosts to 'worker' and
   assign their hostnames:

   ::

      system host-update 3 personality=worker hostname=compute-0
      system host-update 4 personality=worker hostname=compute-1

   This initiates the install of software on the compute nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. Wait for the install of software on the computes to complete, for the
   computes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+
-----------------------
Configure compute nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done
#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-label-assign $COMPUTE sriovdp=enabled
        done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-memory-modify $COMPUTE 0 -1G 100
           system host-memory-modify $COMPUTE 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Create the datanetworks in sysinv, prior to referencing them
      # in the 'system interface-datanetwork-assign' commands below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        # Capture the port and interface lists for this host.
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        # Look up the PCI address, port UUID, and port name for each DATA port.
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        # Configure the interfaces as data class and attach the datanetworks.
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done
*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        # Find the root disk and its UUID, then carve a partition from it.
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        # Create the nova-local volume group and add the partition to it.
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done
--------------------
Unlock compute nodes
--------------------

Unlock the compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
@@ -0,0 +1,58 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 2                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         |   or                                                      |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe                                        |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)|
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|                         |   for VM local ephemeral storage                          |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE                                    |
|                         | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt
@@ -0,0 +1,434 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-DX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.
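For example, a minimal sketch on Linux, assuming your USB device is /dev/sdX
and using a hypothetical ISO filename (see the linked page for the full
procedure):

::

   sudo dd if=starlingx.iso of=/dev/sdX bs=4M status=progress oflag=sync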
--------------------------------
Install software on controller-0
--------------------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-aio-simplex-start:
   :end-before: incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------
#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8
#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
     The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
     The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
     The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
     The default location where Ansible looks for and imports user
     configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration as shown in the example below. Use the OAM IP SUBNET and IP
      ADDRESSing applicable to your deployment environment.

      ::

         cd ~
         cat <<EOF > localhost.yml
         system_mode: duplex

         dns_servers:
           - 8.8.8.8
           - 8.8.4.4

         external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
         external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
         external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
         external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
         external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

         admin_username: admin
         admin_password: <sysadmin-password>
         ansible_become_pass: <sysadmin-password>
         EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.
----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment.

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      # Remove interface-network assignments from the loopback interface
      # before assigning the real OAM and MGMT interfaces.
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host
#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        system host-memory-modify controller-0 0 -1G 100
        system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      # Capture the port and interface lists for this host.
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      # Look up the PCI address, port UUID, and port name for each DATA port.
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      # Create the datanetworks in sysinv before referencing them below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      # Configure the interfaces as data class and attach the datanetworks.
      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
#. Add an OSD on controller-0 for Ceph:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-unlock-controller-0-aio-simplex-start:
   :end-before: incl-unlock-controller-0-aio-simplex-end:
-------------------------------------
Install software on controller-1 node
-------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, for
   controller-1 to reboot, and for controller-1 to show as
   locked/disabled/online in 'system host-list'.

   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
----------------------

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment:

   (Note that the MGMT interface is partially set up automatically by the
   network install procedure.)

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host
#. Configure data interfaces for controller-1. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign controller-1 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        system host-memory-modify controller-1 0 -1G 100
        system host-memory-modify controller-1 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export COMPUTE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      # Capture the port and interface lists for this host.
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      # Look up the PCI address, port UUID, and port name for each DATA port.
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      # Create the datanetworks in sysinv before referencing them below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      # Configure the interfaces as data class and attach the datanetworks.
      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
#. Add an OSD on controller-1 for Ceph:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-1
      system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
      system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled
      system host-label-assign controller-1 openstack-compute-node=enabled
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      export COMPUTE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2
-------------------
Unlock controller-1
-------------------

Unlock controller-1 in order to bring it into service:

::

   system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt
@@ -0,0 +1,21 @@
===============================================
Bare metal All-in-one Simplex Installation R3.0
===============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   aio_simplex_hardware
   aio_simplex_install_kubernetes
@@ -0,0 +1,58 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 1                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         |   or                                                      |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe                                        |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)|
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|                         |   for VM local ephemeral storage                          |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt
@@ -0,0 +1,346 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-SX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-aio-simplex-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'All-in-one Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
      your terminal access to the console port
   #. Third menu: Select 'Standard Security Profile'

#. Wait for the non-interactive install of software to complete and the server
   to reboot. This can take 5-10 minutes, depending on the performance of the
   server.

.. incl-install-software-controller-0-aio-simplex-end:
--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8
#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
     The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
     The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
     The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
     The default location where Ansible looks for and imports user
     configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration as shown in the example below. Use the OAM IP SUBNET and IP
      ADDRESSing applicable to your deployment environment.

      ::

         cd ~
         cat <<EOF > localhost.yml
         system_mode: simplex

         dns_servers:
           - 8.8.8.8
           - 8.8.4.4

         external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
         external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
         external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>

         admin_username: admin
         admin_password: <sysadmin-password>
         ansible_become_pass: <sysadmin-password>
         EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.
----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

      OAM_IF=<OAM-PORT>
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        system host-memory-modify controller-0 0 -1G 100
        system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      # Capture the port and interface lists for this host.
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      # Look up the PCI address, port UUID, and port name for each DATA port.
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      # Create the datanetworks in sysinv before referencing them below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      # Configure the interfaces as data class and attach the datanetworks.
      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
#. Add an OSD on controller-0 for Ceph:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0
*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled
      system host-label-assign controller-0 openstack-compute-node=enabled
      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK should be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

      system modify --vswitch_type none

   Do not run any vSwitch directly on the host; instead, use the containerized
   OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
   supported only on bare metal hardware), run the following commands:

   ::

      system modify --vswitch_type ovs-dpdk
      # Assign 1 vSwitch core on processor/NUMA node 0 of controller-0
      system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for computes.

   When using OVS-DPDK, virtual machines must be configured to use a flavor with
   property: hw:mem_page_size=large.
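
   For example, a flavor with this property could be created as follows (the
   flavor name and sizing here are illustrative, not from the original
   procedure):

   ::

      openstack flavor create --ram 2048 --vcpus 2 --disk 20 m1.small.dpdk
      openstack flavor set m1.small.dpdk --property hw:mem_page_size=large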

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all computes (and/or AIO controllers) to
      apply the change.

#. **For OpenStack only:** Set up a disk partition for the nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

      export COMPUTE=controller-0

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2
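
   To confirm that the partition and volume group were created, a check along
   these lines may help (a sketch; it assumes the ``host-disk-partition-list``
   and ``host-lvg-list`` commands are available in your release):

   ::

      system host-disk-partition-list controller-0
      system host-lvg-list controller-0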

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-aio-simplex-start:

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-controller-0-aio-simplex-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,22 @@
=============================================================
Bare metal Standard with Controller Storage Installation R3.0
=============================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   controller_storage_hardware
   controller_storage_install_kubernetes

@ -0,0 +1,55 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------+-----------------------------+
| Minimum Requirement     | Controller Node             | Compute Node                |
+=========================+=============================+=============================+
| Number of servers       | 2                           | 2-10                        |
+-------------------------+-----------------------------+-----------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
+-------------------------+-----------------------------+-----------------------------+
| Minimum memory          | 64 GB                       | 32 GB                       |
+-------------------------+-----------------------------+-----------------------------+
| Primary disk            | 500 GB SSD or NVMe          | 120 GB (Minimum 10k RPM)    |
+-------------------------+-----------------------------+-----------------------------+
| Additional disks        | - 1 or more 500 GB (min.    | - For OpenStack, recommend  |
|                         |   10K RPM) for Ceph OSD     |   1 or more 500 GB (min.    |
|                         | - Recommended, but not      |   10K RPM) for VM local     |
|                         |   required: 1 or more SSDs  |   ephemeral storage         |
|                         |   or NVMe drives for Ceph   |                             |
|                         |   journals (min. 1024 MiB   |                             |
|                         |   per OSD journal)          |                             |
+-------------------------+-----------------------------+-----------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE      | - Mgmt/Cluster: 1x10GE      |
|                         | - OAM: 1x1GE                | - Data: 1 or more x 10GE    |
+-------------------------+-----------------------------+-----------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------+-----------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@ -0,0 +1,586 @@
===========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Controller Storage
===========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-standard-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'Standard Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
      your terminal access to the console port
   #. Third menu: Select 'Standard Security Profile'

#. Wait for the non-interactive software installation to complete and the server
   to reboot. This can take 5-10 minutes, depending on the performance of the server.

.. incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-sys-controller-0-standard-start:

#. Login using the username / password of "sysadmin" / "sysadmin".

   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image attempts DHCP on all interfaces, so the server may
   already have an IP address and external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired, as
      sketched below.
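
      A minimal sketch of this method, assuming the default paths listed above:

      ::

         cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml ~/localhost.yml
         # Edit the copied file and adjust the configurable values as needed
         vi ~/localhost.yml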

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment configuration
      as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
      applicable to your deployment environment.

      ::

         cd ~
         cat <<EOF > localhost.yml
         system_mode: duplex

         dns_servers:
           - 8.8.8.8
           - 8.8.4.4

         external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
         external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
         external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
         external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
         external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

         admin_username: admin
         admin_password: <sysadmin-password>
         ansible_become_pass: <sysadmin-password>
         EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-storage-start:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment.

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      # Remove the temporary loopback interface setup created by the
      # bootstrap playbook before assigning the real MGMT interface
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP Servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK should be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

      system modify --vswitch_type none

   Do not run any vSwitch directly on the host; instead, use the containerized
   OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
   supported only on bare metal hardware), run the following commands:

   ::

      system modify --vswitch_type ovs-dpdk
      # Assign 1 vSwitch core on processor/NUMA node 0 of controller-0
      system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for computes.

   When using OVS-DPDK, virtual machines must be configured to use a flavor with
   property: hw:mem_page_size=large.

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all computes (and/or AIO controllers) to
      apply the change.

.. incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the compute-0 and
   compute-1 servers. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on compute-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

      system host-update 3 personality=worker hostname=compute-0

   Repeat for compute-1. Power on compute-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

      system host-update 4 personality=worker hostname=compute-1

#. Wait for the software installation on controller-1, compute-0, and compute-1
   to complete, for all servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-start:

Configure the OAM and MGMT interfaces of controller-1 and specify the attached
networks. Use the OAM and MGMT port names, for example eth0, that are applicable
to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)

::

   OAM_IF=<OAM-PORT>
   MGMT_IF=<MGMT-PORT>
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 $MGMT_IF cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest and helm-charts later.

::

   system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-start:

Unlock controller-1 in order to bring it into service:

::

   system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-controller-1-end:

-----------------------
Configure compute nodes
-----------------------

#. Add the third Ceph monitor to compute-0:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

      system ceph-mon-add compute-0

#. Wait for the compute node monitor to complete configuration:

   ::

      system ceph-mon-list
      +--------------------------------------+-------+--------------+------------+------+
      | uuid                                 | ceph_ | hostname     | state      | task |
      |                                      | mon_g |              |            |      |
      |                                      | ib    |              |            |      |
      +--------------------------------------+-------+--------------+------------+------+
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
      +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-label-assign ${COMPUTE} sriovdp=enabled
        done

   * If planning on running DPDK in containers on this host, configure the number
     of 1G huge pages required on both NUMA nodes:

     ::

        for COMPUTE in compute-0 compute-1; do
           # Allocate 100 x 1G huge pages on NUMA node 0 and NUMA node 1
           system host-memory-modify ${COMPUTE} 0 -1G 100
           system host-memory-modify ${COMPUTE} 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Create the data networks in sysinv before referencing them
      # in the ``system host-if-modify`` command
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring interface for: $COMPUTE"
         set -ex
         system host-port-list ${COMPUTE} --nowrap > ${SPL}
         system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
         # Look up the PCI address, port UUID, port name, and interface UUID
         # for each DATA port
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in compute-0 compute-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring Nova local for: $COMPUTE"
         ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${COMPUTE} nova-local
         system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done

--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------------------------
Add Ceph OSDs to controllers
----------------------------

#. Add OSDs to controller-0:

   ::

      HOST=controller-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         # Add each OSD disk (by UUID) to the storage tier
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # Wait for the OSD to leave the 'configuring' state
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to controller-1:

   ::

      HOST=controller-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         # Add each OSD disk (by UUID) to the storage tier
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # Wait for the OSD to leave the 'configuring' state
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,22 @@
============================================================
Bare metal Standard with Dedicated Storage Installation R3.0
============================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   dedicated_storage_hardware
   dedicated_storage_install_kubernetes

@ -0,0 +1,60 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum Requirement | Controller Node       | Storage Node          | Compute Node          |
+=====================+=======================+=======================+=======================+
| Number of servers   | 2                     | 2-9                   | 2-100                 |
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum processor   | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket     |
| class               |                                                                       |
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum memory      | 64 GB                 | 64 GB                 | 32 GB                 |
+---------------------+-----------------------+-----------------------+-----------------------+
| Primary disk        | 500 GB SSD or NVMe    | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
+---------------------+-----------------------+-----------------------+-----------------------+
| Additional disks    | None                  | - 1 or more 500 GB    | - For OpenStack,      |
|                     |                       |   (min. 10K RPM) for  |   recommend 1 or more |
|                     |                       |   Ceph OSD            |   500 GB (min. 10K    |
|                     |                       | - Recommended, but    |   RPM) for VM         |
|                     |                       |   not required: 1 or  |   ephemeral storage   |
|                     |                       |   more SSDs or NVMe   |                       |
|                     |                       |   drives for Ceph     |                       |
|                     |                       |   journals (min. 1024 |                       |
|                     |                       |   MiB per OSD         |                       |
|                     |                       |   journal)            |                       |
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum network     | - Mgmt/Cluster:       | - Mgmt/Cluster:       | - Mgmt/Cluster:       |
| ports               |   1x10GE              |   1x10GE              |   1x10GE              |
|                     | - OAM: 1x1GE          |                       | - Data: 1 or more     |
|                     |                       |                       |   x 10GE              |
+---------------------+-----------------------+-----------------------+-----------------------+
| BIOS settings       | - Hyper-Threading technology enabled                                  |
|                     | - Virtualization technology enabled                                   |
|                     | - VT for directed I/O enabled                                         |
|                     | - CPU power and performance policy set to performance                 |
|                     | - CPU C state control disabled                                        |
|                     | - Plug & play BMC detection disabled                                  |
+---------------------+-----------------------+-----------------------+-----------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@ -0,0 +1,362 @@
==========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Dedicated Storage
==========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------------------
Create a bootable USB with the StarlingX ISO
--------------------------------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-standard-start:
   :end-before: incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-sys-controller-0-standard-start:
   :end-before: incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-storage-start:
   :end-before: incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
------------------------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the storage-0 and
   storage-1 servers. Set the personality to 'storage' for each; hostnames are
   assigned automatically for storage nodes.

   For example, power on storage-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

      system host-update 3 personality=storage

   Repeat for storage-1. Power on storage-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

      system host-update 4 personality=storage

   This initiates the software installation on storage-0 and storage-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the compute-0 and
   compute-1 servers. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on compute-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

      system host-update 5 personality=worker hostname=compute-0

   Repeat for compute-1. Power on compute-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

      system host-update 6 personality=worker hostname=compute-1

   This initiates the install of software on compute-0 and compute-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   compute-0, and compute-1 to complete, for all servers to reboot, and for all to
   show as locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | storage-0    | storage     | locked         | disabled    | online       |
      | 4  | storage-1    | storage     | locked         | disabled    | online       |
      | 5  | compute-0    | compute     | locked         | disabled    | online       |
      | 6  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-start:
   :end-before: incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-start:
   :end-before: incl-unlock-controller-1-end:

-----------------------
Configure storage nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the storage nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for STORAGE in storage-0 storage-1; do
         system interface-network-assign $STORAGE mgmt0 cluster-host
      done

#. Add OSDs to storage-0:

   ::

      HOST=storage-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         # Add each OSD disk (by UUID) to the storage tier
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # Wait for the OSD to leave the 'configuring' state
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to storage-1:

   ::

      HOST=storage-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         # Add each OSD disk (by UUID) to the storage tier
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # Wait for the OSD to leave the 'configuring' state
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

--------------------
Unlock storage nodes
--------------------

Unlock storage nodes in order to bring them into service:

::

   for STORAGE in storage-0 storage-1; do
      system host-unlock $STORAGE
   done

The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

-----------------------
Configure compute nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-label-assign ${COMPUTE} sriovdp=enabled
        done

   * If planning on running DPDK in containers on this host, configure the number
     of 1G huge pages required on both NUMA nodes:

     ::

        for COMPUTE in compute-0 compute-1; do
           # Allocate 100 x 1G huge pages on NUMA node 0 and NUMA node 1
           system host-memory-modify ${COMPUTE} 0 -1G 100
           system host-memory-modify ${COMPUTE} 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Create the data networks in sysinv before referencing them
      # in the ``system host-if-modify`` command
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring interface for: $COMPUTE"
         set -ex
         system host-port-list ${COMPUTE} --nowrap > ${SPL}
         system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
         # Look up the PCI address, port UUID, port name, and interface UUID
         # for each DATA port
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in compute-0 compute-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring Nova local for: $COMPUTE"
         ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${COMPUTE} nova-local
         system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done

--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,66 @@
====================================
Bare metal Standard with Ironic R3.0
====================================

--------
Overview
--------

Ironic is an OpenStack project that provisions bare metal machines. For
information about the Ironic project, see
`Ironic Documentation <https://docs.openstack.org/ironic>`__.

End user applications can be deployed on bare metal servers (instead of
virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or
more bare metal servers.

.. figure:: ../figures/starlingx-deployment-options-ironic.png
   :scale: 90%
   :alt: Standard with Ironic deployment configuration

   *Figure 1: Standard with Ironic deployment configuration*

Bare metal servers must be connected to:

* IPMI for OpenStack Ironic control
* ironic-provisioning-net tenant network via their untagged physical interface,
  which supports PXE booting

As part of configuring OpenStack Ironic in StarlingX:

* An ironic-provisioning-net tenant network must be identified as the boot
  network for bare metal nodes.
* An additional untagged physical interface must be configured on controller
  nodes and connected to the ironic-provisioning-net tenant network. The
  OpenStack Ironic tftpboot server will PXE boot the bare metal servers over
  this interface.

.. note::

   Bare metal servers are NOT:

   * running any OpenStack / StarlingX software; they run end-user applications
     (for example, from Glance images).
   * connected to the internal management network.

------------
Installation
------------

StarlingX currently supports only a bare metal installation of Ironic with a
standard configuration, either:

* :doc:`controller_storage`

* :doc:`dedicated_storage`

This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute node, with the StarlingX OpenStack
application (stx-openstack) applied.

.. toctree::
   :maxdepth: 2

   ironic_hardware
   ironic_install

@ -0,0 +1,51 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal Ironic** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

* One or more bare metal hosts to serve as Ironic nodes hosting tenant instances.

* BMC support on the bare metal hosts, and controller node connectivity to the
  BMC IP addresses of the bare metal hosts.

For controller nodes:

* An additional NIC port on both controller nodes for connecting to the
  ironic-provisioning-net.

For compute nodes:

* If using a flat data network for the Ironic provisioning network, an additional
  NIC port on one of the compute nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network and
  simply add the new data network to an existing interface on the compute node.

* Additional switch ports / configuration for new ports on controller, compute,
  and Ironic nodes, for connectivity to the Ironic provisioning network.

-----------------------------------
BMC configuration of Ironic node(s)
-----------------------------------

Enable BMC and allocate a static IP, username, and password in the BIOS settings.
For example, set:

IP address
   10.10.10.126

username
   root

password
   test123
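
To verify connectivity to the BMC from a controller node, a quick check such as
the following can be used (a sketch assuming the example credentials above and
that ``ipmitool`` is installed):

::

   ipmitool -I lanplus -H 10.10.10.126 -U root -P test123 chassis status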

@ -0,0 +1,389 @@
================================
Install Ironic on StarlingX R3.0
================================

This section describes the steps to install Ironic on a standard configuration,
either:

* **StarlingX R3.0 bare metal Standard with Controller Storage** deployment
  configuration

* **StarlingX R3.0 bare metal Standard with Dedicated Storage** deployment
  configuration

.. contents::
   :local:
   :depth: 1

---------------------
Enable Ironic service
---------------------

This section describes the pre-configuration required to enable the Ironic service.
All the commands in this section are for the StarlingX platform.

First acquire administrative privileges:

::

   source /etc/platform/openrc

********************************
Download Ironic deployment image
********************************

The Ironic service requires a deployment image (kernel and ramdisk) which is
used to clean Ironic nodes and install the end-user's image. The cleaning done
by the deployment image wipes the disks and tests connectivity to the Ironic
conductor on the controller nodes via the Ironic Python Agent (IPA).

The Ironic deployment Stein image (**Ironic-kernel** and **Ironic-ramdisk**)
can be found here:

* `Ironic-kernel coreos_production_pxe-stable-stein.vmlinuz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-stable-stein.vmlinuz>`__
* `Ironic-ramdisk coreos_production_pxe_image-oem-stable-stein.cpio.gz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-stable-stein.cpio.gz>`__
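
For example, both images can be downloaded to the sysadmin home directory on
controller-0 (assuming the controller has external network access):

::

   cd ~
   wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-stable-stein.vmlinuz
   wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-stable-stein.cpio.gz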

*******************************************************
Configure Ironic network on deployed standard StarlingX
*******************************************************

#. Add an address pool for the Ironic network. This example uses `ironic-pool`:

   ::

      system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24

#. Add the Ironic platform network. This example uses `ironic-net`:

   ::

      # Create the ironic-net platform network using the UUID of ironic-pool
      system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false

#. Add the Ironic tenant network. This example uses `ironic-data`:

   .. note::

      The tenant network is not the same as the platform network described in
      the previous step. You can specify any name for the tenant network other
      than 'ironic'. If the name 'ironic' is used, a user override must be
      generated to indicate the tenant network name.

      Refer to section `Generate user Helm overrides`_ for details.

   ::

      system datanetwork-add ironic-data flat

#. Configure the new interfaces (for Ironic) on controller nodes and assign
   them to the platform network. The host must be locked. This example uses the
   platform network `ironic-net` that was named in a previous step.

   These new interfaces to the controllers are used to connect to the Ironic
   provisioning network:

   **controller-0**

   ::

      system interface-network-assign controller-0 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
         --ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-0 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-0

   **controller-1**

   ::

      system interface-network-assign controller-1 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
         --ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-1 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute nodes and
   assign it to the Ironic data network. This example uses the data network
   `ironic-data` that was named in a previous step.

   ::

      system interface-datanetwork-assign compute-0 eno1 ironic-data
      system host-if-modify -n ironicdata -c data compute-0 eno1
****************************
Generate user Helm overrides
****************************

Ironic Helm Charts are included in the stx-openstack application. By default,
Ironic is disabled.

To enable Ironic, update the following Ironic Helm Chart attributes:

::

   system helm-override-update stx-openstack ironic openstack \
   --set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \
   --set network.pxe.neutron_subnet_gateway=10.10.20.1 \
   --set network.pxe.neutron_provider_network=ironic-data

:command:`network.pxe.neutron_subnet_alloc_start` sets the DHCP start IP that
Neutron uses for Ironic node provisioning, and reserves several IPs for the
platform.

If the data network name for Ironic is changed, modify
:command:`network.pxe.neutron_provider_network` in the command above:

::

   --set network.pxe.neutron_provider_network=ironic-data

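Before re-applying the application, you can display the overrides that are now
in effect for the chart (an optional check; output formatting may vary by
release):

::

   system helm-override-show stx-openstack ironic openstack
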
*******************************
Apply stx-openstack application
*******************************

Re-apply the stx-openstack application to apply the changes to Ironic:

::

   system application-apply stx-openstack

--------------------
Start an Ironic node
--------------------

All the commands in this section are for the OpenStack application with
administrative privileges.

From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

::

   mkdir -p /etc/openstack

   tee /etc/openstack/clouds.yaml << EOF
   clouds:
     openstack_helm:
       region_name: RegionOne
       identity_api_version: 3
       endpoint_type: internalURL
       auth:
         username: 'admin'
         password: 'Li69nux*'
         project_name: 'admin'
         project_domain_name: 'default'
         user_domain_name: 'default'
         auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
   EOF

   export OS_CLOUD=openstack_helm

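To confirm that the credentials work before proceeding, you can request a token
(an optional check; :command:`openstack token issue` is a standard OpenStack
client command):

::

   openstack token issue
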
********************
Create Glance images
********************

#. Create the **ironic-kernel** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe-stable-stein.vmlinuz \
      --disk-format aki \
      --container-format aki \
      --public \
      ironic-kernel

#. Create the **ironic-ramdisk** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \
      --disk-format ari \
      --container-format ari \
      --public \
      ironic-ramdisk

#. Create the end user application image (for example, CentOS):

   ::

      openstack image create \
      --file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \
      --disk-format qcow2 \
      --container-format bare \
      --public \
      centos

*********************
Create an Ironic node
*********************

#. Create a node:

   ::

      openstack baremetal node create --driver ipmi --name ironic-test0

#. Add IPMI information:

   ::

      openstack baremetal node set \
      --driver-info ipmi_address=10.10.10.126 \
      --driver-info ipmi_username=root \
      --driver-info ipmi_password=test123 \
      --driver-info ipmi_terminal_port=623 ironic-test0

#. Set the `ironic-kernel` and `ironic-ramdisk` image driver information on
   this bare metal node:

   ::

      openstack baremetal node set \
      --driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \
      --driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \
      ironic-test0

#. Set resource properties on this bare metal node based on actual Ironic node
   capacities:

   ::

      openstack baremetal node set \
      --property cpus=4 \
      --property cpu_arch=x86_64 \
      --property capabilities="boot_option:local" \
      --property memory_mb=65536 \
      --property local_gb=400 \
      --resource-class bm ironic-test0

#. Add the pxe_template location:

   ::

      openstack baremetal node set --driver-info \
      pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \
      ironic-test0

#. Create a port to identify the specific port used by the Ironic node.
   Substitute **a4:bf:01:2b:3b:c8** with the MAC address of the Ironic node
   port that connects to the Ironic network:

   ::

      openstack baremetal port create \
      --node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \
      --pxe-enabled true a4:bf:01:2b:3b:c8

#. Change the node state to `manage`:

   ::

      openstack baremetal node manage ironic-test0

#. Make the node available for deployment:

   ::

      openstack baremetal node provide ironic-test0

#. Wait for the ironic-test0 provision-state to become `available`:

   ::

      openstack baremetal node show ironic-test0

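   If you prefer to poll instead of re-running the command manually, the
   provision state can be watched directly (a sketch; ``-f value -c provision_state``
   are standard OpenStack client output filters):

   ::

      watch -n 10 "openstack baremetal node show ironic-test0 -f value -c provision_state"
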
---------------------------------
Deploy an instance on Ironic node
---------------------------------

All the commands in this section are for the OpenStack application, but this
time with *tenant* specific privileges.

#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

   ::

      mkdir -p /etc/openstack

      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'joeuser'
            password: 'mypasswrd'
            project_name: 'intel'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF

      export OS_CLOUD=openstack_helm

#. Create a flavor.

   Set the resource CUSTOM_BM, corresponding to **--resource-class bm**:

   ::

      openstack flavor create --ram 4096 --vcpus 4 --disk 400 \
      --property resources:CUSTOM_BM=1 \
      --property resources:VCPU=0 \
      --property resources:MEMORY_MB=0 \
      --property resources:DISK_GB=0 \
      --property capabilities:boot_option='local' \
      bm-flavor

   See `Adding scheduling information
   <https://docs.openstack.org/ironic/latest/install/enrollment.html#adding-scheduling-information>`__
   and `Configure Nova flavors
   <https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__
   for more information.

#. Enable the nova-compute service.

   List the compute services:

   ::

      openstack compute service list

   Set the compute service properties:

   ::

      openstack compute service set --enable controller-0 nova-compute

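   Confirm that the nova-compute service now shows as enabled (an optional
   check; ``--service`` is a standard filter for the same list command used
   above):

   ::

      openstack compute service list --service nova-compute
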
#. Create the instance.

   .. note::

      The :command:`keypair create` command is optional. It is not required to
      enable a bare metal instance.

   ::

      openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

   Create 2 new servers, one bare metal and one virtual:

   ::

      openstack server create --image centos --flavor bm-flavor \
      --network baremetal --key-name mykey bm

      openstack server create --image centos --flavor m1.small \
      --network baremetal --key-name mykey vm

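   Both servers should eventually reach ACTIVE status; the bare metal server
   takes considerably longer, as it must PXE boot and deploy the image. A
   simple way to watch progress (an optional check):

   ::

      watch -n 30 openstack server list
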
@ -0,0 +1,17 @@
Prior to starting the StarlingX installation, the bare metal servers must be in
the following condition:

* Physically installed

* Cabled for power

* Cabled for networking

  * Far-end switch ports should be properly configured to realize the networking
    shown in Figure 1.

* All disks wiped

  * This ensures that servers will boot from either the network or USB storage
    (if present)

* Powered off

@ -0,0 +1,23 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers, with each server providing all three cloud functions
(controller, compute, and storage).

An AIO-DX configuration provides the following benefits:

* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple virtual machines on a single pair of
  physical servers
* High availability (HA) services run on the controller function across two
  physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two servers
* Virtual machines scheduled on both compute functions
* Protection against overall server hardware fault, where

  * All controller HA services go active on the remaining healthy server
  * All virtual machines are recovered on the remaining healthy server

.. figure:: ../figures/starlingx-deployment-options-duplex.png
   :scale: 50%
   :alt: All-in-one Duplex deployment configuration

   *Figure 1: All-in-one Duplex deployment configuration*

@ -0,0 +1,18 @@
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single server with the
following benefits:

* Requires only a small amount of cloud processing and storage power
* Application consolidation using multiple virtual machines on a single
  physical server
* A storage backend solution using a single-node CEPH deployment

.. figure:: ../figures/starlingx-deployment-options-simplex.png
   :scale: 50%
   :alt: All-in-one Simplex deployment configuration

   *Figure 1: All-in-one Simplex deployment configuration*

An AIO-SX deployment gives no protection against overall server hardware fault.
Hardware component protection can be enabled with, for example, a hardware RAID
or 2x Port LAG in the deployment.

@ -0,0 +1,22 @@
The Standard with Controller Storage deployment option provides two high
availability (HA) controller nodes and a pool of up to 10 compute nodes.

A Standard with Controller Storage configuration provides the following benefits:

* A pool of up to 10 compute nodes
* High availability (HA) services run across the controller nodes in either
  active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
  controller servers
* Protection against overall controller and compute node failure, where

  * On overall controller node failure, all controller HA services go active on
    the remaining healthy controller node
  * On overall compute node failure, virtual machines and containers are
    recovered on the remaining healthy compute nodes

.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
   :scale: 50%
   :alt: Standard with Controller Storage deployment configuration

   *Figure 1: Standard with Controller Storage deployment configuration*

@ -0,0 +1,17 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, compute, and storage nodes.

A Standard with Dedicated Storage configuration provides the following benefits:

* A pool of up to 100 compute nodes
* A 2x node high availability (HA) controller cluster with HA services running
  across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a 2x to 9x node HA CEPH storage cluster
  that supports a replication factor of two or three
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage nodes

.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
   :scale: 50%
   :alt: Standard with Dedicated Storage deployment configuration

   *Figure 1: Standard with Dedicated Storage deployment configuration*

Binary files not shown (six new figure images added, 70-127 KiB each).

doc/source/deploy_install_guides/r3_release/index.rst (new file, 63 lines)
@ -0,0 +1,63 @@
===========================
StarlingX R3.0 Installation
===========================

.. note::

   StarlingX R3.0 is the upcoming release currently under development. Instructions
   and examples may change with the final release of R3.0.

   Community contributions to the documentation are welcome! If you see an empty
   topic and want to contribute, refer to the linked story in the topic for details.

StarlingX provides a pre-defined set of standard
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment
options may be installed in a virtual environment or on bare metal.

-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment
-----------------------------------------------------

.. toctree::
   :maxdepth: 2

   virtual/aio_simplex
   virtual/aio_duplex
   virtual/controller_storage
   virtual/dedicated_storage

------------------------------------------
Install StarlingX Kubernetes on bare metal
------------------------------------------

.. toctree::
   :maxdepth: 2

   bare_metal/aio_simplex
   bare_metal/aio_duplex
   bare_metal/controller_storage
   bare_metal/dedicated_storage
   bare_metal/ironic

.. toctree::
   :hidden:

   ansible_bootstrap_configs

-----------------
Access Kubernetes
-----------------

.. toctree::
   :maxdepth: 2

   kubernetes_access

-------------------
StarlingX OpenStack
-------------------

.. toctree::
   :maxdepth: 2

   openstack/index

doc/source/deploy_install_guides/r3_release/ipv6_note.txt (new file, 10 lines)
@ -0,0 +1,10 @@
.. note::

   By default, StarlingX uses IPv4. To use StarlingX with IPv6:

   * The entire infrastructure and cluster configuration must be IPv6, with the
     exception of the PXE boot network.

   * Not all external servers are reachable via IPv6 addresses (for example,
     Docker registries). Depending on your infrastructure, it may be necessary
     to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6.

@ -0,0 +1,175 @@
================================
Access StarlingX Kubernetes R3.0
================================

Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
Kubernetes and hosted containerized applications.

.. contents::
   :local:
   :depth: 1

----------
Local CLIs
----------

To access the StarlingX and Kubernetes commands on controller-0, first
follow these steps:

#. Log in to controller-0 via the console or SSH with sysadmin/<sysadmin-password>.

#. Acquire Keystone admin and Kubernetes admin credentials:

   ::

      source /etc/platform/openrc

*********************************************
StarlingX system and host management commands
*********************************************

Access StarlingX system and host management commands using the :command:`system`
command. For example:

::

   system host-list

   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+

Use the :command:`system help` command for the full list of options.

***********************************
StarlingX fault management commands
***********************************

Access StarlingX fault management commands using the :command:`fm` command, for example:

::

   fm alarm-list

*******************
Kubernetes commands
*******************

Access Kubernetes commands using the :command:`kubectl` command, for example:

::

   kubectl get nodes

   NAME           STATUS   ROLES    AGE     VERSION
   controller-0   Ready    master   5d19h   v1.13.5

See https://kubernetes.io/docs/reference/kubectl/overview/ for details.

-----------
Remote CLIs
-----------

Documentation coming soon.

---
GUI
---

*********************
StarlingX Horizon GUI
*********************

Access the StarlingX Horizon GUI in your browser at the following address:

::

   http://<oam-floating-ip-address>:8080

Log in to Horizon with admin/<sysadmin-password>.

********************
Kubernetes dashboard
********************

The Kubernetes dashboard is not installed by default.

To install the Kubernetes dashboard, execute the following steps on controller-0:

#. Use the kubernetes-dashboard helm chart from the stable helm repository with
   the override values shown below:

   ::

      cat <<EOF > dashboard-values.yaml
      service:
        type: NodePort
        nodePort: 30000

      rbac:
        create: true
        clusterAdminRole: true

      serviceAccount:
        create: true
        name: kubernetes-dashboard
      EOF

      helm repo update

      helm install stable/kubernetes-dashboard --name dashboard -f dashboard-values.yaml

#. Create an ``admin-user`` service account with ``cluster-admin`` privileges, and
   display its token for logging into the Kubernetes dashboard:

   ::

      cat <<EOF > admin-login.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
      EOF

      kubectl apply -f admin-login.yaml

      kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

#. Access the Kubernetes dashboard GUI in your browser at the following address:

   ::

      https://<oam-floating-ip-address>:30000

#. Log in with the ``admin-user`` TOKEN.

---------
REST APIs
---------

List the StarlingX platform-related public REST API endpoints using the
following command:

::

   openstack endpoint list | grep public

Use these URLs as the prefix for the URL target of StarlingX Platform Services'
REST API messages.

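As a minimal sketch of such a message (assuming a valid Keystone token in
``TOKEN``, and substituting the platform endpoint URL reported by the command
above for the ``<platform-public-url>`` placeholder), the host inventory can be
queried with `curl`:

::

   curl -H "X-Auth-Token: ${TOKEN}" -H "Accept: application/json" \
       <platform-public-url>/ihosts
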
@ -0,0 +1,7 @@
Your Kubernetes cluster is now up and running.

For instructions on how to access StarlingX Kubernetes, see
:doc:`../kubernetes_access`.

For instructions on how to install and access StarlingX OpenStack, see
:doc:`../openstack/index`.

doc/source/deploy_install_guides/r3_release/openstack/access.rst (new file, 273 lines)
@ -0,0 +1,273 @@
==========================
Access StarlingX OpenStack
==========================

Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications.

.. contents::
   :local:
   :depth: 1

------------------------------
Configure helm endpoint domain
------------------------------

Containerized OpenStack services in StarlingX are deployed behind an ingress
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
The ingress controller routes packets to the specific OpenStack service, such as
the Cinder service or the Neutron service, by parsing the FQDN in the packet.
For example, `neutron.openstack.svc.cluster.local` is for the Neutron service,
and `cinder-api.openstack.svc.cluster.local` is for the Cinder service.

This routing requires that access to OpenStack REST APIs be via an FQDN
or by using a remote OpenStack CLI that uses the REST APIs. You cannot access
OpenStack REST APIs using an IP address.

FQDNs (such as `cinder-api.openstack.svc.cluster.local`) must be in a DNS server
that is publicly accessible.

.. note::

   There is a way to wildcard a set of FQDNs to the same IP address in a DNS
   server configuration so that you don't need to update the DNS server every
   time an OpenStack service is added. Check your particular DNS server for
   details on how to wildcard a set of FQDNs.

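   For example, with dnsmasq a single configuration line maps every name under
   your domain to the OAM floating IP (a sketch; the domain and address shown
   are placeholders for your own values):

   ::

      # dnsmasq.conf: resolve all names under the domain to the OAM floating IP
      address=/my-starlingx-domain.my-company.com/10.10.10.2
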
In a "real" deployment, that is, not a lab scenario, you cannot use the default
`openstack.svc.cluster.local` domain name externally. You must set a unique
domain name for your StarlingX system. StarlingX provides the
:command:`system service-parameter-add` command to configure and set the
OpenStack domain name:

::

   system service-parameter-add openstack helm endpoint_domain=<domain_name>

`<domain_name>` should be a fully qualified domain name that you own, such that
you can configure the DNS server that owns `<domain_name>` with the OpenStack
service names underneath the domain.

For example:

::

   system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
   system application-apply stx-openstack

This command updates the helm charts of all OpenStack services and restarts them.
For example, it would change `cinder-api.openstack.svc.cluster.local` to
`cinder-api.my-starlingx-domain.my-company.com`, and so on for all OpenStack
services.

.. note::

   This command also changes the containerized OpenStack Horizon to listen on
   `horizon.my-starlingx-domain.my-company.com:80` instead of the initial
   `<oam-floating-ip>:31000`.

   You must configure `{ '*.my-starlingx-domain.my-company.com': --> oam-floating-ip-address }`
   in the external DNS server that owns `my-company.com`.

---------
Local CLI
---------

Access OpenStack using the local CLI with the following steps:

#. Log in to controller-0 via the console or SSH with sysadmin/<sysadmin-password>.
   *Do not use* ``source /etc/platform/openrc``.

#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up
   OpenStack admin credentials:

   ::

      sudo su -
      mkdir -p /etc/openstack
      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'admin'
            password: '<sysadmin-password>'
            project_name: 'admin'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF
      exit

      export OS_CLOUD=openstack_helm

**********************
OpenStack CLI commands
**********************

Access OpenStack CLI commands for the StarlingX OpenStack cloud application
using the :command:`openstack` command. For example:

::

   [sysadmin@controller-0 ~(keystone_admin)]$ openstack flavor list
   [sysadmin@controller-0 ~(keystone_admin)]$ openstack image list

----------
Remote CLI
----------

Documentation coming soon.

---
GUI
---

Access the StarlingX containerized OpenStack Horizon GUI in your browser at the
following address:

::

   http://<oam-floating-ip-address>:31000

Log in to the containerized OpenStack Horizon GUI with admin/<sysadmin-password>.

---------
REST APIs
---------

This section provides an overview of accessing REST APIs, with examples of
`curl`-based REST API commands.

****************
Public endpoints
****************

Use the `Local CLI`_ to display OpenStack public REST API endpoints. For example:

::

   openstack endpoint list

The public endpoints will look like:

* `\http://keystone.openstack.svc.cluster.local:80/v3`
* `\http://nova.openstack.svc.cluster.local:80/v2.1/%(tenant_id)s`
* `\http://neutron.openstack.svc.cluster.local:80/`
* `etc.`

If you have set a unique domain name, then the public endpoints will look like:

* `\http://keystone.my-starlingx-domain.my-company.com:80/v3`
* `\http://nova.my-starlingx-domain.my-company.com:80/v2.1/%(tenant_id)s`
* `\http://neutron.my-starlingx-domain.my-company.com:80/`
* `etc.`

Documentation for the OpenStack REST APIs is available at
`OpenStack API Documentation <https://docs.openstack.org/api-quick-start/index.html>`_.

***********
Get a token
***********

The following command will request the Keystone token:

::

   curl -i -H "Content-Type: application/json" -d
   '{ "auth": {
        "identity": {
          "methods": ["password"],
          "password": {
            "user": {
              "name": "admin",
              "domain": { "id": "default" },
              "password": "St8rlingX*"
            }
          }
        },
        "scope": {
          "project": {
            "name": "admin",
            "domain": { "id": "default" }
          }
        }
      }
   }' http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens

The token will be returned in the "X-Subject-Token" header field of the response:

::

   HTTP/1.1 201 CREATED
   Date: Wed, 02 Oct 2019 18:27:38 GMT
   Content-Type: application/json
   Content-Length: 8128
   Connection: keep-alive
   X-Subject-Token: gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S-RdJhg0fnqWuMGyMUhYUUJSossuUIitrvu2VXYXDNPbnaGzFveOoXxYTPlM6Fgo1aCl6wW85zzuXqT6AsxoCn95OMFhj_HHeYNPTkcyjbuW-HH_rJfhuUXt85iytZ_YAQQUfSXM7N3zAk7Pg
   Vary: X-Auth-Token
   x-openstack-request-id: req-d1bbe060-32f0-4cf1-ba1d-7b38c56b79fb

   {"token": {"is_domain": false,

   ...

You can set an environment variable to hold the token value from the response.
For example:

::

   TOKEN=gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S

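If you want to capture the token without copying it by hand, the response
header can be extracted with standard shell tools (a sketch; it assumes the
JSON auth body shown above has been saved to a hypothetical file
``auth-request.json``):

::

   TOKEN=$(curl -si -H "Content-Type: application/json" -d @auth-request.json \
       http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens \
       | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
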
*****************
List Nova flavors
*****************

The following command will request a list of all Nova flavors:

::

   curl -i http://nova.openstack.svc.cluster.local:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool

The list will be returned in the response. For example:

::

     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                    Dload  Upload   Total   Spent    Left  Speed
   100  2529  100  2529    0     0  24187      0 --:--:-- --:--:-- --:--:-- 24317
   {
       "flavors": [
           {
               "id": "04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "bookmark"
                   }
               ],
               "name": "m1.tiny"
           },
           {
               "id": "14c725b1-1658-48ec-90e6-05048d269e89",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "bookmark"
                   }
               ],
               "name": "medium.dpdk"
           },
           {

   ...

@ -0,0 +1,16 @@
===================
StarlingX OpenStack
===================

This section describes the steps to install and access StarlingX OpenStack.
Other than the OpenStack-specific configurations required in the underlying
StarlingX Kubernetes infrastructure (described in the installation steps for
StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX
is independent of deployment configuration.

.. toctree::
   :maxdepth: 2

   install
   access
   uninstall_delete

@ -0,0 +1,65 @@
===========================
Install StarlingX OpenStack
===========================

These instructions assume that you have completed the following
OpenStack-specific configuration tasks, which are required by the underlying
StarlingX Kubernetes platform:

* All nodes have been labeled appropriately for their OpenStack role(s).
* The vSwitch type has been configured.
* The nova-local volume group has been configured on any host running the
  compute function.

--------------------------------------------
Install application manifest and helm-charts
--------------------------------------------

#. Get the StarlingX OpenStack application (stx-openstack) manifest and helm charts.
   These can come from a private StarlingX build or, as shown below, from the public
   CENGN StarlingX build off the ``master`` branch:

   ::

      wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/helm-charts/stx-openstack-1.0-17-centos-stable-latest.tgz

#. Load the stx-openstack application's package into StarlingX. The tarball
   package contains stx-openstack's Airship Armada manifest and stx-openstack's
   set of helm charts:

   ::

      system application-upload stx-openstack-1.0-17-centos-stable-latest.tgz

   This will:

   * Load the Armada manifest and helm charts.
   * Internally manage helm chart override values for each chart.
   * Automatically generate system helm chart overrides for each chart based on
     the current state of the underlying StarlingX Kubernetes platform and the
     recommended StarlingX configuration of OpenStack services.

#. Apply the stx-openstack application in order to bring StarlingX OpenStack into
   service:

   ::

      system application-apply stx-openstack

#. Wait for the activation of stx-openstack to complete.

   This can take 5-10 minutes depending on the performance of your host machine.

   Monitor progress with the command:

   ::

      watch -n 5 system application-list

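   For more detail on the application while you wait, you can also query it
   directly (an optional check; :command:`system application-show` is part of
   the same system CLI):

   ::

      system application-show stx-openstack
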
----------
Next steps
----------

Your OpenStack cloud is now up and running.

See :doc:`access` for details on how to access StarlingX OpenStack.

@ -0,0 +1,33 @@
=============================
Uninstall StarlingX OpenStack
=============================

This section provides additional commands for uninstalling and deleting the
StarlingX OpenStack application.

.. warning::

   Uninstalling the OpenStack application will terminate all OpenStack services.

-----------------------------
Bring down OpenStack services
-----------------------------

Use the system CLI to uninstall the OpenStack application:

::

   system application-remove stx-openstack
   system application-list

---------------------------------------
Delete OpenStack application definition
---------------------------------------

Use the system CLI to delete the OpenStack application definition:

::

   system application-delete stx-openstack
   system application-list

@ -0,0 +1,21 @@
===========================================
Virtual All-in-one Duplex Installation R3.0
===========================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   aio_duplex_environ
   aio_duplex_install_kubernetes

@ -0,0 +1,52 @@
============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual All-in-one Duplex deployment
configuration.

#. Prepare the virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare the virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * duplex-controller-0
   * duplex-controller-1

   The following command will start/virtually power on:

   * The 'duplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c duplex -i ./bootimage.iso

   If there is no X-server present, errors will occur.

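   You can confirm that the virtual servers were defined, and that
   'duplex-controller-0' is running, with a standard libvirt query (an optional
   check):

   ::

      virsh list --all
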
@ -0,0 +1,425 @@
==============================================
Install StarlingX Kubernetes on Virtual AIO-DX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of "Prepare virtual environment and servers", the
controller-0 virtual server 'duplex-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

   virsh console duplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
     The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
     The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
     The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
     The default location where Ansible looks for and imports user
     configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

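   To confirm the assignments before proceeding, list the resulting
   interface-to-network mappings (an optional check; output columns may vary
   by release):

   ::

      system interface-network-list controller-0
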
#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G huge pages are not supported in the virtual environment, and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'. It will
   automatically attempt to network boot over the management network:

   ::

      virsh start duplex-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console duplex-controller-1

   As the controller-1 VM boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list

      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, controller-1 to
   reboot, and controller-1 to show as locked/disabled/online in 'system host-list'.
   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

      system host-list

      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

On virtual controller-0:

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Note that the MGMT interface is partially set up
   automatically by the network install procedure.

   ::

      OAM_IF=enp7s1
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G huge pages are not supported in the virtual environment, and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export COMPUTE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-1
      system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
      system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled
      system host-label-assign controller-1 openstack-compute-node=enabled
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      export COMPUTE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}

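   As a quick sanity check, you can confirm that the volume group and physical
   volume were created (an optional check; both commands are part of the same
   system CLI):

   ::

      system host-lvg-list ${COMPUTE}
      system host-pv-list ${COMPUTE}
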
-------------------
Unlock controller-1
-------------------

Unlock virtual controller-1 in order to bring it into service:

::

   system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,21 @@
============================================
Virtual All-in-one Simplex Installation R3.0
============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   aio_simplex_environ
   aio_simplex_install_kubernetes

@ -0,0 +1,50 @@
============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual All-in-one Simplex deployment
configuration.

#. Prepare the virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare the virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * simplex-controller-0

   The following command will start/virtually power on:

   * The 'simplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c simplex -i ./bootimage.iso

   If there is no X-server present, errors will occur.

@ -0,0 +1,286 @@
==============================================
Install StarlingX Kubernetes on Virtual AIO-SX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of "Prepare virtual environment and servers", the
controller-0 virtual server 'simplex-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

   virsh console simplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
|
||||
Bootstrap system on controller-0
|
||||
--------------------------------
|
||||
|
||||
On virtual controller-0:
|
||||
|
||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
|
||||
When logging in for the first time, you will be forced to change the password.
|
||||
|
||||
::
|
||||
|
||||
Login: sysadmin
|
||||
Password:
|
||||
Changing password for sysadmin.
|
||||
(current) UNIX Password: sysadmin
|
||||
New Password:
|
||||
(repeat) New Password:
|
||||
|
||||
#. External connectivity is required to run the Ansible bootstrap playbook.
|
||||
|
||||
::
|
||||
|
||||
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
|
||||
export DEFAULT_OAM_GATEWAY=10.10.10.1
|
||||
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
|
||||
sudo ip link set up dev enp7s1
|
||||
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
|
||||
|
||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.
|
||||
|
||||
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
|
||||
configuration are:
|
||||
|
||||
``/etc/ansible/hosts``
|
||||
The default Ansible inventory file. Contains a single host: localhost.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
|
||||
The Ansible bootstrap playbook.
|
||||
|
||||
``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
|
||||
The default configuration values for the bootstrap playbook.
|
||||
|
||||
sysadmin home directory ($HOME)
|
||||
The default location where Ansible looks for and imports user
|
||||
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
|
||||
|
||||
|
||||
Specify the user configuration override file for the Ansible bootstrap
|
||||
playbook using one of the following methods:
|
||||
|
||||
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
|
||||
the configurable values as desired (use the commented instructions in
|
||||
the file).
|
||||
|
||||
or
|
||||
|
||||
* Create the minimal user configuration override file as shown in the example
|
||||
below:
|
||||
|
||||
::
|
||||
|
||||
cd ~
|
||||
cat <<EOF > localhost.yml
|
||||
system_mode: simplex
|
||||
|
||||
dns_servers:
|
||||
- 8.8.8.8
|
||||
- 8.8.4.4
|
||||
|
||||
external_oam_subnet: 10.10.10.0/24
|
||||
external_oam_gateway_address: 10.10.10.1
|
||||
external_oam_floating_address: 10.10.10.2
|
||||
|
||||
admin_username: admin
|
||||
admin_password: <sysadmin-password>
|
||||
ansible_become_pass: <sysadmin-password>
|
||||
EOF
|
||||
|
||||
Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
|
||||
for information on additional Ansible bootstrap configurations for advanced
|
||||
Ansible bootstrap scenarios.
|
||||
|
||||
#. Run the Ansible bootstrap playbook:
|
||||
|
||||
::
|
||||
|
||||
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml
|
||||
|
||||
Wait for Ansible bootstrap playbook to complete.
|
||||
This can take 5-10 minutes, depending on the performance of the host machine.
|
||||
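
As referenced in the external connectivity step above, a quick way to confirm
connectivity from controller-0 before (or after) the bootstrap run. This is a
minimal sketch; it assumes the example OAM gateway and DNS servers shown above:

::

   ping -c 3 10.10.10.1    # OAM gateway from the example configuration
   ping -c 3 8.8.8.8       # first DNS server from localhost.yml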

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

      OAM_IF=enp7s1
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G Huge Pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0
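
Optionally confirm the interface and network assignments before proceeding.
This is a sketch using StarlingX CLI queries already shown in this guide;
output columns vary by release:

::

   system host-if-list controller-0              # $OAM_IF should be class 'platform'
   system interface-network-list controller-0    # $OAM_IF should be assigned to 'oam'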

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled
      system host-label-assign controller-0 openstack-compute-node=enabled
      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS, which is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS. Therefore, simply use the default OVS vSwitch here.

#. **For OpenStack only:** Set up a disk partition for the nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

      export COMPUTE=controller-0

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:
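
If you want to confirm the volume group was created, a minimal check (output
field names may differ slightly between releases):

::

   system host-lvg-list controller-0    # 'nova-local' should appear in the list
   system host-pv-list controller-0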

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
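
Once the host is back up, you can confirm it is in service. A minimal check;
re-acquire admin credentials first, since the session was reset by the reboot:

::

   source /etc/platform/openrc
   system host-list    # controller-0 should show unlocked/enabled/available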

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,21 @@

==========================================================
Virtual Standard with Controller Storage Installation R3.0
==========================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   controller_storage_environ
   controller_storage_install_kubernetes

@ -0,0 +1,54 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual Standard with Controller Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * controllerstorage-controller-0
   * controllerstorage-controller-1
   * controllerstorage-worker-0
   * controllerstorage-worker-1

   The following command will start/virtually power on:

   * The 'controllerstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso

   If there is no X-server present, errors are returned.
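
As with the other configurations, you can optionally confirm the definitions
before proceeding (a minimal check using standard libvirt commands):

::

   virsh list --all    # all four 'controllerstorage-*' servers should be defined,
                       # with 'controllerstorage-controller-0' running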

@ -0,0 +1,550 @@

========================================================================
Install StarlingX Kubernetes on Virtual Standard with Controller Storage
========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of "Prepare virtual environment and servers", the
controller-0 virtual server 'controllerstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus to ensure you are at the
   first installer menu.

::

   virsh console controllerstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP here.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS, which is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS. Therefore, simply use the default OVS vSwitch here.

.. incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'controllerstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start controllerstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console controllerstorage-controller-1

   As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the personality
   for 'controllerstorage-worker-0' and 'controllerstorage-worker-1'. Set the
   personality to 'worker' and assign a unique hostname to each.

   For example, start 'controllerstorage-worker-0' from the host:

   ::

      virsh start controllerstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 3 personality=worker hostname=compute-0

   Repeat for 'controllerstorage-worker-1'. On the host:

   ::

      virsh start controllerstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 4 personality=worker hostname=compute-1

#. Wait for the software installation on controller-1, compute-0, and compute-1 to
   complete, for all virtual servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+
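
If you want to watch installation progress on an individual node, the host's
install state can be queried. This is a sketch; the exact field names, such as
install_state, may vary by release:

::

   system host-show controller-1 | grep -i install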

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-virt-controller-storage-start:

Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
attached networks. Note that the MGMT interface is partially set up by the
network install procedure.

::

   OAM_IF=enp7s1
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 mgmt0 cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest/helm-charts later:

::

   system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-virt-controller-storage-start:

Unlock virtual controller-1 to bring it into service:

::

   system host-unlock controller-1

Controller-1 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure compute nodes
-----------------------

On virtual controller-0:

#. Add the third Ceph monitor to compute-0:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

      system ceph-mon-add compute-0

#. Wait for the compute node monitor to complete configuration:

   ::

      system ceph-mon-list
      +--------------------------------------+-------+--------------+------------+------+
      | uuid                                 | ceph_ | hostname     | state      | task |
      |                                      | mon_g |              |            |      |
      |                                      | ib    |              |            |      |
      +--------------------------------------+-------+--------------+------------+------+
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
      +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the compute nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes (a verification sketch follows
   this procedure).

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G Huge Pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring interface for: $COMPUTE"
         set -ex
         system host-port-list ${COMPUTE} --nowrap > ${SPL}
         system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
      done
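
Optionally confirm the data network and interface configuration, as referenced
above. A minimal sketch using StarlingX CLI queries already used in this guide:

::

   system datanetwork-list           # physnet0 and physnet1 should be listed
   system host-if-list compute-0     # data0/data1 should be class 'data'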

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in compute-0 compute-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up disk partitions for the nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks:

   ::

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring Nova local for: $COMPUTE"
         ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${COMPUTE} nova-local
         system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done
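
To confirm the volume groups were created on both compute nodes, a minimal
check:

::

   for COMPUTE in compute-0 compute-1; do
      system host-lvg-list $COMPUTE    # 'nova-local' should appear for each node
   done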

--------------------
Unlock compute nodes
--------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual compute nodes to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:

----------------------------
Add Ceph OSDs to controllers
----------------------------

On virtual controller-0:

#. Add OSDs to controller-0:

   ::

      HOST=controller-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to controller-1:

   ::

      HOST=controller-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST
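
Once the OSDs are configured, overall Ceph cluster health can be checked from
controller-0 using the standard Ceph CLI (a sketch; the cluster may take a
short time to settle):

::

   ceph -s    # expect HEALTH_OK once all OSDs are up and in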

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,21 @@

=========================================================
Virtual Standard with Dedicated Storage Installation R3.0
=========================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 2

   dedicated_storage_environ
   dedicated_storage_install_kubernetes

@ -0,0 +1,56 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual Standard with Dedicated Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

      bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * dedicatedstorage-controller-0
   * dedicatedstorage-controller-1
   * dedicatedstorage-storage-0
   * dedicatedstorage-storage-1
   * dedicatedstorage-worker-0
   * dedicatedstorage-worker-1

   The following command will start/virtually power on:

   * The 'dedicatedstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

      bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso

   If there is no X-server present, errors are returned.
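
You can optionally confirm the six definitions before proceeding (a minimal
libvirt check):

::

   virsh list --all    # all six 'dedicatedstorage-*' servers should be defined,
                       # with 'dedicatedstorage-controller-0' running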

@ -0,0 +1,390 @@

=======================================================================
Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage
=======================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of "Prepare virtual environment and servers", the
controller-0 virtual server 'dedicatedstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus to ensure you are at the
   first installer menu.

::

   virsh console dedicatedstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
   :end-before: incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-virt-controller-storage-start:
   :end-before: incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
------------------------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'dedicatedstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start dedicatedstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console dedicatedstorage-controller-1

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates software installation on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the personality
   for 'dedicatedstorage-storage-0' and 'dedicatedstorage-storage-1'. Set the
   personality to 'storage' and assign a unique hostname to each.

   For example, start 'dedicatedstorage-storage-0' from the host:

   ::

      virsh start dedicatedstorage-storage-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 3 personality=storage

   Repeat for 'dedicatedstorage-storage-1'. On the host:

   ::

      virsh start dedicatedstorage-storage-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 4 personality=storage

   This initiates software installation on storage-0 and storage-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the personality
   for 'dedicatedstorage-worker-0' and 'dedicatedstorage-worker-1'. Set the
   personality to 'worker' and assign a unique hostname to each.

   For example, start 'dedicatedstorage-worker-0' from the host:

   ::

      virsh start dedicatedstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 5 personality=worker hostname=compute-0

   Repeat for 'dedicatedstorage-worker-1'. On the host:

   ::

      virsh start dedicatedstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 6 personality=worker hostname=compute-1

   This initiates software installation on compute-0 and compute-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   compute-0, and compute-1 to complete, for all virtual servers to reboot, and for all
   to show as locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | storage-0    | storage     | locked         | disabled    | online       |
      | 4  | storage-1    | storage     | locked         | disabled    | online       |
      | 5  | compute-0    | compute     | locked         | disabled    | online       |
      | 6  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-virt-controller-storage-start:
   :end-before: incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-virt-controller-storage-start:
   :end-before: incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure storage nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the storage nodes.

   Note that the MGMT interfaces are partially set up by the network install procedure.

   ::

      for STORAGE in storage-0 storage-1; do
         system interface-network-assign $STORAGE mgmt0 cluster-host
      done

#. Add OSDs to storage-0:

   ::

      HOST=storage-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
      done

      system host-stor-list $HOST

#. Add OSDs to storage-1:

   ::

      HOST=storage-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
      done

      system host-stor-list $HOST
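
Once both storage hosts are configured, the standard Ceph CLI can confirm that
the OSDs joined the cluster (a sketch, run from controller-0):

::

   ceph osd tree    # OSDs on storage-0 and storage-1 should be listed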

--------------------
Unlock storage nodes
--------------------

Unlock virtual storage nodes to bring them into service:

::

   for STORAGE in storage-0 storage-1; do
      system host-unlock $STORAGE
   done

The storage nodes will reboot to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the host machine.

-----------------------
Configure compute nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the compute nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

   1G Huge Pages are not supported in the virtual environment and there is no
   virtual NIC supporting SRIOV. For that reason, data interfaces are not
   applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

   Configure the datanetworks in sysinv, prior to referencing them in the
   :command:`system host-if-modify` command.

   ::

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring interface for: $COMPUTE"
         set -ex
         system host-port-list ${COMPUTE} --nowrap > ${SPL}
         system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
         system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
         system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
         system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
         system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
         set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in compute-0 compute-1; do
         system host-label-assign $NODE openstack-compute-node=enabled
         system host-label-assign $NODE openvswitch=enabled
         system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up disk partitions for the nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks:

   ::

      for COMPUTE in compute-0 compute-1; do
         echo "Configuring Nova local for: $COMPUTE"
         ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${COMPUTE} nova-local
         system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done

--------------------
Unlock compute nodes
--------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
   :end-before: incl-unlock-compute-nodes-virt-controller-storage-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@ -0,0 +1,75 @@

The following sections describe system requirements and host setup for a
workstation hosting virtual machine(s) where StarlingX will be deployed.

*********************
Hardware requirements
*********************

The host system should have at least:

* **Processor:** x86_64 is the only supported architecture, with hardware
  virtualization extensions enabled in the BIOS

* **Cores:** 8

* **Memory:** 32GB RAM

* **Hard Disk:** 500GB HDD

* **Network:** One network adapter with active Internet connection
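
To confirm that the processor meets the virtualization requirement, a quick
check (the cpu-checker package providing kvm-ok is an Ubuntu tool, not part of
StarlingX):

::

   # Count CPU flags indicating Intel VT-x (vmx) or AMD-V (svm) support
   egrep -c '(vmx|svm)' /proc/cpuinfo

   # Or use kvm-ok from Ubuntu's cpu-checker package
   sudo apt-get install -y cpu-checker
   sudo kvm-ok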

*********************
Software requirements
*********************

The host system should have at least:

* A workstation computer with Ubuntu 16.04 LTS 64-bit

All other required packages will be installed by scripts in the StarlingX tools repository.

**********
Host setup
**********

Set up the host with the following steps:

#. Update the package index:

   ::

      apt-get update

#. Clone the StarlingX tools repository:

   ::

      apt-get install -y git
      cd $HOME
      git clone https://opendev.org/starlingx/tools

#. Install required packages:

   ::

      cd $HOME/tools/deployment/libvirt/
      bash install_packages.sh
      apt install -y apparmor-profiles
      apt-get install -y ufw
      ufw disable
      ufw status

   .. note::

      On Ubuntu 16.04, if apparmor-profile modules were installed as shown in
      the example above, you must reboot the server to fully install the
      apparmor-profile modules.

#. Get the StarlingX ISO. This can come from a private StarlingX build or, as
   shown below, from a public CENGN StarlingX build:

   ::

      wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso
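
After the download completes, you can sanity-check the image before using it
(generic commands; if a checksum is published alongside the ISO, compare
against it as well):

::

   ls -lh bootimage.iso     # confirm a non-trivial file size
   file bootimage.iso       # should report an ISO 9660 boot image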