Planning guide

Incorporated minor comments on patchset 2.
Incorporated comments on patchset 1.
Stripped trailing whitespace.
Initial review submission of Planning guide (Kubernetes)

Signed-off-by: Stone <ronald.stone@windriver.com>
Change-Id: I1f5fc7a7747098090a4ac7fd84558c27c959804f
Stone 2020-12-29 14:48:47 -05:00
parent 0123636bd3
commit e8d36f6507
55 changed files with 2336 additions and 14 deletions


@ -0,0 +1,2 @@
.. [#] See |datanet-doc| for additional information.
.. [#] PCI passthrough support for Mellanox CX3 is deprecated.


@ -39,9 +39,18 @@ Learn more about StarlingX:
introduction/index
---------------
Planning guides
---------------
--------
Planning
--------
.. toctree::
:maxdepth: 2
planning/index
-----------------
Deployment guides
-----------------
.. toctree::
:maxdepth: 2


@ -0,0 +1,3 @@
{
"restructuredtext.confPath": "/mnt/c/Users/rstone/Desktop/upstream/planning/docs/doc/source"
}

\(14 binary image files added for the Planning guide figures; not shown.\)


@ -0,0 +1,122 @@
.. Planning file, created by
sphinx-quickstart on Thu Sep 3 15:14:59 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
========
Planning
========
----------
Kubernetes
----------
************
Introduction
************
.. toctree::
:maxdepth: 1
kubernetes/overview-of-starlingx-planning
****************
Network planning
****************
.. toctree::
:maxdepth: 1
kubernetes/network-requirements
kubernetes/networks-for-a-simplex-system
kubernetes/networks-for-a-duplex-system
kubernetes/networks-for-a-system-with-controller-storage
kubernetes/networks-for-a-system-with-dedicated-storage
kubernetes/network-requirements-ip-support
kubernetes/network-planning-the-pxe-boot-network
kubernetes/the-cluster-host-network
kubernetes/the-storage-network
Internal management network
***************************
.. toctree::
:maxdepth: 1
kubernetes/the-internal-management-network
kubernetes/internal-management-network-planning
kubernetes/multicast-subnets-for-the-management-network
OAM network
***********
.. toctree::
:maxdepth: 1
kubernetes/about-the-oam-network
kubernetes/oam-network-planning
kubernetes/dns-and-ntp-servers
kubernetes/network-planning-firewall-options
L2 access switches
******************
.. toctree::
:maxdepth: 1
kubernetes/l2-access-switches
kubernetes/redundant-top-of-rack-switch-deployment-considerations
Ethernet interfaces
*******************
.. toctree::
:maxdepth: 1
kubernetes/about-ethernet-interfaces
kubernetes/network-planning-ethernet-interface-configuration
kubernetes/the-ethernet-mtu
kubernetes/shared-vlan-or-multi-netted-ethernet-interfaces
****************
Storage planning
****************
.. toctree::
:maxdepth: 1
kubernetes/storage-planning-storage-resources
kubernetes/storage-planning-storage-on-controller-hosts
kubernetes/storage-planning-storage-on-worker-hosts
kubernetes/storage-planning-storage-on-storage-hosts
kubernetes/external-netapp-trident-storage
*****************
Security planning
*****************
.. toctree::
:maxdepth: 1
kubernetes/security-planning-uefi-secure-boot-planning
kubernetes/tpm-planning
**********************************
Installation and resource planning
**********************************
.. toctree::
:maxdepth: 1
kubernetes/installation-and-resource-planning-https-access-planning
kubernetes/starlingx-hardware-requirements
kubernetes/verified-commercial-hardware
kubernetes/starlingx-boot-sequence-considerations
kubernetes/hard-drive-options
kubernetes/controller-disk-configurations-for-all-in-one-systems
---------
OpenStack
---------
Coming soon.


@ -0,0 +1,28 @@
.. buu1552671069267
.. _about-ethernet-interfaces:
=========================
About Ethernet Interfaces
=========================
Ethernet interfaces, both physical and virtual, play a key role in the overall
performance of the virtualized network. It is important to understand the
available interface types, their configuration options, and their impact on
network design.
.. _about-ethernet-interfaces-section-N1006F-N1001A-N10001:
-----------------------
About LAG/AE interfaces
-----------------------
You can use |LAG| for Ethernet interfaces. |prod| supports up to four ports in
a |LAG| group.
Ethernet interfaces in a |LAG| group can be attached either to the same L2
switch, or to multiple switches in a redundant configuration. For more
information about L2 switch configurations, see |planning-doc|: :ref:`L2 Access
Switches <l2-access-switches>`.
.. xbooklink For information about the different |LAG| modes, see |node-doc|: :ref:`Link Aggregation Settings <link-aggregation-settings>`.
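The available |LAG| modes are covered in the linked reference. As a rough sketch of the provisioning workflow, an aggregated interface might be created as follows. The interface and port names are placeholders, and the exact option syntax is an assumption to verify with :command:`system help host-if-add`:

.. code-block:: none

   # Sketch only: create an aggregated (ae) interface named ae0 on controller-0
   # from two member ports, using balanced mode with a layer2 hash policy.
   $ system host-if-add controller-0 -a balanced -x layer2 ae0 ae enp0s8 enp0s9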


@ -0,0 +1,63 @@
.. ozd1552671198357
.. _about-the-oam-network:
=====================
About the OAM Network
=====================
The |OAM| network provides external control and management access to the system.
You should ensure that the following services are available on the |OAM|
Network:
**DNS Service**
Needed to facilitate the name resolution of servers reachable on the |OAM|
Network.
|prod| can operate without a configured DNS service. However, a DNS service
should be in place to ensure that links to external references in the
current and future versions of the Horizon Web interface work as expected.
**Docker Registry Service**
A private or public Docker registry service is needed to serve remote
container image requests from Kubernetes and the underlying Docker service.
This remote Docker registry must hold the required |prod| container images
for the appropriate release, to fully install a |prod| system.
**NTP Service**
|NTP| can be used by the |prod| controller nodes to synchronize their local
clocks with a reliable external time reference. |org| strongly recommends
that this service be available to ensure that system-wide log reports
present a unified view of the day-to-day operations.
The |prod| worker nodes and storage nodes always use the controller nodes
as the de facto time server for the entire |prod| cluster.
**PTP Service**
As an alternative to |NTP| services, |PTP| can be used by the |prod|
controller nodes to synchronize clocks in a network. It provides:
- more accurate clock synchronization
- the ability to extend the clock synchronization, not only to |prod|
hosts \(controllers, workers, and storage nodes\), but also to hosted
applications on |prod| hosts.
When used in conjunction with hardware support on the |OAM| and Management
network interface cards, |PTP| is capable of sub-microsecond accuracy.
|org| strongly recommends that this service, or |NTP|, if available, be
used to ensure that system-wide log reports present a unified view of the
day-to-day operations, and that other time-sensitive operations are
performed accurately.
Various |NICs| and network switches provide the hardware support for |PTP|
used by |OAM| and Management networks. This hardware provides an on-board
clock that is synchronized to the |PTP| master. The computer's system clock
is synchronized to the |PTP| hardware clock on the |NIC| used to stamp
transmitted and received |PTP| messages. For more information, see the
*IEEE 1588-2002* standard.
.. note::
|NTP| and |PTP| can be configured on a per-host basis.


@ -0,0 +1,48 @@
.. kll1552672476085
.. _controller-disk-configurations-for-all-in-one-systems:
=====================================================
Controller Disk Configurations for All-in-one Systems
=====================================================
For |prod| Simplex and Duplex Systems, the controller disk configuration is
highly flexible to support different system requirements for Cinder and
nova-local storage.
You can also change the disk configuration after installation to increase the
persistent volume claim or container-ephemeral storage.
.. _controller-disk-configurations-for-all-in-one-systems-table-h4n-rmg-3jb:
.. table:: Table 1. Disk Configurations for |prod| Simplex or Duplex systems
:widths: auto
+--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
| No. of Disks | Disk | BIOS Boot | Boot | Root | Platform File System Volume Group \(cgts-vg\) | Root Disk Unallocated Space | Ceph OSD \(PVCs\) | Notes |
+==============+==========+===========+===========+===========+===============================================+=============================+===================+=========================================================================+
| 1 | /dev/sda | | | | | | | Not supported |
+--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
| 2 | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4 | Not allocated | Disk | Space left unallocated for future application use |
| | | | | | | | | |
| | /dev/sdb | | | | | | | AIO-SX [#fntarg1]_ \(replication = 1\); AIO-DX \(replication = 2\) |
+--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
| 2 | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4 | /dev/sda5 \(cgts-vg\) | Disk | Space allocated to cgts-vg to allow filesystem expansion |
| | | | | | | | | |
| | /dev/sdb | | | | | | | AIO-SX \(replication = 1\); AIO-DX \(replication = 2\) |
+--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
| 3 | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4 | Not allocated | Disk | Space left unallocated for future application use |
| | | | | | | | | |
|              | /dev/sdb |           |           |           |                                               |                             | Disk              | AIO-SX [#fntarg1]_ \(replication = 2\); AIO-DX \(replication = 2\)      |
| | | | | | | | | |
|              | /dev/sdc |           |           |           |                                               |                             |                   | AIO-SX [#fntarg1]_ \(replication = 2\); AIO-DX \(replication = 2\)      |
+--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
| 3 | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4 | /dev/sda5 \(cgts-vg\) | Disk | Space allocated to cgts-vg to allow filesystem expansion |
| | | | | | | | | |
|              | /dev/sdb |           |           |           |                                               |                             | Disk              | AIO-SX [#fntarg1]_ \(replication = 2\); AIO-DX \(replication = 2\)      |
| | | | | | | | | |
|              | /dev/sdc |           |           |           |                                               |                             |                   | AIO-SX [#fntarg1]_ \(replication = 2\); AIO-DX \(replication = 2\)      |
+--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
.. [#fntarg1] |AIO|-Simplex Ceph replication is disk-based.
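As a sketch of how the Ceph |OSD| disks in the table are typically assigned after installation \(the disk |UUID| is a placeholder, and the exact syntax should be verified for your release\):

.. code-block:: none

   # List the physical disks on controller-0 and note the UUID of /dev/sdb.
   $ system host-disk-list controller-0

   # Assign /dev/sdb (by UUID) as a Ceph OSD backing persistent volume claims.
   $ system host-stor-add controller-0 osd <disk-uuid>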


@ -0,0 +1,14 @@
.. olk1552671165229
.. _dns-and-ntp-servers:
===================
DNS and NTP Servers
===================
|prod| supports configuring up to three remote DNS servers and three remote
|NTP| servers to use for name resolution and network time synchronization
respectively.
These can be specified during Ansible bootstrapping of controller-0 or at a
later time.
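For example, both lists can be updated from the CLI after bootstrap; the server addresses below are placeholders:

.. code-block:: none

   # Configure up to three remote DNS servers.
   $ system dns-modify nameservers=8.8.8.8,8.8.4.4

   # Configure up to three remote NTP servers.
   $ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org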


@ -0,0 +1,28 @@
.. pci1585052341505
.. _external-netapp-trident-storage:
===============================
External NetApp Trident Storage
===============================
|prod| can utilize the open source NetApp Trident block storage backend as an
alternative to Ceph-based internal block storage.
NetApp Trident supports:
.. _external-netapp-trident-storage-d247e23:
- |AWS| Cloud Volumes
- E and EF-Series SANtricity
- ONTAP AFF, FAS, Select, and Cloud
- Element HCI and SolidFire
- Azure NetApp Files service
For more information about Trident, see `https://netapp-trident.readthedocs.io
<https://netapp-trident.readthedocs.io>`__.


@ -0,0 +1,28 @@
.. lqo1552672461538
.. _hard-drive-options:
==================
Hard Drive Options
==================
For hard drive storage, |prod| supports high-performance |SSD| and |NVMe|
drives as well as rotational disks.
To increase system performance, you can use an |SSD| or an |NVMe| drive on |prod|
hosts in place of any rotational drive. |SSD| provides faster read-write access
than mechanical drives. |NVMe| supports the full performance potential of |SSD|
by providing a faster communications bus compared to the |SATA| or |SAS|
technology used with standard |SSDs|.
On storage hosts, |SSD| or |NVMe| drives are required for journals or Ceph
caching.
.. xrefbook For more information about these features, see |stor-doc|: :ref:`Storage on Storage Hosts <storage-hosts-storage-on-storage-hosts>`.
For |NVMe| drives, a host with an |NVMe|-ready BIOS and |NVMe| connectors or
adapters is required.
To use an |NVMe| drive as a root drive, you must enable |UEFI| support in the
host BIOS. In addition, when installing the host, you must perform extra steps
to assign the drive as the boot device.


@ -0,0 +1,109 @@
.. cxj1582060027471
.. _installation-and-resource-planning-https-access-planning:
=====================
HTTPS Access Planning
=====================
You can enable secure HTTPS access and manage HTTPS certificates for all
external |prod| service endpoints.
These include:
.. _installation-and-resource-planning-https-access-planning-d18e34:
.. contents::
:local:
:depth: 1
.. note::
Only self-signed or Root |CA|-signed certificates are supported for the
above |prod| service endpoints. See `https://en.wikipedia.org/wiki/X.509
<https://en.wikipedia.org/wiki/X.509>`__ for an overview of root,
intermediate, and end-entity certificates.
You can also add a trusted |CA| for the |prod| system.
.. note::
The default HTTPS X.509 certificates that are used by |prod-long| for
authentication are not signed by a known authority. For increased security,
obtain, install, and use certificates that have been signed by a Root
certificate authority. Refer to the documentation for the external Root
|CA| that you are using for details on creating public certificate and
private key pairs, signed by a Root |CA|, for HTTPS.
.. _installation-and-resource-planning-https-access-planning-d18e75:
-----------------------------------------------------------------
StarlingX REST API applications and the web administration server
-----------------------------------------------------------------
By default, |prod| provides HTTP access to StarlingX REST API application
endpoints \(Keystone, Barbican and StarlingX\) and the StarlingX web
administration server. For improved security, you can enable HTTPS access. When
HTTPS access is enabled, HTTP access is disabled.
When HTTPS is enabled for the first time on a |prod| system, a self-signed
certificate and key are automatically generated and installed for the StarlingX
REST and Web Server endpoints. In order to connect, remote clients must be
configured to accept the self-signed certificate without verifying it. This is
called *insecure mode*.
For secure mode connections, a Root |CA|-signed certificate and key are
required. The use of a Root |CA|-signed certificate is strongly recommended.
Refer to the documentation for the external |CA| that you are using for details
on creating public certificate and private key pairs for HTTPS.
You can update the certificate and key used by |prod| for the StarlingX REST
and Web Server endpoints at any time after installation.
For additional security, |prod| optionally supports storing the private key of
the StarlingX REST and Web Server certificate in a StarlingX |TPM| hardware
device. |TPM| 2.0-compliant hardware must be available on the controller hosts.
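As an illustration of updating the certificate after installation \(the file name is a placeholder; the file must contain both the certificate and its private key, and the option syntax should be verified for your release\):

.. code-block:: none

   # Install a Root CA-signed certificate and private key for the StarlingX
   # REST API and web administration server endpoints.
   $ system certificate-install -m ssl server-cert-with-key.pem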
.. _installation-and-resource-planning-https-access-planning-d18e105:
----------
Kubernetes
----------
For the Kubernetes API Server, HTTPS is always enabled. Similarly, by default,
a self-signed certificate and key are generated and installed for the Kubernetes
Root |CA| certificate and key. This Kubernetes Root |CA| is used to create and
sign various certificates used within Kubernetes, including the certificate
used by the kube-apiserver API endpoint.
It is recommended that you update the Kubernetes Root |CA| with a custom
Root |CA| certificate and key, generated by yourself and trusted by external
servers connecting to the |prod| system's Kubernetes API endpoint. The system's
Kubernetes Root |CA| is configured as part of the bootstrap during
installation.
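For example, a custom Kubernetes Root |CA| can be supplied in the Ansible bootstrap overrides file. The override names below are assumptions to verify against the bootstrap playbook for your release:

.. code-block:: none

   # Excerpt from ~/localhost.yml used during Ansible bootstrap.
   k8s_root_ca_cert: /home/sysadmin/k8s-root-ca.crt
   k8s_root_ca_key: /home/sysadmin/k8s-root-ca.key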
.. _installation-and-resource-planning-https-access-planning-d18e117:
---------------------
Local Docker registry
---------------------
For the local Docker registry, HTTPS is always enabled. Similarly, by default,
a self-signed certificate and key are generated and installed for this endpoint.
However, it is recommended that you update the certificate used after
installation with a Root |CA|-signed certificate and key. Refer to the
documentation for the external |CA| that you are using for details on creating
public certificate and private key pairs for HTTPS.
.. _installation-and-resource-planning-https-access-planning-d18e126:
-----------
Trusted CAs
-----------
|prod| also supports the ability to update the trusted |CA| certificate bundle
on all nodes in the system. This is required, for example, when container
images are being pulled from an external Docker registry with a certificate
signed by a non-well-known |CA|.
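For example, assuming the |CA| certificate has been copied to the controller \(the file name is a placeholder, and the option syntax should be verified for your release\):

.. code-block:: none

   # Install a trusted CA certificate so that all nodes accept images pulled
   # from a registry whose certificate is signed by this CA.
   $ system certificate-install -m ssl_ca external-registry-ca.pem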


@ -0,0 +1,57 @@
.. lla1552670572043
.. _internal-management-network-planning:
====================================
Internal Management Network Planning
====================================
The internal management network is a private network, visible only to the hosts
in the cluster.
.. note::
This network is not used with |prod| Simplex systems.
You must consider the following guidelines:
.. _internal-management-network-planning-ul-gqd-gj2-4n:
- The internal management network is used for |PXE| booting of new hosts, and
must be untagged. It is limited to IPv4, because the |prod| installer does
not support IPv6 |PXE| booting. For example, if the internal management
network needs to be on a |VLAN|-tagged network for deployment reasons, or
if it must support IPv6, you can configure the optional untagged |PXE| boot
network for |PXE| booting of new hosts using IPv4.
- You can use any 1GE or 10GE interface on the hosts to connect to this
network, provided that the interface supports network booting and can be
configured from the BIOS as the primary boot device.
- If static IP address assignment is used, you must use the :command:`system
host-add` command to add new hosts and assign IP addresses manually \(see
the example following this list\). In this mode, new hosts are *not*
automatically added to the inventory when they are powered on, and they
display the following message on the host console:
.. code-block:: none
This system has been configured with static management
and infrastructure IP address allocation. This requires
that the node be manually provisioned in System
Inventory using the 'system host-add' CLI, GUI, or
stx API equivalent.
- For the IPv4 address plan, use a private IPv4 subnet as specified in RFC
1918. This helps prevent unwanted cross-network traffic on this network.
It is suggested that you use the default subnet and addresses provided by
the controller configuration script.
- You can assign a range of addresses on the management subnet for use by
|prod|. If you do not assign a range, |prod| takes ownership of all
available addresses.
- Systems with two controllers use IP multicast messaging on the internal
management network. To prevent loss of controller synchronization, ensure
that the switches and other devices on this network are configured with
appropriate settings.
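The following sketch shows manual provisioning of a worker host with :command:`system host-add`, as referenced in the list above. The |MAC| and IP values are placeholders, and the exact option names should be confirmed with :command:`system help host-add`:

.. code-block:: none

   # Manually provision a new worker host with a static management IP address.
   $ system host-add -n worker-0 -p worker -m 08:00:27:aa:bb:cc -i 192.168.204.50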


@ -0,0 +1,61 @@
.. kvt1552671101079
.. _l2-access-switches:
==================
L2 Access Switches
==================
L2 access switches connect the |prod| hosts to the different networks. Correct
configuration of the access ports is necessary to ensure proper traffic flow.
One or more L2 switches can be used to connect the |prod| hosts to the
different networks. When sharing a single L2 switch, you must ensure proper
isolation of network traffic. A sample configuration for a shared L2 switch
could include:
.. _l2-access-switches-ul-obf-dyr-4n:
- one port-based |VLAN| for the internal management network, with the internal
cluster host network sharing the same L2 network \(default configuration\)
- one port-based |VLAN| for the |OAM| network
- one or more sets of |VLANs| for additional networks for external network
connectivity
When using multiple L2 switches, there are several deployment possibilities:
.. _l2-access-switches-ul-qmd-wyr-4n:
- A single L2 switch for the internal management, cluster host, and |OAM|
networks. Port-based or |MAC|-based network isolation is mandatory.
- An additional L2 switch for one or more additional networks providing
external network connectivity.
- Redundant L2 switches to support link aggregation, using either a failover
model, or |VPC| for more robust redundancy. For more information, see
:ref:`Redundant Top-of-Rack Switch Deployment Considerations
<redundant-top-of-rack-switch-deployment-considerations>`.
Switch ports that send tagged traffic are referred to as trunk ports. They
participate in |STP| from the moment the link goes up, which results in a delay
of several seconds before the trunk port moves to the forwarding state. This
delay impacts services such as |DHCP| and |PXE| that are used during regular
operation of |prod|.
Consider configuring the switch ports to which the management interfaces are
attached so that they transition to the forwarding state immediately after the
link goes up. This option is commonly referred to as PortFast.
You should also consider preventing these ports from participating in any |STP|
exchanges, by configuring them not to process inbound and outbound |STP| |BPDU|
packets. Consult your switch's manual for details; a sketch for a Cisco-style
switch follows.
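For example, on a Cisco IOS-style switch the equivalent per-port configuration might look like the following; the syntax varies by vendor, so treat this as a sketch:

.. code-block:: none

   ! Edge port attached to a StarlingX management interface.
   interface GigabitEthernet1/0/1
    spanning-tree portfast
    ! Optionally stop processing STP BPDUs on this edge port entirely.
    spanning-tree bpdufilter enable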
.. seealso::
:ref:`Redundant Top-of-Rack Switch Deployment Considerations <redundant-top-of-rack-switch-deployment-considerations>`


@ -0,0 +1,54 @@
.. osh1552670597082
.. _multicast-subnets-for-the-management-network:
============================================
Multicast Subnets for the Management Network
============================================
A multicast subnet specifies the range of addresses that the system can use for
multicast messaging on the network. You can use this subnet to prevent
multicast leaks in multi-region environments. Addresses for the affected
services are allocated automatically from the subnet.
The requirements for multicast subnets are as follows:
.. _multicast-subnets-for-the-management-network-ul-ubf-ytc-b1b:
- IP multicast addresses must be in the range of 224.0.0.0 through
239.255.255.255
For IPv6, the recommended range is ffx5::/16.
- IP multicast address ranges for a particular region must not conflict or
overlap with the IP multicast address ranges of other regions.
- IP multicast address ranges must not conflict or overlap with the
well-known multicast addresses listed at:
`https://en.wikipedia.org/wiki/Multicast_address
<https://en.wikipedia.org/wiki/Multicast_address>`__
- IP multicast addresses must be unique within the network.
- The lower 23 bits of the IP multicast address, which are used to construct
the multicast MAC address, must be unique within the network.
- When interfaces of different regions are on the same L2 network / IP
subnet, a separate multicast subnet is required for each region.
- The minimum multicast network range is 16 host entries.
.. note::
Addresses used within the IP multicast address range apply to services
using IP multicast, not to hosts.
.. warning::
|ToR| switches with snooping enabled on this network segment require an
|IGMP|/|MLD| querier on that network to prevent nodes from being dropped
from the multicast group.
The default setting for the multicast subnet is 239.1.1.0/28. The default for
IPv6 is ff05::14:1:1:0/124.
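For example, a non-default management multicast subnet can be set in the Ansible bootstrap overrides; the override name below is an assumption to verify against the bootstrap playbook for your release:

.. code-block:: none

   # Excerpt from ~/localhost.yml used during Ansible bootstrap.
   management_multicast_subnet: 239.96.1.0/28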


@ -0,0 +1,53 @@
.. elj1552671053086
.. _network-planning-ethernet-interface-configuration:
================================
Ethernet Interface Configuration
================================
You can review and modify the configuration for physical or virtual Ethernet
interfaces using the Horizon Web interface or the CLI.
.. _network-planning-ethernet-interface-configuration-section-N1001F-N1001C-N10001:
----------------------------
Physical Ethernet Interfaces
----------------------------
The physical Ethernet interfaces on |prod| nodes are configured to use the
following networks:
.. _network-planning-ethernet-interface-configuration-ul-lk1-b4j-zq:
- the internal management network, with the cluster host network sharing this
interface \(default configuration\)
- the external |OAM| network
- additional networks for container workload connectivity to external
networks
A single interface can be configured to support more than one network using
|VLAN| tagging. See :ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces
<shared-vlan-or-multi-netted-ethernet-interfaces>` for more information.
On the controller nodes, all Ethernet interfaces are configured when the nodes
are initialized based on the information provided in the Ansible Bootstrap
Playbook. For more information, see the `StarlingX Installation and Deployment
Guide <https://docs.starlingx.io/deploy_install_guides/index.html>`__. On
worker and storage nodes, the Ethernet interface for the internal management
network is configured. The remaining interfaces require manual configuration.
.. note::
If a network attachment uses |LAG|, the corresponding interfaces on the
storage and worker nodes must be configured manually to specify the
interface type.
You can review and modify physical interface configurations from Horizon or the
CLI.
.. xbooklink For more information, see |node-doc|: :ref:`Edit Interface Settings <editing-interface-settings>`.
You can save the interface configurations for a particular node to use as a
profile or template when setting up other nodes.
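As a sketch of attaching an unconfigured interface to a platform network from the CLI \(the interface and network names are placeholders, and the option syntax should be verified for your release\):

.. code-block:: none

   # Name the interface, set its class to platform, then attach it to the
   # cluster host network.
   $ system host-if-modify -n cluster0 -c platform worker-0 enp0s9
   $ system interface-network-assign worker-0 cluster0 cluster-host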


@ -0,0 +1,103 @@
.. rmw1552671149311
.. _network-planning-firewall-options:
================
Firewall Options
================
|prod| incorporates a default firewall for the |OAM| network. You can configure
additional Kubernetes Network Policies in order to augment or override the
default rules.
|prod| uses Kubernetes Network Policies \(via the Calico |CNI|\) to implement
the firewall on the |OAM| network.
A minimal set of rules is always applied before any custom rules, as follows:
.. _network-planning-firewall-options-d342e35:
- Non-|OAM| traffic is always accepted.
- Egress traffic is always accepted.
- Service manager \(SM\) traffic is always accepted.
- |SSH| traffic is always accepted.
You can introduce custom rules by creating and installing custom Kubernetes
Network Policies.
The following example opens the default HTTPS port \(443\).
.. code-block:: none
% cat <<EOF > gnp-oam-overrides.yaml
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
name: gnp-oam-overrides
spec:
ingress:
- action: Allow
destination:
ports:
- 443
protocol: TCP
order: 500
selector: has(iftype) && iftype == 'oam'
types:
- Ingress
EOF
It can be applied using the :command:`kubectl apply` command. For example:
.. code-block:: none
$ kubectl apply -f gnp-oam-overrides.yaml
You can confirm the policy was applied properly using the :command:`kubectl
describe` command. For example:
.. code-block:: none
$ kubectl describe globalnetworkpolicy gnp-oam-overrides
Name: gnp-oam-overrides
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"crd.projectcalico.org/v1","kind":"GlobalNetworkPolicy","metadata":{"annotations":{},"name":"gnp-openstack-oam"},"spec...
API Version: crd.projectcalico.org/v1
Kind: GlobalNetworkPolicy
Metadata:
Creation Timestamp: 2019-05-16T13:07:45Z
Generation: 1
Resource Version: 296298
Self Link: /apis/crd.projectcalico.org/v1/globalnetworkpolicies/gnp-openstack-oam
UID: 98a324ab-77db-11e9-9f9f-a4bf010007e9
Spec:
Ingress:
Action: Allow
Destination:
Ports:
443
Protocol: TCP
Order: 500
Selector: has(iftype) && iftype == 'oam'
Types:
Ingress
Events: <none>
.. xbooklink For information about yaml rule syntax, see |sysconf-doc|: :ref:`Modify OAM Firewall Rules <modifying-oam-firewall-rules>`.
.. xbooklink For the default rules used by |prod| see |sec-doc|: :ref:`Default Firewall Rules <security-default-firewall-rules>`.
.. seealso::
For a full description of |GNP| syntax:
`https://docs.projectcalico.org/v3.6/reference/calicoctl/resources/globalnetworkpolicy
<https://docs.projectcalico.org/v3.6/reference/calicoctl/resources/globalnetworkpolicy>`__.


@ -0,0 +1,21 @@
.. bvi1552670521399
.. _network-planning-the-pxe-boot-network:
================
PXE Boot Network
================
You can set up a |PXE| boot network for booting all nodes to allow a
non-standard management network configuration.
By default, the internal management network is used for |PXE| booting of new
hosts, and the |PXE| boot network is not required. However, there are scenarios
where the internal management network cannot be used for |PXE| booting. For
example, if the internal management network needs to be on a |VLAN|-tagged
network for deployment reasons, or if it must support IPv6, you must configure
the optional untagged |PXE| boot network for |PXE| booting of new hosts using
IPv4.
.. note::
|prod| does not support IPv6 |PXE| booting.
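For example, the |PXE| boot network subnet can be supplied in the Ansible bootstrap overrides; the override name below is an assumption to verify against the bootstrap playbook for your release:

.. code-block:: none

   # Excerpt from ~/localhost.yml used during Ansible bootstrap.
   pxeboot_subnet: 169.254.202.0/24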


@ -0,0 +1,28 @@
.. tss1516219381154
.. _network-requirements-ip-support:
==========
IP Support
==========
|prod| supports IPv4 and IPv6 versions for various networks.
The following table lists IPv4 and IPv6 support for different networks:
.. _network-requirements-ip-support-table-xqy-3cj-4cb:
.. table:: Table 1. IPv4 and IPv6 Support
:widths: auto
+----------------------+--------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | IPv4 Support | IPv6 Support | Comment |
+======================+==============+==============+========================================================================================================================================================================================================================================================================================================================================================================================+
| PXE boot | Y | N | If present, the PXE boot network is used for PXE booting of new hosts \(instead of using the internal management network\), and must be untagged. It is limited to IPv4, because the |prod| installer does not support IPv6 UEFI booting. |
+----------------------+--------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Internal Management | Y | Y | By default \(when PXE boot network is not present\), internal management is used for PXE booting of new hosts. It must be untagged and it must be IPv4. If, for deployment reasons, the internal management network needs to be on a VLAN-tagged network, or if it needs to be IPv6, you can configure the optional untagged PXE boot network for PXE booting of new hosts using IPv4. |
+----------------------+--------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OAM | Y | Y | The OAM network supports IPv4 or IPv6 addressing. For more information, see :ref:`OAM Network Planning <oam-network-planning>`. |
+----------------------+--------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Cluster Host Network | Y | Y | The Cluster Host network supports IPv4 or IPv6 addressing. |
+----------------------+--------------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


@ -0,0 +1,39 @@
.. jow1404333564380
.. _network-requirements:
====================
Network Requirements
====================
|prod| uses several different types of networks, depending on the size of the
system and the features in use.
Available networks include the optional |PXE| boot network, the internal
management network, the cluster host network, the |OAM| network, and other
optional networks for external network connectivity.
The internal management network is required by all deployment configurations
for internal communication.
The cluster host network is required by all deployment configurations to
support a Kubernetes cluster. It is used for private container-to-container
networking within a cluster. It can be used for external connectivity of
container workloads. If the cluster host network is not used for external
connectivity of container workloads, then either the |OAM| port or other
configured ports on both the controller and worker nodes can be used for
connectivity to external networks.
The |OAM| network is required for external control and board management access.
It may also be required for external connectivity of container payloads,
depending on the applications' network requirements.
You can consolidate more than one network on a single physical interface. For
more information, see :ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces
<shared-vlan-or-multi-netted-ethernet-interfaces>`.
.. note::
Systems with two controllers use IP multicast messaging on the internal
management network. To prevent loss of controller synchronization, ensure
that the switches and other devices on these networks are configured with
appropriate settings.


@ -0,0 +1,38 @@
.. gju1463606289993
.. _networks-for-a-starlingx-duplex-system:
============================
Networks for a Duplex System
============================
For a |prod| Duplex system, |org| recommends a minimal network configuration.
|prod| Duplex uses a small hardware footprint consisting of two hosts, plus a
network switch for connectivity. Network loading is typically low. The
following network configuration typically meets the requirements of such a
system:
.. _networks-for-a-starlingx-duplex-system-ul-j2d-thb-1w:
- An internal management network.
- An |OAM| network, optionally consolidated on the management interface.
- A cluster host network for private container-to-container networking within
a cluster. By default, this is consolidated on the management interface.
The cluster host network can also be used for external connectivity of
container workloads. In this case, the cluster host network would be
configured on an interface separate from the internal management interface.
- If a cluster host network is not used for external connectivity of
container workloads, then either the |OAM| port or additionally configured
ports on both the controller and worker nodes are used for container
workload connectivity to external networks.
.. note::
You can enable secure HTTPS connectivity on the |OAM| network.
.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`


@ -0,0 +1,25 @@
.. wzx1492541958551
.. _networks-for-a-simplex-system:
====================================
Networks for a |prod| Simplex System
====================================
For a |prod| Simplex system, only the |OAM| network and additional external
networks are used.
|prod| Simplex uses a small hardware footprint consisting of a single host.
Unlike other |prod| deployments, this configuration does not require management
or cluster host network connections for internal communications. An |OAM|
network connection is used for administrative and board management access. For
external connectivity of container payloads, either the |OAM| port or other
configured ports on the node can be used.
The management and cluster host networks are required internally. They are
configured to use the loopback interface.
.. note::
You can enable secure HTTPS connectivity on the |OAM| network.
.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`


@ -0,0 +1,42 @@
.. gbo1463606348114
.. _networks-for-a-starlingx-system-with-controller-storage:
=============================================
Networks for a System with Controller Storage
=============================================
For a system that uses controller storage, |org| recommends an intermediate
network configuration.
|prod| systems with controller storage use controller and worker hosts only.
Network loading is low to moderate, depending on the number of worker hosts and
containers. The following network configuration typically meets the
requirements of such a system:
.. _networks-for-a-starlingx-system-with-controller-storage-ul-j2d-thb-1w:
- An internal management network.
- An |OAM| network, optionally consolidated on the management interface.
- A cluster host network for private container-to-container networking within
the cluster, by default consolidated on the management interface.
The cluster host network can also be used for external connectivity of
container workloads, in which case it would be configured on an interface
separate from the internal management interface.
- If a cluster host network is not used for external connectivity of
container workloads, then either the |OAM| port or other configured ports on
both the controller and worker nodes are used for container workload
connectivity to external networks.
- A |PXE| Boot server to support controller-0 initialization.
.. note::
You can enable secure HTTPS connectivity on the |OAM| network.
.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`


@ -0,0 +1,49 @@
.. leh1463606429329
.. _networks-for-a-starlingx-with-dedicated-storage:
============================================
Networks for a System with Dedicated Storage
============================================
For a system that uses dedicated storage, |org| recommends a full network
configuration.
|prod| systems with dedicated storage include storage hosts to provide
Ceph-backed block storage. Network loading is moderate to high, depending on
the number of worker hosts, containers, and storage hosts. The following
network configuration typically meets the requirements of such a system:
.. _networks-for-a-starlingx-with-dedicated-storage-ul-j2d-thb-1w:
- An internal management network.
- A 10GE cluster host network for disk I/O traffic to storage nodes and for
private container-to-container networking within a cluster, by default
consolidated on the management interface.
The cluster host network can be configured on an interface separate from
the internal management interface for external connectivity of container
workloads.
- An |OAM| network.
- If the cluster host network is not used for external connectivity of
container workloads, either the |OAM| port or other configured ports on the
controller and worker nodes would be used for container workload
connectivity to external networks.
- An optional |PXE| boot network:
- if the internal management network is required to be on a |VLAN|-tagged
network
- if the internal management network is shared with other equipment
On moderately loaded systems, the |OAM| network can be consolidated on the
management or infrastructure interfaces.
.. note::
You can enable secure HTTPS connectivity on the |OAM| network.
.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`


@ -0,0 +1,72 @@
.. ooz1552671180591
.. _oam-network-planning:
====================
OAM Network Planning
====================
The |OAM| network enables ingress access to the Horizon Web interface, the
command-line management clients \(over |SSH|\), the |SNMP| interfaces, and the
REST APIs used to remotely manage the |prod| cluster.
The |OAM| Network is also used for egress access to remote Docker Registries,
and for Elastic Beats connectivity to a Remote Log server if |prod| remote
logging is configured.
The |OAM| network provides access to the board management controllers.
The |OAM| network supports IPv4 or IPv6 addressing. Use the following
guidelines:
.. _oam-network-planning-ul-uj3-yk2-4n:
- Dual-stack configuration is not supported. With the exception of the PXE
boot network, all networks must use either IPv4 or IPv6 addressing.
- Deploy proper firewall mechanisms to access this network. The primary
concern of a firewall is to ensure that access to the |prod| management
interfaces is not compromised.
|prod| includes a default firewall for the |OAM| network, using Kubernetes
Network Policies. You can configure the system to support additional rules.
For more information, see :ref:`Firewall Options
<network-planning-firewall-options>`.
- Consider whether the |OAM| network needs access to the internet. Limiting
access to an internal network might be advisable, although access to a
configured DNS server, a remote Docker registry with at least the |prod|
container images, and |NTP| or |PTP| servers may still be needed.
- |VLAN| tagging is supported, enabling the network to share an interface
with the internal management or infrastructure networks.
- The IP addresses of the DNS, and |NTP|/|PTP| servers must match the IP
address plan \(IPv4 or IPv6\) of the |OAM| network.
- For an IPv4 address plan:
- The |OAM| floating IP address is the only address that needs to be
visible externally. You must therefore plan for valid definitions of
its IPv4 subnet and default gateway.
- The physical IPv4 addresses of the controllers do not need to be
visible externally, unless you plan to use them during |SSH| sessions
to prevent potential service interruptions while connected. You need to
plan for their IPv4 subnet, but you can limit access to them as
required.
- Outgoing packets from the active or secondary controller use the
controller's IPv4 physical address, not the |OAM| floating IP address,
as the source address.
- For an IPv6 address plan:
- Outgoing packets from the active controller use the |OAM| floating IP
address as the source address. Outgoing packets from the secondary
controller use the secondary controller's IPv6 physical IP address.
- Systems with two controllers use IP multicast messaging on the
internal management network. To prevent loss of controller synchronization,
ensure that the switches and other devices on these networks are configured
with appropriate settings.
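For reference, an IPv4 |OAM| address plan is typically captured in the Ansible bootstrap overrides. The addresses are placeholders, and the override names should be verified against the bootstrap playbook for your release:

.. code-block:: none

   # Excerpt from ~/localhost.yml used during Ansible bootstrap.
   external_oam_subnet: 10.10.10.0/24
   external_oam_gateway_address: 10.10.10.1
   external_oam_floating_address: 10.10.10.2
   external_oam_node_0_address: 10.10.10.3
   external_oam_node_1_address: 10.10.10.4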


@ -0,0 +1,31 @@
.. sii1465846708497
.. _overview-of-starlingx-planning:
===================================================
Overview of Installation and Configuration Planning
===================================================
Fully planning your |prod-long| installation and configuration helps to
expedite the process and ensure that you have everything required.
Planning helps ensure that the requirements of your containers, and the
requirements of your cloud administration and operations teams can be met. It
ensures proper integration of |prod| into the target data center or telecom
office, and helps you plan up front for future cloud growth.
.. xbooklink This planning guide assumes that you have read both the |intro-doc|: :ref:`<introduction-to-starlingx>` and the |deploy-doc|: :ref:`<deployment-options>` guides in order to understand general concepts and assist you in choosing a particular deployment configuration.
The |planning-doc| guide is intended to help you plan for your installation. It
discusses detailed planning topics for the following areas:
.. _overview-of-starlingx-planning-ul-v2m-t5h-hw:
- Network Planning
- Storage Planning
- Node Installation Planning
- Node Resource Planning


@ -0,0 +1,37 @@
.. gss1552671083817
.. _redundant-top-of-rack-switch-deployment-considerations:
======================================================
Redundant Top-of-Rack Switch Deployment Considerations
======================================================
For a system that uses link aggregation on some or all networks, you can
configure redundant |ToR| switches for additional reliability.
In a redundant |ToR| switch configuration, each link in a link aggregate is
connected to a different switch, as shown in the accompanying figure. If one
switch fails, another is available to service the link aggregate.
.. figure:: ../figures/jow1438030468959.png
*Redundant Top-of-Rack Switches*
|org| recommends that you use switches that support |VPC|. When |VPC| is used,
the aggregated links on the switches act as a single |LAG| interface. Both
switches are normally active, providing full bandwidth to the |LAG|. If there
are multiple failed links on both switches, at least one connection in each
aggregate pair is still functional. If one switch fails, the other continues to
provide connections for all |LAG| links that are operational on that switch.
For more about configuring |VPC|, refer to your switch documentation.
You can use an active/standby failover model for the switches, but at a cost to
overall reliability. If there are multiple failed links on both switches, then
the switch with the greatest number of functioning links is activated, but
links on that switch could be in a failed state. In addition, when only one
link in an aggregate is connected to an active switch, the |LAG| bandwidth is
limited to the single link.
.. note::
You can enhance system reliability by using redundant routers. For more
information, refer to your router documentation.


@ -0,0 +1,58 @@
.. qzw1552672165570
.. _security-planning-uefi-secure-boot-planning:
=========================
UEFI Secure Boot Planning
=========================
|UEFI| Secure Boot allows you to authenticate modules before they are
allowed to execute.
The initial installation of |prod| should be done in |UEFI| mode if you plan on
using the secure boot feature in the future.
The |prod| secure boot certificate is included in the |prod| ISO, on the EFI
bootable FAT filesystem, in the /CERTS directory. You must add this certificate
to the motherboard's |UEFI| certificate database. How to add the certificate to
the database is determined by the |UEFI| implementation provided by the
motherboard manufacturer.
You may need to work with your hardware vendor to have the certificate
installed.
There is an option in the |UEFI| setup utility that allows a user to browse to
a file containing a certificate to be loaded into the authorized database. This
option may be hidden in the |UEFI| setup utility unless |UEFI| mode and secure
boot are enabled.
The |UEFI| implementation may or may not require a |TPM| device to be present
and enabled before providing for secure boot functionality. Refer to your
server board's documentation.
Many motherboards ship with Microsoft secure boot certificates pre-programmed
in the |UEFI| certificate database. These certificates may be required to boot
|UEFI| drivers for video cards, |RAID| controllers, or |NICs| \(for example,
the |PXE| boot software for a |NIC| may have been signed by a Microsoft
certificate\). While certificates can be removed from the certificate database
\(this is |UEFI| implementation specific\), you may need to keep the
Microsoft certificates to allow for complete system operation.
Mixed combinations of secure boot and non-secure boot nodes are supported. For
example, a controller node may secure boot, while a worker node may not. Secure
boot must be enabled in the |UEFI| firmware of each node for that node to be
protected by secure boot.
.. _security-planning-uefi-secure-boot-planning-ul-h4z-lzg-bjb:
- Secure Boot is supported in |UEFI| installations only. It is not used when
booting |prod| as a legacy boot target.
- |prod| does not currently support switching from legacy to |UEFI| mode
after a system has been installed. Doing so requires a reinstall of the
system. This means that upgrading from a legacy install to a secure boot
install \(|UEFI|\) is not supported.
- When upgrading a |prod| system from a version that did not support secure
boot to a version that does, do not enable secure boot in |UEFI| firmware
until the upgrade is complete.


@ -0,0 +1,52 @@
.. rei1552671031876
.. _shared-vlan-or-multi-netted-ethernet-interfaces:
===================================================
Shared \(VLAN or Multi-Netted\) Ethernet Interfaces
===================================================
The management, |OAM|, cluster host, and other networks for container workload
external connectivity, can share Ethernet or aggregated Ethernet interfaces
using |VLAN| tagging or IP Multi-Netting.
The |OAM|, internal management, cluster host, and other external networks can
use |VLAN| tagging or IP Multi-Netting, allowing them to share an Ethernet or
aggregated Ethernet interface with other networks. If the internal management
network is implemented as a |VLAN|-tagged network, then it must be on the same
physical interface used for |PXE| booting.
The following arrangements are possible:
.. _shared-vlan-or-multi-netted-ethernet-interfaces-ul-y5k-zg2-zq:
- One interface for the internal management network and internal cluster host
network using multi-netting, and another interface for |OAM| \(on which
container workloads are exposed externally\). This is the default
configuration.
- One interface for the internal management network and another interface for
the external |OAM| and external cluster host \(on which container workloads
are exposed externally\) networks. Both are implemented using |VLAN|
tagging.
- One interface for the internal management network, another interface for
the external |OAM| network, and a third for an external cluster host
network \(on which container workloads are exposed externally\).
- One interface for the internal management network and internal cluster host
network using multi-netting, another interface for |OAM| and a third
interface for an additional network on which container workloads are
exposed externally.
For some typical interface scenarios, see |planning-doc|: :ref:`Hardware
Requirements <starlingx-hardware-requirements>`.
Options to share an interface using |VLAN| tagging or Multi-Netting are
presented in the Ansible Bootstrap Playbook. To attach an interface to other
networks after configuration, you can edit the interface.
.. xbooklink For more information about configuring |VLAN| interfaces and Multi-Netted interfaces, see |node-doc|: :ref:`Configure VLAN Interfaces Using Horizon <configuring-vlan-interfaces-using-horizon>`.
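As a sketch of layering a |VLAN| interface on an existing platform interface after bootstrap \(names and the |VLAN| ID are placeholders; confirm the syntax with :command:`system help host-if-add`\):

.. code-block:: none

   # Create VLAN 158 on top of mgmt0 for the cluster host network, then
   # attach the new interface to that network.
   $ system host-if-add controller-0 -V 158 -c platform cluster0 vlan mgmt0
   $ system interface-network-assign controller-0 cluster0 cluster-host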


@ -0,0 +1,41 @@
.. lid1552672445221
.. _starlingx-boot-sequence-considerations:
===================================
System Boot Sequence Considerations
===================================
During |prod| software installation, each host must boot from different devices
at different times. In some cases, you may need to adjust the boot order.
The first controller node must be booted initially from a removable storage
device to install an operating system. The host then reboots from the hard
drive.
Each remaining host must be booted initially from the network using |PXE| to
install an operating system. The host then reboots from the hard drive.
To facilitate this process, ensure that the hard drive does not already contain
a bootable operating system, and set the following boot order in the BIOS.
.. _starlingx-boot-sequence-considerations-ol-htt-5qg-fn:
#. removable storage device \(USB flash drive or DVD drive\)
#. hard drive
#. network \(|PXE|\), over an interface connected to the internal management
network
#. network \(|PXE|\), over an interface connected to the |PXE| boot network
For BIOS configuration details, refer to the OEM documentation supplied with
the worker node.
.. note::
If a host contains a bootable hard drive, either erase the drive
beforehand, or ensure that the host is set to boot from the correct source
for initial configuration. If necessary, you can change the boot device at
boot time by pressing a dedicated key. For more information, refer to the
OEM documentation for the worker node.


@ -0,0 +1,224 @@
.. kdl1464894372485
.. _starlingx-hardware-requirements:
============================
System Hardware Requirements
============================
|prod| has been tested to work with specific hardware configurations:
.. contents::
:local:
:depth: 1
If the minimum hardware requirements are not met, system performance cannot be
guaranteed.
.. _starlingx-hardware-requirements-section-N10044-N10024-N10001:
-------------------------------------
Controller, worker, and storage hosts
-------------------------------------
.. Row alterations don't work with spans
|row-alt-off|
.. _starlingx-hardware-requirements-table-nvy-52x-p5:
.. table:: Table 1. Hardware Requirements — |prod| Standard Configuration
:widths: auto
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Minimum Requirement | Controller | Storage | Worker |
+===========================================================+==================================================================================================================================================================================================================================================================+==============================================================================================+=======================================================================================+
| Minimum Qty of Servers | 2 \(required\) | \(if Ceph storage used\) | 2 100 |
| | | | |
|                                                           |                                                                                                                                                                                                                                                                  | 2-8 \(for replication factor 2\)                                                             |                                                                                       |
| | | | |
|                                                           |                                                                                                                                                                                                                                                                  | 3-9 \(for replication factor 3\)                                                             |                                                                                       |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Minimum Processor Class | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Minimum Memory | 64 GB | 64 GB | 32 GB |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Minimum Primary Disk \(two-disk hardware RAID suggested\) | 500 GB - SSD or NVMe | 120 GB \(min. 10K RPM\) |
| | | |
+ +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| | .. note:: |
| | Installation on software RAID is not supported. |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Additional Disks | 1 X 500 GB \(min 10K RPM\) | 500 GB \(min. 10K RPM\) for OSD storage | 500 GB \(min. 10K RPM\) — 1 or more |
| | | | |
| | \(not required for systems with dedicated storage nodes\) | one or more SSDs or NVMe drives \(recommended for Ceph journals\); min. 1024 MiB per journal | .. note:: |
| | | | Single-disk hosts are supported, but must not be used for local ephemeral storage |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Network Ports | \(Typical deployment\) |
| | |
| | |
| | |
| | |
| | |
+ +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) |
| | | | |
| | - OAM: 2 x 1GE LAG | | - Optionally external network ports 2 x 10GE LAG |
| | | | |
| | - Optionally external network ports 2 x 10GE LAG | | |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Board Management Controller \(BMC\) | 1 \(required\) | 1 \(required\) | 1 \(required\) |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| USB Interface | 1 | not required |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Power Profile | Max Performance |
| | |
|                                                           | Min Proc Idle Power: No C States                                                                                                                                                                                                                                                                                                                                                                                                            |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Boot Order | HD, PXE, USB | HD, PXE |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| BIOS Mode | BIOS or UEFI |
| | |
| | .. note:: |
| | UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a separate IPv4 network for PXE boot. For more information, see :ref:`PXE Boot Network <network-planning-the-pxe-boot-network>`. |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Intel Hyperthreading | Disabled or Enabled |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
| Intel Virtualization \(VTD, VTX\) | Disabled | Enabled |
+-----------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------+
.. _starlingx-hardware-requirements-section-N102D0-N10024-N10001:
--------------------------------
Combined controller-worker hosts
--------------------------------
Hardware requirements for a |prod| Simplex or Duplex configuration are listed
in the following table.
.. _starlingx-hardware-requirements-table-cb2-lfx-p5:
.. table:: Table 2. Hardware Requirements — |prod| Simplex or Duplex Configuration
:widths: auto
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Requirement | Controller + Worker |
| | |
| | \(Combined Server\) |
+===================================+==================================================================================================================================================================================================================================================================+
| Minimum Qty of Servers | Simplex―1 |
| | |
| | Duplex―2 |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Processor Class | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket |
| | |
| | or |
| | |
| | Single-CPU Intel Xeon D-15xx Family, 8 cores \(low-power/low-cost option for Simplex deployments\) |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Memory | 64 GB |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Primary Disk | 500 GB - SSD or NVMe |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Additional Disks | - Single-disk system: N/A |
| | |
| | - Two-disk system: |
| | |
| | |
| | - 1 x 500 GB SSD or NVMe for Persistent Volume Claim storage |
| | |
| | |
| | - Three-disk system: |
| | |
| | |
| | - 1 x 500 GB \(min 10K RPM\) for Persistent Volume Claim storage |
| | |
| | - 1 or more x 500 GB \(min. 10K RPM\) for Container ephemeral disk storage |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Network Ports | \(Typical deployment.\) |
| | |
| | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) |
| | |
| | .. note:: |
| | Mgmt / Cluster Host ports are required for Duplex systems only |
| | |
| | - OAM: 2 x 1GE LAG |
| | |
| | - Optionally external network ports 2 x 10GE LAG |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| USB Interface | 1 |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Power Profile | Max Performance |
| | |
|                                   | Min Proc Idle Power: No C States                                                                                                                                                                                                                                 |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Boot Order | HD, PXE, USB |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| BIOS Mode | BIOS or UEFI |
| | |
| | .. note:: |
| | UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a separate IPv4 network for PXE boot. For more information, see :ref:`PXE Boot Network <network-planning-the-pxe-boot-network>`. |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Intel Hyperthreading | Disabled or Enabled |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Intel Virtualization \(VTD, VTX\) | Enabled |
+-----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _starlingx-hardware-requirements-section-if-scenarios:
---------------------------------
Interface configuration scenarios
---------------------------------
|prod| supports the use of consolidated interfaces for the management, cluster
host, and |OAM| networks. Some typical configurations are shown in the
following table. For best performance, |org| recommends dedicated interfaces.
|LAG| is optional in all instances.
.. _starlingx-hardware-requirements-table-if-scenarios:
.. table::
:widths: auto
+---------------------------------------------------------------------------+-------------------------------+-------------------------------+-------------------------------+
| Scenario | Controller | Storage | Worker |
+===========================================================================+===============================+===============================+===============================+
| - Physical interfaces on servers limited to two pairs | 2x 10GE LAG: | 2x 10GE LAG: | 2x 10GE LAG: |
| | | | |
| - Estimated aggregate average Container storage traffic less than 5G | - Mgmt \(untagged\) | - Mgmt \(untagged\) | - Cluster Host \(untagged\) |
| | | | |
| | - Cluster Host \(untagged\) | - Cluster Host \(untagged\) | |
| | | | Optionally |
| | | | |
| | 2x 1GE LAG: | | 2x 10GE LAG |
| | | | |
| | - OAM \(untagged\) | | external network ports |
+---------------------------------------------------------------------------+-------------------------------+-------------------------------+-------------------------------+
| - No specific limit on number of physical interfaces | 2x 1GE LAG: | 2x 1GE LAG | 2x 1GE LAG |
| | | | |
| - Estimated aggregate average Container storage traffic greater than 5G | - Mgmt \(untagged\) | - Mgmt \(untagged\) | - Mgmt \(untagged\) |
| | | | |
| | | | |
| | 2x 10GE LAG: | 2x 10GE LAG | 2x 10GE LAG: |
| | | | |
| | - Cluster Host | - Cluster Host | - Cluster Host |
| | | | |
| | | | |
| | 2x 1GE LAG: | | Optionally |
| | | | |
| | - OAM \(untagged\) | | 2x 10GE LAG |
| | | | |
| | | | - external network ports |
| | Optionally | | |
| | | | |
| | 2x 10GE LAG | | |
| | | | |
| | - external network ports | | |
+---------------------------------------------------------------------------+-------------------------------+-------------------------------+-------------------------------+

View File

@ -0,0 +1,182 @@
.. uyj1582118375814
.. _storage-planning-storage-on-controller-hosts:
===========================
Storage on Controller Hosts
===========================
The controller's root disk provides storage for the |prod| system databases,
system configuration files, local Docker images, containers' ephemeral
filesystems, the local Docker registry container image store, platform backup,
and system backup operations.
.. contents:: In this section:
:local:
:depth: 1
Container local storage is derived from the cgts-vg volume group on the root
disk. You can add storage to the cgts-vg volume group by assigning an
additional partition or disk. This allows you to increase the size of the
container local storage for the host; however, you cannot assign the storage
specifically to a non-root disk.
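For example, expanding cgts-vg with a spare disk might look like the following
sketch, using the :command:`system` CLI; the disk and partition UUIDs are
placeholders, and exact options may vary by release.

.. code-block:: none

    # List disks to find the UUID of an unused disk
    ~(keystone_admin)]$ system host-disk-list controller-0

    # Create a 100 GiB LVM partition on that disk
    ~(keystone_admin)]$ system host-disk-partition-add -t lvm_phys_vol controller-0 <disk_uuid> 100

    # Add the new partition to the cgts-vg volume group
    ~(keystone_admin)]$ system host-pv-add controller-0 cgts-vg <partition_uuid>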
On All-in-one Simplex, All-in-one Duplex, and Standard with controller storage
systems, at least one additional disk is required on each controller host for
backing container |PVCs|; that is, each controller host requires two disks,
with one configured as a Ceph |OSD|.
.. _storage-planning-storage-on-controller-hosts-d103e57:
-----------------------
Root Filesystem Storage
-----------------------
Space on the root disk is allocated to provide filesystem storage.
You can increase the allotments for the following filesystems using the
Horizon Web interface or the CLI. The :command:`system controllerfs` and
:command:`system host-fs` commands are available to increase the various
filesystem sizes.
.. _storage-planning-storage-on-controller-hosts-d103e93:
------------------------
Synchronized Filesystems
------------------------
Synchronized filesystems ensure that files stored in several different physical
locations are up to date. The following commands can be used to resize a
|DRBD|-synced filesystem \(Database, Docker-distribution, Etcd, Extension,
Platform\) on the controllers: :command:`controllerfs-list`,
:command:`controllerfs-modify`, and :command:`controllerfs-show`.
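For illustration, a |DRBD|-synced filesystem could be inspected and resized as
in the following sketch; the 20 GB size is an example only.

.. code-block:: none

    # Show current controller filesystem sizes
    ~(keystone_admin)]$ system controllerfs-list

    # Increase the database filesystem to 20 GB on both controllers
    ~(keystone_admin)]$ system controllerfs-modify database=20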
.. xbooklink For more information, see *Increasing Controller Filesystem Storage Allotments Using Horizon*.
**Platform Storage**
This is the storage allotment for a variety of platform items including the
local helm repository, the StarlingX application repository, and internal
platform configuration data files.
**Database storage**
The storage allotment for the platform's postgres database is used by
StarlingX, System Inventory, Keystone and Barbican.
Internal database storage is provided using |DRBD|-synchronized partitions
on the controller primary disks. The size of the database grows with the
number of system resources created by the system administrator. This
includes objects of all kinds such as hosts, interfaces, and service
parameters.
If you add a database filesystem or increase its size, you must also
increase the size of the backup filesystem.
**Docker-distribution storage \(local Docker registry storage\)**
The storage allotment for container images stored in the local Docker
registry. This storage is provided using a |DRBD|-synchronized partition on
the controller primary disk.
**Etcd Storage**
The storage allotment for the Kubernetes etcd database.
Internal database storage is provided using a |DRBD|-synchronized partition
on the controller primary disk. The size of the database grows with the
number of system resources created by the system administrator and the
users. This includes objects of all kinds such as pods, services, and
secrets.
**Ceph-mon**
Ceph-mon is the cluster monitor daemon for the Ceph distributed file system
that is used for Ceph monitors to synchronize.
**Extension Storage**
This filesystem is reserved for future use. This storage is implemented on
a |DRBD|-synchronized partition on the controller primary disk.
.. _storage-planning-storage-on-controller-hosts-d103e219:
----------------
Host Filesystems
----------------
The following host filesystem commands can be used to resize non-|DRBD|
filesystems \(Backup, Docker, Kubelet, and Scratch\). Changes apply to
individual hosts rather than to all hosts of a given personality type:
:command:`host-fs-list`, :command:`host-fs-modify`, and :command:`host-fs-show`.
The :command:`host-fs-modify` command increases the storage configuration for
the filesystem specified on a per-host basis. For example, the following
command increases the scratch filesystem size to 10 GB:
.. code-block:: none
~(keystone_admin)]$ system host-fs-modify controller-1 scratch=10
**Backup storage**
This is the storage allotment for backup operations. The backup area is
sized according to:

backup = \(2 x database size\) + platform size
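For example, with a 10 GB database filesystem and a 10 GB platform
filesystem, the backup filesystem should be sized at \(2 x 10\) + 10 = 30 GB.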
**Docker Storage**
This storage allotment is for ephemeral filesystems for containers on the
host, and for Docker image cache.
**Kubelet Storage**
This storage allotment is for ephemeral storage size related to Kubernetes
pods on this host.
**Scratch Storage**
This storage allotment is used by the host as a temporary area for a
variety of miscellaneous transient host operations.
**Logs Storage**
This is the storage allotment for log data. This filesystem is not
resizable. Logs are rotated within the fixed space allocated.
Replacement root disks for a reinstalled controller should be the same size or
larger to ensure that existing allocation sizes for filesystems will fit on the
replacement disk.
.. _storage-planning-storage-on-controller-hosts-d103e334:
-------------------------------------------------
Persistent Volume Claims storage \(Ceph Cluster\)
-------------------------------------------------
For controller-storage systems, additional disks on the controller, configured
as Ceph |OSDs|, provide a small Ceph cluster for backing |PVCs| storage for
containers.
.. _storage-planning-storage-on-controller-hosts-d103e345:
-----------
Replication
-----------
On |AIO|-Simplex systems, replication is done between |OSDs| within the host.
The following three replication factors are supported:
**1**
This is the default, and requires one or more |OSD| disks.
**2**
This requires two or more |OSD| disks.
**3**
This requires three or more |OSD| disks.
On |AIO|-Duplex systems, replication is between the two controllers. Only one
replication group is supported, and additional controllers cannot be added.
The following replication factor is supported:
**2**
There can be any number of |OSDs| on each controller, with a minimum of one
each. It is recommended that you use the same number and size of |OSD|
disks on the controllers.
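As an illustrative sketch, the replication factor is set on the Ceph storage
backend; option names may vary by release.

.. code-block:: none

    # Set replication factor 2 with minimum replication 1
    ~(keystone_admin)]$ system storage-backend-modify ceph-store replication=2 min_replication=1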

View File

@ -0,0 +1,48 @@
.. mrn1582121375412
.. _storage-planning-storage-on-storage-hosts:
========================
Storage on Storage Hosts
========================
Storage hosts provide a large-scale, persistent and highly available Ceph
cluster for backing |PVCs|.
The storage hosts can only be provisioned in a Standard with dedicated storage
deployment and comprise the storage cluster for the system. Within the storage
cluster, the storage hosts are deployed in replication groups for redundancy.
On dedicated storage setups, the Ceph storage backend is enabled automatically,
and the replication factor is updated later, depending on the number of storage
hosts provisioned.
.. _storage-planning-storage-on-storage-hosts-section-N1003F-N1002B-N10001:
----------------------
OSD Replication Factor
----------------------
.. _storage-planning-storage-on-storage-hosts-d99e23:
.. table::
:widths: auto
+--------------------+-----------------------------+--------------------------------------+
| Replication Factor | Hosts per Replication Group | Maximum Replication Groups Supported |
+====================+=============================+======================================+
| 2 | 2 | 4 |
+--------------------+-----------------------------+--------------------------------------+
| 3 | 3 | 3 |
+--------------------+-----------------------------+--------------------------------------+
You can add up to 16 |OSDs| per storage host for data storage.
Space on the storage hosts must be configured at installation before you can
unlock the hosts. You can change the configuration after installation by adding
resources to existing storage hosts or adding more storage hosts. For more
information, see the `StarlingX Installation and Deployment Guide
<https://docs.starlingx.io/deploy_install_guides/index.html>`__.
Storage hosts can achieve faster data access using |SSD|-backed transaction
journals \(journal functions\). |NVMe|-compatible |SSDs| are supported.
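For example, an additional |OSD| might be added to a storage host as in the
following sketch; the disk UUID is a placeholder obtained from the list
command.

.. code-block:: none

    # List available disks on the storage host
    ~(keystone_admin)]$ system host-disk-list storage-0

    # Configure one of them as a Ceph OSD
    ~(keystone_admin)]$ system host-stor-add storage-0 osd <disk_uuid>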

View File

@ -0,0 +1,53 @@
.. dbg1582122084062
.. _storage-planning-storage-on-worker-hosts:
=======================
Storage on Worker Hosts
=======================
A worker host's root disk provides storage for host configuration files, local
Docker images, and hosted containers' ephemeral filesystems.
.. note::
On |prod| Simplex or Duplex systems, worker storage is provided using
resources on the combined host. For more information, see
:ref:`Storage on Controller Hosts
<storage-planning-storage-on-controller-hosts>`.
.. _storage-planning-storage-on-worker-hosts-d56e38:
-----------------------
Root filesystem storage
-----------------------
Space on the root disk is allocated to provide filesystem storage.
You can increase the allotments for the following filesystems using the Horizon
Web interface or the CLI. Resizing must be done on a host-by-host basis for
non-|DRBD| synced filesystems.
**Docker Storage**
The storage allotment for the Docker image cache for this host, and for the
ephemeral filesystems of containers on this host.
**Kubelet Storage**
The storage allotment for ephemeral storage related to Kubernetes pods on
this host.
**Scratch Storage**
The storage allotment for a variety of miscellaneous transient host
operations.
**Logs Storage**
The storage allotment for log data. This filesystem is not resizable. Logs
are rotated within the fixed space as allocated.
.. seealso::
:ref:`Storage Resources <storage-planning-storage-resources>`
:ref:`Storage on Controller Hosts
<storage-planning-storage-on-controller-hosts>`
:ref:`Storage on Storage Hosts <storage-planning-storage-on-storage-hosts>`

View File

@ -0,0 +1,127 @@
.. llf1552671530365
.. _storage-planning-storage-resources:
=================
Storage Resources
=================
|prod| uses storage resources on the controller and worker hosts, and on
storage hosts if they are present.
The |prod| storage configuration is highly flexible. The specific configuration
depends on the type of system installed, and the requirements of the system.
.. contents:: In this section:
:local:
:depth: 1
.. _storage-planning-storage-resources-d199e38:
--------------------
Uses of Disk Storage
--------------------
**System**
The |prod| system uses root disk storage for the operating system and
related files, and for internal databases. On controller nodes, the
database storage and selected root file-systems are synchronized between
the controller nodes using |DRBD|.
**Local Docker Registry**
An HA local docker registry is deployed on controller nodes to provide
local centralized storage of container images. Its image store is a |DRBD|
synchronized file system.
**Docker Container Images**
Container images are pulled from either a remote or the local Docker
registry, and cached locally by Docker on the host worker or controller node
when a container is launched.
**Container Ephemeral Local Disk**
Containers have local filesystems for ephemeral storage of data. This data
is lost when the container is terminated.
Kubernetes Docker ephemeral storage is allocated as part of the docker-lv
and kubelet-lv file systems from the cgts-vg volume group on the root disk.
These filesystems are resizable.
**Container Persistent Volume Claims \(PVCs\)**
Containers can mount remote HA replicated volumes backed by the Ceph
Storage Cluster for managing persistent data. This data survives restarts
of the container.
.. note::
Ceph is not configured by default.
.. xbooklink For more information, see the |stor-doc|: :ref:`Configure the Internal Ceph Storage Backend <configuring-the-internal-ceph-storage-backend>`
.. _storage-planning-storage-resources-d199e134:
-----------------
Storage Locations
-----------------
In addition to the root disks present on each host for system storage,
storage may be used as follows:
.. _storage-planning-storage-resources-d199e143:
- Controller hosts: |PVCs| are backed by dedicated storage hosts when that
  setup is used, or by the controller hosts themselves. In configurations
  without dedicated storage hosts, additional Ceph |OSD| disks are present on
  the controllers. These |OSDs| provide storage to fill |PVCs| made by
  Kubernetes pods or containers.
- Worker hosts: This storage is derived from docker-lv/kubelet-lv as
  defined on the cgts-vg \(root disk\) volume group. You can add a disk to
  cgts-vg and increase the size of docker-lv/kubelet-lv, as shown in the
  sketch following this list.
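A minimal sketch of resizing these filesystems on a worker host follows; the
sizes shown are illustrative only.

.. code-block:: none

    # Increase the docker filesystem to 60 GB on worker-1
    ~(keystone_admin)]$ system host-fs-modify worker-1 docker=60

    # Increase the kubelet filesystem to 20 GB on worker-1
    ~(keystone_admin)]$ system host-fs-modify worker-1 kubelet=20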
**Combined Controller-Worker Hosts**
One or more disks can be used on combined hosts in Simplex or Duplex
systems to provide local ephemeral storage for containers, and a Ceph
cluster for backing Persistent Volume Claims.
Container/Pod ephemeral storage is implemented on the root disk on all
controllers/workers regardless of labeling.
**Storage Hosts**
One or more disks are used on storage hosts to realize a large scale Ceph
cluster providing backing for |PVCs| for containers. Storage hosts are used
only on |prod| with Dedicated Storage systems.
.. _storage-planning-storage-resources-section-N1015E-N10031-N1000F-N10001:
-----------------------
External Netapp Trident
-----------------------
|prod| can be configured to connect to and use an external Netapp Trident
deployment as its storage backend.
Netapp Trident supports:
.. _storage-planning-storage-resources-d247e23:
- |AWS| Cloud Volumes
- E and EF-Series SANtricity
- ONTAP AFF, FAS, Select, and Cloud
- Element HCI and SolidFire
- Azure NetApp Files service
.. _storage-planning-storage-resources-d247e56:
For more information about Trident, see
`https://netapp-trident.readthedocs.io <https://netapp-trident.readthedocs.io>`__.
.. seealso::
:ref:`Storage on Controller Hosts <storage-planning-storage-on-controller-hosts>`
:ref:`Storage on Worker Hosts <storage-planning-storage-on-worker-hosts>`
:ref:`Storage on Storage Hosts <storage-planning-storage-on-storage-hosts>`

View File

@ -0,0 +1,47 @@
.. srt1552049815547
.. _the-cluster-host-network:
====================
Cluster Host Network
====================
The cluster host network provides the physical network required for Kubernetes
management and control, as well as private container networking.
Kubernetes uses logical networks for communication between containers, pods,
services, and external sites. These networks are implemented over the cluster
host network using the |CNI| service, Calico, in |prod|.
All nodes in the cluster must be attached to the cluster host network. This
network shares an interface with the management network. A container workload's
external connectivity is either through the |OAM| port or through other
configured ports on both the controller and worker nodes, depending on
containerized workload requirements. Container network endpoints are exposed
externally with **NodePort** Kubernetes services. This exposes selected
application container network ports on *all* interfaces of both controller
nodes and *all* worker nodes, on either the |OAM| interface or other
configured interfaces for external connectivity. This is typically done either
directly to the application container's service or through an ingress
controller service to reduce external port usage. HA is achieved through
either an external HA load balancer across two or more controller and/or
worker nodes, or simply through multiple records \(two or more destination
controller and/or worker node IPs\) for the application's external DNS entry.
Alternatively, the cluster host network can be deployed as an external network
that also provides the container workloads' external connectivity. Container
network endpoints are exposed externally with **NodePort** Kubernetes
services. This exposes selected application container network ports on *all*
interfaces \(e.g. external cluster host interfaces\) of both controller nodes
and *all* worker nodes. This is typically done either directly to the
application container's service or through an ingress controller service to
reduce external port usage. HA is achieved through either an external HA load
balancer across two or more controller and/or worker nodes, or simply through
multiple records \(two or more destination controller and/or worker node
IPs\) for the application's external DNS entry.
If an external cluster host network is used, container network endpoints could
also be exposed through |BGP| within the Calico |CNI| service. The Calico
|BGP| configuration could be modified to advertise selected application
container services or the ingress controller service to a |BGP| peer,
specifying the available next-hop controller and/or worker node cluster host
IP addresses.
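As a hedged sketch, a Calico BGPPeer resource of the following form could be
applied; the peer name, address, and AS number are placeholders.

.. code-block:: yaml

    apiVersion: projectcalico.org/v3
    kind: BGPPeer
    metadata:
      name: tor-peer            # hypothetical top-of-rack peer
    spec:
      peerIP: 192.168.10.1      # placeholder BGP peer address
      asNumber: 64512           # placeholder peer AS number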

View File

@ -0,0 +1,43 @@
.. acc1552590687558
.. _the-ethernet-mtu:
============
Ethernet MTU
============
The |MTU| of an Ethernet frame is a configurable attribute in |prod|. Changing
its default size must be done in coordination with other network elements on
the Ethernet link.
In the context of |prod|, the |MTU| refers to the largest possible payload on
the Ethernet frame on a particular network link. The payload is enclosed by the
Ethernet header \(14 bytes\) and the CRC \(4 bytes\), resulting in an Ethernet
frame that is 18 bytes longer than the |MTU| size.
The original IEEE 802.3 specification defines a valid standard Ethernet frame
size to be from 64 to 1518 bytes, accommodating payloads ranging in size from
46 to 1500 bytes. Ethernet frames with a payload larger than 1500 bytes are
considered to be jumbo frames.
For a |VLAN| network, the frame also includes a 4-byte |VLAN| ID header,
resulting in a frame size 22 bytes longer than the |MTU| size.
In |prod|, you can configure the |MTU| size for the following interfaces and
networks:
.. _the-ethernet-mtu-ul-qmn-yvn-m4:
- The management, cluster host and |OAM| network interfaces on the
controller. The |MTU| size for these interfaces is set during initial
installation.
.. xbooklink For more information, see the `StarlingX Installation and Deployment Guide <https://docs.starlingx.io/deploy_install_guides/index.html>`__. To make changes after installation, see |sysconf-doc|: :ref:`Change the MTU of an OAM Interface Using Horizon <changing-the-mtu-of-an-oam-interface-using-horizon>`.
- Additional interfaces configured for container workload connectivity to
  external networks.
In all cases, the default |MTU| size is 1500. The minimum value is 576, and the
maximum is 9216.
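As an illustrative sketch \(flag names may vary by release, and the interface
name is a placeholder\), the |MTU| of a data interface might be changed after
installation as follows:

.. code-block:: none

    # Lock the host, change the interface MTU to 9000, then unlock
    ~(keystone_admin)]$ system host-lock worker-1
    ~(keystone_admin)]$ system host-if-modify -m 9000 worker-1 <interface>
    ~(keystone_admin)]$ system host-unlock worker-1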

View File

@ -0,0 +1,44 @@
.. yxu1552670544024
.. _the-internal-management-network:
====================================
Internal Management Network Overview
====================================
The internal management network must be implemented as a single, dedicated,
Layer 2 broadcast domain for the exclusive use of each |prod| cluster.
Sharing of this network by more than one |prod| cluster is not supported.
.. note::
This network is not used with |prod| Simplex systems.
During the |prod| software installation process, several network services
such as |BOOTP|, |DHCP|, and |PXE|, are expected to run over the internal
management network. These services are used to bring up the different hosts
to an operational state. It is therefore mandatory that this network be
operational and available in advance, to ensure a successful installation.
On each host, the internal management network can be implemented using a 1 Gb
or 10 Gb Ethernet port. This port must meet the following requirements:
.. _the-internal-management-network-ul-uh1-pqs-hp:
- It must be capable of |PXE|-booting.
- It can be used by the motherboard as a primary boot device.
.. note::
If required, the internal management network can be configured as a
|VLAN|-tagged network. In this case, a separate IPv4 |PXE| boot
network must be implemented as the untagged network on the same
physical interface. This configuration must also be used if the
management network must support IPv6.
.. seealso::
:ref:`Internal Management Network Planning
<internal-management-network-planning>`
:ref:`Multicast Subnets for the Management Network
<multicast-subnets-for-the-management-network>`

View File

@ -0,0 +1,30 @@
.. hzz1585077472404
.. _the-storage-network:
===============
Storage Network
===============
The storage network is an optional network that is only required if using an
external Netapp Trident cluster as a storage backend.
The storage network provides connectivity between all nodes in the
|prod-long| cluster \(controller/master nodes and worker nodes\) and the
Netapp Trident cluster.
For the most part, the storage network shares the design considerations
applicable to the internal management network.
.. _the-storage-network-ul-c41-qwm-dlb:
- It can be implemented using a 10 Gb Ethernet interface.
- It can be |VLAN|-tagged, enabling it to share an interface with the
management or |OAM| network.
- It can own the entire IP address range on the subnet, or a specified
range.
- It supports dynamic or static IP address assignment.
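For illustration, a |VLAN|-tagged storage interface might be created and
assigned on a controller as in the following sketch; the VLAN ID, interface
names, and the storage network name \(storagenet\) are placeholders.

.. code-block:: none

    # Create a VLAN interface on top of the management interface
    ~(keystone_admin)]$ system host-if-add -V 16 -c platform controller-0 storage0 vlan mgmt0

    # Assign the previously created storage network to the new interface
    ~(keystone_admin)]$ system interface-network-assign controller-0 storage0 storagenet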

View File

@ -0,0 +1,25 @@
.. cvf1552672201332
.. _tpm-planning:
============
TPM Planning
============
|TPM| is an industry-standard crypto processor that enables secure storage
of HTTPS |SSL| private keys. It is used in support of advanced security
features.
|TPM| is an optional requirement for |UEFI| Secure Boot.
If you plan to use |TPM| for secure protection of REST API and Web Server
HTTPS |SSL| keys, ensure that |TPM| 2.0 compliant hardware devices are
fitted on controller nodes before provisioning them. If properly connected,
the BIOS should detect these new devices and display appropriate
configuration options. |TPM| must be enabled from the BIOS before it can be
used in software.
.. note::
|prod| allows post installation configuration of HTTPS mode. It is
possible to transition a live HTTP system to a system that uses |TPM|
for storage of HTTPS |SSL| keys without reinstalling the system.
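For example, installing an HTTPS certificate with |TPM|-protected private key
storage might look like the following sketch; the certificate path is a
placeholder, and the mode name should be confirmed against your release.

.. code-block:: none

    # Install the HTTPS certificate, storing the private key in the TPM
    ~(keystone_admin)]$ system certificate-install -m tpm_mode <server-cert-and-key>.pem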

View File

@ -0,0 +1,183 @@
.. svs1552672428539
.. _verified-commercial-hardware:
============================
Verified Commercial Hardware
============================
Verified and approved hardware components for use with |prod| are listed
here.
.. _verified-commercial-hardware-verified-components:
.. table:: Table 1. Verified Components
:widths: 100, 200
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Component | Approved Hardware |
+==========================================================+=============================================================================================================================================================================================================================================================================================================================================================================================================================================+
| Hardware Platforms | - Hewlett Packard Enterprise |
| | |
| | |
| | - HPE ProLiant DL360p Gen8 Server |
| | |
| | - HPE ProLiant DL360p Gen9 Server |
| | |
| | - HPE ProLiant DL360 Gen10 Server |
| | |
| | - HPE ProLiant DL380p Gen8 Server |
| | |
| | - HPE ProLiant DL380p Gen9 Server |
| | |
| | - HPE ProLiant ML350 Gen10 Server |
| | |
| | - c7000 Enclosure with HPE ProLiant BL460 Gen9 Server |
| | |
| | .. caution:: |
| | LAG support is dependent on the switch cards deployed with the c7000 enclosure. To determine whether LAG can be configured, consult the switch card documentation. |
| | |
| | |
| | - Dell |
| | |
| | |
| | - Dell PowerEdge R430 |
| | |
| | - Dell PowerEdge R630 |
| | |
| | - Dell PowerEdge R640 |
| | |
| | - Dell PowerEdge R720 |
| | |
| | - Dell PowerEdge R730 |
| | |
| | - Dell PowerEdge R740 |
| | |
| | |
| | - Kontron Symkloud MS2920 |
| | |
| | .. note:: |
| | The Kontron platform does not support power ON/OFF or reset through the BMC interface on |prod|. As a result, it is not possible for the system to properly fence a node in the event of a management network isolation event. In order to mitigate this, hosted application auto recovery needs to be disabled. |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Supported Reference Platforms | - Intel Iron Pass |
| | |
| | - Intel Canoe Pass |
| | |
| | - Intel Grizzly Pass |
| | |
| | - Intel Wildcat Pass |
| | |
| | - Intel Wolf Pass |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Disk Controllers | - Dell |
| | |
| | |
| | - PERC H310 Mini |
| | |
| | - PERC H730 Mini |
| | |
| | - PERC H740P |
| | |
| | - PERC H330 |
| | |
| | - PERC HBA330 |
| | |
| | |
| | |
| | - HPE Smart Array |
| | |
| | |
| | - P440ar |
| | |
| | - P420i |
| | |
| | - P408i-a |
| | |
| | - P816i-a |
| | |
| | |
| | - LSI 2308 |
| | |
| | - LSI 3008 |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for PXE Boot, Management, and OAM Networks | - Intel I210 \(Springville\) 1G |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |
| | - Intel 82599 \(Niantic\) 10G |
| | |
| | - Intel X540 10G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10G |
| | |
| | - Intel X722 \(Fortville\) 10G |
| | |
| | - Emulex XE102 10G |
| | |
| | - Broadcom BCM5719 1G |
| | |
| | - Broadcom BCM57810 10G |
| | |
| | - Mellanox MT27710 Family \(ConnectX-4 Lx\) 10G/25G |
| | |
| | - Mellanox MT27700 Family \(ConnectX-4\) 40G |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for Data Interfaces [#]_ | The following NICs are supported: |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |
| | - Intel 82599 \(Niantic\) 10G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10 G |
| | |
| | - Intel X552 \(Xeon-D\) 10G |
| | |
| | - Mellanox Technologies |
| | |
| | |
| | - MT27710 Family \(ConnectX-4\) 10G/25G |
| | |
| | - MT27700 Family \(ConnectX-4\) 40G |
| | |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PCI passthrough or PCI SR-IOV NICs | - Intel 82599 \(Niantic\) 10 G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10G |
| | |
| | - Mellanox Technologies |
| | |
| | |
| | - MT27500 Family \(ConnectX-3\) 10G \(support for PCI passthrough only\) [#]_ |
| | |
| | |
| | |
| | |
| | - MT27710 Family \(ConnectX-4\) 10G/25G |
| | |
| | - MT27700 Family \(ConnectX-4\) 40G |
| | |
| | |
| | .. note:: |
| | For a Mellanox CX3 using PCI passthrough or a CX4 using PCI passthrough or SR-IOV, SR-IOV must be enabled in the CX3/CX4 firmware. For more information, see `How To Configure SR-IOV for ConnectX-3 with KVM (Ethernet): Enable SR-IOV on the Firmware <https://community.mellanox.com/docs/DOC-2365#jive_content_id_I_Enable_SRIOV_on_the_Firmware>`__. |
| | |
| | |
| | .. note:: |
| | The maximum number of VFs per hosted application instance, across all PCI devices, is 32. |
| | |
| | For example, a hardware encryption hosted application can be launched with virtio interfaces and 32 QAT VFs. However, a hardware encryption hosted application with an SR-IOV network interface \(with 1 VF\) can only be launched with 31 VFs. |
| | |
| | .. note:: |
| | Dual-use configuration \(PCI passthrough or PCI SR-IOV on the same interface\) is supported for Fortville NICs only. |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PCI SR-IOV Hardware Accelerators | - Intel AV-ICE02 VPN Acceleration Card, based on the Intel Coleto Creek 8925/8950, and C62x device with QuickAssist ®. |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| GPUs Verified for PCI Passthrough | - NVIDIA Corporation: VGA compatible controller - GM204GL \(Tesla M60 rev a1\) |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Board Management Controllers | - HPE iLO3 |
| | |
| | - HPE iLO4 |
| | |
| | - Quanta |
+----------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. include:: ../../_includes/verified-commercial-hardware.rest

View File

@ -43,16 +43,8 @@
.. File name prefix, as in stx-remote-cli-<version>.tgz. May also be
used in sample domain names etc.
Note: This substitution is used in tabular output. Strings with lengths
other than 3 will cause table borders to misalign. You can pad shorter
substitutions with an escaped leading space, as in replace:: \ yz
You need to insert two spaces after the backslash to preserve one in
the output.
.. |prefix| replace:: stx
.. space character. Needed for padding in tabular output. Currently
used where |prefix| replacement is a length shorter than 3.
@ -61,7 +53,15 @@
.. |s| replace:: \
.. Common and domain-specific abbreviations.
.. Table row alternation inline override. Alternation styling is confused
.. by spans. Applies to all tables in an rST file.
.. |row-alt-off| raw:: html
<style>table.docutils tr.row-odd {background-color: #fff;}</style>
.. Common and domain-specific abbreviations.
.. Plural forms must be defined separately from singular as
.. replacements like |PVC|s won't work.
@ -71,6 +71,7 @@
.. |AE| replace:: :abbr:`AE (Aggregated Ethernet)`
.. |AIO| replace:: :abbr:`AIO (All-In-One)`
.. |AVP| replace:: :abbr:`AVP (Accelerated Virtual Port)`
.. |AWS| replace:: :abbr:`AWS (Amazon Web Services)`
.. |BGP| replace:: :abbr:`BGP (Border Gateway Protocol)`
.. |BMC| replace:: :abbr:`BMC (Board Management Controller)`
.. |BMCs| replace:: :abbr:`BMCs (Board Management Controllers)`