OpenStack planning

Updates for patchset 2 review comments:
Changed link depth of the main Planning index and added some narrative guidance.
Added planning/openstack as a sibling of planning/kubernetes.
Made related additions to abbrevs.txt.
Added a max-workers substitution to accommodate StarlingX/vendor variants.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Ibff9af74ab3f2c00958eff0e33c91465f1dab6b4
Signed-off-by: Stone <ronald.stone@windriver.com>
Ron Stone 2021-01-20 13:37:25 -05:00 committed by Stone
parent ebdf63ec68
commit 3143d86b69
100 changed files with 4517 additions and 2454 deletions

@@ -0,0 +1 @@
.. [#f1] See :ref:`Data Network Planning <data-network-planning>` for more information.


@@ -0,0 +1,4 @@
.. vxlan-begin

- To minimize flooding of multicast packets, |IGMP| and |MLD| snooping is
  recommended on all Layer 2 switches.


@@ -0,0 +1,2 @@
.. unmodified-guests-virtio-begin
.. highest-performance-begin

96
doc/source/_vendor/vendor_strings.txt Normal file → Executable file

@@ -1,46 +1,50 @@
.. Common string substitutions for brand customization and consistency.
.. NOTE: Do not use underscores in these substitution names.
.. For more information, see
.. https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#substitutions

.. Organization name
.. |org| replace:: StarlingX

.. Short and long product names such as "StarlingX" and "Acme Co. StarlingX"
.. |prod| replace:: StarlingX
.. |prod-long| replace:: StarlingX
.. |prod-os| replace:: StarlingX OpenStack
.. |prod-dc| replace:: Distributed Cloud

.. Guide names; will be formatted in italics by default.
.. |node-doc| replace:: :title:`StarlingX Node Configuration and Management`
.. |planning-doc| replace:: :title:`StarlingX Planning`
.. |sec-doc| replace:: :title:`StarlingX Security`
.. |inst-doc| replace:: :title:`StarlingX Installation`
.. |stor-doc| replace:: :title:`StarlingX Storage Configuration and Management`
.. |intro-doc| replace:: :title:`StarlingX Introduction`
.. |fault-doc| replace:: :title:`StarlingX Fault Management`
.. |sysconf-doc| replace:: :title:`StarlingX System Configuration`
.. |backup-doc| replace:: :title:`StarlingX Backup and Restore`
.. |deploy-doc| replace:: :title:`StarlingX Deployment Configurations`
.. |distcloud-doc| replace:: :title:`StarlingX Distributed Cloud`
.. |usertasks-doc| replace:: :title:`StarlingX User Tasks`
.. |admintasks-doc| replace:: :title:`StarlingX Administrator Tasks`
.. |datanet-doc| replace:: :title:`StarlingX Data Networks`

.. Name of downloads location
.. |dnload-loc| replace:: a StarlingX mirror

.. File name prefix, as in stx-remote-cli-<version>.tgz. May also be
   used in sample domain names etc.
.. |prefix| replace:: stx

.. space character. Needed for padding in tabular output. Currently
   used where |prefix| replacement is a length shorter than 3.
   To insert a space, use "replace:: \ \" (with two spaces)
   To insert no spaces, use "replace:: \"
.. |s| replace:: \
.. product capabilities
.. |max-workers| replace:: 99
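
As a quick illustration of how these substitutions are consumed elsewhere in the docs (the sentences below are invented for demonstration, not taken from a real page):

.. code-block:: rst

   A |prod-long| edge cloud supports 0 to |max-workers| worker nodes.
   For planning details, see |planning-doc|; images are available from
   |dnload-loc| as |prefix|-remote-cli-<version>.tgz.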


@@ -15,7 +15,7 @@ Bare Metal
Worker
    A node within a |prod| edge cloud that is dedicated to running application
    workloads. There can be 0 to |max-workers| worker nodes in a |prod| edge cloud.

    - Runs virtual switch for realizing virtual networks.
    - Provides L3 routing and NET services.

3
doc/source/planning/.vscode/settings.json vendored Normal file → Executable file

@@ -1,3 +1,2 @@
{
    "restructuredtext.confPath": "/mnt/c/Users/rstone/Desktop/upstream/planning/docs/doc/source"
}
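
The removed value pointed at one developer's local checkout. If a shared setting is wanted at all, a portable sketch could use the editor's workspace variable, assuming the reStructuredText extension resolves ``${workspaceFolder}`` (verify in the extension's documentation):

.. code-block:: json

   {
       "restructuredtext.confPath": "${workspaceFolder}/doc/source"
   }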

The following figures changed mode from Normal file → Executable file; image content and sizes are unchanged:

doc/source/planning/figures/eag1565612501060.png (25 KiB)
doc/source/planning/figures/fqq1554387160841.png (54 KiB)
doc/source/planning/figures/gnc1565626763250.jpeg (108 KiB)
doc/source/planning/figures/jow1404333560781.png (105 KiB)
doc/source/planning/figures/jow1438030468959.png (9.2 KiB)
doc/source/planning/figures/jrh1581365123827.png (43 KiB)
doc/source/planning/figures/noc1581364555316.png (33 KiB)
doc/source/planning/figures/rld1581365711865.png (42 KiB)
doc/source/planning/figures/rsn1565611176484.png (28 KiB)
doc/source/planning/figures/sye1565216249447.png (21 KiB)
doc/source/planning/figures/uac1581365928043.png (44 KiB)
doc/source/planning/figures/vzz1565620523528.png (42 KiB)
doc/source/planning/figures/xjf1565612136985.png (15 KiB)
doc/source/planning/figures/zpk1486667625575.png (22 KiB)
159
doc/source/planning/index.rst Normal file → Executable file

@@ -1,122 +1,37 @@
.. Planning file, created by
   sphinx-quickstart on Thu Sep 3 15:14:59 2020.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

========
Planning
========

----------
Kubernetes
----------

|prod| platform planning helps ensure that the requirements of your containers,
and the requirements of your cloud administration and operations teams can be
met. It ensures proper integration of a |prod| into the target data center or
telecom office, and helps you plan up front for future cloud growth.

Planning your |prod| installation is a prerequisite for further |prod-os|
installation planning.

.. toctree::
   :maxdepth: 2

   kubernetes/index

---------
OpenStack
---------

|prod-os| is installed as an application in a deployed |prod| environment and
requires additional network, storage, security and resource planning.

.. toctree::
   :maxdepth: 2

   openstack/index


@@ -1,28 +1,28 @@
.. buu1552671069267
.. _about-ethernet-interfaces:

=========================
About Ethernet Interfaces
=========================

Ethernet interfaces, both physical and virtual, play a key role in the overall
performance of the virtualized network. It is important to understand the
available interface types, their configuration options, and their impact on
network design.

.. _about-ethernet-interfaces-section-N1006F-N1001A-N10001:

-----------------------
About LAG/AE interfaces
-----------------------

You can use |LAG| for Ethernet interfaces. |prod| supports up to four ports in
a |LAG| group.

Ethernet interfaces in a |LAG| group can be attached either to the same L2
switch, or to multiple switches in a redundant configuration. For more
information about L2 switch configurations, see |planning-doc|: :ref:`L2 Access
Switches <l2-access-switches>`.

.. xbooklink For information about the different |LAG| modes, see |node-doc|: :ref:`Link Aggregation Settings <link-aggregation-settings>`.
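
As a sketch of how an aggregated interface might be created from the CLI, consider the following; the flag name, mode, and port names are assumptions to verify with :command:`system help host-if-add` on your release:

.. code-block:: none

   # Illustrative only: bond two ports on controller-0 into an
   # aggregated (ae) interface using 802.3ad mode.
   ~(keystone_admin)$ system host-if-add controller-0 bond0 ae enp0s8 enp0s9 -a 802.3ad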

126
doc/source/planning/kubernetes/about-the-oam-network.rst Normal file → Executable file

@@ -1,63 +1,63 @@
.. ozd1552671198357
.. _about-the-oam-network:

=====================
About the OAM Network
=====================

The |OAM| network provides for control access.

You should ensure that the following services are available on the |OAM|
Network:

**DNS Service**
    Needed to facilitate the name resolution of servers reachable on the |OAM|
    Network.

    |prod| can operate without a configured DNS service. However, a DNS service
    should be in place to ensure that links to external references in the
    current and future versions of the Horizon Web interface work as expected.

**Docker Registry Service**
    A private or public Docker registry service needed to serve remote
    container image requests from Kubernetes and the underlying Docker service.
    This remote Docker registry must hold the required |prod| container images
    for the appropriate release, to fully install a |prod| system.

**NTP Service**
    |NTP| can be used by the |prod| controller nodes to synchronize their local
    clocks with a reliable external time reference. |org| strongly recommends
    that this service be available to ensure that system-wide log reports
    present a unified view of the day-to-day operations.

    The |prod| worker nodes and storage nodes always use the controller nodes
    as the de-facto time server for the entire |prod| cluster.

**PTP Service**
    As an alternative to |NTP| services, |PTP| can be used by the |prod|
    controller nodes to synchronize clocks in a network. It provides:

    - more accurate clock synchronization

    - the ability to extend the clock synchronization, not only to |prod|
      hosts \(controllers, workers, and storage nodes\), but also to hosted
      applications on |prod| hosts.

    When used in conjunction with hardware support on the |OAM| and Management
    network interface cards, |PTP| is capable of sub-microsecond accuracy.

    |org| strongly recommends that this service, or |NTP|, if available, be
    used to ensure that system-wide log reports present a unified view of the
    day-to-day operations, and that other time-sensitive operations are
    performed accurately.

    Various |NICs| and network switches provide the hardware support for |PTP|
    used by |OAM| and Management networks. This hardware provides an on-board
    clock that is synchronized to the |PTP| master. The computer's system clock
    is synchronized to the |PTP| hardware clock on the |NIC| used to stamp
    transmitted and received |PTP| messages. For more information, see the
    *IEEE 1588-2002* standard.

.. note::
    |NTP| and |PTP| can be configured on a per host basis.
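
Because the synchronization source is selectable per host, switching a host from |NTP| to |PTP| might look like the following sketch; the clock_synchronization parameter name is an assumption to verify against your release:

.. code-block:: none

   # Lock the host, select PTP, then unlock to apply.
   ~(keystone_admin)$ system host-lock controller-0
   ~(keystone_admin)$ system host-update controller-0 clock_synchronization=ptp
   ~(keystone_admin)$ system host-unlock controller-0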


@@ -1,48 +1,48 @@
.. kll1552672476085
.. _controller-disk-configurations-for-all-in-one-systems:

=====================================================
Controller Disk Configurations for All-in-one Systems
=====================================================

For |prod| Simplex and Duplex Systems, the controller disk configuration is
highly flexible to support different system requirements for Cinder and
nova-local storage.

You can also change the disk configuration after installation to increase the
persistent volume claim or container-ephemeral storage.

.. _controller-disk-configurations-for-all-in-one-systems-table-h4n-rmg-3jb:

.. table:: Table 1. Disk Configurations for |prod| Simplex or Duplex systems
    :widths: auto

    +--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
    | No. of Disks | Disk     | BIOS Boot | Boot      | Root      | Platform File System Volume Group \(cgts-vg\) | Root Disk Unallocated Space | Ceph OSD \(PVCs\) | Notes                                                                   |
    +==============+==========+===========+===========+===========+===============================================+=============================+===================+=========================================================================+
    | 1            | /dev/sda |           |           |           |                                               |                             |                   | Not supported                                                           |
    +--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
    | 2            | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4                                     | Not allocated               | Disk              | Space left unallocated for future application use                       |
    |              |          |           |           |           |                                               |                             |                   |                                                                         |
    |              | /dev/sdb |           |           |           |                                               |                             |                   | AIO-SX [#fntarg1]_ \(replication = 1\); AIO-DX \(replication = 2\)      |
    +--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
    | 2            | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4                                     | /dev/sda5 \(cgts-vg\)       | Disk              | Space allocated to cgts-vg to allow filesystem expansion                |
    |              |          |           |           |           |                                               |                             |                   |                                                                         |
    |              | /dev/sdb |           |           |           |                                               |                             |                   | AIO-SX \(replication = 1\); AIO-DX \(replication = 2\)                  |
    +--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
    | 3            | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4                                     | Not allocated               | Disk              | Space left unallocated for future application use                       |
    |              |          |           |           |           |                                               |                             |                   |                                                                         |
    |              | /dev/sdb |           |           |           |                                               |                             | Disk              | AIO-SX:superscript:`1:` \(replication = 2\); AIO-DX \(replication = 2\) |
    |              |          |           |           |           |                                               |                             |                   |                                                                         |
    |              | /dev/sdc |           |           |           |                                               |                             |                   | AIO-SX:superscript:`1:` \(replication = 2\); AIO-DX \(replication = 2\) |
    +--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+
    | 3            | /dev/sda | /dev/sda1 | /dev/sda2 | /dev/sda3 | /dev/sda4                                     | /dev/sda5 \(cgts-vg\)       | Disk              | Space allocated to cgts-vg to allow filesystem expansion                |
    |              |          |           |           |           |                                               |                             |                   |                                                                         |
    |              | /dev/sdb |           |           |           |                                               |                             | Disk              | AIO-SX:superscript:`1:` \(replication = 2\); AIO-DX \(replication = 2\) |
    |              |          |           |           |           |                                               |                             |                   |                                                                         |
    |              | /dev/sdc |           |           |           |                                               |                             |                   | AIO-SX:superscript:`1:` \(replication = 2\); AIO-DX \(replication = 2\) |
    +--------------+----------+-----------+-----------+-----------+-----------------------------------------------+-----------------------------+-------------------+-------------------------------------------------------------------------+

.. [#fntarg1] |AIO|-Simplex Ceph replication is disk-based.
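
To check how a controller's disks map onto this table in a running system, the disk and partition inventory can be listed from the CLI; a minimal sketch (host name illustrative):

.. code-block:: none

   # Show physical disks, sizes, and available space on controller-0.
   ~(keystone_admin)$ system host-disk-list controller-0

   # Show the partitions carved from each disk.
   ~(keystone_admin)$ system host-disk-partition-list controller-0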

28
doc/source/planning/kubernetes/dns-and-ntp-servers.rst Normal file → Executable file

@@ -1,14 +1,14 @@
.. olk1552671165229
.. _dns-and-ntp-servers:

===================
DNS and NTP Servers
===================

|prod| supports configuring up to three remote DNS servers and three remote
|NTP| servers to use for name resolution and network time synchronization
respectively.

These can be specified during Ansible bootstrapping of controller-0 or at a
later time.
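
For example, setting both after bootstrap with the system CLI (server values are placeholders):

.. code-block:: none

   # Up to three comma-separated servers each.
   ~(keystone_admin)$ system dns-modify nameservers=8.8.8.8,8.8.4.4
   ~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org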


@@ -1,28 +1,28 @@
.. pci1585052341505
.. _external-netapp-trident-storage:

===============================
External NetApp Trident Storage
===============================

|prod| can utilize the open source NetApp Trident block storage backend as an
alternative to Ceph-based internal block storage.

NetApp Trident supports:

.. _external-netapp-trident-storage-d247e23:

- |AWS| Cloud Volumes

- E and EF-Series SANtricity

- ONTAP AFF, FAS, Select, and Cloud

- Element HCI and SolidFire

- Azure NetApp Files service

For more information about Trident, see `https://netapp-trident.readthedocs.io
<https://netapp-trident.readthedocs.io>`__.

56
doc/source/planning/kubernetes/hard-drive-options.rst Normal file → Executable file

@@ -1,28 +1,28 @@
.. lqo1552672461538
.. _hard-drive-options:

==================
Hard Drive Options
==================

For hard drive storage, |prod| supports high-performance |SSD| and |NVMe|
drives as well as rotational disks.

To increase system performance, you can use an |SSD| or an |NVMe| drive on |prod|
hosts in place of any rotational drive. |SSD| provides faster read-write access
than mechanical drives. |NVMe| supports the full performance potential of |SSD|
by providing a faster communications bus compared to the |SATA| or |SAS|
technology used with standard |SSDs|.

On storage hosts, |SSD| or |NVMe| drives are required for journals or Ceph
caching.

.. xrefbook For more information about these features, see |stor-doc|: :ref:`Storage on Storage Hosts <storage-hosts-storage-on-storage-hosts>`.

For |NVMe| drives, a host with an |NVMe|-ready BIOS and |NVMe| connectors or
adapters is required.

To use an |NVMe| drive as a root drive, you must enable |UEFI| support in the
host BIOS. In addition, when installing the host, you must perform extra steps
to assign the drive as the boot device.


@@ -0,0 +1,109 @@
.. _planning_kubernetes_index:

----------
Kubernetes
----------

************
Introduction
************

.. toctree::
   :maxdepth: 1

   overview-of-starlingx-planning

****************
Network planning
****************

.. toctree::
   :maxdepth: 1

   network-requirements
   networks-for-a-simplex-system
   networks-for-a-duplex-system
   networks-for-a-system-with-controller-storage
   networks-for-a-system-with-dedicated-storage
   network-requirements-ip-support
   network-planning-the-pxe-boot-network
   the-cluster-host-network
   the-storage-network

Internal management network
***************************

.. toctree::
   :maxdepth: 1

   the-internal-management-network
   internal-management-network-planning
   multicast-subnets-for-the-management-network

OAM network
***********

.. toctree::
   :maxdepth: 1

   about-the-oam-network
   oam-network-planning
   dns-and-ntp-servers
   network-planning-firewall-options

L2 access switches
******************

.. toctree::
   :maxdepth: 1

   l2-access-switches
   redundant-top-of-rack-switch-deployment-considerations

Ethernet interfaces
*******************

.. toctree::
   :maxdepth: 1

   about-ethernet-interfaces
   network-planning-ethernet-interface-configuration
   the-ethernet-mtu
   shared-vlan-or-multi-netted-ethernet-interfaces

****************
Storage planning
****************

.. toctree::
   :maxdepth: 1

   storage-planning-storage-resources
   storage-planning-storage-on-controller-hosts
   storage-planning-storage-on-worker-hosts
   storage-planning-storage-on-storage-hosts
   external-netapp-trident-storage

*****************
Security planning
*****************

.. toctree::
   :maxdepth: 1

   security-planning-uefi-secure-boot-planning
   tpm-planning

**********************************
Installation and resource planning
**********************************

.. toctree::
   :maxdepth: 1

   installation-and-resource-planning-https-access-planning
   starlingx-hardware-requirements
   verified-commercial-hardware
   starlingx-boot-sequence-considerations
   hard-drive-options
   controller-disk-configurations-for-all-in-one-systems


@@ -1,109 +1,109 @@
.. cxj1582060027471
.. _installation-and-resource-planning-https-access-planning:

=====================
HTTPS Access Planning
=====================

You can enable secure HTTPS access and manage HTTPS certificates for all
external |prod| service endpoints.

These include:

.. _installation-and-resource-planning-https-access-planning-d18e34:

.. contents:: |minitoc|
   :local:
   :depth: 1

.. note::
    Only self-signed or Root |CA|-signed certificates are supported for the
    above |prod| service endpoints. See `https://en.wikipedia.org/wiki/X.509
    <https://en.wikipedia.org/wiki/X.509>`__ for an overview of root,
    intermediate, and end-entity certificates.

You can also add a trusted |CA| for the |prod| system.

.. note::
    The default HTTPS X.509 certificates that are used by |prod-long| for
    authentication are not signed by a known authority. For increased security,
    obtain, install, and use certificates that have been signed by a Root
    certificate authority. Refer to the documentation for the external Root
    |CA| that you are using, on how to create public certificate and private
    key pairs, signed by a Root CA, for HTTPS.

.. _installation-and-resource-planning-https-access-planning-d18e75:

-----------------------------------------------------------------
StarlingX REST API applications and the web administration server
-----------------------------------------------------------------

By default, |prod| provides HTTP access to StarlingX REST API application
endpoints \(Keystone, Barbican and StarlingX\) and the StarlingX web
administration server. For improved security, you can enable HTTPS access. When
HTTPS access is enabled, HTTP access is disabled.

When HTTPS is enabled for the first time on a |prod| system, a self-signed
certificate and key are automatically generated and installed for the StarlingX
REST and Web Server endpoints. In order to connect, remote clients must be
configured to accept the self-signed certificate without verifying it. This is
called *insecure mode*.

For secure mode connections, a Root |CA|-signed certificate and key are
required. The use of a Root |CA|-signed certificate is strongly recommended.
Refer to the documentation for the external |CA| that you are using, on how to
create public certificate and private key pairs for HTTPS.

You can update the certificate and key used by |prod| for the StarlingX REST
and Web Server endpoints at any time after installation.

For additional security, |prod| optionally supports storing the private key of
the StarlingX REST and Web Server certificate in a StarlingX |TPM| hardware
device. |TPM| 2.0-compliant hardware must be available on the controller hosts.

.. _installation-and-resource-planning-https-access-planning-d18e105:

----------
Kubernetes
----------

For the Kubernetes API Server, HTTPS is always enabled. Similarly, by default,
a self-signed certificate and key is generated and installed for the Kubernetes
Root |CA| certificate and key. This Kubernetes Root |CA| is used to create and
sign various certificates used within Kubernetes, including the certificate
used by the kube-apiserver API endpoint.

It is recommended that you update the Kubernetes Root |CA| with a custom
Root |CA| certificate and key, generated by yourself, and trusted by external
servers connecting to the |prod| system's Kubernetes API endpoint. The system's
Kubernetes Root |CA| is configured as part of the bootstrap during
installation.

.. _installation-and-resource-planning-https-access-planning-d18e117:

---------------------
Local Docker registry
---------------------

For the local Docker registry, HTTPS is always enabled. Similarly, by default,
a self-signed certificate and key is generated and installed for this endpoint.
However, it is recommended that you update the certificate used after
installation with a Root |CA|-signed certificate and key. Refer to the
documentation for the external |CA| that you are using, on how to create public
certificate and private key pairs for HTTPS.

.. _installation-and-resource-planning-https-access-planning-d18e126:

-----------
Trusted CAs
-----------

|prod| also supports the ability to update the trusted |CA| certificate bundle
on all nodes in the system. This is required, for example, when container
images are being pulled from an external Docker registry with a certificate
signed by a non-well-known |CA|.
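
A sketch of installing an updated certificate after installation; the mode names here \(ssl, ssl_ca\) are assumptions to confirm with :command:`system help certificate-install`:

.. code-block:: none

   # Install a Root CA-signed certificate/key bundle for the StarlingX
   # REST and Web Server endpoints.
   ~(keystone_admin)$ system certificate-install -m ssl server-with-key.pem

   # Add a trusted CA certificate to the system bundle.
   ~(keystone_admin)$ system certificate-install -m ssl_ca external-ca.pem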


@@ -1,57 +1,57 @@
.. lla1552670572043
.. _internal-management-network-planning:

===============================================
Kubernetes Internal Management Network Planning
===============================================

The internal management network is a private network, visible only to the hosts
in the cluster.

.. note::
    This network is not used with |prod| Simplex systems.

You must consider the following guidelines:

.. _internal-management-network-planning-ul-gqd-gj2-4n:

- The internal management network is used for |PXE| booting of new hosts, and
  must be untagged. It is limited to IPv4, because the |prod| installer does
  not support IPv6 |PXE| booting. For example, if the internal management
  network needs to be on a |VLAN|-tagged network for deployment reasons, or
  if it must support IPv6, you can configure the optional untagged |PXE| boot
  network for |PXE| booting of new hosts using IPv4.

- You can use any 1 GB or 10 GB interface on the hosts to connect to this
  network, provided that the interface supports network booting and can be
  configured from the BIOS as the primary boot device.

- If static IP address assignment is used, you must use the :command:`system
  host-add` command to add new hosts, and to assign IP addresses manually
  (a sample invocation is sketched after this list). In
  this mode, new hosts are *not* automatically added to the inventory when
  they are powered on, and they display the following message on the host
  console:

  .. code-block:: none

     This system has been configured with static management
     and infrastructure IP address allocation. This requires
     that the node be manually provisioned in System
     Inventory using the 'system host-add' CLI, GUI, or
     stx API equivalent.

- For the IPv4 address plan, use a private IPv4 subnet as specified in RFC
  1918. This helps prevent unwanted cross-network traffic on this network.
  It is suggested that you use the default subnet and addresses provided by
  the controller configuration script.

- You can assign a range of addresses on the management subnet for use by the
  |prod|. If you do not assign a range, |prod| takes ownership of all
  available addresses.

- On systems with two controllers, they use IP multicast messaging on the
  internal management network. To prevent loss of controller synchronization,
  ensure that the switches and other devices on these networks are configured
  with appropriate settings.
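
A minimal sketch of the static-assignment workflow referenced above; the flag names \(-n hostname, -p personality, -m management MAC, -i management IP\) and all values are illustrative, so verify them with :command:`system help host-add`:

.. code-block:: none

   # Manually provision a new worker node in System Inventory.
   ~(keystone_admin)$ system host-add -n worker-0 -p worker \
     -m 08:00:27:aa:bb:cc -i 192.168.204.50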

122
doc/source/planning/kubernetes/l2-access-switches.rst Normal file → Executable file

@@ -1,61 +1,61 @@
.. kvt1552671101079
.. _l2-access-switches:

=============================
Kubernetes L2 Access Switches
=============================

L2 access switches connect the |prod| hosts to the different networks. Proper
configuration of the access ports is necessary to ensure proper traffic flow.

One or more L2 switches can be used to connect the |prod| hosts to the
different networks. When sharing a single L2 switch you must ensure proper
isolation of network traffic. A sample configuration for a shared L2 switch
could include:

.. _l2-access-switches-ul-obf-dyr-4n:

- one port-based |VLAN| for the internal management network, with the internal
  cluster host network sharing this same L2 network \(default configuration\)

- one port-based |VLAN| for the |OAM| network

- one or more sets of |VLANs| for additional networks for external network
  connectivity

When using multiple L2 switches, there are several deployment possibilities:

.. _l2-access-switches-ul-qmd-wyr-4n:

- A single L2 switch for the internal management, cluster host, and |OAM|
  networks. Port or |MAC|-based network isolation is mandatory.

- An additional L2 switch for the one or more additional networks for
  external network connectivity.

- Redundant L2 switches to support link aggregation, using either a failover
  model, or |VPC| for more robust redundancy. For more information, see
  :ref:`Redundant Top-of-Rack Switch Deployment Considerations
  <redundant-top-of-rack-switch-deployment-considerations>`.

Switch ports that send tagged traffic are referred to as trunk ports. They
participate in |STP| from the moment the link goes up, which results in a
several-second delay before the trunk port moves to the forwarding state. This
delay will impact services such as |DHCP| and |PXE| that are used during
regular operations of |prod|.

You must consider configuring the switch ports, to which the management
interfaces are attached, to transition to the forwarding state immediately
after the link goes up. This option is referred to as PortFast.

You should consider configuring these ports to prevent them from participating
in any |STP| exchanges. This is done by configuring them to avoid processing
inbound and outbound |BPDU| |STP| packets completely. Consult your switch's
manual for details.

.. seealso::
    :ref:`Redundant Top-of-Rack Switch Deployment Considerations <redundant-top-of-rack-switch-deployment-considerations>`
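
As an illustration of the last two recommendations, a Cisco IOS-style port configuration is sketched below; command names differ between vendors, so treat this as an example only and consult your switch's manual:

.. code-block:: none

   ! Example only: management-facing access port.
   interface GigabitEthernet1/0/1
    spanning-tree portfast
    spanning-tree bpdufilter enable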


@@ -1,54 +1,54 @@
.. osh1552670597082
.. _multicast-subnets-for-the-management-network:

============================================
Multicast Subnets for the Management Network
============================================

A multicast subnet specifies the range of addresses that the system can use for
multicast messaging on the network. You can use this subnet to prevent
multicast leaks in multi-region environments. Addresses for the affected
services are allocated automatically from the subnet.

The requirements for multicast subnets are as follows:

.. _multicast-subnets-for-the-management-network-ul-ubf-ytc-b1b:

- IP multicast addresses must be in the range of 224.0.0.0 through
  239.255.255.255.

  For IPv6, the recommended range is ffx5::/16.

- IP multicast address ranges for a particular region must not conflict or
  overlap with the IP multicast address ranges of other regions.

- IP multicast address ranges must not conflict or overlap with the
  well-known multicast addresses listed at:
  `https://en.wikipedia.org/wiki/Multicast_address
  <https://en.wikipedia.org/wiki/Multicast_address>`__

- IP multicast addresses must be unique within the network.

- The lower 23 bits of the IP multicast address, used to construct the
  multicast MAC address, must be unique within the network.

- When interfaces of different regions are on the same L2 network / IP
  subnet, a separate multicast subnet is required for each region.

- The minimum multicast network range is 16 host entries.

.. note::
    Addresses used within the IP multicast address range apply to services
    using IP multicast, not to hosts.

.. warning::
    |ToR| switches with snooping enabled on this network segment require an
    |IGMP|/|MLD| querier on that network to prevent nodes from being dropped
    from the multicast group.

The default setting for the multicast subnet is 239.1.1.0/28. The default for
IPv6 is ff05::14:1:1:0/124.
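
A hedged sketch of overriding this default in the Ansible bootstrap overrides file; the management_multicast_subnet key name is an assumption to verify against your release's bootstrap reference:

.. code-block:: yaml

   # /home/sysadmin/localhost.yml (illustrative)
   # A /28 satisfies the 16-host-entry minimum noted above.
   management_multicast_subnet: 239.1.3.0/28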


@ -1,53 +1,53 @@
.. elj1552671053086
.. _network-planning-ethernet-interface-configuration:

================================
Ethernet Interface Configuration
================================

You can review and modify the configuration for physical or virtual Ethernet
interfaces using the Horizon Web interface or the CLI.

.. _network-planning-ethernet-interface-configuration-section-N1001F-N1001C-N10001:

----------------------------
Physical Ethernet Interfaces
----------------------------

The physical Ethernet interfaces on |prod| nodes are configured to use the
following networks:

.. _network-planning-ethernet-interface-configuration-ul-lk1-b4j-zq:

- the internal management network, with the cluster host network sharing this
  interface \(default configuration\)

- the external |OAM| network

- additional networks for container workload connectivity to external
  networks

A single interface can be configured to support more than one network using
|VLAN| tagging. See :ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces
<shared-vlan-or-multi-netted-ethernet-interfaces>` for more information.

On the controller nodes, all Ethernet interfaces are configured when the nodes
are initialized, based on the information provided in the Ansible Bootstrap
Playbook. For more information, see the `StarlingX Installation and Deployment
Guide <https://docs.starlingx.io/deploy_install_guides/index.html>`__. On
worker and storage nodes, only the Ethernet interface for the internal
management network is configured; the remaining interfaces require manual
configuration.

.. note::
   If a network attachment uses |LAG|, the corresponding interfaces on the
   storage and worker nodes must be configured manually to specify the
   interface type.

You can review and modify physical interface configurations from Horizon or the
CLI.

.. xbooklink For more information, see |node-doc|: :ref:`Edit Interface Settings <editing-interface-settings>`.
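
For example, a node's interfaces and their network assignments can be listed
from the CLI as shown in this sketch; the host name is illustrative:

.. code-block:: none

   $ system host-if-list -a controller-0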
You can save the interface configurations for a particular node to use as a
profile or template when setting up other nodes.
@@ -1,103 +1,103 @@
.. rmw1552671149311
.. _network-planning-firewall-options:

================
Firewall Options
================

|prod| incorporates a default firewall for the |OAM| network. You can configure
additional Kubernetes Network Policies in order to augment or override the
default rules.

The |prod| firewall uses Kubernetes Network Policies \(using the Calico
|CNI|\) to implement a firewall on the |OAM| network.

A minimal set of rules is always applied before any custom rules, as follows:

.. _network-planning-firewall-options-d342e35:

- Non-|OAM| traffic is always accepted.

- Egress traffic is always accepted.

- Service manager \(SM\) traffic is always accepted.

- |SSH| traffic is always accepted.

You can introduce custom rules by creating and installing custom Kubernetes
Network Policies.

The following example opens the default HTTPS port, 443.

.. code-block:: none

   % cat <<EOF > gnp-oam-overrides.yaml
   apiVersion: crd.projectcalico.org/v1
   kind: GlobalNetworkPolicy
   metadata:
     name: gnp-oam-overrides
   spec:
     ingress:
     - action: Allow
       destination:
         ports:
         - 443
       protocol: TCP
     order: 500
     selector: has(iftype) && iftype == 'oam'
     types:
     - Ingress
   EOF

Apply the policy using the :command:`kubectl apply` command. For example:

.. code-block:: none

   $ kubectl apply -f gnp-oam-overrides.yaml

You can confirm the policy was applied properly using the :command:`kubectl
describe` command. For example:

.. code-block:: none

   $ kubectl describe globalnetworkpolicy gnp-oam-overrides
   Name:         gnp-oam-overrides
   Namespace:
   Labels:       <none>
   Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"crd.projectcalico.org/v1","kind":"GlobalNetworkPolicy","metadata":{"annotations":{},"name":"gnp-openstack-oam"},"spec...
   API Version:  crd.projectcalico.org/v1
   Kind:         GlobalNetworkPolicy
   Metadata:
     Creation Timestamp:  2019-05-16T13:07:45Z
     Generation:          1
     Resource Version:    296298
     Self Link:           /apis/crd.projectcalico.org/v1/globalnetworkpolicies/gnp-openstack-oam
     UID:                 98a324ab-77db-11e9-9f9f-a4bf010007e9
   Spec:
     Ingress:
       Action:  Allow
       Destination:
         Ports:
           443
       Protocol:  TCP
     Order:       500
     Selector:    has(iftype) && iftype == 'oam'
     Types:
       Ingress
   Events:  <none>
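
To review or remove custom policies later, the standard :command:`kubectl`
verbs can be used against the GlobalNetworkPolicy resource. A brief sketch,
using the policy name created above:

.. code-block:: none

   $ kubectl get globalnetworkpolicies
   $ kubectl delete globalnetworkpolicy gnp-oam-overrides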
.. xbooklink For information about yaml rule syntax, see |sysconf-doc|: :ref:`Modify OAM Firewall Rules <modifying-oam-firewall-rules>`.

.. xbooklink For the default rules used by |prod| see |sec-doc|: :ref:`Default Firewall Rules <security-default-firewall-rules>`.

.. seealso::
   For a full description of |GNP| syntax:
   `https://docs.projectcalico.org/v3.6/reference/calicoctl/resources/globalnetworkpolicy
   <https://docs.projectcalico.org/v3.6/reference/calicoctl/resources/globalnetworkpolicy>`__.
@@ -1,21 +1,21 @@
.. bvi1552670521399
.. _network-planning-the-pxe-boot-network:

================
PXE Boot Network
================

You can set up a |PXE| boot network for booting all nodes to allow a
non-standard management network configuration.

By default, the internal management network is used for |PXE| booting of new
hosts, and the |PXE| boot network is not required. However, there are
scenarios where the internal management network cannot be used for |PXE|
booting of new hosts. For example, if the internal management network needs
to be on a |VLAN|-tagged network for deployment reasons, or if it must
support IPv6, you must configure the optional untagged |PXE| boot network for
|PXE| booting of new hosts using IPv4.

.. note::
   |prod| does not support IPv6 |PXE| booting.
@@ -1,28 +1,28 @@
.. tss1516219381154
.. _network-requirements-ip-support:

==========
IP Support
==========

|prod| supports IPv4 and IPv6 versions for various networks.

The following table lists IPv4 and IPv6 support for different networks:

.. _network-requirements-ip-support-table-xqy-3cj-4cb:

.. table:: Table 1. IPv4 and IPv6 Support
   :widths: auto

   +----------------------+--------------+--------------+-----------------------------------------------------------------+
   | Networks             | IPv4 Support | IPv6 Support | Comment                                                         |
   +======================+==============+==============+=================================================================+
   | PXE boot             | Y            | N            | If present, the PXE boot network is used for PXE booting of    |
   |                      |              |              | new hosts \(instead of using the internal management           |
   |                      |              |              | network\), and must be untagged. It is limited to IPv4,        |
   |                      |              |              | because the |prod| installer does not support IPv6 UEFI        |
   |                      |              |              | booting.                                                       |
   +----------------------+--------------+--------------+-----------------------------------------------------------------+
   | Internal Management  | Y            | Y            | By default \(when the PXE boot network is not present\),       |
   |                      |              |              | the internal management network is used for PXE booting of    |
   |                      |              |              | new hosts. It must be untagged and it must be IPv4. If, for   |
   |                      |              |              | deployment reasons, the internal management network needs to  |
   |                      |              |              | be on a VLAN-tagged network, or if it needs to be IPv6, you   |
   |                      |              |              | can configure the optional untagged PXE boot network for PXE  |
   |                      |              |              | booting of new hosts using IPv4.                               |
   +----------------------+--------------+--------------+-----------------------------------------------------------------+
   | OAM                  | Y            | Y            | The OAM network supports IPv4 or IPv6 addressing. For more    |
   |                      |              |              | information, see :ref:`OAM Network Planning                   |
   |                      |              |              | <oam-network-planning>`.                                       |
   +----------------------+--------------+--------------+-----------------------------------------------------------------+
   | Cluster Host Network | Y            | Y            | The Cluster Host network supports IPv4 or IPv6 addressing.    |
   +----------------------+--------------+--------------+-----------------------------------------------------------------+
78 doc/source/planning/kubernetes/network-requirements.rst Normal file → Executable file

@@ -1,39 +1,39 @@
.. jow1404333564380
.. _network-requirements:

====================
Network Requirements
====================

|prod| uses several different types of networks, depending on the size of the
system and the features in use.

Available networks include the optional |PXE| boot network, the internal
management network, the cluster host network, the |OAM| network, and other
optional networks for external network connectivity.

The internal management network is required by all deployment configurations
for internal communication.

The cluster host network is required by all deployment configurations to
support a Kubernetes cluster. It is used for private container-to-container
networking within a cluster, and can also be used for external connectivity
of container workloads. If the cluster host network is not used for external
connectivity of container workloads, then either the |OAM| port or other
configured ports on both the controller and worker nodes can be used for
connectivity to external networks.

The |OAM| network is required for external control and board management
access. It may also be required for container payload external connectivity,
depending on container payload application network requirements.

You can consolidate more than one network on a single physical interface. For
more information, see :ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces
<shared-vlan-or-multi-netted-ethernet-interfaces>`.

.. note::
   Systems with two controllers use IP multicast messaging on the internal
   management network. To prevent loss of controller synchronization, ensure
   that the switches and other devices on these networks are configured with
   appropriate settings.
@@ -1,38 +1,38 @@
.. gju1463606289993
.. _networks-for-a-starlingx-duplex-system:

============================
Networks for a Duplex System
============================

For a |prod| Duplex system, |org| recommends a minimal network configuration.

|prod| Duplex uses a small hardware footprint consisting of two hosts, plus a
network switch for connectivity. Network loading is typically low. The
following network configuration typically meets the requirements of such a
system:

.. _networks-for-a-starlingx-duplex-system-ul-j2d-thb-1w:

- An internal management network.

- An |OAM| network, optionally consolidated on the management interface.

- A cluster host network for private container-to-container networking within
  a cluster. By default, this is consolidated on the management interface.

  The cluster host network can also be used for external connectivity of
  container workloads. In this case, the cluster host network would be
  configured on an interface separate from the internal management interface.

- If a cluster host network is not used for external connectivity of
  container workloads, then either the |OAM| port or additionally configured
  ports on both the controller and worker nodes are used for the container
  workloads' connectivity to external networks.

.. note::
   You can enable secure HTTPS connectivity on the |OAM| network.

.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`
@@ -1,25 +1,25 @@
.. wzx1492541958551
.. _networks-for-a-simplex-system:

====================================
Networks for a |prod| Simplex System
====================================

For a |prod| Simplex system, only the |OAM| network and additional external
networks are used.

|prod| Simplex uses a small hardware footprint consisting of a single host.
Unlike other |prod| deployments, this configuration does not require management
or cluster host network connections for internal communications. An |OAM|
network connection is used for administrative and board management access. For
external connectivity of container payloads, either the |OAM| port or other
configured ports on the node can be used.

The management and cluster host networks are still required internally, and
are configured to use the loopback interface.

.. note::
   You can enable secure HTTPS connectivity on the |OAM| network.

.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`
@@ -1,42 +1,42 @@
.. gbo1463606348114
.. _networks-for-a-starlingx-system-with-controller-storage:

=============================================
Networks for a System with Controller Storage
=============================================

For a system that uses controller storage, |org| recommends an intermediate
network configuration.

|prod| systems with controller storage use controller and worker hosts only.
Network loading is low to moderate, depending on the number of worker hosts and
containers. The following network configuration typically meets the
requirements of such a system:

.. _networks-for-a-starlingx-system-with-controller-storage-ul-j2d-thb-1w:

- An internal management network.

- An |OAM| network, optionally consolidated on the management interface.

- A cluster host network for private container-to-container networking within
  the cluster, by default consolidated on the management interface.

  The cluster host network can also be used for external connectivity of
  container workloads, in which case it would be configured on an interface
  separate from the internal management interface.

- If a cluster host network is not used for external connectivity of
  container workloads, then either the |OAM| port or other configured ports on
  both the controller and worker nodes are used for container workload
  connectivity to external networks.

- A |PXE| boot server to support controller-0 initialization.

.. note::
   You can enable secure HTTPS connectivity on the |OAM| network.

.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`
@@ -1,49 +1,49 @@
.. leh1463606429329
.. _networks-for-a-starlingx-with-dedicated-storage:

============================================
Networks for a System with Dedicated Storage
============================================

For a system that uses dedicated storage, |org| recommends a full network
configuration.

|prod| systems with dedicated storage include storage hosts to provide
Ceph-backed block storage. Network loading is moderate to high, depending on
the number of worker hosts, containers, and storage hosts. The following
network configuration typically meets the requirements of such a system:

.. _networks-for-a-starlingx-with-dedicated-storage-ul-j2d-thb-1w:

- An internal management network.

- A 10GE cluster host network for disk I/O traffic to storage nodes and for
  private container-to-container networking within a cluster, by default
  consolidated on the management interface.

  The cluster host network can be configured on an interface separate from
  the internal management interface for external connectivity of container
  workloads.

- An |OAM| network.

- If the cluster host network is not used for external connectivity of
  container workloads, either the |OAM| port or other configured ports on the
  controller and worker nodes are used for container workload connectivity to
  external networks.

- An optional |PXE| boot network, required in either of the following cases:

  - if the internal management network is required to be on a |VLAN|-tagged
    network

  - if the internal management network is shared with other equipment

On moderately loaded systems, the |OAM| network can be consolidated on the
management or infrastructure interfaces.

.. note::
   You can enable secure HTTPS connectivity on the |OAM| network.

.. xbooklink For more information, see |sec-doc|: :ref:`Secure HTTPS Connectivity <starlingx-rest-api-applications-and-the-web-administration-server>`
144 doc/source/planning/kubernetes/oam-network-planning.rst Normal file → Executable file

@@ -1,72 +1,72 @@
.. ooz1552671180591
.. _oam-network-planning:

====================
OAM Network Planning
====================

The |OAM| network enables ingress access to the Horizon Web interface, the
command-line management clients \(over |SSH|\), the |SNMP| interface, and the
REST APIs used to remotely manage the |prod| cluster.

The |OAM| network is also used for egress access to remote Docker registries,
and for Elastic Beats connectivity to a remote log server if |prod| remote
logging is configured.

The |OAM| network provides access to the board management controllers.

The |OAM| network supports IPv4 or IPv6 addressing. Use the following
guidelines:

.. _oam-network-planning-ul-uj3-yk2-4n:

- Dual-stack configuration is not supported. With the exception of the PXE
  boot network, all networks must use either IPv4 or IPv6 addressing.

- Deploy proper firewall mechanisms to access this network. The primary
  concern of a firewall is to ensure that access to the |prod| management
  interfaces is not compromised.

  |prod| includes a default firewall for the |OAM| network, using Kubernetes
  Network Policies. You can configure the system to support additional rules.
  For more information, see :ref:`Firewall Options
  <network-planning-firewall-options>`.

- Consider whether the |OAM| network needs access to the internet. Limiting
  access to an internal network might be advisable, although access to a
  configured DNS server, a remote Docker registry with at least the |prod|
  container images, and |NTP| or |PTP| servers may still be needed.

- |VLAN| tagging is supported, enabling the network to share an interface
  with the internal management or infrastructure networks.

- The IP addresses of the DNS and |NTP|/|PTP| servers must match the IP
  address plan \(IPv4 or IPv6\) of the |OAM| network.

- For an IPv4 address plan:

  - The |OAM| floating IP address is the only address that needs to be
    visible externally. You must therefore plan for valid definitions of
    its IPv4 subnet and default gateway.

  - The physical IPv4 addresses for the controllers do not need to be
    visible externally, unless you plan to use them during |SSH| sessions
    to prevent potential service breaks during the connection. You need to
    plan for their IPv4 subnet, but you can limit access to them as
    required.

  - Outgoing packets from the active or secondary controller use the
    controller's IPv4 physical address, not the |OAM| floating IP address,
    as the source address.

- For an IPv6 address plan:

  - Outgoing packets from the active controller use the |OAM| floating IP
    address as the source address. Outgoing packets from the secondary
    controller use the secondary controller's IPv6 physical IP address.

- Systems with two controllers use IP multicast messaging on the internal
  management network. To prevent loss of controller synchronization, ensure
  that the switches and other devices on these networks are configured with
  appropriate settings.
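
After installation, the |OAM| addressing that resulted from this planning can
be reviewed from the CLI. A brief sketch, assuming the standard |prod|
:command:`system` client:

.. code-block:: none

   $ system oam-show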
@@ -1,31 +1,31 @@
.. sii1465846708497
.. _overview-of-starlingx-planning:

===================================================
Overview of Installation and Configuration Planning
===================================================

Fully planning your |prod-long| installation and configuration helps to
expedite the process and ensure that you have everything required.

Planning helps ensure that the requirements of your containers, and the
requirements of your cloud administration and operations teams, can be met.
It ensures proper integration of |prod| into the target data center or
telecom office, and helps you plan up front for future cloud growth.

.. xbooklink This planning guide assumes that you have read both the |intro-doc|: :ref:`<introduction-to-starlingx>` and the |deploy-doc|: :ref:`<deployment-options>` guides in order to understand general concepts and assist you in choosing a particular deployment configuration.

The |planning-doc| guide is intended to help you plan for your installation. It
discusses detailed planning topics for the following areas:

.. _overview-of-starlingx-planning-ul-v2m-t5h-hw:

- Network Planning

- Storage Planning

- Node Installation Planning

- Node Resource Planning
@@ -1,37 +1,37 @@
.. gss1552671083817
.. _redundant-top-of-rack-switch-deployment-considerations:

======================================================
Redundant Top-of-Rack Switch Deployment Considerations
======================================================

For a system that uses link aggregation on some or all networks, you can
configure redundant |ToR| switches for additional reliability.

In a redundant |ToR| switch configuration, each link in a link aggregate is
connected to a different switch, as shown in the accompanying figure. If one
switch fails, the other remains available to service the link aggregate.

.. figure:: ../figures/jow1438030468959.png

   *Redundant Top-of-Rack Switches*

|org| recommends that you use switches that support |VPC|. When |VPC| is used,
the aggregated links on the switches act as a single |LAG| interface. Both
switches are normally active, providing full bandwidth to the |LAG|. If there
are multiple failed links on both switches, at least one connection in each
aggregate pair is still functional. If one switch fails, the other continues to
provide connections for all |LAG| links that are operational on that switch.
For more about configuring |VPC|, refer to your switch documentation.

You can use an active/standby failover model for the switches, but at a cost to
overall reliability. If there are multiple failed links on both switches, then
the switch with the greatest number of functioning links is activated, but some
links on that switch could be in a failed state. In addition, when only one
link in an aggregate is connected to the active switch, the |LAG| bandwidth is
limited to that single link.

.. note::
   You can enhance system reliability by using redundant routers. For more
   information, refer to your router documentation.
@@ -1,58 +1,58 @@
.. qzw1552672165570
.. _security-planning-uefi-secure-boot-planning:

====================================
Kubernetes UEFI Secure Boot Planning
====================================

|UEFI| Secure Boot allows you to authenticate modules before they are allowed
to execute.

The initial installation of |prod| should be done in |UEFI| mode if you plan on
using the secure boot feature in the future.

The |prod| secure boot certificate can be found in the |prod| ISO, on the EFI
bootable FAT filesystem, in the directory /CERTS. You must add this
certificate to the motherboard's |UEFI| certificate database. How to add this
certificate to the database is determined by the |UEFI| implementation
provided by the motherboard manufacturer.

You may need to work with your hardware vendor to have the certificate
installed.

There is an option in the |UEFI| setup utility that allows a user to browse to
a file containing a certificate to be loaded in the authorized database. This
option may be hidden in the |UEFI| setup utility unless |UEFI| mode is enabled,
and secure boot is enabled.

The |UEFI| implementation may or may not require a |TPM| device to be present
and enabled before providing secure boot functionality. Refer to your server
board's documentation.

Many motherboards ship with Microsoft secure boot certificates pre-programmed
in the |UEFI| certificate database. These certificates may be required to boot
|UEFI| drivers for video cards, |RAID| controllers, or |NICs| \(for example,
the |PXE| boot software for a |NIC| may have been signed by a Microsoft
certificate\). While certificates can be removed from the certificate database
\(this is |UEFI| implementation specific\), you may need to keep the Microsoft
certificates to allow for complete system operation.

Mixed combinations of secure boot and non-secure boot nodes are supported. For
example, a controller node may secure boot, while a worker node may not. Secure
boot must be enabled in the |UEFI| firmware of each node for that node to be
protected by secure boot.
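
One way to verify whether secure boot is actually enforced on a given node is
the :command:`mokutil` utility, run from the node's shell. This is a generic
Linux check rather than a |prod|-specific command:

.. code-block:: none

   $ mokutil --sb-state
   SecureBoot enabled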
.. _security-planning-uefi-secure-boot-planning-ul-h4z-lzg-bjb:

- Secure boot is supported in |UEFI| installations only. It is not used when
  booting |prod| as a legacy boot target.

- |prod| does not currently support switching from legacy to |UEFI| mode
  after a system has been installed. Doing so requires a reinstall of the
  system. This means that upgrading from a legacy install to a secure boot
  install \(|UEFI|\) is not supported.

- When upgrading a |prod| system from a version that did not support secure
  boot to a version that does, do not enable secure boot in |UEFI| firmware
  until the upgrade is complete.
@@ -1,52 +1,52 @@
.. rei1552671031876
.. _shared-vlan-or-multi-netted-ethernet-interfaces:

===================================================
Shared \(VLAN or Multi-Netted\) Ethernet Interfaces
===================================================

The management, |OAM|, cluster host, and other networks for container workload
external connectivity can share Ethernet or aggregated Ethernet interfaces
using |VLAN| tagging or IP Multi-Netting.

The |OAM|, internal management, cluster host, and other external networks can
use |VLAN| tagging or IP Multi-Netting, allowing them to share an Ethernet or
aggregated Ethernet interface with other networks. If the internal management
network is implemented as a |VLAN|-tagged network, it must be on the same
physical interface used for |PXE| booting.

The following arrangements are possible:

.. _shared-vlan-or-multi-netted-ethernet-interfaces-ul-y5k-zg2-zq:

- One interface for the internal management network and internal cluster host
  network using multi-netting, and another interface for |OAM| \(on which
  container workloads are exposed externally\). This is the default
  configuration.

- One interface for the internal management network, and another interface for
  the external |OAM| and external cluster host \(on which container workloads
  are exposed externally\) networks. Both are implemented using |VLAN|
  tagging.

- One interface for the internal management network, another interface for
  the external |OAM| network, and a third for an external cluster host
  network \(on which container workloads are exposed externally\).

- One interface for the internal management network and internal cluster host
  network using multi-netting, another interface for |OAM|, and a third
  interface for an additional network on which container workloads are
  exposed externally.

For some typical interface scenarios, see |planning-doc|: :ref:`Hardware
Requirements <starlingx-hardware-requirements>`.

Options to share an interface using |VLAN| tagging or Multi-Netting are
presented in the Ansible Bootstrap Playbook. To attach an interface to other
networks after configuration, you can edit the interface, as shown in the
sketch below.

.. xbooklink For more information about configuring |VLAN| interfaces and Multi-Netted interfaces, see |node-doc|: :ref:`Configure VLAN Interfaces Using Horizon <configuring-vlan-interfaces-using-horizon>`.
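
For example, a Multi-Netting arrangement can be extended from the CLI by
assigning an additional platform network to an existing interface. A brief
sketch; the host, interface, and network names are illustrative:

.. code-block:: none

   $ system interface-network-assign controller-0 mgmt0 cluster-host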
@@ -1,41 +1,41 @@
.. lid1552672445221
.. _starlingx-boot-sequence-considerations:

===================================
System Boot Sequence Considerations
===================================

During |prod| software installation, each host must boot from different devices
at different times. In some cases, you may need to adjust the boot order.

The first controller node must be booted initially from a removable storage
device to install an operating system. The host then reboots from the hard
drive.

Each remaining host must be booted initially from the network using |PXE| to
install an operating system. The host then reboots from the hard drive.

To facilitate this process, ensure that the hard drive does not already contain
a bootable operating system, and set the following boot order in the BIOS:

.. _starlingx-boot-sequence-considerations-ol-htt-5qg-fn:

#. removable storage device \(USB flash drive or DVD drive\)

#. hard drive

#. network \(|PXE|\), over an interface connected to the internal management
   network

#. network \(|PXE|\), over an interface connected to the |PXE| boot network

For BIOS configuration details, refer to the OEM documentation supplied with
the worker node.

.. note::
   If a host contains a bootable hard drive, either erase the drive
   beforehand, or ensure that the host is set to boot from the correct source
   for initial configuration. If necessary, you can change the boot device at
   boot time by pressing a dedicated key. For more information, refer to the
   OEM documentation for the worker node.

@@ -1,224 +1,224 @@

.. kdl1464894372485
.. _starlingx-hardware-requirements:

============================
System Hardware Requirements
============================

|prod| has been tested to work with specific hardware configurations:

.. contents:: |minitoc|
   :local:
   :depth: 1

If the minimum hardware requirements are not met, system performance cannot be
guaranteed.
.. _starlingx-hardware-requirements-section-N10044-N10024-N10001:

-------------------------------------
Controller, worker, and storage hosts
-------------------------------------

.. Row alterations don't work with spans

|row-alt-off|

.. _starlingx-hardware-requirements-table-nvy-52x-p5:

.. table:: Table 1. Hardware Requirements — |prod| Standard Configuration
   :widths: auto

   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Minimum Requirement                    | Controller                                 | Storage                                    | Worker                                 |
   +========================================+============================================+============================================+========================================+
   | Minimum Qty of Servers                 | 2 \(required\)                             | \(if Ceph storage used\)                   | 2-100                                  |
   |                                        |                                            |                                            |                                        |
   |                                        |                                            | 2-8 \(for replication factor 2\)           |                                        |
   |                                        |                                            |                                            |                                        |
   |                                        |                                            | 3-9 \(for replication factor 3\)           |                                        |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Minimum Processor Class                | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket                                                              |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Minimum Memory                         | 64 GB                                      | 64 GB                                      | 32 GB                                  |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Minimum Primary Disk \(two-disk        | 500 GB - SSD or NVMe                       | 120 GB \(min. 10K RPM\)                                                      |
   | hardware RAID suggested\)              |                                            |                                                                              |
   +                                        +--------------------------------------------+--------------------------------------------+----------------------------------------+
   |                                        | .. note::                                                                                                                        |
   |                                        |    Installation on software RAID is not supported.                                                                               |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Additional Disks                       | 1 x 500 GB \(min. 10K RPM\)                | 500 GB \(min. 10K RPM\) for OSD storage    | 500 GB \(min. 10K RPM\), 1 or more     |
   |                                        |                                            |                                            |                                        |
   |                                        | \(not required for systems with            | one or more SSDs or NVMe drives            | .. note::                              |
   |                                        | dedicated storage nodes\)                  | \(recommended for Ceph journals\);         |    Single-disk hosts are supported,    |
   |                                        |                                            | min. 1024 MiB per journal                  |    but must not be used for local      |
   |                                        |                                            |                                            |    ephemeral storage                   |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Network Ports                          | \(Typical deployment\)                                                                                                           |
   +                                        +--------------------------------------------+--------------------------------------------+----------------------------------------+
   |                                        | - Mgmt and Cluster Host: 2 x 10GE          | - Mgmt and Cluster Host: 2 x 10GE          | - Mgmt and Cluster Host: 2 x 10GE      |
   |                                        |   LAG \(shared interface\)                 |   LAG \(shared interface\)                 |   LAG \(shared interface\)             |
   |                                        |                                            |                                            |                                        |
   |                                        | - OAM: 2 x 1GE LAG                         |                                            | - Optionally external network ports    |
   |                                        |                                            |                                            |   2 x 10GE LAG                         |
   |                                        | - Optionally external network ports        |                                            |                                        |
   |                                        |   2 x 10GE LAG                             |                                            |                                        |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Board Management Controller \(BMC\)    | 1 \(required\)                             | 1 \(required\)                             | 1 \(required\)                         |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | USB Interface                          | 1                                          | not required                                                                 |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Power Profile                          | Max Performance                                                                                                                  |
   |                                        |                                                                                                                                  |
   |                                        | Min Proc Idle Power: No C States                                                                                                 |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Boot Order                             | HD, PXE, USB                               | HD, PXE                                                                      |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | BIOS Mode                              | BIOS or UEFI                                                                                                                     |
   |                                        |                                                                                                                                  |
   |                                        | .. note::                                                                                                                        |
   |                                        |    UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a     |
   |                                        |    separate IPv4 network for PXE boot. For more information, see :ref:`PXE Boot Network <network-planning-the-pxe-boot-network>`.|
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Intel Hyperthreading                   | Disabled or Enabled                                                                                                              |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
   | Intel Virtualization \(VTD, VTX\)      | Disabled                                                                                 | Enabled                                |
   +----------------------------------------+--------------------------------------------+--------------------------------------------+----------------------------------------+
.. _starlingx-hardware-requirements-section-N102D0-N10024-N10001:

--------------------------------
Combined controller-worker hosts
--------------------------------

Hardware requirements for a |prod| Simplex or Duplex configuration are listed
in the following table.
.. _starlingx-hardware-requirements-table-cb2-lfx-p5:

.. table:: Table 2. Hardware Requirements — |prod| Simplex or Duplex Configuration
   :widths: auto

   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Minimum Requirement                | Controller + Worker                                                                                |
   |                                    |                                                                                                    |
   |                                    | \(Combined Server\)                                                                                |
   +====================================+====================================================================================================+
   | Minimum Qty of Servers             | Simplex―1                                                                                          |
   |                                    |                                                                                                    |
   |                                    | Duplex―2                                                                                           |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Minimum Processor Class            | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket                                |
   |                                    |                                                                                                    |
   |                                    | or                                                                                                 |
   |                                    |                                                                                                    |
   |                                    | Single-CPU Intel Xeon D-15xx Family, 8 cores \(low-power/low-cost option for Simplex deployments\) |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Minimum Memory                     | 64 GB                                                                                              |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Minimum Primary Disk               | 500 GB - SSD or NVMe                                                                               |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Additional Disks                   | - Single-disk system: N/A                                                                          |
   |                                    |                                                                                                    |
   |                                    | - Two-disk system:                                                                                 |
   |                                    |                                                                                                    |
   |                                    |   - 1 x 500 GB SSD or NVMe for Persistent Volume Claim storage                                     |
   |                                    |                                                                                                    |
   |                                    | - Three-disk system:                                                                               |
   |                                    |                                                                                                    |
   |                                    |   - 1 x 500 GB \(min 10K RPM\) for Persistent Volume Claim storage                                 |
   |                                    |                                                                                                    |
   |                                    |   - 1 or more x 500 GB \(min. 10K RPM\) for Container ephemeral disk storage                       |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Network Ports                      | \(Typical deployment.\)                                                                            |
   |                                    |                                                                                                    |
   |                                    | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\)                                         |
   |                                    |                                                                                                    |
   |                                    |   .. note::                                                                                        |
   |                                    |      Mgmt / Cluster Host ports are required for Duplex systems only                                |
   |                                    |                                                                                                    |
   |                                    | - OAM: 2 x 1GE LAG                                                                                 |
   |                                    |                                                                                                    |
   |                                    | - Optionally external network ports 2 x 10GE LAG                                                   |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | USB Interface                      | 1                                                                                                  |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Power Profile                      | Max Performance                                                                                    |
   |                                    |                                                                                                    |
   |                                    | Min Proc Idle Power: No C States                                                                   |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Boot Order                         | HD, PXE, USB                                                                                       |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | BIOS Mode                          | BIOS or UEFI                                                                                       |
   |                                    |                                                                                                    |
   |                                    | .. note::                                                                                          |
   |                                    |    UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6         |
   |                                    |    management network, you can use a separate IPv4 network for PXE boot. For more information,     |
   |                                    |    see :ref:`PXE Boot Network <network-planning-the-pxe-boot-network>`.                            |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Intel Hyperthreading               | Disabled or Enabled                                                                                |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
   | Intel Virtualization \(VTD, VTX\)  | Enabled                                                                                            |
   +------------------------------------+----------------------------------------------------------------------------------------------------+
.. _starlingx-hardware-requirements-section-if-scenarios:

---------------------------------
Interface configuration scenarios
---------------------------------

|prod| supports the use of consolidated interfaces for the management, cluster
host, and |OAM| networks. Some typical configurations are shown in the
following table. For best performance, |org| recommends dedicated interfaces.
|LAG| is optional in all instances.
.. _starlingx-hardware-requirements-table-if-scenarios:

.. table::
   :widths: auto

   +----------------------------------------------------------------------------+--------------------------------+--------------------------------+--------------------------------+
   | Scenario                                                                   | Controller                     | Storage                        | Worker                         |
   +============================================================================+================================+================================+================================+
   | - Physical interfaces on servers limited to two pairs                      | 2x 10GE LAG:                   | 2x 10GE LAG:                   | 2x 10GE LAG:                   |
   |                                                                            |                                |                                |                                |
   | - Estimated aggregate average Container storage traffic less than 5G       | - Mgmt \(untagged\)            | - Mgmt \(untagged\)            | - Cluster Host \(untagged\)    |
   |                                                                            |                                |                                |                                |
   |                                                                            | - Cluster Host \(untagged\)    | - Cluster Host \(untagged\)    | Optionally                     |
   |                                                                            |                                |                                |                                |
   |                                                                            | 2x 1GE LAG:                    |                                | 2x 10GE LAG                    |
   |                                                                            |                                |                                |                                |
   |                                                                            | - OAM \(untagged\)             |                                | external network ports         |
   +----------------------------------------------------------------------------+--------------------------------+--------------------------------+--------------------------------+
   | - No specific limit on number of physical interfaces                       | 2x 1GE LAG:                    | 2x 1GE LAG:                    | 2x 1GE LAG:                    |
   |                                                                            |                                |                                |                                |
   | - Estimated aggregate average Container storage traffic greater than 5G    | - Mgmt \(untagged\)            | - Mgmt \(untagged\)            | - Mgmt \(untagged\)            |
   |                                                                            |                                |                                |                                |
   |                                                                            | 2x 10GE LAG:                   | 2x 10GE LAG:                   | 2x 10GE LAG:                   |
   |                                                                            |                                |                                |                                |
   |                                                                            | - Cluster Host                 | - Cluster Host                 | - Cluster Host                 |
   |                                                                            |                                |                                |                                |
   |                                                                            | 2x 1GE LAG:                    |                                | Optionally                     |
   |                                                                            |                                |                                |                                |
   |                                                                            | - OAM \(untagged\)             |                                | 2x 10GE LAG                    |
   |                                                                            |                                |                                |                                |
   |                                                                            | Optionally                     |                                | - external network ports       |
   |                                                                            |                                |                                |                                |
   |                                                                            | 2x 10GE LAG                    |                                |                                |
   |                                                                            |                                |                                |                                |
   |                                                                            | - external network ports       |                                |                                |
   +----------------------------------------------------------------------------+--------------------------------+--------------------------------+--------------------------------+

@@ -1,182 +1,182 @@

.. uyj1582118375814
.. _storage-planning-storage-on-controller-hosts:

===========================
Storage on Controller Hosts
===========================

The controller's root disk provides storage for the |prod| system databases,
system configuration files, local Docker images, containers' ephemeral
filesystems, the local Docker registry container image store, platform backup,
and the system backup operations.

.. contents:: |minitoc|
   :local:
   :depth: 1

Container local storage is derived from the cgts-vg volume group on the root
disk. You can enlarge the cgts-vg volume group by assigning an additional
partition or disk to it, as shown in the sketch below. This increases the
container local storage available on the host; however, you cannot assign
that storage specifically to a non-root disk.
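
For example, a spare disk can be partitioned and the partition added to
cgts-vg as a physical volume. The following is a minimal sketch; the host
name, the UUIDs, and the 50 GiB partition size are placeholders that must be
adapted to the target system:

.. code-block:: none

   ~(keystone_admin)]$ system host-disk-list controller-0
   ~(keystone_admin)]$ system host-disk-partition-add -t lvm_phys_vol controller-0 <disk_uuid> 50
   ~(keystone_admin)]$ system host-pv-add controller-0 cgts-vg <partition_uuid>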
On All-in-one Simplex, All-in-one Duplex, and Standard with controller storage
systems, at least one additional disk for each controller host is required for
backing container |PVCs|; that is, two disks are required in total, with one
being a Ceph |OSD|.
.. _storage-planning-storage-on-controller-hosts-d103e57:

-----------------------
Root Filesystem Storage
-----------------------

Space on the root disk is allocated to provide filesystem storage.

You can increase the allotments for the following filesystems using the Horizon
Web interface or the CLI. The following commands are available to increase
various filesystem sizes: :command:`system controllerfs` and :command:`system
host-fs`.
.. _storage-planning-storage-on-controller-hosts-d103e93:

------------------------
Synchronized Filesystems
------------------------

Synchronized filesystems ensure that files stored in several different physical
locations are up to date. The following commands can be used to resize a
|DRBD|-synced filesystem \(Database, Docker-distribution, Etcd, Extension,
Platform\) on controllers:

:command:`controllerfs-list`, :command:`controllerfs-modify`, and
:command:`controllerfs-show`.

.. xbooklink For more information, see *Increasing Controller Filesystem Storage Allotments Using Horizon*.
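
For example, the database filesystem on the controllers can be grown with
:command:`controllerfs-modify`. The following is a minimal sketch; the 20 GB
target size is illustrative only:

.. code-block:: none

   ~(keystone_admin)]$ system controllerfs-list
   ~(keystone_admin)]$ system controllerfs-modify database=20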
**Platform Storage**
   This is the storage allotment for a variety of platform items including the
   local helm repository, the StarlingX application repository, and internal
   platform configuration data files.

**Database storage**
   The storage allotment for the platform's postgres database is used by
   StarlingX, System Inventory, Keystone and Barbican.

   Internal database storage is provided using |DRBD|-synchronized partitions
   on the controller primary disks. The size of the database grows with the
   number of system resources created by the system administrator. This
   includes objects of all kinds such as hosts, interfaces, and service
   parameters.

   If you add a database filesystem or increase its size, you must also
   increase the size of the backup filesystem.

**Docker-distribution storage \(local Docker registry storage\)**
   The storage allotment for container images stored in the local Docker
   registry. This storage is provided using a |DRBD|-synchronized partition on
   the controller primary disk.

**Etcd Storage**
   The storage allotment for the Kubernetes etcd database.

   Internal database storage is provided using a |DRBD|-synchronized partition
   on the controller primary disk. The size of the database grows with the
   number of system resources created by the system administrator and the
   users. This includes objects of all kinds such as pods, services, and
   secrets.

**Ceph-mon**
   Ceph-mon is the cluster monitor daemon for the Ceph distributed file system
   that is used for Ceph monitors to synchronize.

**Extension Storage**
   This filesystem is reserved for future use. This storage is implemented on
   a |DRBD|-synchronized partition on the controller primary disk.
.. _storage-planning-storage-on-controller-hosts-d103e219:

----------------
Host Filesystems
----------------

The following host filesystem commands can be used to resize non-|DRBD|
filesystems \(Backup, Docker, Kubelet, and Scratch\). They apply on a per-host
basis, rather than to all hosts of a given personality type:

:command:`host-fs-list`, :command:`host-fs-modify`, and :command:`host-fs-show`

The :command:`host-fs-modify` command increases the storage configuration for
the specified filesystem on a per-host basis. For example, the following
command increases the scratch filesystem size to 10 GB:

.. code-block:: none

   ~(keystone_admin)]$ system host-fs-modify controller-1 scratch=10
**Backup storage**
   This is the storage allotment for backup operations. The backup area must
   be sized as:

   backup = \(2 x database\) + platform
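
   For example, under an assumed database filesystem of 10 GB and a platform
   filesystem of 10 GB, the backup filesystem should be sized at
   \(2 x 10\) + 10 = 30 GB or larger.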
**Docker Storage**
   This storage allotment is for ephemeral filesystems for containers on the
   host, and for the Docker image cache.

**Kubelet Storage**
   This storage allotment is for ephemeral storage related to Kubernetes pods
   on this host.

**Scratch Storage**
   This storage allotment is used by the host as a temporary area for a
   variety of miscellaneous transient host operations.

**Logs Storage**
   This is the storage allotment for log data. This filesystem is not
   resizable. Logs are rotated within the fixed space allocated.

Replacement root disks for a reinstalled controller should be the same size or
larger, to ensure that existing allocation sizes for filesystems will fit on
the replacement disk.
.. _storage-planning-storage-on-controller-hosts-d103e334:

-------------------------------------------------
Persistent Volume Claims storage \(Ceph Cluster\)
-------------------------------------------------

For controller-storage systems, additional disks on the controller, configured
as Ceph |OSDs|, provide a small Ceph cluster for backing storage for container
|PVCs|.
.. _storage-planning-storage-on-controller-hosts-d103e345:

-----------
Replication
-----------

On |AIO|-Simplex systems, replication is done between |OSDs| within the host.
The following three replication factors are supported:

**1**
   This is the default, and requires one or more |OSD| disks.

**2**
   This requires two or more |OSD| disks.

**3**
   This requires three or more |OSD| disks.

On |AIO|-Duplex systems, replication is between the two controllers. Only one
replication group is supported, and additional controllers cannot be added.
The following replication factor is supported:

**2**
   There can be any number of |OSDs| on each controller, with a minimum of one
   each. It is recommended that you use the same number and size of |OSD|
   disks on the controllers.
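
As an illustrative sketch, assuming the default Ceph backend name of
ceph-store, the replication factor can be reviewed and changed from the CLI:

.. code-block:: none

   ~(keystone_admin)]$ system storage-backend-show ceph-store
   ~(keystone_admin)]$ system storage-backend-modify ceph-store replication=2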

@@ -1,48 +1,48 @@

.. mrn1582121375412
.. _storage-planning-storage-on-storage-hosts:

========================
Storage on Storage Hosts
========================

Storage hosts provide a large-scale, persistent, and highly available Ceph
cluster for backing |PVCs|.

Storage hosts can only be provisioned in a Standard with dedicated storage
deployment, and they comprise the storage cluster for the system. Within the
storage cluster, the storage hosts are deployed in replication groups for
redundancy.

On dedicated storage setups, the Ceph storage backend is enabled automatically,
and the replication factor is updated later, depending on the number of storage
hosts provisioned.
.. _storage-planning-storage-on-storage-hosts-section-N1003F-N1002B-N10001:

----------------------
OSD Replication Factor
----------------------

.. _storage-planning-storage-on-storage-hosts-d99e23:

.. table::
   :widths: auto

   +--------------------+-----------------------------+--------------------------------------+
   | Replication Factor | Hosts per Replication Group | Maximum Replication Groups Supported |
   +====================+=============================+======================================+
   | 2                  | 2                           | 4                                    |
   +--------------------+-----------------------------+--------------------------------------+
   | 3                  | 3                           | 3                                    |
   +--------------------+-----------------------------+--------------------------------------+
You can add up to 16 |OSDs| per storage host for data storage.

Space on the storage hosts must be configured at installation before you can
unlock the hosts. You can change the configuration after installation by adding
resources to existing storage hosts or adding more storage hosts, as shown in
the sketch below. For more information, see the `StarlingX Installation and
Deployment Guide <https://docs.starlingx.io/deploy_install_guides/index.html>`__.

Storage hosts can achieve faster data access using |SSD|-backed transaction
journals \(journal functions\). |NVMe|-compatible |SSDs| are supported.
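
The following is a minimal sketch of adding an |OSD| to a storage host; the
host name and the disk UUID are placeholders:

.. code-block:: none

   ~(keystone_admin)]$ system host-disk-list storage-0
   ~(keystone_admin)]$ system host-stor-add storage-0 osd <disk_uuid>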
.. dbg1582122084062
.. _storage-planning-storage-on-worker-hosts:

=======================
Storage on Worker Hosts
=======================

A worker host's root disk provides storage for host configuration files, local
Docker images, and hosted containers' ephemeral filesystems.

.. note::
   On |prod| Simplex or Duplex systems, worker storage is provided using
   resources on the combined host. For more information, see
   :ref:`Storage on Controller Hosts
   <storage-planning-storage-on-controller-hosts>`.

.. _storage-planning-storage-on-worker-hosts-d56e38:

-----------------------
Root filesystem storage
-----------------------

Space on the root disk is allocated to provide filesystem storage.

You can increase the allotments for the following filesystems using the Horizon
Web interface or the CLI \(see the example following this list\). Resizing must
be done on a host-by-host basis for non-|DRBD| synced filesystems.

**Docker Storage**
   The storage allotment for the Docker image cache for this host, and for the
   ephemeral filesystems of containers on this host.

**Kubelet Storage**
   The storage allotment for ephemeral storage related to Kubernetes pods on
   this host.

**Scratch Storage**
   The storage allotment for a variety of miscellaneous transient host
   operations.

**Logs Storage**
   The storage allotment for log data. This filesystem is not resizable. Logs
   are rotated within the fixed space allocated.

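For example, assuming a worker host named worker-0, a resize sketch using the
CLI might look like this; the sizes \(in GiB\) are illustrative only:

.. code-block:: none

   # Review the current filesystem allotments on the host.
   ~(keystone_admin)]$ system host-fs-list worker-0

   # Increase the Docker and Kubelet allotments.
   ~(keystone_admin)]$ system host-fs-modify worker-0 docker=60 kubelet=20
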
.. seealso::
   :ref:`Storage Resources <storage-planning-storage-resources>`

   :ref:`Storage on Controller Hosts
   <storage-planning-storage-on-controller-hosts>`

   :ref:`Storage on Storage Hosts <storage-planning-storage-on-storage-hosts>`

.. llf1552671530365
.. _storage-planning-storage-resources:

=================
Storage Resources
=================

|prod| uses storage resources on the controller and worker hosts, and on
storage hosts if they are present.

The |prod| storage configuration is highly flexible. The specific configuration
depends on the type of system installed, and the requirements of the system.

.. contents:: |minitoc|
   :local:
   :depth: 1

.. _storage-planning-storage-resources-d199e38:

--------------------
Uses of Disk Storage
--------------------

**System**
   The |prod| system uses root disk storage for the operating system and
   related files, and for internal databases. On controller nodes, the
   database storage and selected root filesystems are synchronized between
   the controller nodes using |DRBD|.

**Local Docker Registry**
   An HA local Docker registry is deployed on controller nodes to provide
   local centralized storage of container images. Its image store is a |DRBD|
   synchronized filesystem.

**Docker Container Images**
   Container images are pulled from either a remote or local Docker registry,
   and cached locally by Docker on the host worker or controller node when a
   container is launched.

**Container Ephemeral Local Disk**
   Containers have local filesystems for ephemeral storage of data. This data
   is lost when the container is terminated.

   Kubernetes Docker ephemeral storage is allocated as part of the docker-lv
   and kubelet-lv filesystems from the cgts-vg volume group on the root disk.
   These filesystems are resizable.

**Container Persistent Volume Claims \(PVCs\)**
   Containers can mount remote HA replicated volumes backed by the Ceph
   Storage Cluster for managing persistent data. This data survives restarts
   of the container.

   .. note::
      Ceph is not configured by default.

.. xbooklink For more information, see the |stor-doc|: :ref:`Configure the Internal Ceph Storage Backend <configuring-the-internal-ceph-storage-backend>`

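As an illustration only, a |PVC| requesting storage from a Ceph-backed storage
class might be created as follows; the claim name, size, and storage class
name \(assumed here to be ``general``\) depend on the deployment:

.. code-block:: none

   $ cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: test-claim
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: general
   EOF
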
.. _storage-planning-storage-resources-d199e134:

-----------------
Storage Locations
-----------------

In addition to the root disks present on each host for system storage, the
following storage may be used:

.. _storage-planning-storage-resources-d199e143:

- Controller hosts: |PVCs| on dedicated storage hosts when using that setup,
  or on controller hosts. Additional Ceph |OSD| disk\(s\) are present on
  controllers in configurations without dedicated storage hosts. These |OSDs|
  provide storage to fulfill |PVCs| made by Kubernetes pods or containers.

- Worker hosts: This storage is derived from docker-lv/kubelet-lv as
  defined on the cgts-vg \(root disk\). You can add a disk to cgts-vg and
  increase the size of the docker-lv/kubelet-lv.

**Combined Controller-Worker Hosts**
   One or more disks can be used on combined hosts in Simplex or Duplex
   systems to provide local ephemeral storage for containers, and a Ceph
   cluster for backing Persistent Volume Claims.

   Container/Pod ephemeral storage is implemented on the root disk on all
   controllers/workers regardless of labeling.

**Storage Hosts**
   One or more disks are used on storage hosts to realize a large-scale Ceph
   cluster providing backing for |PVCs| for containers. Storage hosts are used
   only on |prod| with Dedicated Storage systems.

.. _storage-planning-storage-resources-section-N1015E-N10031-N1000F-N10001:

-----------------------
External Netapp Trident
-----------------------

|prod| can be configured to connect to and use an external Netapp Trident
deployment as its storage backend.

Netapp Trident supports:

.. _storage-planning-storage-resources-d247e23:

- |AWS| Cloud Volumes

- E and EF-Series SANtricity

- ONTAP AFF, FAS, Select, and Cloud

- Element HCI and SolidFire

- Azure NetApp Files service

.. _storage-planning-storage-resources-d247e56:

For more information about Trident, see
`https://netapp-trident.readthedocs.io <https://netapp-trident.readthedocs.io>`__.

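As a sketch, a Kubernetes StorageClass that hands |PVCs| off to a Trident
deployment might look like the following; the class name and backend type
\(``ontap-nas``\) are example values that must match the actual Trident
backend configuration:

.. code-block:: none

   $ cat <<EOF | kubectl apply -f -
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: netapp-nas
   provisioner: csi.trident.netapp.io
   parameters:
     backendType: ontap-nas
   EOF
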
.. seealso::
   :ref:`Storage on Controller Hosts <storage-planning-storage-on-controller-hosts>`

   :ref:`Storage on Worker Hosts <storage-planning-storage-on-worker-hosts>`

   :ref:`Storage on Storage Hosts <storage-planning-storage-on-storage-hosts>`

.. srt1552049815547
.. _the-cluster-host-network:

====================
Cluster Host Network
====================

The cluster host network provides the physical network required for Kubernetes
management and control, as well as private container networking.

Kubernetes uses logical networks for communication between containers, pods,
services, and external sites. These networks are implemented over the cluster
host network using the |CNI| service, Calico, in |prod|.

All nodes in the cluster must be attached to the cluster host network. This
network shares an interface with the management network. A container workload's
external connectivity is either through the |OAM| port or through other
configured ports on both the controller and worker nodes, depending on
containerized workload requirements. Container network endpoints are exposed
externally with **NodePort** Kubernetes services. This exposes selected
application containers' network ports on *all* interfaces of both controller
nodes and *all* worker nodes, on either the |OAM| interface or other configured
interfaces for external connectivity on all nodes. This is typically done
either directly to the application container's service or through an ingress
controller service to reduce external port usage. HA can be achieved through
either an external HA load balancer across two or more controller and/or worker
nodes, or simply by using multiple records \(two or more destination controller
and/or worker node IPs\) for the application's external DNS entry.

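For illustration, a **NodePort** service exposing a hypothetical application on
port 31500 of every node might be defined as follows; the application name,
ports, and selector are placeholders:

.. code-block:: none

   $ cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: Service
   metadata:
     name: demo-app
   spec:
     type: NodePort
     selector:
       app: demo-app
     ports:
     - port: 80
       targetPort: 8080
       nodePort: 31500
   EOF
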
Alternatively, the cluster host network can be deployed as an external network,
providing the container workloads' external connectivity as well. Container
network endpoints are exposed externally with **NodePort** Kubernetes
services. This exposes selected application containers' network ports on *all*
interfaces \(e.g. external cluster host interfaces\) of both controller nodes
and *all* worker nodes. This is typically done either directly to the
application container's service or through an ingress controller service to
reduce external port usage. HA can be achieved through either an external HA
load balancer across two or more controller and/or worker nodes, or simply by
using multiple records \(two or more destination controller and/or worker node
IPs\) for the application's external DNS entry.

If an external cluster host network is used, container network endpoints can
also be exposed through |BGP| within the Calico |CNI| service. The Calico |BGP|
configuration can be modified to advertise selected application container
services or the ingress controller service to a |BGP| peer, specifying the
available next-hop controller and/or worker nodes' cluster host IP addresses.

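As a sketch only, a Calico |BGP| peering toward a top-of-rack router could be
declared as follows; the peer IP address and AS number are placeholders, and
the resource is applied here with calicoctl:

.. code-block:: none

   $ cat <<EOF | calicoctl apply -f -
   apiVersion: projectcalico.org/v3
   kind: BGPPeer
   metadata:
     name: rack1-tor
   spec:
     peerIP: 192.168.206.1
     asNumber: 64512
   EOF
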
.. acc1552590687558
.. _the-ethernet-mtu:

============
Ethernet MTU
============

The |MTU| of an Ethernet frame is a configurable attribute in |prod|. Changing
its default size must be done in coordination with other network elements on
the Ethernet link.

In the context of |prod|, the |MTU| refers to the largest possible payload of
an Ethernet frame on a particular network link. The payload is enclosed by the
Ethernet header \(14 bytes\) and the CRC \(4 bytes\), resulting in an Ethernet
frame that is 18 bytes longer than the |MTU| size.

The original IEEE 802.3 specification defines a valid standard Ethernet frame
size to be from 64 to 1518 bytes, accommodating payloads ranging in size from
46 to 1500 bytes. Ethernet frames with a payload larger than 1500 bytes are
considered to be jumbo frames.

For a |VLAN| network, the frame also includes a 4-byte |VLAN| ID header,
resulting in a frame size 22 bytes longer than the |MTU| size.

In |prod|, you can configure the |MTU| size for the following interfaces and
networks:

.. _the-ethernet-mtu-ul-qmn-yvn-m4:

- The management, cluster host and |OAM| network interfaces on the
  controller. The |MTU| size for these interfaces is set during initial
  installation.

  .. xbooklink For more information, see the `StarlingX Installation and Deployment Guide <https://docs.starlingx.io/deploy_install_guides/index.html>`__. To make changes after installation, see |sysconf-doc|: :ref:`Change the MTU of an OAM Interface Using Horizon <changing-the-mtu-of-an-oam-interface-using-horizon>`.

- Additional interfaces configured for container workload connectivity to
  external networks.

In all cases, the default |MTU| size is 1500. The minimum value is 576, and the
maximum is 9216.

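For example, a CLI sketch for raising the |MTU| on a data interface of a worker
host might look like this; the host and interface names are placeholders, and
the host must normally be locked before the interface can be modified:

.. code-block:: none

   ~(keystone_admin)]$ system host-lock worker-0
   ~(keystone_admin)]$ system host-if-modify --imtu 9000 worker-0 data0
   ~(keystone_admin)]$ system host-unlock worker-0
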
.. yxu1552670544024
.. _the-internal-management-network:

====================================
Internal Management Network Overview
====================================

The internal management network must be implemented as a single, dedicated,
Layer 2 broadcast domain for the exclusive use of each |prod| cluster.
Sharing of this network by more than one |prod| cluster is not supported.

.. note::
   This network is not used with |prod| Simplex systems.

During the |prod| software installation process, several network services
such as |BOOTP|, |DHCP|, and |PXE| are expected to run over the internal
management network. These services are used to bring up the different hosts
to an operational state. It is therefore mandatory that this network be
operational and available in advance, to ensure a successful installation.

On each host, the internal management network can be implemented using a 1
Gb or 10 Gb Ethernet port. Requirements for this port are as follows:

.. _the-internal-management-network-ul-uh1-pqs-hp:

- It must be capable of |PXE|-booting.

- It can be used by the motherboard as a primary boot device.

.. note::
   If required, the internal management network can be configured as a
   |VLAN|-tagged network. In this case, a separate IPv4 |PXE| boot
   network must be implemented as the untagged network on the same
   physical interface. This configuration must also be used if the
   management network must support IPv6.

.. seealso::
   :ref:`Internal Management Network Planning
   <internal-management-network-planning>`

   :ref:`Multicast Subnets for the Management Network
   <multicast-subnets-for-the-management-network>`

@ -1,30 +1,30 @@
.. hzz1585077472404 .. hzz1585077472404
.. _the-storage-network: .. _the-storage-network:
=============== ===============
Storage Network Storage Network
=============== ===============
The storage network is an optional network that is only required if using an The storage network is an optional network that is only required if using an
external Netapp Trident cluster as a storage backend. external Netapp Trident cluster as a storage backend.
The storage network provides connectivity between all nodes in the The storage network provides connectivity between all nodes in the
|prod-long| cluster \(controller/master nodes and worker nodes\) and the |prod-long| cluster \(controller/master nodes and worker nodes\) and the
Netapp Trident cluster. Netapp Trident cluster.
For the most part, the storage network shares the design considerations For the most part, the storage network shares the design considerations
applicable to the internal management network. applicable to the internal management network.
.. _the-storage-network-ul-c41-qwm-dlb: .. _the-storage-network-ul-c41-qwm-dlb:
- It can be implemented using a 10 Gb Ethernet interface. - It can be implemented using a 10 Gb Ethernet interface.
- It can be |VLAN|-tagged, enabling it to share an interface with the - It can be |VLAN|-tagged, enabling it to share an interface with the
management or |OAM| network. management or |OAM| network.
- It can own the entire IP address range on the subnet, or a specified - It can own the entire IP address range on the subnet, or a specified
range. range.
- It supports dynamic or static IP address assignment. - It supports dynamic or static IP address assignment.
@ -1,25 +1,25 @@
.. cvf1552672201332 .. cvf1552672201332
.. _tpm-planning: .. _tpm-planning:
============ ============
TPM Planning TPM Planning
============ ============
|TPM| is an industry standard crypto processor that enables secure storage |TPM| is an industry standard crypto processor that enables secure storage
of HTTPS |SSL| private keys. It is used in support of advanced security of HTTPS |SSL| private keys. It is used in support of advanced security
features. features.
|TPM| is an optional requirement for |UEFI| Secure Boot. |TPM| is an optional requirement for |UEFI| Secure Boot.
If you plan to use |TPM| for secure protection of REST API and Web Server If you plan to use |TPM| for secure protection of REST API and Web Server
HTTPS |SSL| keys, ensure that |TPM| 2.0 compliant hardware devices are HTTPS |SSL| keys, ensure that |TPM| 2.0 compliant hardware devices are
fitted on controller nodes before provisioning them. If properly connected, fitted on controller nodes before provisioning them. If properly connected,
the BIOS should detect these new devices and display appropriate the BIOS should detect these new devices and display appropriate
configuration options. |TPM| must be enabled from the BIOS before it can be configuration options. |TPM| must be enabled from the BIOS before it can be
used in software. used in software.
.. note:: .. note::
|prod| allows post installation configuration of HTTPS mode. It is |prod| allows post installation configuration of HTTPS mode. It is
possible to transition a live HTTP system to a system that uses |TPM| possible to transition a live HTTP system to a system that uses |TPM|
for storage of HTTPS |SSL| keys without reinstalling the system. for storage of HTTPS |SSL| keys without reinstalling the system.
.. svs1552672428539
.. _verified-commercial-hardware:

=======================================
Kubernetes Verified Commercial Hardware
=======================================

Verified and approved hardware components for use with |prod| are listed
here.

.. _verified-commercial-hardware-verified-components:

.. list-table:: Table 1. Verified Components
   :widths: 100 200
   :header-rows: 1

   * - Component
     - Approved Hardware
   * - Hardware Platforms
     - - Hewlett Packard Enterprise

         - HPE ProLiant DL360p Gen8 Server
         - HPE ProLiant DL360p Gen9 Server
         - HPE ProLiant DL360 Gen10 Server
         - HPE ProLiant DL380p Gen8 Server
         - HPE ProLiant DL380p Gen9 Server
         - HPE ProLiant ML350 Gen10 Server
         - c7000 Enclosure with HPE ProLiant BL460 Gen9 Server

           .. caution::
              LAG support is dependent on the switch cards deployed with the c7000 enclosure. To determine whether LAG can be configured, consult the switch card documentation.

       - Dell

         - Dell PowerEdge R430
         - Dell PowerEdge R630
         - Dell PowerEdge R640
         - Dell PowerEdge R720
         - Dell PowerEdge R730
         - Dell PowerEdge R740

       - Kontron Symkloud MS2920

         .. note::
            The Kontron platform does not support power ON/OFF or reset through the BMC interface on |prod|. As a result, it is not possible for the system to properly fence a node in the event of a management network isolation event. In order to mitigate this, hosted application auto recovery needs to be disabled.
   * - Supported Reference Platforms
     - - Intel Iron Pass
       - Intel Canoe Pass
       - Intel Grizzly Pass
       - Intel Wildcat Pass
       - Intel Wolf Pass
   * - Disk Controllers
     - - Dell

         - PERC H310 Mini
         - PERC H730 Mini
         - PERC H740P
         - PERC H330
         - PERC HBA330

       - HPE Smart Array

         - P440ar
         - P420i
         - P408i-a
         - P816i-a

       - LSI 2308
       - LSI 3008
   * - NICs Verified for PXE Boot, Management, and OAM Networks
     - - Intel I210 \(Springville\) 1G
       - Intel I350 \(Powerville\) 1G
       - Intel 82599 \(Niantic\) 10G
       - Intel X540 10G
       - Intel X710/XL710 \(Fortville\) 10G
       - Intel X722 \(Fortville\) 10G
       - Emulex XE102 10G
       - Broadcom BCM5719 1G
       - Broadcom BCM57810 10G
       - Mellanox MT27710 Family \(ConnectX-4 Lx\) 10G/25G
       - Mellanox MT27700 Family \(ConnectX-4\) 40G
   * - NICs Verified for Data Interfaces [#]_
     - The following NICs are supported:

       - Intel I350 \(Powerville\) 1G
       - Intel 82599 \(Niantic\) 10G
       - Intel X710/XL710 \(Fortville\) 10G
       - Intel X552 \(Xeon-D\) 10G
       - Mellanox Technologies

         - MT27710 Family \(ConnectX-4\) 10G/25G
         - MT27700 Family \(ConnectX-4\) 40G
   * - PCI passthrough or PCI SR-IOV NICs
     - - Intel 82599 \(Niantic\) 10G
       - Intel X710/XL710 \(Fortville\) 10G
       - Mellanox Technologies

         - MT27500 Family \(ConnectX-3\) 10G \(support for PCI passthrough only\) [#]_
         - MT27710 Family \(ConnectX-4\) 10G/25G
         - MT27700 Family \(ConnectX-4\) 40G

       .. note::
          For a Mellanox CX3 using PCI passthrough or a CX4 using PCI passthrough or SR-IOV, SR-IOV must be enabled in the CX3/CX4 firmware. For more information, see `How To Configure SR-IOV for ConnectX-3 with KVM (Ethernet): Enable SR-IOV on the Firmware <https://community.mellanox.com/docs/DOC-2365#jive_content_id_I_Enable_SRIOV_on_the_Firmware>`__.

       .. note::
          The maximum number of VFs per hosted application instance, across all PCI devices, is 32.

          For example, a hardware encryption hosted application can be launched with virtio interfaces and 32 QAT VFs. However, a hardware encryption hosted application with an SR-IOV network interface \(with 1 VF\) can only be launched with 31 VFs.

       .. note::
          Dual-use configuration \(PCI passthrough or PCI SR-IOV on the same interface\) is supported for Fortville NICs only.
   * - PCI SR-IOV Hardware Accelerators
     - - Intel AV-ICE02 VPN Acceleration Card, based on the Intel Coleto Creek 8925/8950, and C62x device with QuickAssist®.
   * - GPUs Verified for PCI Passthrough
     - - NVIDIA Corporation: VGA compatible controller - GM204GL \(Tesla M60 rev a1\)
   * - Board Management Controllers
     - - HPE iLO3
       - HPE iLO4
       - Quanta

.. include:: ../../_includes/verified-commercial-hardware.rest

.. ixo1464634136835
.. _block-storage-for-virtual-machines:

==================================
Block Storage for Virtual Machines
==================================

Virtual machines use controller or storage host resources for root and
ephemeral disk storage.

.. _block-storage-for-virtual-machines-section-N10022-N1001F-N10001:

-------------------------
Root Disk Storage for VMs
-------------------------

You can allocate root disk storage for virtual machines using the following:

.. _block-storage-for-virtual-machines-ul-d1c-j5k-s5:

- Cinder volumes on controller hosts \(backed by small Ceph Cluster\) or
  storage hosts \(backed by large-scale Ceph\).

- Ephemeral local storage on compute hosts, using image-based instance
  backing.

- Ephemeral remote storage on controller hosts or storage hosts, backed by
  Ceph.

The use of Cinder volumes or ephemeral storage is determined by the **Instance
Boot Source** setting when an instance is launched. Boot from volume results in
the use of a Cinder volume, while Boot from image results in the use of
ephemeral storage.

.. note::
   On systems with one or more single-disk compute hosts configured with local
   instance backing, the use of Boot from volume for all |VMs| is strongly
   recommended. This helps prevent the use of local ephemeral storage on these
   hosts.

On systems without dedicated storage hosts, Cinder-backed persistent storage
for virtual machines is provided using the small Ceph cluster on controller
disks.

On systems with dedicated storage hosts, Cinder storage is provided using
Ceph-backed |OSD| disks on highly available and highly scalable storage hosts.

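As an illustration, the two boot sources map to OpenStack CLI invocations along
these lines; the image, flavor, network, and volume names are placeholders:

.. code-block:: none

   # Boot from volume: create a bootable Cinder volume from an image,
   # then launch the instance from that volume.
   $ openstack volume create --image ubuntu-20.04 --size 20 vm1-root
   $ openstack server create --flavor m1.small --volume vm1-root --network tenant-net vm1

   # Boot from image: the root disk is ephemeral, backed by nova-local.
   $ openstack server create --flavor m1.small --image ubuntu-20.04 --network tenant-net vm2
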
.. _block-storage-for-virtual-machines-section-N100A2-N1001F-N10001:

---------------------------------------
Ephemeral and Swap Disk Storage for VMs
---------------------------------------

Storage for |VM| ephemeral and swap disks, and for ephemeral boot disks if the
|VM| is launched from an image rather than a volume, is provided using the
**nova-local** local volume group defined on compute hosts.

The **nova-local** group provides either local ephemeral storage, using
|CoW|-image-backed storage resources on compute hosts, or remote ephemeral
storage, using Ceph-backed resources on storage hosts. You must configure the
storage backing type at installation before you can unlock a compute host. The
default type is image-backed local ephemeral storage. You can change the
configuration after installation.

.. xbooklink For more information, see |stor-doc|: :ref:`Working with Local Volume Groups <working-with-local-volume-groups>`.

.. caution::
   On a compute node with a single disk, local ephemeral storage uses the root
   disk. This can adversely affect the disk I/O performance of the host. To
   avoid this, ensure that single-disk compute nodes use remote Ceph-backed
   storage if available. If Ceph storage is not available on the system, or is
   not used for one or more single-disk compute nodes, then you must ensure
   that all |VMs| on the system are booted from Cinder volumes and do not use
   ephemeral or swap disks.

   On |prod-os| Simplex or Duplex systems that use a single disk, the same
   consideration applies. Since the disk also provides Cinder support, adverse
   effects on I/O performance can also be expected for |VMs| booted from Cinder
   volumes.

The backing type is set individually for each host using the **Instance
Backing** parameter on the **nova-local** local volume group.

**Local CoW Image backed**
   This provides local ephemeral storage using a |CoW| sparse-image-format
   backend, to optimize launch and delete performance.

**Remote RAW Ceph storage backed**
   This provides remote ephemeral storage using a Ceph backend on a system
   with storage nodes, to optimize migration capabilities. Ceph backing uses a
   Ceph storage pool configured from the storage host resources.

You can control whether a |VM| is instantiated with |CoW| or Ceph-backed
storage by setting a flavor extra specification, as shown in the example
below.

.. xbooklink For more information, see OpenStack Configuration and Management: :ref:`Specifying the Storage Type for VM Ephemeral Disks <specifying-the-storage-type-for-vm-ephemeral-disks>`.

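A sketch of such a flavor extra specification follows; the exact specification
key is release-dependent, and ``aggregate_instance_extra_specs:storage`` with
the value ``remote`` or ``local_image`` is assumed here for illustration:

.. code-block:: none

   $ openstack flavor set m1.small --property aggregate_instance_extra_specs:storage=remote
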
.. _block-storage-for-virtual-machines-d29e17:

.. caution::
   Unlike Cinder-based storage, ephemeral storage does not persist if the
   instance is terminated or the compute node fails.

   In addition, for local ephemeral storage, migration and resizing support
   depends on the storage backing type specified for the instance, as well as
   the boot source selected at launch.

The **nova-local** storage type affects migration behavior. Live migration is
not always supported for |VM| disks using local ephemeral storage.

.. xbooklink For more information, see :ref:`VM Storage Settings for Migration, Resize, or Evacuation <vm-storage-settings-for-migration-resize-or-evacuation>`.

.. jow1404333738594
.. _data-network-planning:

=====================
Data Network Planning
=====================

Data networks are the payload-carrying networks used implicitly by end users
when they move traffic over their project networks.

You can review details for existing data networks using the OpenStack Horizon
Web interface or the CLI.

When planning data networks, you must consider the following guidelines:

.. _data-network-planning-ul-cmp-rl2-4n:

- From the point of view of the projects, all networking happens over the
  project networks created by them, or by the **admin** user on their behalf.
  Projects are not necessarily aware of the available data networks. In fact,
  they cannot create project networks over data networks not already
  accessible to them. For this reason, the system administrator must ensure
  that proper communication mechanisms are in place for projects to request
  access to specific data networks when required.

  For example, a project may be interested in creating a new project network
  with access to a specific network access device in the data center, such as
  an access point for a wireless transport. In this case, the system
  administrator must create a new project network on behalf of the project,
  using a |VLAN| ID in the project's segmentation range that provides
  connectivity to that network access point \(see the example following this
  list\).

- Consider how different offerings of bandwidth, throughput commitments, and
  class-of-service can be used by your users. Having different data network
  offerings available to your projects enables end users to diversify their
  own portfolio of services. This in turn gives the |prod-os| administration
  an opportunity to put different revenue models in place.

- For the IPv4 address plan, consider the following:

  - Project networks attached to a public network, such as the Internet,
    must have external addresses assigned to them. You must therefore
    plan for valid definitions of their IPv4 subnets and default gateways.

  - As with the |OAM| network, you must ensure that suitable firewall
    services are in place on any project network with a public address.

- Segmentation ranges may be owned by the administrator, a specific project,
  or may be shared by all projects. With this ownership model:

  - A base deployment scenario has each compute node using a single data
    interface defined over a single data network. In this scenario, all
    required project networks can be instantiated making use of the
    available |VLANs| or |VNIs| in each corresponding segmentation range.
    You may need more than one data network when the underlying physical
    networks demand different |MTU| sizes, or when boundaries between data
    networks are dictated by policy or other non-technical considerations.

  - Segmentation ranges can be reserved and assigned on demand without
    having to lock and unlock the compute nodes. This facilitates
    day-to-day operations which can be performed without any disruption to
    the running environment.

- In some circumstances, data networks can be configured to support |VLAN|
  Transparent mode on project networks. In this mode, |VLAN|-tagged packets
  are encapsulated within a data network segment without removing or
  modifying the guest |VLAN| tag\(s\).

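As an example of the administrator workflow referred to above, a project
network bound to a specific |VLAN| segment of a data network could be created
along these lines; the project, physical network, and segment values are
placeholders:

.. code-block:: none

   $ openstack network create --project tenant1 \
       --provider-network-type vlan \
       --provider-physical-network physnet0 \
       --provider-segment 570 \
       tenant1-access-net
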
.. jow1423169316542
.. _ethernet-interface-configuration:

.. partner only ?

================================
Ethernet Interface Configuration
================================

You can review and modify the configuration for physical or virtual Ethernet
interfaces using the OpenStack Horizon Web interface or the CLI.

.. _ethernet-interface-configuration-section-N1001F-N1001C-N10001:

----------------------------
Physical Ethernet Interfaces
----------------------------

The physical Ethernet interfaces on |prod-os| nodes are configured to use the
following networks:

.. _ethernet-interface-configuration-ul-lk1-b4j-zq:

- the internal management network

- the internal cluster host network \(by default sharing the same L2
  interface as the internal management network\)

- the external |OAM| network

- one or more data networks

A single interface can optionally be configured to support more than one
network using |VLAN| tagging \(see :ref:`Shared (VLAN or Multi-Netted) Ethernet
Interfaces
<network-planning-shared-vlan-or-multi-netted-ethernet-interfaces>`\).

.. _ethernet-interface-configuration-section-N10059-N1001C-N10001:
---------------------------
Virtual Ethernet Interfaces
---------------------------
The virtual Ethernet interfaces for guest |VMs| running on |prod-os| are
defined when an instance is launched. They connect the |VM| to project
networks, which are virtual networks defined over data networks, which in turn
are abstractions associated with physical interfaces assigned to physical
networks on the compute nodes.
The following virtual network interfaces are available:
.. _ethernet-interface-configuration-ul-amy-z5z-zs:
- |AVP|
- ne2k\_pci \(NE2000 Emulation\)
- pcnet \(AMD PCnet/|PCI| Emulation\)
- rtl8139 \(Realtek 8139 Emulation\)
- virtio \(VirtIO Network\)
- pci-passthrough \(|PCI| Passthrough Device\)
- pci-sriov \(|SRIOV| device\)
Unmodified guests can use Linux networking and virtio drivers. This provides a
mechanism to bring existing applications into the production environment
immediately.
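Because virtio is typically the default interface model, launching an
unmodified guest needs no interface-specific options. A minimal sketch, with
the image, flavor, and network names assumed for illustration:

.. code-block:: none

   # Run with OpenStack credentials loaded.
   $ openstack server create \
       --image cirros --flavor m1.small \
       --network project-net-a \
       vm-virtio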
.. xbooklink For more information about |AVP| drivers, see OpenStack VNF Integration: :ref:`Accelerated Virtual Interfaces <accelerated-virtual-interfaces>`.
|prod-os| incorporates |DPDK|-Accelerated Neutron Virtual Router L3 Forwarding
\(AVR\). Accelerated forwarding is used for directly attached project networks
and subnets, as well as for gateway, |SNAT| and floating IP functionality.
|prod-os| also supports direct guest access to |NICs| using |PCI| passthrough
or |SRIOV|, with enhanced |NUMA| scheduling options compared to standard
OpenStack. This offers very high performance, but because access is not managed
by |prod-os| or the vSwitch process, there is no support for live migration,
|prod-os|-provided |LAG|, host interface monitoring, |QoS|, or |ACL|. If |VLANs|
are used, they must be managed by the guests.
For further performance improvements, |prod-os| supports direct access to
|PCI|-based hardware accelerators, such as the Coleto Creek encryption
accelerator from Intel. |prod-os| manages the allocation of |SRIOV| VFs to
|VMs|, and provides intelligent scheduling to optimize |NUMA| node affinity.
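In recent OpenStack releases, |SRIOV| and |PCI| passthrough interfaces are
typically requested through Neutron port vNIC types. A possible sketch, with
all names assumed for illustration:

.. code-block:: none

   # Run with OpenStack credentials loaded.
   # --vnic-type direct requests an SR-IOV VF;
   # --vnic-type direct-physical requests PCI passthrough of the device.
   $ openstack port create --network data-net-a \
       --vnic-type direct sriov-port-0
   $ openstack server create --image <image> --flavor <flavor> \
       --nic port-id=<port-uuid-from-previous-step> vm-sriov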
.. only:: partner
.. include:: ../../_includes/ethernet-interface-configuration.rest

View File

@ -0,0 +1,37 @@
.. jow1404333731990
.. _ethernet-interfaces:
===================
Ethernet Interfaces
===================
Ethernet interfaces, both physical and virtual, play a key role in the overall
performance of the virtualized network. Therefore, it is important to
understand the available interface types, their configuration options, and
their impact on network design.
.. _ethernet-interfaces-section-N1006F-N1001A-N10001:
-----------------------
About LAG/AE Interfaces
-----------------------
You can use |LAG| for Ethernet interfaces. |prod-os| supports up to four ports
in a |LAG| group.
Ethernet interfaces in a |LAG| group can be attached either to the same L2
switch, or to multiple switches in a redundant configuration. For more
information about L2 switch configurations, see :ref:`L2 Access Switches
<network-planning-l2-access-switches>`. For information about the different
|LAG| modes, see |node-doc|: :ref:`Link Aggregation Settings
<link-aggregation-settings>`.
.. seealso::
:ref:`Ethernet Interface Configuration <ethernet-interface-configuration>`
:ref:`The Ethernet MTU <the-ethernet-mtu>`
:ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces
<network-planning-shared-vlan-or-multi-netted-ethernet-interfaces>`

View File

@ -0,0 +1,277 @@
.. fnr1551900935447
.. _hardware-requirements:
=====================
Hardware Requirements
=====================
|prod-os| has been tested to work with specific hardware configurations.
If the minimum hardware requirements are not met, system performance cannot be
guaranteed.
See :ref:`StarlingX Hardware Requirements <starlingx-hardware-requirements>`
for the |prod-long| Hardware Requirements. In the table below, only the
Interface sections are modified for |prod-os|.
.. _hardware-requirements-section-N10044-N10024-N10001:
--------------------------------------
Controller, Compute, and Storage Hosts
--------------------------------------
.. _hardware-requirements-table-nvy-52x-p5:
.. table:: Table 1. Hardware Requirements — |prod-os| Standard Configuration
:widths: auto
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Requirement | Controller | Storage | Compute |
+===========================================================+=================================================================================================================================================================================================================================================+==============================================================================================+====================================================================================================================+
| Minimum Qty of Servers                                    | 2 \(required\)                                                                                                                                                                                                                                  | \(if Ceph storage used\)                                                                     | 2-100                                                                                                              |
| | | | |
|                                                           |                                                                                                                                                                                                                                                 | 2-8 \(for replication factor 2\)                                                             |                                                                                                                    |
| | | | |
|                                                           |                                                                                                                                                                                                                                                 | 3-9 \(for replication factor 3\)                                                             |                                                                                                                    |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Processor Class | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket |
| | |
| | |
| | |
| | |
| | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| | Platform: All cores | Platform: All cores | - Platform: 1x physical core \(2x logical cores if hyper-threading\), \(by default, configurable\) |
| | | | |
| | | | - vSwitch: 1x physical core / socket \(by default, configurable\) |
| | | | |
| | | | - Application: Remaining cores |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Memory | 64 GB | 64 GB | 32 GB |
| | | | |
| | Platform: All memory | Platform: All memory | - Platform: |
| | | | |
| | | | |
| | | | - Socket 0: 7GB \(by default, configurable\) |
| | | | |
| | | | - Socket 1: 1GB \(by default, configurable\) |
| | | | |
| | | | |
| | | | - vSwitch: 1GB / socket \(by default, configurable\) |
| | | | |
| | | | - Application: |
| | | | |
| | | | |
| | | | - Socket 0: Remaining memory |
| | | | |
| | | | - Socket 1: Remaining memory |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Primary Disk \(two-disk hardware RAID suggested\) | 500 GB - SSD or NVMe | 120 GB \(min. 10K RPM\) |
| | | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| | .. note:: |
| | Installation on software RAID is not supported. |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Additional Disks | 1 X 500 GB \(min 10K RPM\) | 500 GB \(min. 10K RPM\) for OSD storage | 500 GB \(min. 10K RPM\) — 1 or more |
| | | | |
| | \(not required for systems with dedicated storage nodes\) | one or more SSDs or NVMe drives \(recommended for Ceph journals\); min. 1024 MiB per journal | .. note:: |
| | | | Single-disk hosts are supported, but must not be used for local ephemeral storage |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Network Ports | \(Typical deployment\) |
| | |
| | |
| | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) |
| | | | |
| | - OAM: 2 x 1GE LAG | | - Data: 2 x LAG, DPDK-compatible \(see "Verified Commercial Hardware: NICs Verified for Data Interfaces" below\) |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Board Management Controller \(BMC\) | 1 \(required\) | 1 \(required\) | 1 \(required\) |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| USB Interface | 1 | not required |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Power Profile | Max Performance |
| | |
|                                                           | Min Proc Idle Power: No C States                                                                                                                                                                                                               |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Boot Order | HD, PXE, USB | HD, PXE |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| BIOS Mode | BIOS or UEFI |
| | |
| | .. note:: |
| | UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a separate IPv4 network for PXE boot. For more information, see :ref:`The PXE Boot Network <the-pxe-boot-network>`. |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Intel Hyperthreading | Disabled or Enabled |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Intel Virtualization \(VTD, VTX\) | Disabled | Enabled |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
.. _hardware-requirements-section-N102D0-N10024-N10001:
---------------------------------
Combined Controller-Compute Hosts
---------------------------------
Hardware requirements for a |prod-os| Simplex or Duplex configuration are
listed in the following table.
See :ref:`StarlingX Hardware Requirements <starlingx-hardware-requirements>`
for the |prod-long| Hardware Requirements. In the table below, only the
Interface sections are modified for |prod-os|.
.. _hardware-requirements-table-cb2-lfx-p5:
.. table:: Table 2. Hardware Requirements — |prod-os| Simplex or Duplex Configuration
:widths: auto
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Requirement | Controller + Compute |
| | |
| | \(Combined Server\) |
+===================================+=================================================================================================================================================================================================================================================+
| Minimum Qty of Servers | Simplex―1 |
| | |
| | Duplex―2 |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Processor Class | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket |
| | |
| | or |
| | |
| | Single-CPU Intel Xeon D-15xx Family, 8 cores \(low-power/low-cost option for Simplex deployments\) |
| | |
| | |
| | |
| | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | - Platform: 2x physical cores \(4x logical cores if hyper-threading\), \(by default, configurable\) |
| | |
| | - vSwitch: 1x physical core / socket \(by default, configurable\) |
| | |
| | - Application: Remaining cores |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Memory | 64 GB |
| | |
| | - Platform: |
| | |
| | |
| | - Socket 0: 7GB \(by default, configurable\) |
| | |
| | - Socket 1: 1GB \(by default, configurable\) |
| | |
| | |
| | - vSwitch: 1GB / socket \(by default, configurable\) |
| | |
| | - Application: |
| | |
| | |
| | - Socket 0: Remaining memory |
| | |
| | - Socket 1: Remaining memory |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Primary Disk | 500 GB - SSD or NVMe |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Additional Disks | - Single-disk system: N/A |
| | |
| | - Two-disk system: |
| | |
| | |
| | - 1 x 500 GB SSD or NVMe for Persistent Volume Claim storage |
| | |
| | |
| | - Three-disk system: |
| | |
| | |
| | - 1 x 500 GB \(min 10K RPM\) for Persistent Volume Claim storage |
| | |
| | - 1 or more x 500 GB \(min. 10K RPM\) for Container ephemeral disk storage |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Network Ports | \(Typical deployment.\) |
| | |
|                                   | - Management and Cluster Host: 2 x 10GE LAG \(shared interface\)                                                                                                                                                                              |
| | |
| | .. note:: |
|                                   |      Management ports are required for Duplex systems only.                                                                                                                                                                                    |
| | |
| | - OAM: 2 x 1GE LAG |
| | |
| | - Data: 2 x LAG, DPDK-compatible \(see "Verified Commercial Hardware: NICs Verified for Data Interfaces" below\) |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| USB Interface | 1 |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Power Profile | Max Performance |
| | |
|                                   | Min Proc Idle Power: No C States                                                                                                                                                                                                               |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Boot Order | HD, PXE, USB |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| BIOS Mode | BIOS or UEFI |
| | |
| | .. note:: |
| | UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a separate IPv4 network for PXE boot. For more information, see :ref:`The PXE Boot Network <the-pxe-boot-network>`. |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Intel Hyperthreading | Disabled or Enabled |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Intel Virtualization \(VTD, VTX\) | Enabled |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _hardware-requirements-section-if-scenarios:
|row-alt-off|
---------------------------------
Interface Configuration Scenarios
---------------------------------
|prod-os| supports the use of consolidated interfaces for the management,
cluster host and |OAM| networks. Some typical configurations are shown in the
following table. For best performance, |org| recommends dedicated interfaces.
|LAG| is optional in all instances.
.. _hardware-requirements-table-if-scenarios:
.. table::
:widths: auto
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+
| Scenario | Controller | Storage | Compute |
+====================================================================+===============================+===============================+================================+
| | | | |
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+
| - Physical interfaces on servers limited to two pairs | 2x 10GE LAG: | 2x 10GE LAG: | 2x 10GE LAG: |
| | | | |
| - Estimated aggregate average VM storage traffic less than 5G | - Mgmt \(untagged\) | - Mgmt \(untagged\) | - Mgmt \(untagged\) |
| | | | |
| | - Cluster Host \(untagged\) | - Cluster Host \(untagged\) | - Cluster Host \(untagged\) |
| | | | |
| | | | |
| | 2x 1GE LAG: | | 2x 10GE LAG |
| | | | |
| | - OAM \(untagged\) | | - Data \(tagged\) |
| | | | |
| | | | |
| | | | \[ … more data interfaces … \] |
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+
| - No specific limit on number of physical interfaces | 2x 1GE LAG: | 2x 1GE LAG | 2x 1GE LAG |
| | | | |
| - Estimated aggregate average VM storage traffic greater than 5G | - Mgmt \(untagged\) | - Mgmt \(untagged\) | - Mgmt \(untagged\) |
| | | | |
| | | | |
| | 2x 1GE LAG: | 2x 1GE LAG: | 2x 1GE LAG: |
| | | | |
| | - OAM \(untagged\) | - OAM \(untagged\) | - OAM \(untagged\) |
| | | | |
| | | | |
| | | | 2x 10GE LAG: |
| | | | |
| | | | - Data \(tagged\) |
| | | | |
| | | | |
| | | | \[ … more data interfaces … \] |
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+

View File

@ -0,0 +1,29 @@
.. iym1475074530218
.. _https-access-planning:
=====================
HTTPS Access Planning
=====================
You can enable secure HTTPS access for the |prod-os| REST API endpoints and
the OpenStack Horizon Web interface.
.. note::
To enable HTTPS access for |prod-os|, you must enable HTTPS in the
underlying |prod-long| platform.
By default, |prod-os| provides HTTP access for remote connections. For improved
security, you can enable HTTPS access. When HTTPS access is enabled, HTTP
access is disabled.
When HTTPS is enabled for the first time on a |prod-os| system, a self-signed
certificate is automatically installed. In order to connect, remote clients
must be configured to accept the self-signed certificate without verifying it
\("insecure" mode\).
For secure-mode connections, a |CA|-signed certificate is required. The use of
a |CA|-signed certificate is strongly recommended.
You can update the certificate used by |prod-os| at any time after
installation.

View File

@ -0,0 +1,114 @@
---------
OpenStack
---------
================
Network planning
================
.. toctree::
:maxdepth: 1
network-planning-ip-support
the-pxe-boot-network
network-planning-the-internal-management-network
network-planning-the-cluster-host-network
the-oam-network
*************
Data networks
*************
.. toctree::
:maxdepth: 1
network-planning-data-networks
physical-network-planning
.. toctree::
:maxdepth: 1
network-planning-l2-access-switches
*******************
Ethernet interfaces
*******************
.. toctree::
:maxdepth: 1
ethernet-interfaces
ethernet-interface-configuration
the-ethernet-mtu
network-planning-shared-vlan-or-multi-netted-ethernet-interfaces
*************************
Virtual or cloud networks
*************************
.. toctree::
:maxdepth: 1
virtual-or-cloud-networks
os-data-networks-overview
project-networks
************************
Project network planning
************************
.. toctree::
:maxdepth: 1
project-network-planning
project-network-ip-address-management
subnet-details
internal-dns-resolution
data-network-planning
vxlans
vlan-aware-vms
VM network interface options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 1
vm-network-interface-options
port-security-extension
pci-passthrough-ethernet-interfaces
sr-iov-ethernet-interfaces
================
Storage planning
================
.. toctree::
:maxdepth: 1
storage-resources
storage-configuration-storage-on-hosts
block-storage-for-virtual-machines
vm-storage-settings-for-migration-resize-or-evacuation
=================
Security planning
=================
.. toctree::
:maxdepth: 1
uefi-secure-boot-planning
https-access-planning
==================================
Installation and resource planning
==================================
.. toctree::
:maxdepth: 1
hardware-requirements
installation-and-resource-planning-verified-commercial-hardware
installation-and-resource-planning-controller-disk-configurations-for-all-in-one-systems

View File

@ -0,0 +1,13 @@
.. gkz1516633358554
.. _installation-and-resource-planning-controller-disk-configurations-for-all-in-one-systems:
=====================================================
Controller Disk Configurations for All-in-one Systems
=====================================================
Verified |AIO| controller disk configurations are discussed in the |prod-long|
documentation.
See :ref:`Kubernetes Controller Disk Configurations for All-in-one Systems
<controller-disk-configurations-for-all-in-one-systems>` for details.

View File

@ -0,0 +1,189 @@
.. ikn1516739312384
.. _installation-and-resource-planning-verified-commercial-hardware:
======================================
OpenStack Verified Commercial Hardware
======================================
Verified and approved hardware components for use with |prod-os| are listed
here.
.. _installation-and-resource-planning-verified-commercial-hardware-verified-components:
.. table:: Table 1. Verified Components
:widths: 100, 200
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Component | Approved Hardware |
+==========================================================+=========================================================================================================================================================================================================================================================================================================================================================================================================================================+
| Hardware Platforms | - Hewlett Packard Enterprise |
| | |
| | |
| | - HPE ProLiant DL360p Gen8 Server |
| | |
| | - HPE ProLiant DL360p Gen9 Server |
| | |
| | - HPE ProLiant DL360 Gen10 Server |
| | |
| | - HPE ProLiant DL380p Gen8 Server |
| | |
| | - HPE ProLiant DL380p Gen9 Server |
| | |
| | - HPE ProLiant ML350 Gen10 Server |
| | |
| | - c7000 Enclosure with HPE ProLiant BL460 Gen9 Server |
| | |
| | .. caution:: |
| | LAG support is dependent on the switch cards deployed with the c7000 enclosure. To determine whether LAG can be configured, consult the switch card documentation. |
| | |
| | |
| | - Dell |
| | |
| | |
| | - Dell PowerEdge R430 |
| | |
| | - Dell PowerEdge R630 |
| | |
| | - Dell PowerEdge R640 |
| | |
| | - Dell PowerEdge R720 |
| | |
| | - Dell PowerEdge R730 |
| | |
| | - Dell PowerEdge R740 |
| | |
| | |
| | - Kontron Symkloud MS2920 |
| | |
| | .. note:: |
| |        The Kontron platform does not support power ON/OFF or reset through the BMC interface on |prod|. As a result, it is not possible for the system to properly fence a node in the event of a management network isolation event. In order to mitigate this, hosted application auto recovery needs to be disabled.              |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Supported Reference Platforms | - Intel Iron Pass |
| | |
| | - Intel Canoe Pass |
| | |
| | - Intel Grizzly Pass |
| | |
| | - Intel Wildcat Pass |
| | |
| | - Intel Wolf Pass |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Disk Controllers | - Dell |
| | |
| | |
| | - PERC H310 Mini |
| | |
| | - PERC H730 Mini |
| | |
| | - PERC H740P |
| | |
| | - PERC H330 |
| | |
| | - PERC HBA330 |
| | |
| | |
| | |
| | - HPE Smart Array |
| | |
| | |
| | - P440ar |
| | |
| | - P420i |
| | |
| | - P408i-a |
| | |
| | - P816i-a |
| | |
| | |
| | - LSI 2308 |
| | |
| | - LSI 3008 |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for PXE Boot, Management, and OAM Networks | - Intel I210 \(Springville\) 1G |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |
| | - Intel 82599 \(Niantic\) 10G |
| | |
| | - Intel X540 10G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10G |
| | |
| | - Intel X722 \(Fortville\) 10G |
| | |
| | - Emulex XE102 10G |
| | |
| | - Broadcom BCM5719 1G |
| | |
| | - Broadcom BCM57810 10G |
| | |
| | - Mellanox MT27710 Family \(ConnectX-4 Lx\) 10G/25G |
| | |
| | - Mellanox MT27700 Family \(ConnectX-4\) 40G |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for Data Interfaces [#f1]_ | The following NICs are supported: |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |
| | - Intel 82599 \(Niantic\) 10G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10 G |
| | |
| | - Intel X552 \(Xeon-D\) 10G |
| | |
| | - Mellanox Technologies |
| | |
| | |
| | - MT27710 Family \(ConnectX-4\) 10G/25G |
| | |
| | - MT27700 Family \(ConnectX-4\) 40G |
| | |
| | |
| | |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PCI passthrough or PCI SR-IOV NICs | - Intel 82599 \(Niantic\) 10 G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10G |
| | |
| | - Mellanox Technologies |
| | |
| | |
| | - MT27500 Family \(ConnectX-3\) 10G \(support for PCI passthrough only\) |
| | |
| | .. note:: |
| |        Support for Mellanox CX3 is deprecated in Release 5 and scheduled for removal in Release 6.                                                                                                                                                                                                                                   |
| | |
| | - MT27710 Family \(ConnectX-4\) 10G/25G |
| | |
| | - MT27700 Family \(ConnectX-4\) 40G |
| | |
| | |
| | .. note:: |
| | For a Mellanox CX3 using PCI passthrough or a CX4 using PCI passthrough or SR-IOV, SR-IOV must be enabled in the CX3/CX4 firmware. For more information, see `How To Configure SR-IOV for ConnectX-3 with KVM (Ethernet): Enable SR-IOV on the Firmware <https://community.mellanox.com/docs/DOC-2365#jive_content_id_I_Enable_SRIOV_on_the_Firmware>`__. |
| | |
| | |
| | .. note:: |
| | The maximum number of VFs per hosted application instance, across all PCI devices, is 32. |
| | |
| | For example, a hardware encryption hosted application can be launched with virtio interfaces and 32 QAT VFs. However, a hardware encryption hosted application with an SR-IOV network interface \(with 1 VF\) can only be launched with 31 VFs. |
| | |
| | .. note:: |
| | Dual-use configuration \(PCI passthrough or PCI SR-IOV on the same interface\) is supported for Fortville NICs only. |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PCI SR-IOV Hardware Accelerators | - Intel AV-ICE02 VPN Acceleration Card, based on the Intel Coleto Creek 8925/8950, and C62x device with QuickAssist Technology. |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| GPUs Verified for PCI Passthrough | - NVIDIA Corporation: VGA compatible controller - GM204GL \(Tesla M60 rev a1\) |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Board Management Controllers | - HPE iLO3 |
| | |
| | - HPE iLO4 |
| | |
| | - Quanta |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. include:: ../../_includes/installation-and-resource-planning-verified-commercial-hardware.rest
.. seealso::
:ref:`Kubernetes Verified Commercial Hardware <verified-commercial-hardware>`

View File

@ -0,0 +1,23 @@
.. kss1491241946903
.. _internal-dns-resolution:
=======================
Internal DNS Resolution
=======================
|prod-os| supports internal DNS resolution for instances. When this feature
is enabled, instances are automatically assigned hostnames, and an internal
DNS server is maintained which associates the hostnames with IP addresses.
The ability to use hostnames is based on the OpenStack DNS integration
capability. For more about this capability, see
`https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html
<https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html>`__.
When internal DNS resolution is enabled on a |prod-os| system, the Neutron
service maintains an internal DNS server with a hostname-IP address pair for
each instance. The hostnames are derived automatically from the names assigned
to the instances when they are launched, providing |PQDNs|. They can be
concatenated with a domain name defined for the Neutron service to form
|FQDNs|.
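As a sketch, assuming the Neutron DNS extension is enabled, a domain might be
associated with a project network as follows; the domain ``example.org.`` and
all names are illustrative assumptions:

.. code-block:: none

   # Run with administrative OpenStack credentials loaded.
   $ openstack network set --dns-domain example.org. project-net-a

   # An instance launched as "web-1" on that network is then resolvable
   # internally as web-1.example.org.
   $ openstack server create --image <image> --flavor <flavor> \
       --network project-net-a web-1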

View File

@ -0,0 +1,17 @@
.. etp1466026368950
.. _network-planning-data-networks:
=============
Data Networks
=============
The physical Ethernet interfaces on |prod-os| nodes can be configured to use
one or more data networks.
For more information, see |prod-os| Configuration and Management: :ref:`Data
Networks and Data Network Interfaces <data-networks-overview>`.
.. seealso::
:ref:`Physical Network Planning <physical-network-planning>`

View File

@ -0,0 +1,37 @@
.. ekg1551898490562
.. _network-planning-ip-support:
==========
IP Support
==========
|prod-os| supports IPv4 and IPv6 versions for various networks.
All networks must be a single address family, either IPv4 or IPv6, with the
exception of the |PXE| boot network, which must always use IPv4. The following
table lists IPv4 and IPv6 support for different networks:
.. _network-planning-ip-support-table-xqy-3cj-4cb:
.. table:: Table 1. IPv4 and IPv6 Support
:widths: auto
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | IPv4 Support | IPv6 Support | Comment |
+====================================================================+==============+==============+==================================================================================================================================================================================================================================================+
| PXE boot | Y | N | If present, the PXE boot network is used for PXE booting of new hosts \(instead of using the internal management network\), and therefore must be untagged. It is limited to IPv4, since the Platform does not support IPv6 booting. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Internal Management | Y | Y | |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OAM | Y | Y | |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Cluster Host | Y | Y | The cluster host network supports IPv4 or IPv6 addressing. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| VXLAN Data Network | Y | Y | .. note:: |
| | | | Flat and VLAN Data networks are L2 functions, so IPv4 or IPv6 can be used, if required. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Project Networks, Routers, MetaData Server, SNAT, and Floating IPs | Y | Y\* | - DHCP and Routing support for IPv6. |
| | | | |
| | | | - No IPv6 support for access to MetaData Server, SNAT or Floating IPs. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

View File

@ -0,0 +1,50 @@
.. jow1404333739778
.. _network-planning-l2-access-switches:
============================
OpenStack L2 Access Switches
============================
L2 access switches connect the |prod-os| hosts to the different networks.
Correct configuration of the access ports is necessary to ensure proper traffic
flow.
One or more L2 switches can be used to connect the |prod-os| hosts to the
different networks. When sharing a single L2 switch you must ensure proper
isolation of the network traffic. Here is an example of how to configure a
shared L2 switch:
.. _network-planning-l2-access-switches-ul-obf-dyr-4n:
- one port-based |VLAN| for the internal management network and cluster host
network
- one port-based |VLAN| for the |OAM| network
- one or more sets of |VLANs| for data networks. For example:
- one set of |VLANs| with good |QoS| for bronze projects
- one set of |VLANs| with better |QoS| for silver projects
- one set of |VLANs| with the best |QoS| for gold projects
When using multiple L2 switches, there are several deployment possibilities.
Here are some examples:
.. _network-planning-l2-access-switches-ul-qmd-wyr-4n:
- A single L2 switch for the internal management, cluster host, and |OAM|
networks. Port- or |MAC|-based network isolation is mandatory.
- One or more L2 switches, not necessarily inter-connected, with one L2
switch per data network.
- Redundant L2 switches to support link aggregation, using either a failover
model, or |VPC| for more robust redundancy.
See :ref:`Kubernetes Platform Planning <l2-access-switches>` for
additional considerations related to L2 switches.

View File

@ -0,0 +1,40 @@
.. jow1423173019489
.. _network-planning-shared-vlan-or-multi-netted-ethernet-interfaces:
===================================================
Shared \(VLAN or Multi-Netted\) Ethernet Interfaces
===================================================
The management, cluster host, |OAM|, and physical networks can share Ethernet
or aggregated Ethernet interfaces using |VLAN| tagging or IP Multi-Netting.
The |OAM|, internal management, and cluster host networks can use |VLAN|
tagging or IP Multi-Netting, allowing them to share an Ethernet or aggregated
Ethernet interface with other networks. The one restriction is that if the
internal management network is implemented as a |VLAN|-tagged network, then it
must be on the physical interface used for |PXE| booting.
The following arrangements are possible:
.. _network-planning-shared-vlan-or-multi-netted-ethernet-interfaces-ul-y5k-zg2-zq:
- One interface for the internal management network and internal cluster host
network using multi-netting, another interface for |OAM| \(on which
container workloads are exposed externally\) and one or more additional
interfaces for data networks. This is the default configuration.
- One interface for the internal management network, another interface for
the |OAM| network, a third interface for the cluster host network, and one
or more additional interfaces for data networks.
- One interface for the internal management network, with the |OAM| and
cluster host networks also implemented on the interface using |VLAN|
tagging, and additional interfaces for data networks.
For some typical interface scenarios, see :ref:`Managed Kubernetes Cluster
Hardware Requirements <hardware-requirements>`.
For more information about configuring |VLAN| interfaces, see |node-doc|:
:ref:`Configure VLAN Interfaces Using the CLI
<configuring-vlan-interfaces-using-the-cli>`.

View File

@ -0,0 +1,20 @@
.. nzw1555338241460
.. _network-planning-the-cluster-host-network:
========================
The Cluster Host Network
========================
The cluster host network provides the physical network required for Kubernetes
container networking in support of the containerized OpenStack control plane
traffic.
All nodes in the cluster must be attached to the cluster host network.
In the |prod-os| scenario, this network is considered internal and by default
shares an L2 network / interface with the management network \(although it can
be configured on a separate interface, if required\). External access to the
Containerized OpenStack Service Endpoints is through a deployed nginx ingress
controller using host networking to expose itself on ports 80/443
\(http/https\) on the |OAM| Floating IP.

View File

@ -0,0 +1,46 @@
.. wib1463582694200
.. _network-planning-the-internal-management-network:
===============================
The Internal Management Network
===============================
The internal management network must be implemented as a single, dedicated,
Layer 2 broadcast domain for the exclusive use of each |prod-os| cluster.
Sharing of this network by more than one |prod-os| cluster is not supported.
The internal management network is also used for disk IO traffic to and from
the Ceph storage cluster.
If required, the internal management network can be configured as a
|VLAN|-tagged network. In this case, a separate IPv4 |PXE| boot network must be
implemented as the untagged network on the same physical interface. This
configuration must also be used if the management network must support IPv6.
During the |prod-os| software installation process, several network services
such as |BOOTP|, |DHCP|, and |PXE| are expected to run over the internal
management network. These services are used to bring up the different hosts to
an operational state. It is therefore mandatory that this network be
operational and available in advance, to ensure a successful installation.
On each host, the internal management network can be implemented using a 1 Gb
or 10 Gb Ethernet port. Requirements for this port are that:
.. _network-planning-the-internal-management-network-ul-uh1-pqs-hp:
- it must be capable of |PXE|-booting
- it can be used by the motherboard as a primary boot device
.. note::
This network is not used with Simplex systems.
.. note::
|OSDs| bind to all addresses, but communicate on the same network as the
monitors. Because monitors and |OSDs| must communicate, if the monitors are
on the management network, the |OSD| source addresses will also be on mgmt0.
See :ref:`Kubernetes Internal Management Network
<internal-management-network-planning>` for details.

View File

@ -0,0 +1,43 @@
.. wdq1463583173409
.. _os-planning-data-networks-overview:
========
Overview
========
Data networks are used to model the L2 networks to which the data, pci-sriov,
and pci-passthrough interfaces of nodes attach.
A Layer 2 physical or virtual network or set of virtual networks is used to
provide the underlying network connectivity needed to support the application
project networks. Multiple data networks may be configured as required, and
realized over the same or different physical networks. Access to external
networks is typically granted to the **openstack-compute** labeled worker nodes
using the data network. The extent of this connectivity, including access to
the open internet, is application dependent.
Data networks are created at the |prod| level.
.. _data-networks-overview-ul-yj1-dtq-3nb:
.. xbooklink VXLAN Data Networks are specific to |prod-os| application and are described in detail in :ref:`VXLAN Data Networks <vxlan-data-networks>` .
Segmentation ID ranges for |VLAN| and |VXLAN| data networks are defined
through OpenStack Neutron commands; see :ref:`Add Segmentation Ranges Using
the CLI <adding-segmentation-ranges-using-the-cli>`.
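In recent OpenStack releases, one way to add a |VLAN| segmentation range is
sketched below; the range values, data network name, and range name are
illustrative assumptions:

.. code-block:: none

   # Run with administrative OpenStack credentials loaded.
   # A VXLAN range is similar, with --network-type vxlan and no
   # --physical-network argument.
   $ openstack network segment range create \
       --network-type vlan \
       --physical-network physnet0 \
       --minimum 400 --maximum 499 \
       --shared physnet0-range-1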
For details on creating data networks and assigning them to node interfaces,
see the |datanet-doc| documentation:
- :ref:`Add Data Networks Using the CLI <adding-data-networks-using-the-cli>`
- :ref:`Assign a Data Network to an Interface
<assigning-a-data-network-to-an-interface>`
- :ref:`Remove a Data Network Using the CLI
<removing-a-data-network-using-the-cli>`
.. only:: partner
.. include:: ../../_includes/os-data-networks-overview.rest

View File

@ -0,0 +1,28 @@
.. osb1466081265288
.. _pci-passthrough-ethernet-interfaces:
===================================
PCI Passthrough Ethernet Interfaces
===================================
A passthrough Ethernet interface is a physical |PCI| Ethernet |NIC| on a
compute node to which a virtual machine is granted direct access.
This minimizes packet processing delays but at the same time demands special
operational considerations.
For all practical purposes, a |PCI| passthrough interface behaves as if it were
physically attached to the virtual machine. Therefore, any potential throughput
limitations coming from the virtualized environment, such as the ones
introduced by internal copying of data buffers, are eliminated. However, by
bypassing the virtualized environment, the use of |PCI| passthrough Ethernet
devices introduces several restrictions that you must take into consideration.
They include:
.. _pci-passthrough-ethernet-interfaces-ul-mjs-m52-tp:
- no support for |LAG|, |QoS|, |ACL|, or host interface monitoring
- no support for live migration

View File

@ -0,0 +1,10 @@
.. qrb1466026876949
.. _physical-network-planning:
=========================
Physical Network Planning
=========================
The data network is the backing network for the overlay project networks and
therefore has a direct impact on the networking performance of the guest.

View File

@ -0,0 +1,27 @@
.. hjx1519399837056
.. _port-security-extension:
=======================
Port Security Extension
=======================
|prod-os| supports the Neutron port security extension for disabling IP address
filtering at the project network or |VM| port level.
By default, IP address filtering is enabled on all ports and networks, subject
to security group settings. You can override the default IP address filtering
rules that apply to a |VM| by enabling the Neutron port security extension
driver, and then disabling port security on individual project networks or
ports. For example, you can configure a |VM| to allow packet routing by
enabling the port security extension driver, and then disabling port security
on the |VM| port used for routing.
Disabling port security on a network also disables |MAC| address filtering on the
network.
By default, the port security extension driver is disabled.
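A possible sketch of disabling port security at the two levels follows; the
identifiers are illustrative assumptions, and the port security extension
driver must first be enabled:

.. code-block:: none

   # Run with administrative OpenStack credentials loaded.
   # Disable IP and MAC address filtering on a single VM port, for
   # example one used for routing.
   $ openstack port set --disable-port-security <port-uuid>

   # Or disable it for an entire project network.
   $ openstack network set --disable-port-security project-net-a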
.. only:: partner
.. include:: ../../_includes/port-security-extension.rest

View File

@ -0,0 +1,12 @@
.. ter1493141472996
.. _project-network-ip-address-management:
=====================================
Project Network IP Address Management
=====================================
For |VM| IP address assignment, |prod-os| supports both internal management
using project networks with subnets, and external management \(for example,
static addressing or an external |DHCP| server\) using project networks without
subnets.
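A minimal sketch of the two approaches follows; all names and the address
range are illustrative assumptions:

.. code-block:: none

   # Run with OpenStack credentials loaded.
   # Internal management: a project network with a subnet.
   $ openstack network create net-managed
   $ openstack subnet create --network net-managed \
       --subnet-range 192.168.10.0/24 net-managed-subnet

   # External management: a project network without a subnet; instances
   # use static addresses or an external DHCP server.
   $ openstack network create net-unmanaged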

View File

@ -0,0 +1,13 @@
.. nhz1466029435409
.. _project-network-planning:
========================
Project Network Planning
========================
In addition to the standard features for OpenStack project networks, project
network planning on a |prod-os| system can take advantage of extended
capabilities offered by |prod-os|.
.. include:: ../../_includes/project-network-planning.rest

View File

@ -0,0 +1,88 @@
.. jow1404333739174
.. _project-networks:
================
Project Networks
================
Project networks are logical networking entities visible to project users, and
around which working network topologies are built.
Project networks need support from the physical layers to work as intended.
This means that the access L2 switches, data networks, and data interface
definitions on the compute nodes must all be properly configured. In
particular, careful planning of |prod-os| project networks is required to
achieve the proper configuration when using data networks of the
|VLAN| or |VXLAN| type.
For data networks of the |VLAN| type, consider the following guidelines:
.. _project-networks-ul-hqm-n2s-4n:
- All ports on the access L2 switches must be statically configured to
support all the |VLANs| defined on the data networks they provide access
to. The dynamic nature of the cloud might force the set of |VLANs| in use
by a particular L2 switch to change at any moment.
.. only:: partner
.. include:: ../../_includes/project-networks.rest
:start-after: vlan-begin
:end-before: vxlan-begin
- Configuring a project network to have access to external networks \(not
just providing local networking\) requires the following elements:
- A physical router, and the data network's access L2 switch, must be
part of the same Layer 2 network. Because this Layer 2 network uses a
unique |VLAN| ID, the router port used in the connection must also be
statically configured to support the corresponding |VLAN| ID.
- The router must be configured to be part of the same IP subnet that the
project network intends to use.
- When configuring the IP subnet, the project must use the router's port
IP address as its external gateway.
- The project network must have the external flag set. Only the **admin**
user can set this flag when the project network is created. An illustrative
CLI example follows this list.
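The following sketch illustrates how an **admin** user might create an
external |VLAN| project network and its IP subnet using the OpenStack CLI.
The physical network name, segment ID, and addresses are placeholders, and
flags may vary by release:

.. code-block:: none

    # Create an external project network on VLAN segment 500 of the
    # data network mapped to physnet0.
    ~(keystone_admin)$ openstack network create --external \
    --provider-network-type vlan --provider-physical-network physnet0 \
    --provider-segment 500 ext-net

    # Create the subnet using the physical router's port address as
    # the gateway.
    ~(keystone_admin)$ openstack subnet create --network ext-net \
    --subnet-range 10.10.10.0/24 --gateway 10.10.10.1 ext-subnet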
For data networks of the |VXLAN| type, consider the following guidelines:
.. _project-networks-ul-gwl-5fh-hr:
- Layer 3 routers used to interconnect compute nodes must be
multicast-enabled, as required by the |VXLAN| protocol.
.. include:: ../../_includes/project-networks.rest
:start-after: vxlan-begin
- To support |IGMP| and |MLD| snooping, Layer 3 routers must be configured
for |IGMP| and |MLD| querying.
- To accommodate |VXLAN| encapsulation, the |MTU| values for Layer 2 switches
and compute node data interfaces must allow for additional headers. For
more information, see :ref:`The Ethernet MTU <the-ethernet-mtu>`.
- To participate in a |VXLAN| network, the data interfaces on the compute
nodes must be configured with IP addresses, and with route table entries
for the destination subnets or the local gateway. For more information, see
:ref:`Manage Data Interface Static IP Addresses Using the CLI
<managing-data-interface-static-ip-addresses-using-the-cli>`, and :ref:`Add
and Maintain Routes for a VXLAN Network
<adding-and-maintaining-routes-for-a-vxlan-network>`. An illustrative
command example follows this list.
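As an illustration of the last point, the following sketch assigns an IP
address and a route to a data interface on a compute node. The host,
interface, and address values are placeholders, and the argument order may
vary by release:

.. code-block:: none

    # Assign a static IP address to the data interface.
    ~(keystone_admin)$ system host-addr-add compute-0 data0 192.168.57.40 24

    # Add a route to a remote tunnel-endpoint subnet via the local
    # gateway (the final argument is the route metric).
    ~(keystone_admin)$ system host-route-add compute-0 data0 \
    192.168.58.0 24 192.168.57.1 1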
In some circumstances, project networks can be configured to use |VLAN|
Transparent mode, in which |VLAN| tagged packets from the guest are
encapsulated within a data network segment \(|VLAN|\) without removing or
modifying the guest |VLAN| tag.
Alternatively, guest |VLAN|-tagged traffic can be implemented using |prod-os|
support for OpenStack |VLAN| Aware |VMs|.
For more information about |VLAN| Aware |VMs|, see :ref:`VLAN Aware VMs
<vlan-aware-vms>` or consult the public OpenStack documentation at
`http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html
<http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html>`__.

View File

@ -0,0 +1,29 @@
.. aqg1466081208315
.. _sr-iov-ethernet-interfaces:
==========================
SR-IOV Ethernet Interfaces
==========================
An |SRIOV| Ethernet interface is a physical |PCI| Ethernet |NIC| that
implements hardware-based virtualization mechanisms to expose multiple virtual
network interfaces that can be used by one or more virtual machines
simultaneously.
The PCI-SIG |SRIOV| specification defines a standardized mechanism to create
individual virtual Ethernet devices from a single physical Ethernet interface.
For each exposed virtual Ethernet device, formally referred to as a |VF|, the
|SRIOV| interface provides separate management memory space, work queues,
interrupt resources, and DMA streams, while utilizing common resources behind
the host interface. Each |VF| therefore has direct access to the hardware and
can be considered an independent Ethernet interface.
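As an illustrative sketch, the following OpenStack CLI commands request an
|SRIOV| |VF| for a |VM| by creating a port with ``vnic_type=direct``; the
network, port, flavor, and image names are placeholders:

.. code-block:: none

    # Create a port backed by an SR-IOV virtual function.
    ~(keystone_admin)$ openstack port create --network tenant-net \
    --vnic-type direct sriov-port

    # Boot a VM that uses the SR-IOV port.
    ~(keystone_admin)$ openstack server create --flavor medium \
    --image guest-image --nic port-id=sriov-port vm-with-sriov-port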
The following limitations apply to |SRIOV| interfaces:
.. _sr-iov-ethernet-interfaces-ul-mjs-m52-tp:
- no support for |LAG|, |QoS|, |ACL|, or host interface monitoring
- no support for live migration

View File

@ -0,0 +1,95 @@
.. bfh1466190844731
.. _storage-configuration-storage-by-host-type:
======================
Storage Configurations
======================
.. contents:: |minitoc|
:local:
:depth: 1
---------------------------
Storage on controller hosts
---------------------------
.. _storage-configuration-storage-on-controller-hosts:
The controllers provide storage for the |prod-os| OpenStack controller
services through a combination of local container ephemeral disk, |PVCs|
backed by Ceph, and a containerized HA MariaDB deployment.
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, the controllers also provide persistent block storage
for persistent |VM| volumes \(Cinder\), storage for |VM| images \(Glance\), and
storage for |VM| remote ephemeral volumes \(Nova\). On All-in-One Simplex or
Duplex systems, the controllers also provide nova-local storage for ephemeral
|VM| volumes.
On systems configured for controller storage, the master/controller's root disk
is reserved for system use, and additional disks are required to support the
small Ceph cluster. On an All-in-One Simplex or Duplex system, you have the
option to partition the root disk for nova-local storage \(to realize a
two-disk controller\) or use a third disk for nova-local storage.
.. _storage-configuration-storage-on-controller-hosts-section-N10031-N10024-N10001:
**************************************
Underlying platform filesystem storage
**************************************
To pass the disk-space checks, any replacement disks must be installed before
the allotments are changed.
.. _storage-configuration-storage-on-controller-hosts-section-N1010F-N1001F-N10001:
***************************************
Glance, Cinder, and remote Nova storage
***************************************
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, this small Ceph cluster on the controller provides
Glance image storage, Cinder block storage, Cinder backup storage, and Nova
remote ephemeral block storage. For more information, see :ref:`Block Storage
for Virtual Machines <block-storage-for-virtual-machines>`.
.. _storage-configuration-storage-on-controller-hosts-section-N101BB-N10029-N10001:
******************
Nova-local storage
******************
Controllers on |prod-os| Simplex or Duplex systems incorporate the Compute
function, and therefore provide **nova-local** storage for ephemeral disks. On
other systems, **nova-local** storage is provided by compute hosts. For more
information about this type of storage, see :ref:`Storage on Compute Hosts
<storage-on-compute-hosts>` and :ref:`Block Storage for Virtual Machines
<block-storage-for-virtual-machines>`.
You can add a physical volume using the system :command:`host-pv-add` command.
.. xbooklink For more information, see Cloud Platform Storage Configuration: :ref:`Adding a Physical Volume <adding-a-physical-volume>`.
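As a minimal sketch, the following commands illustrate how nova-local storage
might be provisioned on a locked host; the host name is a placeholder, and the
disk UUID must be taken from the :command:`host-disk-list` output:

.. code-block:: none

    # Create the nova-local local volume group on the host.
    ~(keystone_admin)$ system host-lvg-add compute-0 nova-local

    # Identify an unused disk, then add it as a physical volume.
    ~(keystone_admin)$ system host-disk-list compute-0
    ~(keystone_admin)$ system host-pv-add compute-0 nova-local <disk_uuid>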
.. _storage-on-storage-hosts:
------------------------
Storage on storage hosts
------------------------
|prod-os| creates default Ceph storage pools for Glance images, Cinder volumes,
Cinder backups, and Nova ephemeral data.
.. xbooklink For more information, see the :ref:`Platform Storage Configuration <storage-configuration-storage-resources>` guide for details on configuring the internal Ceph cluster on either controller or storage hosts.
.. _storage-on-compute-hosts:
------------------------
Storage on compute hosts
------------------------
Compute-labeled worker hosts can provide ephemeral storage for |VM| disks.
.. note::
On All-in-One Simplex or Duplex systems, compute storage is provided using
resources on the combined host.

View File

@ -0,0 +1,124 @@
.. uvy1462906813562
.. _storage-resources:
=================
Storage Resources
=================
|prod-os| uses storage resources on the controller-labeled master hosts, the
compute-labeled worker hosts, and on storage hosts if they are present.
The storage configuration for |prod-os| is very flexible. The specific
configuration depends on the type of system installed and the requirements of
the system.
.. _storage-resources-section-N1005C-N10029-N10001:
-----------------------------
Storage Services and Backends
-----------------------------
The figure below shows the storage options and backends for |prod-os|.
.. figure:: ../figures/zpk1486667625575.png
|prod-os| Storage Options and Backends
Each service can use different storage backends.
**Ceph**
This provides storage managed by the internal Ceph cluster. Depending on
the deployment configuration, the internal Ceph cluster is provided through
|OSDs| on OpenStack master / controller hosts or storage hosts.
.. _storage-resources-table-ajr-tlf-zbb:
.. table:: Table 1. Available Backends for Storage Services
:widths: auto
+---------+--------------------------------------------------------------+--------------------------------------------------------------+
| Service | Description                                                  | Available Backends                                           |
+=========+==============================================================+==============================================================+
| Cinder  | - persistent block storage                                   | - Internal Ceph on master/controller hosts or storage hosts |
|         |                                                              |                                                              |
|         | - used for VM boot disk volumes                              |                                                              |
|         |                                                              |                                                              |
|         | - used as additional disk volumes for VMs booted from images |                                                              |
|         |                                                              |                                                              |
|         | - snapshots and persistent backups for volumes               |                                                              |
+---------+--------------------------------------------------------------+--------------------------------------------------------------+
| Glance  | - image file storage                                         | - Internal Ceph on master/controller hosts or storage hosts |
|         |                                                              |                                                              |
|         | - used for VM boot disk images                               |                                                              |
+---------+--------------------------------------------------------------+--------------------------------------------------------------+
| Nova    | - ephemeral object storage                                   | - CoW-Image on Compute Nodes                                 |
|         |                                                              |                                                              |
|         | - used for VM ephemeral disks                                | - Internal Ceph on master/controller hosts or storage hosts |
+---------+--------------------------------------------------------------+--------------------------------------------------------------+
.. _storage-resources-section-N10035-N10028-N10001:
--------------------
Uses of Disk Storage
--------------------
**Containerized OpenStack System**
The |prod-os| system containers use a combination of local container
ephemeral disk, |PVCs| backed by Ceph, and a containerized HA MariaDB
deployment for configuration and database files.
**VM Ephemeral Boot Disk Volumes \(that is, when booting from an image\)**
Virtual machines use local ephemeral disk storage on computes for Nova
ephemeral local boot disk volumes built from images. These virtual disk
volumes are created when the |VM| instances are launched. These virtual
volumes are destroyed when the |VM| instances are terminated.
**VM Persistent Boot Disk Volumes \(that is, when booting from Cinder Volumes\)**
Virtual machines can optionally use the Ceph-backed storage cluster for
backing Cinder boot disk volumes. This provides permanent storage for the
|VM| root disks, facilitating faster machine startup, but requiring more
storage resources. For |VMs| booted from images, it provides additional
Cinder disk volumes for persistent storage.
**VM Additional Disks**
Virtual machines can optionally use local ephemeral disk storage on
computes for additional virtual disks, such as swap disks. These disks are
ephemeral; they are created when a |VM| instance is launched, and destroyed
when the |VM| instance is terminated.
**VM Block Storage backups**
Cinder volumes can be backed up for long term storage in a separate Ceph
pool.
.. _storage-resources-section-N100B3-N10028-N10001:
-----------------
Storage Locations
-----------------
In addition to the storage used by |prod-os| system containers, the following
storage locations may be used.
**Controller Hosts**
In the Standard with Controller Storage deployment option, one or more
disks can be used on controller hosts to provide a small Ceph-based cluster
for providing the storage backend for Cinder volumes, Cinder backups,
Glance images, and remote Nova ephemeral volumes.
**Compute Hosts**
One or more disks can be used on compute hosts to provide local Nova
ephemeral storage for virtual machines.
**Combined Controller-Compute Hosts**
One or more disks can be used on combined hosts in Simplex or Duplex
systems to provide local Nova ephemeral storage for virtual machines and a
small Ceph-backed storage cluster for backing Cinder, Glance, and remote
Nova ephemeral storage.
**Storage Hosts**
One or more disks are used on storage hosts to provide a large-scale
Ceph-backed storage cluster for backing Cinder, Glance, and remote Nova
ephemeral storage. Storage hosts are used only on |prod-os| with Dedicated
Storage systems.

View File

@ -0,0 +1,51 @@
.. psa1412702861873
.. _subnet-details:
==============
Subnet Details
==============
You can adjust several options for project network subnets, including |DHCP|,
IP allocation pools, DNS server addresses, and host routes.
These options are available on the **Subnet Details** tab when you create or
edit a project network subnet.
.. note::
IP addresses on project network subnets are always managed internally. For
external address management, use project networks without subnets. For more
information, see :ref:`Project Network IP Address Management
<project-network-ip-address-management>`.
When creating a new IP subnet for a project network, you can specify the
following attributes:
**Enable DHCP**
When this attribute is enabled, a virtual |DHCP| server becomes available
when the subnet is created. It uses the \(MAC address, IP address\) pairs
registered in the Neutron database to offer IP addresses in response to
|DHCP| discovery requests broadcast on the subnet. |DHCP| discovery
requests from unknown |MAC| addresses are ignored.
When the |DHCP| attribute is disabled, all |DHCP| and DNS services, and all
static routes, if any, must be provisioned externally.
**Allocation Pools**
This is a list attribute; each element in the list specifies an IP
address range, or address pool, in the subnet address space that can be
used for dynamic offering of IP addresses. By default, there is a single
allocation pool consisting of the entire subnet's IP address space,
excluding the default gateway's IP address.
An external, non-Neutron, |DHCP| server can be attached to a subnet to
support specific deployment needs as required. For example, it can be
configured to offer IP addresses on ranges outside the Neutron allocation
pools to service physical devices attached to the project network, such as
testing equipment and servers.
**DNS Name Servers**
You can specify the IP addresses of the DNS servers to be offered to clients
on the subnet.
**Host Routes**
You can specify static routes to be offered to instances on the subnet, for
example, routes to additional routers.
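The following sketch shows how these attributes map to OpenStack CLI options
when creating a subnet; all names, addresses, and ranges are placeholders:

.. code-block:: none

    # Create a subnet with DHCP enabled, a restricted allocation pool,
    # a DNS name server, and a host route.
    ~(keystone_admin)$ openstack subnet create --network tenant-net \
    --subnet-range 192.168.10.0/24 --dhcp \
    --allocation-pool start=192.168.10.10,end=192.168.10.100 \
    --dns-nameserver 8.8.8.8 \
    --host-route destination=10.0.0.0/24,gateway=192.168.10.254 \
    tenant-subnet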

View File

@ -0,0 +1,106 @@
.. jow1404333732592
.. _os-planning-the-ethernet-mtu:
================
The Ethernet MTU
================
The |MTU| of an Ethernet frame is a configurable attribute in |prod-os|.
Changing its default size must be done in coordination with other network
elements on the Ethernet link.
In the context of |prod-os|, the |MTU| refers to the largest possible payload
on the Ethernet frame on a particular network link. The payload is enclosed by
the Ethernet header \(14 bytes\) and the CRC \(4 bytes\), resulting in an
Ethernet frame that is 18 bytes longer than the |MTU| size.
The original IEEE 802.3 specification defines a valid standard Ethernet frame
size to be from 64 to 1518 bytes, accommodating payloads ranging in size from
46 to 1500 bytes. Ethernet frames with a payload larger than 1500 bytes are
considered to be jumbo frames.
For a |VLAN| network, the frame also includes a 4-byte |VLAN| ID header,
resulting in a frame size 22 bytes longer than the |MTU| size.
For a |VXLAN| network, the frame is either 54 or 74 bytes longer, depending on
whether IPv4 or IPv6 protocol is used. This is because, in addition to the
Ethernet header and CRC, the payload is enclosed by an IP header \(20 bytes for
IPv4 or 40 bytes for IPv6\), a |UDP| header \(8 bytes\), and a |VXLAN| header
\(8 bytes\).
In |prod-os|, you can configure the |MTU| size for the following interfaces and
networks:
.. _the-ethernet-mtu-ul-qmn-yvn-m4:
- The management and |OAM| network interfaces on the controller. The |MTU|
size for these interfaces is set during initial installation; for more
information, see the |prod-os| installation guide for your system. To make
changes after installation, see |sysconf-doc|: :ref:`Change the MTU of an
OAM Interface <changing-the-mtu-of-an-oam-interface-using-the-cli>`.
- Data interfaces on compute nodes. For more information, see :ref:`Change
the MTU of a Data Interface <changing-the-mtu-of-a-data-interface>`.
- Data networks. For more information, see |datanet-doc|: :ref:`Data Networks
<data-network-management-data-networks>`.
In all cases, the default |MTU| size is 1500. The minimum value is 576, and the
maximum is 9216.
.. note::
You cannot change the |MTU| for a cluster-host interface. The default |MTU|
of 1500 must always be used.
Because data interfaces are defined over physical interfaces connecting to data
networks, it is important that you consider the implications of modifying the
default |MTU| size \(an illustrative command example follows this list\):
.. _the-ethernet-mtu-ul-hsq-2f4-m4:
- The |MTU| sizes for a data interface and the corresponding Ethernet
interface on the edge router or switch must be compatible. You must ensure
that each side of the link is configured to accept the maximum frame size
that can be delivered from the other side. For example, if the data
interface is configured with an |MTU| size of 9216 bytes, the corresponding
switch interface must be configured to accept a maximum frame size of 9238
bytes, assuming a |VLAN| tag is present.
The way switch interfaces are configured varies from one switch
manufacturer to another. In some cases you configure the |MTU| size
directly, while in some others you configure the maximum Ethernet frame
size instead. In the latter case, it is often unclear whether the frame
size includes |VLAN| headers or not. In any case, you must ensure that both
sides are configured to accept the expected maximum frame sizes.
- For a |VXLAN| network, the additional IP, |UDP|, and |VXLAN| headers are
invisible to the data interface, which expects a frame only 18 bytes larger
than the |MTU|. To accommodate the larger frames on a |VXLAN| network, you
must specify a larger nominal |MTU| on the data interface. For simplicity,
and to avoid issues with stacked |VLAN| tagging, some third party vendors
recommend rounding up by an additional 100 bytes for calculation purposes.
For example, to attach to a |VXLAN| data network with an |MTU| of 1500, a
data interface with an |MTU| of 1600 is recommended.
- A data network can only be associated with a compute node data interface
with an |MTU| of equal or greater value.
- The |MTU| size of a compute node data interface cannot be modified to be
less than the |MTU| size of any of its associated data networks.
- The |MTU| size of a data network is automatically propagated to new project
networks. Changes to the data network |MTU| are *not* propagated to
existing project networks.
- The Neutron L3 and |DHCP| agents automatically propagate the |MTU| size of
their networks to their Linux network interfaces.
- The Neutron |DHCP| agent makes the interface-mtu option available to any
|DHCP| client request from a virtual machine. The server's response contains
the current |MTU| size of the interface, which the client can then use to
adjust its own interface |MTU| size.
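The following minimal sketch changes the |MTU| of a data interface on a
compute node to accommodate |VXLAN| overhead. The host and interface names are
placeholders, and the exact flag spelling may vary by release:

.. code-block:: none

    # The host must be locked before the interface can be modified.
    ~(keystone_admin)$ system host-lock compute-0
    ~(keystone_admin)$ system host-if-modify -m 1600 compute-0 data0
    ~(keystone_admin)$ system host-unlock compute-0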
.. .. only:: partner
.. include:: ../../_includes/the-ethernet-mtu.rest

View File

@ -0,0 +1,17 @@
.. znn1463582982634
.. _the-oam-network:
===============
The OAM Network
===============
This network provides external access to the OpenStack services' external API
endpoints and to the OpenStack Horizon web interface.
.. note::
You can enable secure HTTPS connectivity on the |OAM| network.
See :ref:`OAM Network Planning <oam-network-planning>` for additional services
that need to be available to the underlying |prod| through the |OAM| Network.

View File

@ -0,0 +1,24 @@
.. xms1466026140926
.. _the-pxe-boot-network:
====================
The PXE Boot Network
====================
You can set up a |PXE| Boot network for booting all nodes to allow a
non-standard management network configuration.
Normally, the internal management network is used for |PXE| booting of new
hosts, and the |PXE| boot network is not required. However, there are scenarios
where the internal management network cannot be used for |PXE| booting of new
hosts. For example, if the internal management network needs to be on a
|VLAN|-tagged network for deployment reasons, or if it must support IPv6, you
must configure the optional untagged |PXE| boot network for |PXE| booting of
new hosts using IPv4.
.. note::
|prod| does not support IPv6 |PXE| Booting.
See :ref:`The PXE Boot Network <network-planning-the-pxe-boot-network>` for
details.

View File

@ -0,0 +1,12 @@
.. xft1580509778612
.. _uefi-secure-boot-planning:
=========================
UEFI Secure Boot Planning
=========================
You may want to plan for using the supported |UEFI| secure boot feature.
See :ref:`Kubernetes UEFI Secure Boot Planning
<security-planning-uefi-secure-boot-planning>` for details.

View File

@ -0,0 +1,28 @@
.. yfz1466026434733
.. _virtual-or-cloud-networks:
=========================
Virtual or Cloud Networks
=========================
In addition to the physical networks used to connect the |prod-os| hosts,
|prod-os| uses virtual networks to support |VMs|.
Virtual networks, which include data networks and project networks, are defined
and implemented internally. They are connected to system hosts and to the
outside world using data \(physical\) networks attached to data interfaces on
compute nodes.
Each physical network supports one or more data networks, which may be
implemented as a flat, |VLAN|, or |VXLAN| network. The data networks in turn
support project networks, which are allocated for use by different projects and
their |VMs|, and which can be isolated from one another.
.. seealso::
:ref:`Overview <data-networks-overview>`
:ref:`Project Networks <project-networks>`
:ref:`Project Network Planning <project-network-planning>`

View File

@ -0,0 +1,19 @@
.. psa1428328539397
.. _vlan-aware-vms:
==============
VLAN Aware VMs
==============
|prod-os| supports OpenStack |VLAN| Aware |VMs| \(also known as port
trunking\), which adds |VLAN| support for |VM| interfaces.
For more information about |VLAN| Aware |VMs|, consult the public OpenStack
documentation at
`http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html
<http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html>`__.
Alternatively, project networks can be configured to use |VLAN| Transparent
mode, in which |VLAN| tagged guest packets are encapsulated within a data
network segment without removing or modifying the guest |VLAN| tag.
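As an illustrative sketch, the following OpenStack CLI commands create a trunk
for a |VLAN| aware |VM|; the port and trunk names and the segmentation ID are
placeholders:

.. code-block:: none

    # Create a trunk using an existing port as the parent.
    ~(keystone_admin)$ openstack network trunk create \
    --parent-port parent-port0 trunk0

    # Add a subport carrying VLAN-tagged guest traffic.
    ~(keystone_admin)$ openstack network trunk set --subport \
    port=child-port0,segmentation-type=vlan,segmentation-id=100 trunk0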

View File

@ -0,0 +1,52 @@
.. jow1411482049845
.. _vm-network-interface-options:
============================
VM Network Interface Options
============================
|prod-os| supports a variety of standard and performance-optimized network
interface drivers in addition to the standard OpenStack choices.
.. _vm-network-interface-options-ul-mgc-xnp-nn:
- Unmodified guests can use Linux networking and virtio drivers. This
provides a mechanism to bring existing applications into the production
environment immediately.
.. only:: partner
.. include:: ../../_includes/vm-network-interface-options.rest
:start-after: unmodified-guests-virtio-begin
:end-before: highest-performance-begin
.. note::
The virtio devices on a |VM| cannot use vhost-user for enhanced
performance if any of the following is true:
- The |VM| is not backed by huge pages.
- The |VM| is backed by 4 KiB pages.
- The |VM| is live-migrated from an older platform that does not
support vhost-user.
.. only:: partner
.. include:: ../../_includes/vm-network-interface-options.rest
:start-after: highest-performance-begin
.. xbooklink For more information about |AVP| drivers, see OpenStack VNF Integration: :ref:`Accelerated Virtual Interfaces <accelerated-virtual-interfaces>`.
.. seealso::
:ref:`Port Security Extension <port-security-extension>`
:ref:`PCI Passthrough Ethernet Interfaces
<pci-passthrough-ethernet-interfaces>`
:ref:`SR-IOV Ethernet Interfaces <sr-iov-ethernet-interfaces>`
.. xpartnerlink :ref:`MAC Address Filtering on Virtual Interfaces
<mac-address-filtering-on-virtual-interfaces>`

View File

@ -0,0 +1,58 @@
.. ksh1464711502906
.. _vm-storage-settings-for-migration-resize-or-evacuation:
========================================================
VM Storage Settings for Migration, Resize, or Evacuation
========================================================
The migration, resize, or evacuation behavior for an instance depends on the
type of ephemeral storage used.
.. note::
Live migration behavior can also be affected by flavor extra
specifications, image metadata, or instance metadata.
The following table summarizes the boot and local storage configurations needed
to support various behaviors.
.. _vm-storage-settings-for-migration-resize-or-evacuation-table-wmf-qdh-v5:
.. table::
:widths: auto
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| Instance Boot Type and Ephemeral and Swap Disks from flavor | Local Storage Backing | Live Migration with Block Migration | Live Migration w/o Block Migration | Cold Migration | Local Disk Resize | Evacuation |
+============================================================================+=======================+=====================================+====================================+================+===================+==========================+
| From Cinder Volume \(no local disks\) | N/A | N | Y | Y | N/A | Y |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Cinder Volume \(w/ remote Ephemeral and/or Swap\) | N/A | N | Y | Y | N/A | Y |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Cinder Volume \(w/ local Ephemeral and/or Swap\) | CoW | Y | Y \(CLI only\) | Y | Y | Y |
| | | | | | | |
| | | | | | | Ephemeral/Swap data loss |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Glance Image \(all flavor disks are local\) | CoW | Y | Y \(CLI Only\) | Y | Y | Y |
| | | | | | | |
| | | | | | | Local disk data loss |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Glance Image \(all flavor disks are local + attached Cinder Volumes\) | CoW | Y | Y \(CLI only\) | Y | Y | Y |
| | | | | | | |
| | | | | | | Local disk data loss |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
In addition to the behavior summarized in the table, system-initiated cold
migration \(for example, when locking a host\) and evacuation restrictions may
apply if a |VM| with a large root disk size exists on the host. For a Local
|CoW| Image Backed \(local\_image\) storage type, the VIM can cold migrate or
evacuate |VMs| with disk sizes up to 60 GB.
.. note::
The criteria for live migration are independent of disk size.
.. note::
The **Local Storage Backing** is a consideration only for instances that
use local ephemeral or swap disks.
The boot configuration for an instance is determined by the **Instance Boot
Source** selected at launch.

View File

@ -0,0 +1,18 @@
.. ovi1474997555122
.. _vxlans:
======
VXLANs
======
You can use |VXLANs| to connect |VM| instances across non-contiguous Layer 2
segments \(that is, Layer 2 segments connected by one or more Layer 3
routers\).
A |VXLAN| is a Layer 2 overlay network scheme on a Layer 3 network
infrastructure. Packets originating from |VMs| and destined for other |VMs| are
encapsulated with IP, |UDP|, and |VXLAN| headers and sent as Layer 3 packets.
The IP addresses of the source and destination compute nodes are included in
the headers.

201
doc/source/shared/abbrevs.txt Normal file → Executable file
View File

@ -1,99 +1,104 @@
.. Common and domain-specific abbreviations.
.. Plural forms must be defined separately from singular as
.. replacements like |PVC|s won't work.
.. Please keep this list alphabetical.

.. |ACL| replace:: :abbr:`ACL (Access Control List)`
.. |AE| replace:: :abbr:`AE (Aggregated Ethernet)`
.. |AIO| replace:: :abbr:`AIO (All-In-One)`
.. |AVP| replace:: :abbr:`AVP (Accelerated Virtual Port)`
.. |AWS| replace:: :abbr:`AWS (Amazon Web Services)`
.. |BGP| replace:: :abbr:`BGP (Border Gateway Protocol)`
.. |BMC| replace:: :abbr:`BMC (Board Management Controller)`
.. |BMCs| replace:: :abbr:`BMCs (Board Management Controllers)`
.. |BOOTP| replace:: :abbr:`BOOTP (Boot Protocol)`
.. |BPDU| replace:: :abbr:`BPDU (Bridge Protocol Data Unit)`
.. |BPDUs| replace:: :abbr:`BPDUs (Bridge Protocol Data Units)`
.. |CA| replace:: :abbr:`CA (Certificate Authority)`
.. |CAs| replace:: :abbr:`CAs (Certificate Authorities)`
.. |CNI| replace:: :abbr:`CNI (Container Networking Interface)`
.. |CoW| replace:: :abbr:`CoW (Copy on Write)`
.. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
.. |CSKs| replace:: :abbr:`CSKs (Code Signing Keys)`
.. |CVE| replace:: :abbr:`CVE (Common Vulnerabilities and Exposures)`
.. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protocol)`
.. |DPDK| replace:: :abbr:`DPDK (Data Plane Development Kit)`
.. |DRBD| replace:: :abbr:`DRBD (Distributed Replicated Block Device)`
.. |DSCP| replace:: :abbr:`DSCP (Differentiated Services Code Point)`
.. |DVR| replace:: :abbr:`DVR (Distributed Virtual Router)`
.. |FEC| replace:: :abbr:`FEC (Forward Error Correction)`
.. |FPGA| replace:: :abbr:`FPGA (Field Programmable Gate Array)`
.. |FQDN| replace:: :abbr:`FQDN (Fully Qualified Domain Name)`
.. |FQDNs| replace:: :abbr:`FQDNs (Fully Qualified Domain Names)`
.. |GNP| replace:: :abbr:`GNP (Global Network Policy)`
.. |IGMP| replace:: :abbr:`IGMP (Internet Group Management Protocol)`
.. |IoT| replace:: :abbr:`IoT (Internet of Things)`
.. |IPMI| replace:: :abbr:`IPMI (Intelligent Platform Management Interface)`
.. |LACP| replace:: :abbr:`LACP (Link Aggregation Control Protocol)`
.. |LAG| replace:: :abbr:`LAG (Link Aggregation)`
.. |LDAP| replace:: :abbr:`LDAP (Lightweight Directory Access Protocol)`
.. |LDPC| replace:: :abbr:`LDPC (Low-Density Parity Check)`
.. |LLDP| replace:: :abbr:`LLDP (Link Layer Discovery Protocol)`
.. |MAC| replace:: :abbr:`MAC (Media Access Control)`
.. |MEC| replace:: :abbr:`MEC (Multi-access Edge Computing)`
.. |MLD| replace:: :abbr:`MLD (Multicast Listener Discovery)`
.. |MNFA| replace:: :abbr:`MNFA (Multi-Node Failure Avoidance)`
.. |MOTD| replace:: :abbr:`MOTD (Message of the Day)`
.. |MTU| replace:: :abbr:`MTU (Maximum Transmission Unit)`
.. |NIC| replace:: :abbr:`NIC (Network Interface Card)`
.. |NICs| replace:: :abbr:`NICs (Network Interface Cards)`
.. |NTP| replace:: :abbr:`NTP (Network Time Protocol)`
.. |NUMA| replace:: :abbr:`NUMA (Non-Uniform Memory Access)`
.. |NVMe| replace:: :abbr:`NVMe (Non-Volatile Memory express)`
.. |OAM| replace:: :abbr:`OAM (Operations, administration and management)`
.. |ONAP| replace:: :abbr:`ONAP (Open Network Automation Program)`
.. |OSD| replace:: :abbr:`OSD (Object Storage Device)`
.. |OSDs| replace:: :abbr:`OSDs (Object Storage Devices)`
.. |PAC| replace:: :abbr:`PAC (Programmable Acceleration Card)`
.. |PCI| replace:: :abbr:`PCI (Peripheral Component Interconnect)`
.. |PDU| replace:: :abbr:`PDU (Packet Data Unit)`
.. |PF| replace:: :abbr:`PF (Physical Function)`
.. |PHB| replace:: :abbr:`PHB (Per-Hop Behavior)`
.. |PQDN| replace:: :abbr:`PQDN (Partially Qualified Domain Name)`
.. |PQDNs| replace:: :abbr:`PQDNs (Partially Qualified Domain Names)`
.. |PTP| replace:: :abbr:`PTP (Precision Time Protocol)`
.. |PVC| replace:: :abbr:`PVC (Persistent Volume Claim)`
.. |PVCs| replace:: :abbr:`PVCs (Persistent Volume Claims)`
.. |PXE| replace:: :abbr:`PXE (Preboot Execution Environment)`
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RPC| replace:: :abbr:`RPC (Remote Procedure Call)`
.. |SAN| replace:: :abbr:`SAN (Subject Alternative Name)`
.. |SANs| replace:: :abbr:`SANs (Subject Alternative Names)`
.. |SAS| replace:: :abbr:`SAS (Serial Attached SCSI)`
.. |SATA| replace:: :abbr:`SATA (Serial AT Attachment)`
.. |SLA| replace:: :abbr:`SLA (Service Level Agreement)`
.. |SLAs| replace:: :abbr:`SLAs (Service Level Agreements)`
.. |SNAT| replace:: :abbr:`SNAT (Source Network Address Translation)`
.. |SNMP| replace:: :abbr:`SNMP (Simple Network Management Protocol)`
.. |SRIOV| replace:: :abbr:`SR-IOV (Single Root I/O Virtualization)`
.. |SSD| replace:: :abbr:`SSD (Solid State Drive)`
.. |SSDs| replace:: :abbr:`SSDs (Solid State Drives)`
.. |SSH| replace:: :abbr:`SSH (Secure Shell)`
.. |SSL| replace:: :abbr:`SSL (Secure Socket Layer)`
.. |STP| replace:: :abbr:`STP (Spanning Tree Protocol)`
.. |TFTP| replace:: :abbr:`TFTP (Trivial File Transfer Protocol)`
.. |ToR| replace:: :abbr:`ToR (Top-of-Rack)`
.. |TPM| replace:: :abbr:`TPM (Trusted Platform Module)`
.. |UDP| replace:: :abbr:`UDP (User Datagram Protocol)`
.. |UEFI| replace:: :abbr:`UEFI (Unified Extensible Firmware Interface)`
.. |VF| replace:: :abbr:`VF (Virtual Function)`
.. |VFs| replace:: :abbr:`VFs (Virtual Functions)`
.. |VLAN| replace:: :abbr:`VLAN (Virtual Local Area Network)`
.. |VLANs| replace:: :abbr:`VLANs (Virtual Local Area Networks)`
.. |VM| replace:: :abbr:`VM (Virtual Machine)`
.. |VMs| replace:: :abbr:`VMs (Virtual Machines)`
.. |VNC| replace:: :abbr:`VNC (Virtual Network Computing)`
.. |VNI| replace:: :abbr:`VNI (VXLAN Network Interface)`
.. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Interfaces)`
.. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)`
.. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)`
.. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)`
.. |XML| replace:: :abbr:`XML (eXtensible Markup Language)`
.. |YAML| replace:: :abbr:`YAML (YAML Ain't Markup Language)`