Standard Configuration with Dedicated Storage
=============================================

Deployment with dedicated storage nodes provides the highest capacity (single region), performance, and scalability.

[Figure: Standard with Dedicated Storage deployment configuration]

.. note::

   Physical L2 switches are not shown in the deployment diagrams in subsequent chapters. Only the L2 networks they realize are shown.

See :ref:`Common Components <common-components>` for a description of the common components of this deployment configuration.

The differentiating physical feature of this model is that the controller, storage, and worker functionalities are deployed on separate physical hosts, allowing controller nodes, storage nodes, and worker nodes to scale independently of each other.

The controller nodes provide the master function for the system. Two controller nodes are required to provide redundancy. The controller nodes' server and peripheral resources such as CPU cores/speed, memory, storage, and network interfaces can be scaled to meet requirements.

Storage nodes provide a large-scale Ceph cluster as the storage backend for Kubernetes PVCs. They are deployed in replication groups of either two or three hosts for redundancy. For a system configured to use two storage hosts per replication group, a maximum of eight storage hosts (four replication groups) is supported. For a system with three storage hosts per replication group, up to nine storage hosts (three replication groups) are supported.

The system provides redundancy and scalability through the number of Ceph OSDs installed in a storage node group, with more OSDs providing more capacity and better storage performance. The scalability and performance of the storage function are also affected by the size and speed of the OSD disks, optional SSD or NVMe Ceph journals, CPU cores and speeds, memory, disk controllers, and networking. OSDs can be grouped into storage tiers according to their performance characteristics.
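To make the replication arithmetic concrete, the sketch below estimates usable capacity as raw capacity divided by the replication factor of the storage node groups. It is illustrative only; the host limits follow the text above, while the OSD counts and disk sizes are hypothetical values.

.. code-block:: python

   # Illustrative sketch only: rough usable-capacity estimate for a Ceph cluster
   # built from storage-node replication groups of two or three hosts.
   # Host limits follow the text above; OSD counts and disk sizes are hypothetical.

   def usable_capacity_tib(replication_factor: int, storage_hosts: int,
                           osds_per_host: int, osd_size_tib: float) -> float:
       """Usable capacity is roughly raw capacity divided by the replication factor."""
       if replication_factor == 2 and storage_hosts > 8:
           raise ValueError("groups of two: maximum 8 storage hosts (4 groups)")
       if replication_factor == 3 and storage_hosts > 9:
           raise ValueError("groups of three: maximum 9 storage hosts (3 groups)")
       raw_tib = storage_hosts * osds_per_host * osd_size_tib
       return raw_tib / replication_factor

   # Example: 4 storage hosts in groups of two, each with 6 OSDs of 4 TiB
   print(usable_capacity_tib(2, 4, 6, 4.0))  # ~48 TiB usable from 96 TiB raw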

.. note::

   A storage backend is not configured by default. You can use either an internal Ceph backend or an external Netapp Trident backend.
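For context, once a Ceph (or Trident) backend and a corresponding StorageClass have been configured, Kubernetes workloads request that storage through PVCs. The following is a minimal sketch using the Python Kubernetes client; the storage class name ``general`` and the ``default`` namespace are assumptions, so substitute whatever the configured backend actually exposes.

.. code-block:: python

   # Minimal sketch: requesting persistent storage from the configured backend
   # through a Kubernetes PVC. The storage class name "general" is an assumption;
   # use whatever class the configured Ceph or Trident backend exposes.
   from kubernetes import client, config

   config.load_kube_config()  # or config.load_incluster_config() inside a pod

   pvc = client.V1PersistentVolumeClaim(
       metadata=client.V1ObjectMeta(name="demo-claim"),
       spec=client.V1PersistentVolumeClaimSpec(
           access_modes=["ReadWriteOnce"],
           storage_class_name="general",  # assumed class name
           resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
       ),
   )

   client.CoreV1Api().create_namespaced_persistent_volume_claim(
       namespace="default", body=pvc,
   )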

On worker nodes, the primary disk is used for system requirements and for container local ephemeral storage.