
Sphinx considers the "source" dir to be it's root. Image or figure paths such as: .. image:: ../deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png can lead to build time behavior like the following under some circumstances: copying images... [ 11%] deploy/../deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png whereas the absolute path: .. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png behaves predictably. copying images... [ 11%] /deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png So changed a handful of relative links to be absolute from the source root. Signed-off-by: Stone <ronald.stone@windriver.com> Change-Id: Id63e5949a78959a5f570b835eac658b42440fbcd
.. gzi1565204095452

.. _standard-configuration-with-dedicated-storage:

=============================================
Standard Configuration with Dedicated Storage
=============================================

Deployment of |prod| with dedicated storage nodes provides the highest capacity
\(single region\), performance, and scalability.

.. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-dedicated-storage.png
   :width: 800

.. note::

   Physical L2 switches are not shown in the deployment diagram in subsequent
   chapters. Only the L2 networks they realize are shown.

See :ref:`Common Components <common-components>` for a description of common
components of this deployment configuration.

The differentiating physical feature of this model is that the controller,
storage, and worker functionalities are deployed on separate physical hosts,
allowing controller nodes, storage nodes, and worker nodes to scale
independently of each other.

The controller nodes provide the master function for the system. Two controller
nodes are required to provide redundancy. The controller nodes' server and
peripheral resources such as CPU cores/speed, memory, storage, and network
interfaces can be scaled to meet requirements.

Storage nodes provide a large-scale Ceph cluster as the storage backend for
Kubernetes |PVCs|. They are deployed in replication groups of either two or
three hosts for redundancy. For a system configured to use two storage hosts
per replication group, a maximum of eight storage hosts \(four replication
groups\) is supported. For a system with three storage hosts per replication
group, up to nine storage hosts \(three replication groups\) are supported.
The system provides redundancy and scalability through the number of Ceph
|OSDs| installed in a storage node group, with more |OSDs| providing more
capacity and better storage performance. The scalability and performance of
the storage function are affected by the |OSD| size and speed, optional |SSD|
or |NVMe| Ceph journals, CPU cores and speeds, memory, disk controllers, and
networking. |OSDs| can be grouped into storage tiers according to their
performance characteristics.
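
The following sketch is for illustration only and is not a substitute for the
installation procedure: it shows how a disk on a storage host might be added
as a Ceph |OSD| with the ``system`` CLI. The host name ``storage-0`` and the
disk placeholder are assumptions, and the exact commands and output can vary
by release.

.. code-block:: none

   # Example only. Host name and disk UUID are placeholders.
   # List the physical disks on a storage host and note the UUID of the
   # disk to be used as a Ceph OSD.
   $ system host-disk-list storage-0

   # Add the selected disk as an OSD on that host.
   $ system host-stor-add storage-0 <disk-uuid>

   # Verify the OSD configuration for the host.
   $ system host-stor-list storage-0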

.. note::

   A storage backend is not configured by default. You can use either
   internal Ceph or an external NetApp Trident backend.

.. xreflink For more information,
   see the :ref:`|stor-doc| <storage-configuration-storage-resources>` guide.
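
For illustration only, a minimal sketch of enabling the internal Ceph backend
with the ``system`` CLI is shown below. The commands and options are
assumptions that can vary by release; follow the Storage guide for the
authoritative procedure.

.. code-block:: none

   # Example only: add the internal Ceph storage backend.
   $ system storage-backend-add ceph --confirmed

   # Verify the backend state.
   $ system storage-backend-list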

On worker nodes, the primary disk is used for system requirements and for
container local ephemeral storage.