diff --git a/doc/source/deploy/common-components.rst b/doc/source/deploy/common-components.rst
index b74dba760..a19ad9916 100644
--- a/doc/source/deploy/common-components.rst
+++ b/doc/source/deploy/common-components.rst
@@ -117,7 +117,7 @@ A number of components are common to most |prod| deployment configurations.
     The use of Container Networking Calico |BGP| to advertise containers'
     network endpoints is not available in this scenario.
 
-**Additional External Network\(s\) \(Worker & AIO Nodes Only\)**
+**Additional External Network\(s\) or Data Networks \(Worker & AIO Nodes Only\)**
     Networks on which ingress controllers and/or hosted application containers
     expose their Kubernetes service, for example, through a NodePort service.
     Node interfaces to these networks are configured as platform class
diff --git a/doc/source/deploy/deployment-and-configuration-options-standard-configuration-with-controller-storage.rst b/doc/source/deploy/deployment-and-configuration-options-standard-configuration-with-controller-storage.rst
index 4934e32de..aca7abf22 100644
--- a/doc/source/deploy/deployment-and-configuration-options-standard-configuration-with-controller-storage.rst
+++ b/doc/source/deploy/deployment-and-configuration-options-standard-configuration-with-controller-storage.rst
@@ -13,10 +13,6 @@ controller nodes instead of using dedicated storage nodes.
 .. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png
    :width: 800
 
-.. note::
-   Physical L2 switches are not shown in the deployment diagram in subsequent
-   chapters. Only the L2 networks they support are shown.
-
 See :ref:`Common Components ` for a description of common
 components of this deployment configuration.
 
@@ -25,11 +21,14 @@ cluster managing up to 200 worker nodes.
 The limit on the size of the worker node pool is due to the performance and
 latency characteristics of the small integrated Ceph cluster on the
 controller+storage nodes.
 
-This configuration uses dedicated physical disks configured on each
-controller+storage node as Ceph |OSDs|. The
-primary disk is used by the platform for system purposes and subsequent disks
+This configuration optionally uses dedicated physical disks configured on each
+controller+storage node as Ceph |OSDs|. The typical solution requires one
+primary disk used by the platform for system purposes and subsequent disks
 are used for Ceph |OSDs|.
 
+Optionally, instead of using an internal Ceph cluster across controllers, you
+can configure an external Netapp Trident storage backend.
+
 On worker nodes, the primary disk is used for system requirements and for
 container local ephemeral storage.
diff --git a/doc/source/deploy/deployment-config-options-all-in-one-duplex-configuration.rst b/doc/source/deploy/deployment-config-options-all-in-one-duplex-configuration.rst
index 8b3ab05c8..ee19d2e4b 100644
--- a/doc/source/deploy/deployment-config-options-all-in-one-duplex-configuration.rst
+++ b/doc/source/deploy/deployment-config-options-all-in-one-duplex-configuration.rst
@@ -27,14 +27,17 @@ cloud processing / storage power is required.
 HA services run on the controller function across the two physical servers in
 either Active/Active or Active/Standby mode.
 
-The storage function is provided by a small-scale two node Ceph cluster using
-one or more disks/|OSDs| from each server, and
-provides the backend for Kubernetes' |PVCs|.
+The optional storage function is provided by a small-scale two node Ceph
+cluster using one or more disks/|OSDs| from each server, and provides the
+backend for Kubernetes' |PVCs|.
 
-The solution requires two or more disks per server; one for system
+The typical solution requires two or more disks per server; one for system
 requirements and container ephemeral storage, and one or more for Ceph |OSDs|.
 
+Optionally, instead of using an internal Ceph cluster across servers, you can
+configure an external Netapp Trident storage backend.
+
 Hosted application containers are scheduled on both worker functions.
 
 In the event of an overall server hardware fault:
 
diff --git a/doc/source/deploy/deployment-config-optionsall-in-one-simplex-configuration.rst b/doc/source/deploy/deployment-config-optionsall-in-one-simplex-configuration.rst
index 1b574a8df..30de4b817 100644
--- a/doc/source/deploy/deployment-config-optionsall-in-one-simplex-configuration.rst
+++ b/doc/source/deploy/deployment-config-optionsall-in-one-simplex-configuration.rst
@@ -14,8 +14,9 @@ non-redundant host.
    :width: 800
 
 .. note::
-   Physical L2 switches are not shown in the deployment diagram in subsequent
-   chapters. Only the L2 networks they support are shown.
+   Physical L2 switches are not shown in this deployment diagram and in
+   subsequent deployment diagrams. Only the L2 networks they support are
+   shown.
 
 See :ref:`Common Components ` for a description of common
 components of this deployment configuration.
@@ -29,12 +30,15 @@ Typically, this solution is used where only a small amount of cloud
 processing / storage power is required, and protection against overall server
 hardware faults is either not required or done at a higher level.
 
-Ceph is deployed in this configuration using one or more disks for |OSDs|, and
+Optionally, Ceph is deployed in this configuration using one or more disks for |OSDs|, and
 provides the backend for Kubernetes' |PVCs|.
 
-The solution requires two or more disks, one for system requirements and
+Typically, the solution requires two or more disks, one for system requirements and
 container ephemeral storage, and one or more for Ceph |OSDs|.
 
+Optionally, instead of using an internal Ceph cluster on the server, you can
+configure an external Netapp Trident storage backend.
+
 .. xreflink .. note::
    A storage backend is not configured by default. You can use either
    internal Ceph or an external Netapp Trident backend. For more information,
diff --git a/doc/source/deploy/deployment-options.rst b/doc/source/deploy/deployment-options.rst
index b1160238b..b1aac6329 100644
--- a/doc/source/deploy/deployment-options.rst
+++ b/doc/source/deploy/deployment-options.rst
@@ -25,21 +25,10 @@ A variety of |prod-long| deployment configuration options are supported.
 
     A two node HA controller node cluster with a 2-9 node Ceph storage cluster,
     managing up to 200 worker nodes.
 
-    .. note::
-       A storage backend is not configured by default. You can use either
-       internal Ceph or an external Netapp Trident backend.
-
     .. xreflink For more information, see the
        :ref:`Storage ` guide.
 
 All |prod| systems can use worker platforms \(worker hosts, or the worker
 function on a simplex or duplex system\) configured for either standard or
-low-latency performance.
-
-.. seealso::
-
-   :ref:`Worker Function Performance Profiles
-   `
-
-The Ceph storage backend is configured by default.
+low-latency worker function performance profiles.
\ No newline at end of file
diff --git a/doc/source/deploy/index.rst b/doc/source/deploy/index.rst
index 7e07a9bd8..6257c7d59 100644
--- a/doc/source/deploy/index.rst
+++ b/doc/source/deploy/index.rst
@@ -14,6 +14,6 @@ Deployment Configurations
    common-components
    deployment-config-optionsall-in-one-simplex-configuration
    deployment-config-options-all-in-one-duplex-configuration
-   standard-configuration-with-dedicated-storage
    deployment-and-configuration-options-standard-configuration-with-controller-storage
+   standard-configuration-with-dedicated-storage
    worker-function-performance-profiles
diff --git a/doc/source/deploy/standard-configuration-with-dedicated-storage.rst b/doc/source/deploy/standard-configuration-with-dedicated-storage.rst
index 6370429cb..b4983fffe 100644
--- a/doc/source/deploy/standard-configuration-with-dedicated-storage.rst
+++ b/doc/source/deploy/standard-configuration-with-dedicated-storage.rst
@@ -12,10 +12,6 @@ Deployment of |prod| with dedicated storage nodes provides the highest capacity
 .. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-dedicated-storage.png
    :width: 800
 
-.. note::
-   Physical L2 switches are not shown in the deployment diagram in subsequent
-   chapters. Only the L2 networks they realize are shown.
-
 See :ref:`Common Components ` for a description of common
 components of this deployment configuration.
 
@@ -42,9 +38,8 @@ affected by the |OSD| size and speed, optional |SSD| or |NVMe| Ceph journals,
 CPU cores and speeds, memory, disk controllers, and networking. |OSDs| can be
 grouped into storage tiers according to their performance characteristics.
 
-.. note::
-   A storage backend is not configured by default. You can use either
-   internal Ceph or an external Netapp Trident backend.
+Alternatively, instead of configuring Storage Nodes, you can configure an
+external Netapp Trident storage backend.
 
 .. xreflink For more information, see the :ref:`|stor-doc| ` guide.
 
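The common-components hunk above describes hosted application containers exposing
their Kubernetes service on the additional external networks, for example through
a NodePort service. A minimal sketch of creating such a NodePort service with the
kubernetes Python client follows; the application name ``demo-app``, the
``default`` namespace, the label selector, and the port numbers are illustrative
assumptions rather than values taken from this change.

.. code-block:: python

   # Illustrative sketch only: expose a hosted application through a NodePort
   # service. Assumes pods labelled app=demo-app already exist and that the
   # kubernetes Python client can reach the cluster.
   from kubernetes import client, config

   config.load_kube_config()  # use load_incluster_config() when running in a pod

   service = client.V1Service(
       metadata=client.V1ObjectMeta(name="demo-app"),
       spec=client.V1ServiceSpec(
           type="NodePort",
           selector={"app": "demo-app"},  # must match the application's pod labels
           ports=[
               client.V1ServicePort(
                   port=80,           # cluster-internal service port
                   target_port=8080,  # container port of the hosted application
                   node_port=31080,   # port reachable on each node's external interfaces
               )
           ],
       ),
   )

   client.CoreV1Api().create_namespaced_service(namespace="default", body=service)

The same result can be obtained declaratively, for example with
``kubectl expose deployment demo-app --type=NodePort --port=80 --target-port=8080``.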