From 5265340f9e73b214e54bbe9d993512c91611a719 Mon Sep 17 00:00:00 2001
From: Ron Stone
Date: Wed, 18 May 2022 11:27:10 -0400
Subject: [PATCH] Storage nodes not balanced (r6,dsR6)

Draft of new section on optimization with a large number of OSDs.

Fixed typo.

Partially address patchset 1 review comments. One open question
outstanding.

Signed-off-by: Ron Stone
Change-Id: I9f44857e49dc1e289301d496611e508a338048e2
---
 .../index-storage-kub-e797132c87a8.rst        |  1 +
 ...th-a-large-number-of-osds-df2169096946.rst | 51 +++++++++++++++++++
 2 files changed, 52 insertions(+)
 create mode 100644 doc/source/storage/kubernetes/optimization-with-a-large-number-of-osds-df2169096946.rst

diff --git a/doc/source/storage/kubernetes/index-storage-kub-e797132c87a8.rst b/doc/source/storage/kubernetes/index-storage-kub-e797132c87a8.rst
index 11bf85f38..d7699e3ce 100644
--- a/doc/source/storage/kubernetes/index-storage-kub-e797132c87a8.rst
+++ b/doc/source/storage/kubernetes/index-storage-kub-e797132c87a8.rst
@@ -123,6 +123,7 @@ Configure Ceph OSDs on a Host
    add-a-storage-tier-using-the-cli
    provision-storage-on-a-controller-or-storage-host-using-horizon
    provision-storage-on-a-storage-host-using-the-cli
+   optimization-with-a-large-number-of-osds-df2169096946
    replace-osds-and-journal-disks
    replace-osds-on-a-standard-system-f3b1e376304c
    replace-osds-on-an-aio-dx-system-319b0bc2f7e6
diff --git a/doc/source/storage/kubernetes/optimization-with-a-large-number-of-osds-df2169096946.rst b/doc/source/storage/kubernetes/optimization-with-a-large-number-of-osds-df2169096946.rst
new file mode 100644
index 000000000..7b2fdf627
--- /dev/null
+++ b/doc/source/storage/kubernetes/optimization-with-a-large-number-of-osds-df2169096946.rst
@@ -0,0 +1,51 @@
+.. _optimization-with-a-large-number-of-osds-df2169096946:
+
+========================================
+Optimization with a Large Number of OSDs
+========================================
+
+You may need to optimize your Ceph configuration for balanced operation
+across deployments with a high number of |OSDs|.
+
+.. rubric:: |context|
+
+As the number of |OSDs| increases, choosing the correct ``pg_num`` and
+``pgp_num`` values becomes more important, as they have a significant
+influence on the behavior of the cluster and the durability of the data
+should a catastrophic event occur.
+
+|org| recommends the following values:
+
+* Fewer than 5 |OSDs|: Set ``pg_num`` and ``pgp_num`` to 128.
+
+* Between 5 and 10 |OSDs|: Set ``pg_num`` and ``pgp_num`` to 512.
+
+* Between 10 and 50 |OSDs|: Set ``pg_num`` and ``pgp_num`` to 4096.
+
+* More than 50 |OSDs|: Understanding the memory, CPU, and network usage
+  tradeoffs, calculate and set the optimal ``pg_num`` and ``pgp_num`` values
+  for your scenario. A worked calculation is sketched in the examples below.
+
+  Use the equation below and round up to the next power of 2.
+
+  *Total PGs = (OSDs \* 100) / pool_size*
+
+  ``pool_size`` is either the number of replicas for replicated pools or the
+  K+M sum for erasure coded pools, as returned by
+  ``ceph osd erasure-code-profile get <profile-name>``, where
+  ``<profile-name>`` is usually ``default``.
+
+  For more information on the tradeoffs involved, consult the Ceph
+  documentation at:
+
+  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
+
+
+.. rubric:: |eg|
+
+* For a deployment with 7 |OSDs|, use the following commands to set
+  ``pg_num`` and ``pgp_num`` to 512.
+
+  .. code-block:: none
+
+     $ ceph osd pool set kube-rbd pg_num 512
+     $ ceph osd pool set kube-rbd pgp_num 512
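+
+* The following is an illustrative sketch of the equation above, assuming a
+  hypothetical deployment with 60 |OSDs| and a pool size of 3 (both values
+  are assumptions chosen for this example only). Total PGs = (60 * 100) / 3
+  = 2000, which rounds up to the next power of 2, 2048. That value could
+  then be applied with the same commands shown in the previous example:
+
+  .. code-block:: none
+
+     # Example values only: 2048 is the next power of 2 above (60 * 100) / 3
+     $ ceph osd pool set kube-rbd pg_num 2048
+     $ ceph osd pool set kube-rbd pgp_num 2048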