Optimization with a Large Number of OSDs
You may need to optimize your Ceph configuration for balanced operation across deployments with a high number of OSDs.
As the number of OSDs increases, choosing the correct <pg_num> and <pgp_num> values becomes more important, because they have a significant influence on the behavior of the cluster and the durability of the data should a catastrophic event occur.
The following values are recommended:
Fewer than 5 OSDs: Set <pg_num> and <pgp_num> to 128.
Between 5 and 10 OSDs: Set <pg_num> and <pgp_num> to 512.
Between 10 and 50 OSDs: Set <pg_num> and <pgp_num> to 4096.
More than 50 OSDs: Understand the memory, CPU, and network usage tradeoffs, then calculate and set the optimal <pg_num> and <pgp_num> values for your scenario.
Use the equation below and round the result up to the nearest power of 2; a worked example follows the documentation link below.
Total PGs = (OSDs * 100) / <pool_size>
<pool_size> is either the number of replicas for replicated pools or the K+M sum for erasure coded pools as returned by
ceph osd erasure-code-profile get <profile>
, where <profile> is usually default. For more information on the tradeoffs involved, consult the Ceph documentation at:
https://docs.ceph.com/en/latest/rados/operations/placement-groups/
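For illustration only, consider a cluster with 60 OSDs and a replicated kube-rbd pool with 3 replicas (the OSD count and replica count here are assumptions, not recommendations). The equation gives (60 * 100) / 3 = 2000, which rounds up to the next power of 2, 2048:
$ ceph osd pool set kube-rbd pg_num 2048
$ ceph osd pool set kube-rbd pgp_num 2048
For an erasure coded pool, substitute the K+M sum reported by ceph osd erasure-code-profile get <profile> as <pool_size> and apply the same calculation.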
For a deployment with 7 OSDs, use the following commands to set <pg_num> and <pgp_num> to 512.
$ ceph osd pool set kube-rbd pg_num 512
$ ceph osd pool set kube-rbd pgp_num 512
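If needed, you can confirm that the new values were applied by querying the pool (kube-rbd here, as in the example above):
$ ceph osd pool get kube-rbd pg_num
$ ceph osd pool get kube-rbd pgp_num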