Add Performance Configurations on Rook Ceph
Create a document for performance configuration on Rook Ceph. Also reviewed
all Rook Ceph documentation.

Story: 2011066
Task: 51381

Change-Id: Id98f1fec24f3059d91528d843b5384f3839d265c
Signed-off-by: Caio Correa <caio.correa@windriver.com>
@@ -111,6 +111,7 @@

.. |LUKS| replace:: :abbr:`LUKS (Linux Unified Key Setup)`
.. |LVG| replace:: :abbr:`LVG (Local Volume Groups)`
.. |MAC| replace:: :abbr:`MAC (Media Access Control)`
+.. |MDS| replace:: :abbr:`MDS (MetaData Server for cephfs)`
.. |MEC| replace:: :abbr:`MEC (Multi-access Edge Computing)`
.. |MLD| replace:: :abbr:`MLD (Multicast Listener Discovery)`
.. |ML| replace:: :abbr:`ML (Machine Learning)`
@@ -217,3 +217,62 @@ them following the rules:

``object``
  - Will enable the ceph object store (RGW).

Services Parameterization for the Open Model
********************************************

In the 'open' deployment model, no specific configurations are enforced. Users
are responsible for customizing settings based on their specific needs. To
update configurations, a Helm override is required.

When applying a helm-override update, list-type values are completely
replaced, not incrementally updated. For example, modifying
``cephFileSystems`` (or ``cephBlockPools``, ``cephECBlockPools``,
``cephObjectStores``) via Helm override will overwrite the entire entry.
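In other words, an override that contained only the changed field would drop
every other field of the default entry. The sketch below shows the shape of a
complete ``cephFileSystems`` override; the field names follow the upstream
``rook-ceph-cluster`` chart layout, and the names and values are illustrative
only. This is why the example procedure below first extracts the full section
before editing it:

.. code-block:: none

   # Illustrative only: the entire entry must be present, because the list
   # replaces (does not merge with) the chart's default cephFileSystems entry.
   cephFileSystems:
     - name: kube-cephfs
       spec:
         metadataPool:
           failureDomain: host        # field being changed
         dataPools:
           - name: data0
             failureDomain: host      # field being changed
         metadataServer:
           activeCount: 1
           activeStandby: true
       storageClass:
         enabled: true
         name: cephfs
         # ...all remaining fields from the extracted defaults...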
Here is an **example** of how to change a parameter, using ``failureDomain``,
for **Cephfs** and **RBD**:

.. tabs::

   .. group-tab:: Cephfs

      .. code-block:: none

         # Get the current crush rule information
         ceph osd pool get kube-cephfs-data crush_rule

         # Get the current default values
         helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^cephFileSystems:/,/^[[:alnum:]_-]*:/p;' | sed '$d' > cephfs_overrides.yaml

         # Update the failure domain
         sed -i 's/failureDomain: osd/failureDomain: host/g' cephfs_overrides.yaml

         # Get the current user override values ("combined overrides" is what will be deployed):
         system helm-override-show rook-ceph rook-ceph-cluster rook-ceph

         # Set the new overrides
         system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --reuse-values --values cephfs_overrides.yaml

         # Get the updated user override values
         system helm-override-show rook-ceph rook-ceph-cluster rook-ceph

         # Apply the application
         system application-apply rook-ceph

         # Confirm the current crush rule information:
         ceph osd pool get kube-cephfs-data crush_rule

   .. group-tab:: RBD

      .. code-block:: none

         # Retrieve the current values and extract the cephBlockPools section:
         helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^cephBlockPools:/,/^[[:alnum:]_-]*:/p;' | sed '$d' > rbd_overrides.yaml

         # Modify the failureDomain parameter from osd to host in the rbd_overrides.yaml file:
         sed -i 's/failureDomain: osd/failureDomain: host/g' rbd_overrides.yaml

         # Set the updated configuration:
         system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --reuse-values --values rbd_overrides.yaml

         # Apply the application
         system application-apply rook-ceph
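A similar check to the one shown for Cephfs can confirm the RBD change after
the application is applied. The pool name below is an assumption for the
platform's default RBD pool and may differ on your deployment:

.. code-block:: none

   # Confirm the crush rule now used by the RBD pool
   ceph osd pool get kube-rbd crush_rule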
@@ -142,7 +142,7 @@ Rook Ceph Application

install-rook-ceph-a7926a1f9b70
uninstall-rook-ceph-cbb046746782
deployment-models-for-rook-ceph-b855bd0108cf
+performance-configurations-rook-ceph-9e719a652b02

-------------------------
Persistent Volume Support
@@ -207,11 +207,11 @@ In this configuration, you can add monitors and |OSDs| on the Simplex node.

      $ system storage-backend-add ceph-rook --deployment controller --confirmed

#. Add the ``host-fs ceph`` on controller, the ``host-fs ceph`` is configured
-   with 10 GB.
+   with 20 GB.

   .. code-block:: none

-      $ system host-fs-add controller-0 ceph=10
+      $ system host-fs-add controller-0 ceph=20

#. To add |OSDs|, get the |UUID| of each disk to feed the
   :command:`host-stor-add` command:
@@ -226,10 +226,10 @@ In this configuration, you can add monitors and |OSDs| on the Simplex node.

| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
-| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
-| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
@@ -331,18 +331,18 @@ In this configuration, you can add monitors and OSDs on the Duplex node.

      $ system storage-backend-add ceph-rook --deployment controller --confirmed

-#. Add the ``controller-fs`` ``ceph-float`` configured with 10 GB.
+#. Add the ``controller-fs`` ``ceph-float`` configured with 20 GB.

   .. code-block:: none

-      $ system controllerfs-add ceph-float=<size>
+      $ system controllerfs-add ceph-float=20

#. Add the ``host-fs ceph`` on each controller, the ``host-fs ceph`` is
-   configured with 10 GB.
+   configured with 20 GB.

   .. code-block:: none

-      $ system host-fs-add controller-0 ceph=10
+      $ system host-fs-add controller-0 ceph=20

#. To add |OSDs|, get the |UUID| of each disk to feed the
   :command:`host-stor-add` command.
@@ -357,10 +357,10 @@ In this configuration, you can add monitors and OSDs on the Duplex node.

| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
-| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
-| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
@@ -373,10 +373,10 @@ In this configuration, you can add monitors and OSDs on the Duplex node.

| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB86b2b09b- | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 32be8509 | |
| | | | | | | | | |
-| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 62d4613b | |
| | | | | | | | | |
-| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 3003aa5e | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
@@ -516,10 +516,10 @@ Compute-1 and Compute-2 were chosen to house the cluster |OSDs|.

| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
-| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
-| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
@@ -532,10 +532,10 @@ Compute-1 and Compute-2 were chosen to house the cluster |OSDs|.

| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB86b2b09b- | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 32be8509 | |
| | | | | | | | | |
-| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 62d4613b | |
| | | | | | | | | |
-| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 3003aa5e | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
@@ -0,0 +1,156 @@

.. WARNING: Add no lines of text between the label immediately following
.. and the title.

.. _performance-configurations-rook-ceph-9e719a652b02:

=======================================
Performance Configurations on Rook Ceph
=======================================
When using Rook Ceph, it is important to consider resource allocation and
configuration adjustments to ensure optimal performance. Rook introduces
additional management overhead compared to a traditional bare-metal Ceph setup
and needs more infrastructure resources.

Consequently, increasing the number of platform cores will improve I/O
performance for |OSD|, monitor, and |MDS| pods.
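For reference, here is a minimal sketch of how the platform core assignment on
a host can be inspected and increased. The core count is only an example, the
host must be locked before the change, and the flags shown follow the standard
host CPU management commands:

.. code-block:: none

   # Review the current CPU function assignment on the host
   ~(keystone_admin)$ system host-cpu-list controller-0

   # Example only: lock the host, assign 6 cores on processor 0 to
   # platform use, then unlock to apply the change
   ~(keystone_admin)$ system host-lock controller-0
   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0
   ~(keystone_admin)$ system host-unlock controller-0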
Increasing the number of |OSDs| in the cluster can also improve performance,
reducing the load on individual disks and enhancing throughput.
When we talk about memory, it's important to emphasize that Ceph's default for
|
||||||
|
the |OSD| is 4GB, and we do not recommend decreasing it below 4GB. However, the
|
||||||
|
system could work with only 2GB.
|
||||||
|
|
||||||
|
Another factor to consider is the size of the data blocks. Reading and writing
|
||||||
|
small block files can degrade Ceph's performance, especially during
|
||||||
|
high-frequency operations.
|
||||||
|
|
||||||
|
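As an illustration only, the impact of I/O size can be observed by
benchmarking an RBD image with small and large writes using ``rbd bench``.
The pool name ``kube-rbd`` and the image name are assumptions used for this
example and may differ on your deployment:

.. code-block:: none

   # Create a disposable test image (pool and image names are examples)
   $ rbd create kube-rbd/bench-test --size 2G

   # Small random 4K writes stress per-I/O overhead
   $ rbd bench --io-type write --io-size 4K --io-pattern rand --io-total 1G kube-rbd/bench-test

   # Large sequential 4M writes show throughput closer to the disk limit
   $ rbd bench --io-type write --io-size 4M --io-pattern seq --io-total 1G kube-rbd/bench-test

   # Remove the test image when done
   $ rbd rm kube-rbd/bench-test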
Pod resource limit tuning
-------------------------

To check the current values for |OSDs| memory limits:

.. code-block:: none

   $ helm get values -n rook-ceph rook-ceph-cluster -o yaml | grep ' osd:' -A2
If you want to adjust memory settings in an effort to improve |OSD| read/write
performance, you can allocate more memory to |OSDs| by running the following
command:

.. code-block:: none

   $ cat << EOF >> limit_override.yml
   cephClusterSpec:
     resources:
       osd:
         limits:
           memory: <value>
   EOF

Make sure to provide the parameter with the correct unit, e.g.: ``4Gi``.
Then reapply the override:

.. code-block:: none

   ~(keystone_admin)$ system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --values limit_override.yml

.. note::

   The settings applied using ``helm-override-update`` remain active
   until the Rook-Ceph application is deleted. If the application is
   deleted and reinstalled, these settings will need to be reapplied.

Finally, apply the Rook-Ceph application:

.. code-block:: none

   ~(keystone_admin)$ system application-apply rook-ceph
Bluestore tunable parameters
----------------------------

The ``osd_memory_cache_min`` and ``osd_memory_target`` parameters impact
memory management in |OSDs|. Increasing them improves performance by
optimizing memory usage and reducing latencies for read/write operations.
However, higher values consume more resources, which can affect overall
platform resource utilization. For performance similar to a bare-metal Ceph
environment, a significant increase in these parameters is required.

To check the current values for these parameters, use:

.. code-block:: none

   $ helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^configOverride:/,/^[[:alnum:]_-]*:/{/^[[:alnum:]_-]*:/!p}'
To modify these parameters, first create an override file with the updated
values:

.. code-block:: none

   $ cat << EOF >> tunable_override.yml
   configOverride: |
     [global]
     osd_pool_default_size = 1
     osd_pool_default_min_size = 1
     auth_cluster_required = cephx
     auth_service_required = cephx
     auth_client_required = cephx

     [osd]
     osd_mkfs_type = xfs
     osd_mkfs_options_xfs = "-f"
     osd_mount_options_xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k"
     osd_memory_target = <value>
     osd_memory_cache_min = <value>

     [mon]
     mon_warn_on_legacy_crush_tunables = false
     mon_pg_warn_max_per_osd = 2048
     mon_pg_warn_max_object_skew = 0
     mon_clock_drift_allowed = .1
     mon_warn_on_pool_no_redundancy = false
   EOF
Make sure to provide ``osd_memory_target`` and ``osd_memory_cache_min`` with
the correct unit, e.g.: ``4Gi``.

The default value for ``osd_memory_target`` is ``4Gi``.
The default value for ``osd_memory_cache_min`` is ``128Mi``.
Then run ``helm-override-update``:

.. code-block:: none

   ~(keystone_admin)$ system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --values tunable_override.yml

.. note::

   The settings applied using ``helm-override-update`` remain active
   until the Rook-Ceph application is deleted. If the application is
   deleted and reinstalled, these settings will need to be reapplied.

Then reapply the Rook-Ceph application:

.. code-block:: none

   ~(keystone_admin)$ system application-apply rook-ceph
To change the configuration of an already running |OSD| without restarting it,
run the following Ceph config commands:

.. code-block:: none

   $ ceph config set osd.<id> osd_memory_target <value>
   $ ceph config set osd.<id> osd_memory_cache_min <value>
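To confirm the values now in place, ``ceph config get`` and ``ceph config
show`` can be used; ``osd.0`` below is only an example daemon ID:

.. code-block:: none

   # Value stored in the cluster configuration database
   $ ceph config get osd.0 osd_memory_target

   # Value currently in effect in the running daemon
   $ ceph config show osd.0 osd_memory_target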
.. note::

   Changes made with ``ceph config set`` commands will persist for
   the life of the Ceph cluster. However, if the Ceph cluster is removed
   (e.g., deleted and recreated), these changes will be lost and will
   need to be reapplied once the cluster is redeployed.