The .mgr pool size is 1 for every system configuration.
With size=1, the OSD that holds the .mgr data cannot be stopped,
even when Rook needs to stop it for a management task.
The solution is to set the .mgr pool size according to the
storage-backend replication settings, as is already done for the
other pools.
That way, the new default replication size for .mgr will still be 1 in
SX, but 2 in DX and STD.
This was implemented by creating a CephBlockPool for .mgr, following
the upstream example:
https://github.com/rook/rook/blob/release-1.13/deploy/examples/pool-builtin-mgr.yaml.
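
For reference, a minimal sketch of the resulting pool definition,
modeled on the upstream example above (namespace and size are
illustrative; the actual size is derived from the storage-backend
replication):

  apiVersion: ceph.rook.io/v1
  kind: CephBlockPool
  metadata:
    # Kubernetes object names cannot contain the leading '.',
    # so the real pool name goes in spec.name
    name: builtin-mgr
    namespace: rook-ceph
  spec:
    name: .mgr
    failureDomain: host
    replicated:
      # illustrative: 2 for DX/STD; SX keeps 1
      size: 2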
This also fixes a bug when reapplying the app with the ecblock
service: the general StorageClass was being replaced by the one from
the static overrides, which was not supposed to happen.
This also fixes a problem in SX when increasing the storage-backend
replication. The crush rule kube-cephfs-metadata was created with the
default failureDomain=host; on a single host the replicas cannot be
spread across hosts, so the PGs were left unreplicated and
undersized.
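
A minimal sketch of the kind of setting involved, assuming the
filesystem pools are defined through Rook's CephFilesystem resource
(name, namespace, and sizes are illustrative):

  apiVersion: ceph.rook.io/v1
  kind: CephFilesystem
  metadata:
    name: kube-cephfs
    namespace: rook-ceph
  spec:
    metadataPool:
      # on SX the default failureDomain=host cannot place two
      # replicas on one host, so replicate across OSDs instead
      failureDomain: osd
      replicated:
        size: 2
    dataPools:
      - failureDomain: osd
        replicated:
          size: 2
    metadataServer:
      activeCount: 1
      activeStandby: true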
Test Plan:
- App fresh install
- App reapply
- storage-backend-modify replication 1 -> 2 (only for SX)
- storage-backend-modify replication 2 -> 3
- storage-backend-modify replication 3 -> 2
- storage-backend-modify replication 2 -> 1 (only for SX)
PASS: SX [service block, deployment controller]
PASS: SX [service ecblock, deployment open]
PASS: DX [service block, deployment controller]
PASS: DX [service ecblock, deployment open]
Closes-Bug: 2092531
Change-Id: Ia742da7d4dc0de8f04b1f3bb3b58f8efec635ed5