Merge "Rook Ceph Deployment Model Updates"

This commit is contained in:
Zuul 2024-11-21 15:35:12 +00:00 committed by Gerrit Code Review
commit c25ddf4e62
4 changed files with 472 additions and 180 deletions


@ -921,9 +921,9 @@ Optionally Configure PCI-SRIOV Interfaces
:end-before: end-config-controller-0-OS-k8s-sriov-dx
*******************************************************
Optional - Initialize a Ceph Persistent Storage Backend
*******************************************************
A persistent storage backend is required if your application requires |PVCs|.
@ -933,11 +933,19 @@ A persistent storage backend is required if your application requires |PVCs|.
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx
.. note::
Each deployment model enforces a different structure for the Rook Ceph
cluster and its integration with the platform.
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
.. note::
Host-based Ceph will be deprecated and removed in an upcoming release.
Adoption of Rook-Ceph is recommended for new deployments.
For host-based Ceph:
#. Initialize with add ceph backend:
@ -960,23 +968,32 @@ For host-based Ceph:
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-0
.. only:: starlingx
For Rook container-based Ceph:
#. Add Storage-Backend with Deployment Model.
.. code-block:: none
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller
~(keystone_admin)$ system storage-backend-list
+--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
| 45e3fedf-c386-4b8b-8405-882038dd7d13 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: controller replication: 2 |
| | | | | | | min_replication: 1 |
+--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
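The backend remains in the ``configuring-with-app`` state until the rook-ceph
application is applied later in this procedure. If you want to poll for the
state transition, a simple sketch:
.. code-block:: none
~(keystone_admin)$ watch -n 5 system storage-backend-list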
#. Set up a ``controllerfs ceph-float`` filesystem.
.. code-block:: none
~(keystone_admin)$ system controllerfs-add ceph-float=20
#. Set up a ``host-fs ceph`` filesystem on controller-0.
.. code-block:: none
~(keystone_admin)$ system host-fs-add controller-0 ceph=20
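To confirm the filesystems were created with the sizes requested above, you
can list them; a sketch using the standard listing commands:
.. code-block:: none
~(keystone_admin)$ system controllerfs-list
~(keystone_admin)$ system host-fs-list controller-0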
-------------------
@ -1441,18 +1458,13 @@ For host-based Ceph:
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-1
.. only:: starlingx
For Rook container-based Ceph:
#. Set up a ``host-fs ceph`` filesystem on controller-1.
.. code-block:: none
~(keystone_admin)$ system host-fs-add controller-1 ceph=20
-------------------
Unlock controller-1
@ -1460,7 +1472,7 @@ Unlock controller-1
Unlock controller-1 in order to bring it into service:
.. code-block:: none
system host-unlock controller-1
@ -1468,87 +1480,114 @@ Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
.. only:: starlingx
-----------------------------------------------------------------------------------------------
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
-----------------------------------------------------------------------------------------------
For Rook container-based Ceph:
On the active controller:
#. Check if the rook-ceph app is uploaded.
.. code-block:: none
~(keystone_admin)$ source /etc/platform/openrc
~(keystone_admin)$ system application-list
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
| application              | version   | manifest name                             | manifest file    | status   | progress  |
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
| cert-manager             | 24.09-79  | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
| dell-storage             | 24.09-25  | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
| nginx-ingress-controller | 24.09-64  | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
| oidc-auth-apps           | 24.09-59  | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
| platform-integ-apps      | 24.09-141 | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | applied  | completed |
| rook-ceph                | 24.09-48  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |
| snmp                     | 24.09-89  | snmp-fluxcd-manifests                     | fluxcd-manifests | applied  | completed |
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
#. List all the disks.
.. code-block:: none
~(keystone_admin)$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 7ce699f0-12dd-4416-ae43-00d3877450f7 | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB0e18230e-6a8780e1 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| bfb83b6f-61e2-4f9f-a87d-ecae938b7e78 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB144f1510-14f089fd | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 937cfabc-8447-4dbd-8ca3-062a46953023 | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB95057d1c-4ee605c2 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
~(keystone_admin)$ system host-disk-list controller-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 52c8e1b5-0551-4748-a7a0-27b9c028cf9d | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB9b565509-a2edaa2e | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 93020ce0-249e-4db3-b8c3-6c7e8f32713b | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VBa08ccbda-90190faa | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| dc0ec403-67f8-40bf-ada0-6fcae3ed76da | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB16244caf-ab36d36c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
#. Choose empty disks and provide hostname and uuid to finish |OSD|
configuration:
.. code-block:: none
~(keystone_admin)$ system host-stor-add controller-0 osd bfb83b6f-61e2-4f9f-a87d-ecae938b7e78
~(keystone_admin)$ system host-stor-add controller-1 osd 93020ce0-249e-4db3-b8c3-6c7e8f32713b
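To confirm the |OSDs| were configured on both controllers, you can list them;
a sketch (UUIDs and states will differ on your system):
.. code-block:: none
~(keystone_admin)$ system host-stor-list controller-0
~(keystone_admin)$ system host-stor-list controller-1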
#. Wait for the |OSD| pods to be ready.
.. code-block:: none
$ kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-w55rh 0/1 Completed 0 10m
csi-cephfsplugin-8j7xz 2/2 Running 1 (11m ago) 12m
csi-cephfsplugin-lmmg2 2/2 Running 0 12m
csi-cephfsplugin-provisioner-5467c6c4f-mktqg 5/5 Running 0 12m
csi-rbdplugin-8m8kd 2/2 Running 1 (11m ago) 12m
csi-rbdplugin-provisioner-fd84899c-kpv4q 5/5 Running 0 12m
csi-rbdplugin-z92sk 2/2 Running 0 12m
mon-float-post-install-sw8qb 0/1 Completed 0 6m5s
mon-float-pre-install-nfj5b 0/1 Completed 0 6m40s
rook-ceph-crashcollector-controller-0-589f5f774-sp6zf 1/1 Running 0 7m49s
rook-ceph-crashcollector-controller-1-68d66b9bff-zwgp9 1/1 Running 0 7m36s
rook-ceph-exporter-controller-0-5fd477bb8-jgsdk 1/1 Running 0 7m44s
rook-ceph-exporter-controller-1-6f5d8695b9-ndksh 1/1 Running 0 7m32s
rook-ceph-mds-kube-cephfs-a-5f584f4bc-tbk8q 2/2 Running 0 7m49s
rook-ceph-mgr-a-6845774cb5-lgjjd 3/3 Running 0 9m1s
rook-ceph-mgr-b-7fccfdf64d-4pcmc 3/3 Running 0 9m1s
rook-ceph-mon-a-69fd4895c7-2lfz4 2/2 Running 0 11m
rook-ceph-mon-b-7fd8cbb997-f84ng 2/2 Running 0 11m
rook-ceph-mon-float-85c4cbb7f9-k7xwj 2/2 Running 0 6m27s
rook-ceph-operator-69b5674578-z456r 1/1 Running 0 13m
rook-ceph-osd-0-5f59b5bb7b-mkwrg 2/2 Running 0 8m17s
rook-ceph-osd-prepare-controller-0-rhjgx 0/1 Completed 0 8m38s
rook-ceph-provision-5glpc 0/1 Completed 0 6m17s
rook-ceph-tools-7dc9678ccb-nmwwc 1/1 Running 0 12m
stx-ceph-manager-664f8585d8-5lt8c 1/1 Running 0 10m
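Instead of polling the pod list manually, you can block until the |OSD| pods
report Ready; a sketch assuming the default Rook pod label
``app=rook-ceph-osd``:
.. code-block:: none
$ kubectl -n rook-ceph wait pod --for=condition=Ready -l app=rook-ceph-osd --timeout=10m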
#. Check ceph cluster health.
.. code-block:: none
$ ceph -s
cluster:
id: c18dfe3a-9b72-46e4-bb6e-6984f131598f
health: HEALTH_OK
services:
mon: 2 daemons, quorum a,b (age 9m)
mgr: a(active, since 6m), standbys: b
mds: 1/1 daemons up, 1 hot standby
osd: 2 osds: 2 up (since 7m), 2 in (since 7m)
data:
volumes: 1/1 healthy
pools: 4 pools, 113 pgs
objects: 25 objects, 594 KiB
usage: 72 MiB used, 19 GiB / 20 GiB avail
pgs: 113 active+clean
io:
client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
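If the ``ceph`` client is not available in your shell, the same status can be
read from inside the toolbox pod; a sketch assuming the toolbox Deployment
name matches the ``rook-ceph-tools`` pod prefix shown above:
.. code-block:: none
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s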
.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest
@ -1568,10 +1607,8 @@ Optionally Extend Capacity with Worker Nodes
.. include:: /_includes/72hr-to-license.rest
Complete system configuration by reviewing procedures in:
- |index-security-kub-81153c1254c3|
- |index-sysconf-kub-78f0e1e9ca5a|
- |index-admintasks-kub-ebc55fefc368|


@ -841,9 +841,9 @@ Optionally Configure PCI-SRIOV Interfaces
:end-before: end-config-controller-0-OS-k8s-sriov-sx
************************************************************
Optional - Initialize a Ceph-rook Persistent Storage Backend
************************************************************
A persistent storage backend is required if your application requires
|PVCs|.
@ -854,11 +854,19 @@ A persistent storage backend is required if your application requires
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx
.. note::
Each deployment model enforces a different structure for the Rook Ceph
cluster and its integration with the platform.
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
.. note::
Host-based Ceph will be deprecated and removed in an upcoming release.
Adoption of Rook-Ceph is recommended for new deployments.
For host-based Ceph:
#. Add host-based Ceph backend:
@ -881,29 +889,80 @@ For host-based Ceph:
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-0
.. only:: starlingx
For Rook container-based Ceph:
#. Check if the rook-ceph app is uploaded.
.. code-block:: none
$ source /etc/platform/openrc
$ system application-list
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
| application              | version   | manifest name                             | manifest file    | status   | progress  |
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
| cert-manager             | 24.09-76  | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
| dell-storage             | 24.09-25  | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
| deployment-manager       | 24.09-13  | deployment-manager-fluxcd-manifests       | fluxcd-manifests | applied  | completed |
| nginx-ingress-controller | 24.09-57  | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
| oidc-auth-apps           | 24.09-53  | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
| platform-integ-apps      | 24.09-138 | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | uploaded | completed |
| rook-ceph                | 24.09-12  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
#. Add Storage-Backend with Deployment Model.
There are three deployment models: Controller, Dedicated, and Open.
For the simplex and duplex environments you can use the Controller and Open
configuration.
Controller (default)
|OSDs| must only be added to hosts with the controller personality.
Replication factor is limited to a maximum of 2.
Dedicated
|OSDs| must be added only to hosts with the worker personality.
The replication factor is limited to a maximum of 3.
This model aligns with existing Bare-metal Ceph use of dedicated storage
hosts in groups of 2 or 3.
Open
|OSDs| can be added to any host without limitations.
Replication factor has no limitations.
Application strategies for the controller deployment model:
Simplex
|OSDs|: Added to controller nodes.
Replication Factor: Default 1, maximum 2.
MON, MGR, MDS: Configured based on the number of hosts where the
``host-fs ceph`` is available.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
$ system storage-backend-list
+--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+
| a2452e47-4b2b-4a3a-a8f0-fb749d92d9cd | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block, | deployment_model: controller replication: |
| | | | | | filesystem | 1 min_replication: 1 |
+--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+
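To inspect the capabilities recorded for the new backend (deployment model,
replication, minimum replication), a sketch assuming the default backend name
shown in the listing above:
.. code-block:: none
$ system storage-backend-show ceph-rook-store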
#. Set up a ``host-fs ceph`` filesystem.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
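To confirm the filesystem was added, a listing sketch:
.. code-block:: none
$ system host-fs-list controller-0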
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
-------------------
Unlock controller-0
-------------------
@ -920,75 +979,79 @@ Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
.. incl-unlock-controller-0-aio-simplex-end:
.. only:: starlingx
-----------------------------------------------------------------------------------------------
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
-----------------------------------------------------------------------------------------------
On controller-0:
#. List all the disks.
.. code-block:: none
$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 17408af3-e211-4e2b-8cf1-d2b6687476d5 | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VBba52ec56-f68a9f2d | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| cee99187-dac4-4a7b-8e58-f2d5bd48dcaf | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VBf96fa322-597194da | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 0c6435af-805a-4a62-ad8e-403bf916f5cf | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VBeefed5ad-b4815f0d | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
#. Choose empty disks and provide hostname and uuid to finish |OSD|
configuration:
.. code-block:: none
$ system host-stor-add controller-0 osd cee99187-dac4-4a7b-8e58-f2d5bd48dcaf
#. Wait for the |OSD| pods to be ready.
.. code-block:: none
$ kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-78xjk 0/1 Completed 0 4m31s
csi-cephfsplugin-572jc 2/2 Running 0 5m32s
csi-cephfsplugin-provisioner-5467c6c4f-t8x8d 5/5 Running 0 5m28s
csi-rbdplugin-2npb6 2/2 Running 0 5m32s
csi-rbdplugin-provisioner-fd84899c-k8wcw 5/5 Running 0 5m32s
rook-ceph-crashcollector-controller-0-589f5f774-d8sjz 1/1 Running 0 3m24s
rook-ceph-exporter-controller-0-5fd477bb8-c7nxh 1/1 Running 0 3m21s
rook-ceph-mds-kube-cephfs-a-cc647757-6p9j5 2/2 Running 0 3m25s
rook-ceph-mds-kube-cephfs-b-5b5845ff59-xprbb 2/2 Running 0 3m19s
rook-ceph-mgr-a-746fc4dd54-t8bcw 2/2 Running 0 4m40s
rook-ceph-mon-a-b6c95db97-f5fqq 2/2 Running 0 4m56s
rook-ceph-operator-69b5674578-27bn4 1/1 Running 0 6m26s
rook-ceph-osd-0-7f5cd957b8-ppb99 2/2 Running 0 3m52s
rook-ceph-osd-prepare-controller-0-vzq2d 0/1 Completed 0 4m18s
rook-ceph-provision-zcs89 0/1 Completed 0 101s
rook-ceph-tools-7dc9678ccb-v2gps 1/1 Running 0 6m2s
stx-ceph-manager-664f8585d8-wzr4v 1/1 Running 0 4m31s
#. Check ceph cluster health.
.. code-block:: none
$ ceph -s
cluster:
id: 75c8f017-e7b8-4120-a9c1-06f38e1d1aa3
health: HEALTH_OK
services:
mon: 1 daemons, quorum a (age 32m)
mgr: a(active, since 30m)
mds: 1/1 daemons up, 1 hot standby
osd: 1 osds: 1 up (since 30m), 1 in (since 31m)
data:
volumes: 1/1 healthy
pools: 4 pools, 113 pgs
objects: 22 objects, 595 KiB
usage: 27 MiB used, 9.7 GiB / 9.8 GiB avail
pgs: 113 active+clean
io:
client: 852 B/s rd, 1 op/s rd, 0 op/s wr
.. incl-unlock-controller-0-aio-simplex-end:
.. only:: partner


@ -752,9 +752,9 @@ host machine.
.. incl-unlock-compute-nodes-virt-controller-storage-end:
----------------------------------------------------------------------------
If configuring host-based Ceph Storage Backend, Add Ceph OSDs to controllers
----------------------------------------------------------------------------
.. only:: starlingx
@ -826,3 +826,195 @@ Complete system configuration by reviewing procedures in:
- |index-security-kub-81153c1254c3|
- |index-sysconf-kub-78f0e1e9ca5a|
- |index-admintasks-kub-ebc55fefc368|
*******************************************************************
If configuring Rook Ceph Storage Backend, configure the environment
*******************************************************************
.. note::
Each deployment model enforces a different structure for the Rook Ceph
cluster and its integration with the platform.
#. Check if the rook-ceph app is uploaded.
.. code-block:: none
$ source /etc/platform/openrc
$ system application-list
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
| cert-manager | 24.09-76 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed |
| dell-storage | 24.09-25 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| deployment-manager | 24.09-13 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed |
| nginx-ingress-controller | 24.09-57 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed |
| oidc-auth-apps | 24.09-53 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| platform-integ-apps | 24.09-138 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| rook-ceph | 24.09-12 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
+--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
#. Add Storage-Backend with Deployment Model.
There are three deployment models: Controller, Dedicated, and Open.
For the simplex and duplex environments you can use the Controller and Open
configuration.
Controller (default)
|OSDs| must only be added to hosts with the controller personality.
Replication factor is limited to a maximum of 2.
This model aligns with the existing Bare-metal Ceph assignment of OSDs
to controllers.
Dedicated
|OSDs| must be added only to hosts with the worker personality.
The replication factor is limited to a maximum of 3.
This model aligns with existing Bare-metal Ceph use of dedicated storage
hosts in groups of 2 or 3.
Open
|OSDs| can be added to any host without limitations.
Replication factor has no limitations.
Application strategies for the controller deployment model:
Duplex, Duplex+ or Standard
|OSDs|: Added to controller nodes.
Replication Factor: Default 1, maximum 'Any'.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment open --confirmed
$ system storage-backend-list
+--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+
| 0dfef1f0-a5a4-4b20-a013-ef76e92bcd42 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: open replication: 2 |
| | | | | | | min_replication: 1 |
+--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+
#. Set up a ``host-fs ceph`` filesystem.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
$ system host-fs-add compute-0 ceph=20
#. List all the disks.
.. code-block:: none
$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 7f2b9ff5-b6ee-4eaf-a7eb-cecd3ba438fd | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB3e6c5449-c7224b07 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| fdaf3f71-a2df-4b40-9e70-335900f953a3 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB323207f8-b6b9d531 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| ced60373-0dbc-4bc7-9d03-657c1f92164a | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB49833b9d-a22a2455 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
$ system host-disk-list controller-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 119533a5-bc66-47e0-a448-f0561871989e | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VBb1b06a09-6137c63a | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB5fcf59a9-7c8a531b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 7351013f-8280-4ff3-88bd-76e88f14fa2f | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB0d1ce946-d0a172c4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
$ system host-disk-list compute-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 14245695-46df-43e8-b54b-9fb3c22ac359 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB8ac41a93-82275093 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 765d8dff-e584-4064-9c95-6ea3aa25473c | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB569d6dab-9ae3e6af | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| c9b4ed65-da32-4770-b901-60b56fd68c35 | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VBf88762a8-9aa3315c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
#. Choose empty disks and provide hostname and uuid to finish |OSD|
configuration:
.. code-block:: none
$ system host-stor-add controller-0 osd fdaf3f71-a2df-4b40-9e70-335900f953a3
$ system host-stor-add controller-1 osd 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251
$ system host-stor-add compute-0 osd c9b4ed65-da32-4770-b901-60b56fd68c35
#. Apply the rook-ceph application.
.. code-block:: none
$ system application-apply rook-ceph
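The apply runs in the background; you can monitor it until the status reaches
``applied``, a sketch:
.. code-block:: none
$ system application-show rook-ceph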
#. Wait for the |OSD| pods to be ready.
.. code-block:: none
$ kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-nh6dl 0/1 Completed 0 18h
csi-cephfsplugin-2nnwf 2/2 Running 10 (3h9m ago) 18h
csi-cephfsplugin-flbll 2/2 Running 14 (3h42m ago) 18h
csi-cephfsplugin-provisioner-5467c6c4f-98fxk 5/5 Running 5 (4h7m ago) 18h
csi-cephfsplugin-zzskz 2/2 Running 17 (168m ago) 18h
csi-rbdplugin-42ldl 2/2 Running 17 (168m ago) 18h
csi-rbdplugin-8xzxz 2/2 Running 14 (3h42m ago) 18h
csi-rbdplugin-b6dvk 2/2 Running 10 (3h9m ago) 18h
csi-rbdplugin-provisioner-fd84899c-6795x 5/5 Running 5 (4h7m ago) 18h
rook-ceph-crashcollector-compute-0-59f554f6fc-5s5cz 1/1 Running 0 4m19s
rook-ceph-crashcollector-controller-0-589f5f774-b2297 1/1 Running 0 3h2m
rook-ceph-crashcollector-controller-1-68d66b9bff-njrhg 1/1 Running 1 (4h7m ago) 18h
rook-ceph-exporter-compute-0-569b65cf6c-xhfjk 1/1 Running 0 4m14s
rook-ceph-exporter-controller-0-5fd477bb8-rzkqd 1/1 Running 0 3h2m
rook-ceph-exporter-controller-1-6f5d8695b9-772rb 1/1 Running 1 (4h7m ago) 18h
rook-ceph-mds-kube-cephfs-a-654c56d89d-mdklw 2/2 Running 11 (166m ago) 18h
rook-ceph-mds-kube-cephfs-b-6c498f5db4-5hbcj 2/2 Running 2 (166m ago) 3h2m
rook-ceph-mgr-a-5d6664f544-rgfpn 3/3 Running 9 (3h42m ago) 18h
rook-ceph-mgr-b-5c4cb984b9-cl4qq 3/3 Running 0 168m
rook-ceph-mgr-c-7d89b6cddb-j9hxp 3/3 Running 0 3h9m
rook-ceph-mon-a-6ffbf95cdf-cvw8r 2/2 Running 0 3h9m
rook-ceph-mon-b-5558b5ddc7-h7nhz 2/2 Running 2 (4h7m ago) 18h
rook-ceph-mon-c-6db9c888cb-mfxfh 2/2 Running 0 167m
rook-ceph-operator-69b5674578-k6k4j 1/1 Running 0 8m10s
rook-ceph-osd-0-dd94574ff-dvrrs 2/2 Running 2 (4h7m ago) 18h
rook-ceph-osd-1-5d7f598f8f-88t2j 2/2 Running 0 3h9m
rook-ceph-osd-2-6776d44476-sqnlj 2/2 Running 0 4m20s
rook-ceph-osd-prepare-compute-0-ls2xw 0/1 Completed 0 5m16s
rook-ceph-osd-prepare-controller-0-jk6bz 0/1 Completed 0 5m27s
rook-ceph-osd-prepare-controller-1-d845s 0/1 Completed 0 5m21s
rook-ceph-provision-vtvc4 0/1 Completed 0 17h
rook-ceph-tools-7dc9678ccb-srnd8 1/1 Running 1 (4h7m ago) 18h
stx-ceph-manager-664f8585d8-csl7p 1/1 Running 1 (4h7m ago) 18h
#. Check ceph cluster health.
.. code-block:: none
$ ceph -s
cluster:
id: 5b579aca-617f-4f2a-b059-73e7071111dc
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 2h)
mgr: a(active, since 2h), standbys: c, b
mds: 1/1 daemons up, 1 hot standby
osd: 3 osds: 3 up (since 82s), 3 in (since 2m)
data:
volumes: 1/1 healthy
pools: 4 pools, 113 pgs
objects: 26 objects, 648 KiB
usage: 129 MiB used, 29 GiB / 29 GiB avail
pgs: 110 active+clean
2 active+clean+scrubbing+deep
1 active+clean+scrubbing
io:
client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
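As a final sanity check, you can verify that the Ceph storage classes exist
and that a test |PVC| binds. This is a sketch only: the storage class name
below is an assumption, so substitute one reported by
``kubectl get storageclass`` on your system.
.. code-block:: none
$ kubectl get storageclass
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-ceph-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: general  # assumption: replace with a real class name
EOF
$ kubectl get pvc rook-ceph-test-pvc
$ kubectl delete pvc rook-ceph-test-pvc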