Merge "Install Rook Ceph documentation needs improvement (dsr10)"

This commit is contained in:
Zuul 2025-01-20 14:52:23 +00:00 committed by Gerrit Code Review
commit f3b9386efa


@@ -9,15 +9,16 @@ Install Rook Ceph
.. rubric:: |context|
Rook Ceph is an orchestrator that provides a containerized solution for Ceph
Storage with a specialized Kubernetes Operator to automate the management of
the cluster. It is an alternative solution to the bare metal Ceph storage. For
more details, see
https://rook.io/docs/rook/latest-release/Getting-Started/intro/.
.. rubric:: |prereq|
Before configuring the deployment model and services:
- Ensure that there is no ceph-store storage backend configured on the
  system.
@@ -35,24 +36,29 @@ Before configuring the deployment model and services.
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
- Create a ``host-fs ceph`` for each host that will use a Rook Ceph monitor
  (preferably an odd number of hosts).
.. code-block:: none
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
- On a Duplex system without workers, it is recommended to add a floating
  Ceph monitor. To add a floating monitor, first lock the inactive controller
  and create a ``controllerfs`` for the monitor.
.. code-block:: none
~(keystone_admin)$ system host-lock controller-1 (with controller-0 as the active controller)
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
.. note::

    The recommended minimum size for the ``host-fs`` and ``controllerfs``
    ceph filesystems is 20 GB.
- Configure |OSDs|.
- Check the |UUID| of the disks of the desired host that will use the
  |OSDs|.
.. code-block:: none
@@ -63,14 +69,14 @@ Before configuring the deployment model and services.
The |OSD| placement should follow the chosen deployment model
placement rules.
- Add the desired disks to the system as |OSDs| (preferably an even
  number of |OSDs|).
.. code-block:: none
~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
For more details on deployment models and services, see
:ref:`deployment-models-for-rook-ceph-b855bd0108cf`.
.. rubric:: |proc|
@@ -139,7 +145,7 @@ to check the Rook Ceph pods on the cluster.
Additional Enhancements
-----------------------
Add New OSDs on a Running Cluster
*********************************
To add new |OSDs| to the cluster, add the new |OSD| to the platform and
@@ -150,7 +156,7 @@ reapply the application.
~(keystone_admin)$ system host-stor-add <host> <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph
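After the reapply completes, a new ``rook-ceph-osd-<id>`` pod should appear in
the cluster. A minimal check, assuming the standard Rook label
``app=rook-ceph-osd`` on |OSD| pods:

.. code-block:: none

   kubectl get pod -n rook-ceph -l app=rook-ceph-osd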
Add a New Monitor on a Running Cluster
**************************************
To add a new monitor to the cluster, add the ``host-fs`` to the desired host
@@ -165,7 +171,7 @@ and reapply the application.
Enable the Ceph Dashboard
*************************
To enable the Ceph dashboard, a Helm override must be provided to the
application. Provide a password encoded in base64.
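For example, a base64-encoded password can be produced with any shell that
provides ``base64`` (the ``-n`` flag keeps the trailing newline out of the
encoded value):

.. code-block:: none

   echo -n '<password>' | base64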
.. rubric:: |proc|
@@ -208,7 +214,7 @@ application. Provide a password coded in base64.
You can access the dashboard using the following address: ``https://<floating_ip>:30443``.
Check Rook Ceph Pods
********************
You can check the pods of the storage cluster using the following command:
@@ -216,445 +222,3 @@ You can check the pods of the storage cluster using the following command:
.. code-block:: none
kubectl get pod -n rook-ceph
Installation on |AIO-SX| deployments
------------------------------------
For example, you can manually install the controller model with one monitor
and the block (RBD) and cephfs services on an |AIO-SX| deployment.
In this configuration, you add the monitor and |OSDs| on the |AIO-SX| node.
#. On a system with no bare metal Ceph storage backend, add a ceph-rook
   storage backend using the default services, block (RBD) and cephfs.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
#. Add the ``host-fs ceph`` on the controller. Here, the ``host-fs ceph`` is
   configured with 20 GB.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
#. To add |OSDs|, get the |UUID| of each disk and run the :command:`host-stor-add` command.
.. code-block:: none
$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|.
.. code-block:: none
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| state | configuring-with-app |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 0fb88b8b-a134-4754-988a-382c10123fbb |
| ihost_uuid | 57a7a41e-7805-406d-b204-2736adc8391d |
| idisk_uuid | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:19:41.335302+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 283359b5-d06f-4e73-a58f-e15f7ea41abd
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-with-app |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88 |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part2 |
| journal_node | /dev/sdc2 |
| uuid | 13baee21-daad-4266-bfdd-b549837d8b88 |
| ihost_uuid | 51d26b14-412d-4bf8-b2b0-2fba69026459 |
| idisk_uuid | 283359b5-d06f-4e73-a58f-e15f7ea41abd |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:18:28.107688+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the application. With a valid configuration of
   ``host-fs`` and |OSDs|, the application will be applied automatically.
.. code-block:: none
$ system application-show rook-ceph
# or
$ system application-list
#. After applying the application, the pod list of the ``rook-ceph`` namespace is as follows:
.. code-block:: none
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-c4g6s 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-rfgr4984-t653f 2/2 Running 0 11m
rook-ceph-osd-prepare-controller-0-8ge4z 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
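Once all pods are Running or Completed, you can optionally confirm cluster
health from the toolbox pod. A minimal sketch, assuming the standard Rook
toolbox deployment name ``rook-ceph-tools`` seen in the pod list above:

.. code-block:: none

   kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status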
Installation on |AIO-DX| deployments
------------------------------------
For example, you can manually install the controller model with three monitors
and the block (RBD) and cephfs services on an |AIO-DX| deployment.
In this configuration, you add monitors and |OSDs| on both |AIO-DX| controllers.
#. On a system with no bare metal Ceph storage backend, add a ceph-rook
   storage backend using the default services, block (RBD) and cephfs.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
#. Add the ``controllerfs`` ``ceph-float``, configured with 20 GB.
.. code-block:: none
$ system controllerfs-add ceph-float=20
#. Add the ``host-fs ceph`` on each controller. Here, each ``host-fs ceph`` is
   configured with 20 GB.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
#. To add |OSDs|, get the |UUID| of each disk and run the
:command:`host-stor-add` command.
.. code-block:: none
$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
$ system host-disk-list controller-1
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB86b2b09b- | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 32be8509 | |
| | | | | | | | | |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 62d4613b | |
| | | | | | | | | |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 3003aa5e | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|.
.. code-block:: none
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| state | configuring-with-app |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 0fb88b8b-a134-4754-988a-382c10123fbb |
| ihost_uuid | 57a7a41e-7805-406d-b204-2736adc8391d |
| idisk_uuid | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:19:41.335302+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
# system host-stor-add controller-1 #UUID
$ system host-stor-add controller-1 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-with-app |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88 |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 13baee21-daad-4266-bfdd-b549837d8b88 |
| ihost_uuid | 51d26b14-412d-4bf8-b2b0-2fba69026459 |
| idisk_uuid | 1e36945e-e0fb-4a72-9f96-290f9bf57523 |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:18:28.107688+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the application. With a valid configuration of
   monitors and |OSDs|, the application will be applied automatically.
.. code-block:: none
$ system application-show rook-ceph
# or
$ system application-list
#. After applying the application, the pod list of the ``rook-ceph`` namespace
   should be as follows:
.. code-block:: none
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-64z6c 2/2 Running 0 34m
csi-cephfsplugin-dhsqp 2/2 Running 2 (17m ago) 34m
csi-cephfsplugin-gch9g 2/2 Running 0 34m
csi-cephfsplugin-pkzg2 2/2 Running 0 34m
csi-cephfsplugin-provisioner-5467c6c4f-r2lp6 5/5 Running 0 22m
csi-rbdplugin-2vmzf 2/2 Running 2 (17m ago) 34m
csi-rbdplugin-6j69b 2/2 Running 0 34m
csi-rbdplugin-6j8jj 2/2 Running 0 34m
csi-rbdplugin-hwbl7 2/2 Running 0 34m
csi-rbdplugin-provisioner-fd84899c-wwbrz 5/5 Running 0 22m
mon-float-post-install-sw8qb 0/1 Completed 0 6m5s
mon-float-pre-install-nfj5b 0/1 Completed 0 6m40s
rook-ceph-crashcollector-controller-0-6f47c4c9f5-hbbnt 1/1 Running 0 33m
rook-ceph-crashcollector-controller-1-76585f8db8-cb4jl 1/1 Running 0 11m
rook-ceph-exporter-controller-0-c979d9977-kt7tx 1/1 Running 0 33m
rook-ceph-exporter-controller-1-86bc859c4-q4mxd 1/1 Running 0 11m
rook-ceph-mds-kube-cephfs-a-55978b78b9-dcbtf 2/2 Running 0 22m
rook-ceph-mds-kube-cephfs-b-7b8bf4549f-thr7g 2/2 Running 2 (12m ago) 33m
rook-ceph-mgr-a-649cf9c487-vfs65 3/3 Running 0 17m
rook-ceph-mgr-b-d54c5d7cb-qwtnm 3/3 Running 0 33m
rook-ceph-mon-a-5cc7d56767-64dbd 2/2 Running 0 6m30s
rook-ceph-mon-b-6cf5b79f7f-skrtd 2/2 Running 0 6m31s
rook-ceph-mon-float-85c4cbb7f9-k7xwj 2/2 Running 0 6m27s
rook-ceph-operator-69b5674578-lmmdl 1/1 Running 0 22m
rook-ceph-osd-0-847f6f7dd9-6xlln 2/2 Running 0 16m
rook-ceph-osd-1-7cc87df4c4-jlpk9 2/2 Running 0 33m
rook-ceph-osd-prepare-controller-0-4rcd6 0/1 Completed 0 22m
rook-ceph-tools-84659bcd67-r8qbp 1/1 Running 0 22m
stx-ceph-manager-689997b4f4-hk6gh 1/1 Running 0 22m
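To confirm that the floating monitor joined the quorum, you can query the
monitor status from the toolbox pod. A minimal sketch, assuming the standard
Rook toolbox deployment name ``rook-ceph-tools``:

.. code-block:: none

   kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mon stat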
Installation on Standard deployments
------------------------------------
For example, you can install the dedicated model with five monitors and the
ecblock and cephfs services on a Standard deployment.
In this configuration, you add monitors on five hosts and, to fit the
dedicated model, |OSDs| on workers only. Here, the compute-1 and compute-2
hosts are chosen to hold the cluster |OSDs|.
#. On a system with no bare metal Ceph storage backend, add a ceph-rook
   storage backend using the ecblock and cephfs services. To fit the
   dedicated model, the |OSDs| must be placed on dedicated workers only.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment dedicated --confirmed --services ecblock,filesystem
#. Add the ``host-fs ceph`` on every node that will host the ``mon``, ``mgr``,
   and ``mds`` pods. In this case, five hosts will have the ``host-fs ceph``
   configured.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
$ system host-fs-add compute-0 ceph=20
$ system host-fs-add compute-1 ceph=20
$ system host-fs-add compute-2 ceph=20
#. To add |OSDs|, get the |UUID| of each disk and run the
   :command:`host-stor-add` command.
.. code-block:: none
$ system host-disk-list compute-1
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
$ system host-disk-list compute-2
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB86b2b09b- | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 32be8509 | |
| | | | | | | | | |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 62d4613b | |
| | | | | | | | | |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 9.765 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 3003aa5e | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|. In this example, only one |OSD| is
   added on each of compute-1 and compute-2.
.. code-block:: none
# system host-stor-add compute-1 #UUID
$ system host-stor-add compute-1 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| state | configuring-with-app |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 0fb88b8b-a134-4754-988a-382c10123fbb |
| ihost_uuid | 57a7a41e-7805-406d-b204-2736adc8391d |
| idisk_uuid | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:19:41.335302+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
# system host-stor-add compute-2 #UUID
$ system host-stor-add compute-2 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-with-app |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88 |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 13baee21-daad-4266-bfdd-b549837d8b88 |
| ihost_uuid | 51d26b14-412d-4bf8-b2b0-2fba69026459 |
| idisk_uuid | 1e36945e-e0fb-4a72-9f96-290f9bf57523 |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:18:28.107688+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the application. With a valid configuration of
   ``host-fs`` and |OSDs|, the application will be applied automatically.
.. code-block:: none
$ system application-show rook-ceph
# or
$ system application-list
#. After the application is applied, the pod list of the ``rook-ceph``
   namespace should be as follows:
.. code-block:: none
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-c-12f4b58b1e-fzhk6 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-d-a6s4d6a8w4-4d64g 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mgr-b-wd12af64t4-dw62i 2/2 Running 0 11m
rook-ceph-mgr-c-s684gs86g4-62srg 2/2 Running 0 11m
rook-ceph-mgr-d-68r4864f64-8a4a6 2/2 Running 0 11m
rook-ceph-mgr-e-as5d4we6f4-6aef4 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-c4g6s 2/2 Running 0 11m
rook-ceph-mon-b-464fc6e8a3-fd864 2/2 Running 0 11m
rook-ceph-mon-c-468fc68e4c-6w8sa 2/2 Running 0 11m
rook-ceph-mon-d-8fc5686c4d-5v1w6 2/2 Running 0 11m
rook-ceph-mon-e-21f3c12e3a-6s7qq 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz 2/2 Running 0 11m
rook-ceph-osd-prepare-compute-1-8ge4z 0/1 Completed 0 11m
rook-ceph-osd-prepare-compute-2-s32sz 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
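To confirm that the |OSDs| were placed on the workers only, you can inspect
the |OSD| tree from the toolbox pod. A minimal sketch, assuming the standard
Rook toolbox deployment name ``rook-ceph-tools``:

.. code-block:: none

   kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree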