New Platform App - app-rook-ceph

Create documentation for rook-ceph install, removal and deployment models.

Story: 2011066
Task: 50934

Change-Id: I137d3251078d5868cd2515a617afc5859858b4ac
Signed-off-by: Elisamara Aoki Goncalves <elisamaraaoki.goncalves@windriver.com>
Elisamara Aoki Goncalves 2024-08-23 19:06:18 +00:00
parent ea3006090b
commit 35370299e6
4 changed files with 984 additions and 0 deletions


@ -0,0 +1,219 @@
.. WARNING: Add no lines of text between the label immediately following
.. and the title.
.. _deployment-models-for-rook-ceph-b855bd0108cf:
============================================
Deployment Models and Services for Rook Ceph
============================================
The deployment model is the topology strategy that defines the storage backend
capabilities of the deployment. It dictates what the storage solution will look
like by defining rules for the placement of the storage cluster elements.
Available Deployment Models
---------------------------
Each deployment model works with different deployment strategies and rules to
fit different needs. Choose one of the following models according to the
demands of your cluster:
Controller Model (default)
- The |OSDs| must be added only in hosts with controller personality.
- The replication factor can be configured up to size 3.
- Can swap into Open Model.
Dedicated Model
- The |OSDs| must be added only in hosts with worker personality.
- The replication factor can be configured up to size 3.
- Can swap into Open Model.
Open Model
- The |OSD| placement does not have any limitation.
- The replication factor does not have any limitation.
- Can swap into the Controller or Dedicated Model if the placement
  requirements are satisfied.
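The deployment model is selected with the ``--deployment`` option when the
Rook Ceph storage backend is created, as shown in the installation procedure.
For example, to use the default Controller Model:
.. code-block:: none
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed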
Replication Factor
------------------
The replication factor is the number of copies that each piece of data has
spread across the cluster to provide redundancy.
You can change the replication of an existing Rook Ceph storage backend with
the following command:
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=<size>
The possible replication factors for each deployment model and platform are:
Simplex Controller Model:
Default: 1
Max: 3
Simplex Open Model:
Default: 1
Max: Any
Duplex Controller Model:
Default: 2
Max: 3
Duplex Open Model:
Default: 1
Max: Any
Duplex+ or Standard Controller Model:
Default: 2
Max: 3
Duplex+ or Standard Dedicated Model:
Default: 2
Max: 3
Duplex+ or Standard Open Model:
Default: 2
Max: Any
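For example, to change an existing backend to a replication factor of 3
(valid for any model that allows a maximum of 3 or more):
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=3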
Minimum Replication Factor
**************************
The minimum replication factor is the least number of copies that each piece of
data has spread across the cluster to provide redundancy.
You can assign any number smaller than the replication factor to this
parameter. The default value is replication - 1.
You can change the minimum replication of an existing Rook Ceph storage backend
with the command:
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=<size>
Monitor Count
*************
Monitors (mons) are allocated on all hosts that have a ``host-fs ceph``
with the monitor capability on it.
When a host has no |OSD| registered on the platform, add the ``host-fs ceph``
on every node intended to house a monitor with the command:
.. code-block:: none
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
When there are |OSDs| registered on a host, add the monitor function
to the existing ``host-fs``:
.. code-block:: none
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd,monitor
Possible Monitor Count on Deployment Models for Platforms
*********************************************************
Simplex:
Min: 1
Max: 1
Duplex:
Min: 1
Recommended: 3 (using floating monitor)
Max: 3 (using floating monitor)
Duplex+ or Standard:
Min: 1
Recommended: 3
Max: Any
Floating Monitor (only in Duplex)
*********************************
A floating monitor is possible and recommended on Duplex platforms. The monitor
roams and is always allocated on the active controller.
To add the floating monitor:
.. note::
You should lock the inactive controller before adding the ``controllerfs
ceph-float`` to the platform.
.. code-block:: none
~(keystone_admin)$ system host-lock controller-1 (with controller-0 as the active controller)
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
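After the ``controllerfs ceph-float`` is added, unlock the standby controller so
that it returns to service:
.. code-block:: none
~(keystone_admin)$ system host-unlock controller-1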
Host-fs and controller-fs
-------------------------
To properly set the environment for Rook Ceph, some filesystems are needed.
.. note::
All changes to ``host-fs`` and ``controller-fs`` require a reapply of the
application to properly propagate the modifications to the Rook Ceph
cluster.
Functions
*********
The functions parameter contains the Ceph cluster functions of a given host. A
``host-fs`` can have the monitor and osd functions; a ``controller-fs`` can only
have the monitor function.
To modify the functions of a ``host-fs``, the complete list of desired functions
must be provided.
Examples:
.. code-block:: none
#(only monitor)
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=monitor
#(only osd)
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd
#(no function)
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=
#(only monitor)
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=monitor
#(no function)
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
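As noted above, reapply the application after any function change so the
modification is propagated to the Rook Ceph cluster:
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph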
Services
--------
Services are the storage types (or classes) that provide storage to each pod
through a mount or a storage space allocation.
Available Services
******************
There are four services compatible with Rook Ceph. You can combine them
according to the following rules:
``block`` (default)
- Cannot be deployed together with ``ecblock``.
- Enables the block service in Rook, using Ceph RBD.
``ecblock``
- Cannot be deployed together with ``block``.
- Enables the ecblock service in Rook, using Ceph RBD.
``filesystem`` (default)
- Enables the Ceph filesystem service, using CephFS.
``object``
- Enables the Ceph object store (RGW).
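Services are selected with the ``--services`` option of the
:command:`storage-backend-add` command, shown in the installation examples.
For example, a backend using ecblock, filesystem, and object could be created
as follows (the service combination here is illustrative):
.. code-block:: none
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment open --confirmed --services ecblock,filesystem,object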


@ -131,6 +131,19 @@ Configure Ceph OSDs on a Host
replace-osds-on-an-aio-sx-single-disk-system-without-backup-951eefebd1f2
replace-osds-on-an-aio-sx-single-disk-system-with-backup-770c9324f372
---------------------
Rook Ceph Application
---------------------
.. toctree::
:maxdepth: 1
install-rook-ceph-a7926a1f9b70
uninstall-rook-ceph-cbb046746782
deployment-models-for-rook-ceph-b855bd0108cf
-------------------------
Persistent Volume Support
-------------------------


@ -0,0 +1,637 @@
.. WARNING: Add no lines of text between the label immediately following
.. and the title.
.. _install-rook-ceph-a7926a1f9b70:
=================
Install Rook Ceph
=================
.. rubric:: |context|
Rook Ceph is an orchestrator that provides a containerized solution for Ceph
Storage, with a specialized Kubernetes Operator to automate the management of
the cluster. It is an alternative to the bare metal Ceph storage. See
https://rook.io/docs/rook/latest-release/Getting-Started/intro/ for more
details.
.. rubric:: |prereq|
Before configuring the deployment model and services:
- Verify that there is no ceph-store storage backend configured on the
  system:
.. code-block:: none
~(keystone_admin)$ system storage-backend-list
- Create a storage backend for Rook Ceph, choose your deployment model
(controller, dedicated, open), and the desired services (block or ecblock,
filesystem, object):
.. code-block:: none
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
- Create a ``host-fs ceph`` for each host that will house a Rook Ceph monitor
  (preferably an odd number of hosts):
.. code-block:: none
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
- For DX platforms, adding a floating monitor is recommended. To add a
floating monitor, the inactive controller should be locked:
.. code-block:: none
~(keystone_admin)$ system host-lock controller-1 (with controller-0 as the active controller)
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
- Configure |OSDs|.
- Check the uuid of the disks of the desired host that will house the
|OSDs|:
.. code-block:: none
~(keystone_admin)$ system host-disk-list <hostname>
.. note::
The |OSD| placement should follow the chosen deployment model
placement rules.
- Add the desired disks to the system as |OSDs| (preferably an even
  number of |OSDs|):
.. code-block:: none
~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
For more details on deployment models and services, see
:ref:`deployment-models-for-rook-ceph-b855bd0108cf`.
.. rubric:: |proc|
After the environment is correctly configured according to the chosen
deployment model, Rook Ceph installs automatically.
A few minutes after the application is applied, check the health of the cluster
using any Ceph command, for example :command:`ceph status`:
.. code-block:: none
~(keystone_admin)$ ceph -s
e.g. (STD with 3 mon and 12 OSDs):
~(keystone_admin)$ ceph -s
cluster:
id: 5c8eb4ff-ba21-40f4-91ed-68effc47a08b
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 2d)
mgr: c(active, since 5d), standbys: a, b
mds: 1/1 daemons up, 1 hot standby
osd: 12 osds: 12 up (since 5d), 12 in (since 5d)
data:
volumes: 1/1 healthy
pools: 4 pools, 81 pgs
objects: 133 objects, 353 MiB
usage: 3.8 GiB used, 5.7 TiB / 5.7 TiB avail
pgs: 81 active+clean
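Other standard Ceph commands can also be used to inspect the cluster, for
example:
.. code-block:: none
~(keystone_admin)$ ceph health detail
~(keystone_admin)$ ceph osd tree
~(keystone_admin)$ ceph df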
Check if the cluster contains all the desired elements. All pods should be
running or completed for the cluster to be considered healthy. You can see the
Rook Ceph pods with:
.. code-block:: none
~(keystone_admin)$ kubectl get pod -n rook-ceph
e.g. (SX with 1 mon and 2 OSDs):
~(keystone_admin)$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-67bd9fcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-76847477bf-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6984b58b79-fzhk6 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-5vmg9 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz 2/2 Running 0 11m
rook-ceph-osd-prepare-controller-0-s4bzz 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
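To confirm that the cluster can serve storage to workloads, you can list the
storage classes created by the application and create a small test claim. The
manifest below is a minimal sketch; the claim name is arbitrary, and
``<storage-class-name>`` must be replaced with one of the classes reported by
:command:`kubectl get storageclass`.
.. code-block:: none
~(keystone_admin)$ kubectl get storageclass
~(keystone_admin)$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rook-ceph-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: <storage-class-name>
EOF
~(keystone_admin)$ kubectl get pvc rook-ceph-test-claim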
Additional Features and Procedures
----------------------------------
Add New OSDs on a Running Cluster
*********************************
To add new |OSDs| to the cluster, add the new |OSD| to the platform and
re-apply the application.
.. code-block:: none
~(keystone_admin)$ system host-stor-add <host> <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph
Add New Monitor on a Running Cluster
************************************
To add a new monitor to the cluster, add the ``host-fs`` to the desired host
and re-apply the application.
.. code-block:: none
~(keystone_admin)$ system host-fs-add <host> ceph=<size>
~(keystone_admin)$ system application-apply rook-ceph
Enable Ceph Dashboard
*********************
To enable the Ceph Dashboard, a Helm override must be provided before the
application is applied. You should provide a password encoded in base64.
Create the override file:
.. code-block:: none
$ openssl base64 -e <<< "my_dashboard_passwd"
bXlfZGFzaGJvYXJkX3Bhc3N3ZAo=
$ cat << EOF >> dashboard-override.yaml
cephClusterSpec:
dashboard:
enabled: true
password: "bXlfZGFzaGJvYXJkX3Bhc3N3ZAo="
EOF
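Then apply the override and reapply the application. The chart name
(``rook-ceph-cluster``) and namespace used here are assumptions; confirm the
actual values with :command:`system helm-override-list rook-ceph --long`.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values dashboard-override.yaml rook-ceph rook-ceph-cluster rook-ceph
~(keystone_admin)$ system application-apply rook-ceph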
Check Rook Ceph Pods
********************
You can check the pods of the storage cluster by running the following command:
.. code-block:: none
kubectl get pod -n rook-ceph
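You can also run Ceph commands from inside the Rook toolbox pod
(``rook-ceph-tools``, shown in the pod listings above), for example:
.. code-block:: none
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status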
Installation on Simplex with controller model, 1 monitor, installing manually, services: block and cephfs
----------------------------------------------------------------------------------------------------------
In this configuration, you can add monitors and |OSDs| on the Simplex node.
#. On a system with no bare metal Ceph storage backend on it, add a ceph-rook
   storage backend. Use block (RBD) and cephfs (the default options, so there is
   no need to specify them with arguments).
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
#. Add the ``host-fs ceph`` on the controller; here the ``host-fs ceph`` is
   configured with 10 GB.
.. code-block:: none
$ system host-fs-add controller-0 ceph=10
#. To add |OSDs|, get the |UUID| of each disk to feed the
:command:`host-stor-add` command:
.. code-block:: none
$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|:
.. code-block:: none
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| state | configuring-with-app |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 0fb88b8b-a134-4754-988a-382c10123fbb |
| ihost_uuid | 57a7a41e-7805-406d-b204-2736adc8391d |
| idisk_uuid | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:19:41.335302+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 283359b5-d06f-4e73-a58f-e15f7ea41abd
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-with-app |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88 |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part2 |
| journal_node | /dev/sdc2 |
| uuid | 13baee21-daad-4266-bfdd-b549837d8b88 |
| ihost_uuid | 51d26b14-412d-4bf8-b2b0-2fba69026459 |
| idisk_uuid | 283359b5-d06f-4e73-a58f-e15f7ea41abd |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:18:28.107688+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the app. With a valid configuration of ``host-fs``
and |OSDs|, the app will apply automatically.
.. code-block:: none
$ system application-show rook-ceph
#or
$ system application-list
#. After the app is applied the pod list of the namespace rook-ceph should
look like this:
.. code-block:: none
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-c4g6s 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-rfgr4984-t653f 2/2 Running 0 11m
rook-ceph-osd-prepare-controller-0-8ge4z 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
Installation on Duplex with controller model, 3 monitors, installing manually, services: block and cephfs
---------------------------------------------------------------------------------------------------------
In this configuration, you can add monitors and |OSDs| on the Duplex nodes.
#. On a system with no bare metal Ceph storage backend on it, add a ceph-rook
   storage backend. Use block (RBD) and cephfs (the default options, so there is
   no need to specify them with arguments).
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
#. Add the ``controllerfs ceph-float``, configured with the desired size (for example, 10 GB).
.. code-block:: none
$ system controllerfs-add ceph-float=<size>
#. Add the ``host-fs ceph`` on each controller; here the ``host-fs ceph`` is
   configured with 10 GB.
.. code-block:: none
$ system host-fs-add controller-0 ceph=10
$ system host-fs-add controller-1 ceph=10
#. To add |OSDs|, get the |UUID| of each disk to feed the
:command:`host-stor-add` command.
.. code-block:: none
$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
$ system host-disk-list controller-1
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB86b2b09b- | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 32be8509 | |
| | | | | | | | | |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 62d4613b | |
| | | | | | | | | |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 3003aa5e | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|:
.. code-block:: none
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| state | configuring-with-app |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 0fb88b8b-a134-4754-988a-382c10123fbb |
| ihost_uuid | 57a7a41e-7805-406d-b204-2736adc8391d |
| idisk_uuid | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:19:41.335302+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
# system host-stor-add controller-1 #UUID
$ system host-stor-add controller-1 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-with-app |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88 |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 13baee21-daad-4266-bfdd-b549837d8b88 |
| ihost_uuid | 51d26b14-412d-4bf8-b2b0-2fba69026459 |
| idisk_uuid | 1e36945e-e0fb-4a72-9f96-290f9bf57523 |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:18:28.107688+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the app. With a valid configuration of monitors and
|OSDs|, the app will apply automatically.
.. code-block:: none
$ system application-show rook-ceph
#or
$ system application-list
#. After the app is applied the pod list of the namespace ``rook-ceph`` should
look like this:
.. code-block:: none
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-64z6c 2/2 Running 0 34m
csi-cephfsplugin-dhsqp 2/2 Running 2 (17m ago) 34m
csi-cephfsplugin-gch9g 2/2 Running 0 34m
csi-cephfsplugin-pkzg2 2/2 Running 0 34m
csi-cephfsplugin-provisioner-5467c6c4f-r2lp6 5/5 Running 0 22m
csi-rbdplugin-2vmzf 2/2 Running 2 (17m ago) 34m
csi-rbdplugin-6j69b 2/2 Running 0 34m
csi-rbdplugin-6j8jj 2/2 Running 0 34m
csi-rbdplugin-hwbl7 2/2 Running 0 34m
csi-rbdplugin-provisioner-fd84899c-wwbrz 5/5 Running 0 22m
mon-float-post-install-sw8qb 0/1 Completed 0 6m5s
mon-float-pre-install-nfj5b 0/1 Completed 0 6m40s
rook-ceph-crashcollector-controller-0-6f47c4c9f5-hbbnt 1/1 Running 0 33m
rook-ceph-crashcollector-controller-1-76585f8db8-cb4jl 1/1 Running 0 11m
rook-ceph-exporter-controller-0-c979d9977-kt7tx 1/1 Running 0 33m
rook-ceph-exporter-controller-1-86bc859c4-q4mxd 1/1 Running 0 11m
rook-ceph-mds-kube-cephfs-a-55978b78b9-dcbtf 2/2 Running 0 22m
rook-ceph-mds-kube-cephfs-b-7b8bf4549f-thr7g 2/2 Running 2 (12m ago) 33m
rook-ceph-mgr-a-649cf9c487-vfs65 3/3 Running 0 17m
rook-ceph-mgr-b-d54c5d7cb-qwtnm 3/3 Running 0 33m
rook-ceph-mon-a-5cc7d56767-64dbd 2/2 Running 0 6m30s
rook-ceph-mon-b-6cf5b79f7f-skrtd 2/2 Running 0 6m31s
rook-ceph-mon-float-85c4cbb7f9-k7xwj 2/2 Running 0 6m27s
rook-ceph-operator-69b5674578-lmmdl 1/1 Running 0 22m
rook-ceph-osd-0-847f6f7dd9-6xlln 2/2 Running 0 16m
rook-ceph-osd-1-7cc87df4c4-jlpk9 2/2 Running 0 33m
rook-ceph-osd-prepare-controller-0-4rcd6 0/1 Completed 0 22m
rook-ceph-tools-84659bcd67-r8qbp 1/1 Running 0 22m
stx-ceph-manager-689997b4f4-hk6gh 1/1 Running 0 22m
Installation on Standard with dedicated model, 5 monitors, services: ecblock and cephfs
---------------------------------------------------------------------------------------
In this configuration, you can add monitors on 5 hosts and, to fit this
deployment in the dedicated model, |OSDs| will be added on workers only.
The hosts compute-1 and compute-2 were chosen to house the cluster |OSDs|.
#. On a system with no bare metal Ceph storage backend on it, add a ceph-rook
   storage backend. To fit in the dedicated model, the |OSDs| must be placed
   on dedicated workers only. This example uses ``ecblock`` instead of block
   (|RBD|), together with cephfs.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment dedicated --confirmed --services ecblock,filesystem
#. Add all the ``host-fs`` on the nodes that will house mon, mgr and mds. In
this particular case, 5 hosts will have the ``host-fs ceph`` configured.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
$ system host-fs-add compute-0 ceph=20
$ system host-fs-add compute-1 ceph=20
$ system host-fs-add compute-2 ceph=20
#. To add |OSDs|, get the |UUID| of each disk to feed the
   :command:`host-stor-add` command.
.. code-block:: none
$ system host-disk-list compute-1
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VBfb16ffca-2826118 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 9 | |
| | | | | | | | | |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB92c5f4e7-c1884d9 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 9 | |
| | | | | | | | | |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB4390bf35-c0758bd | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 4 | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
$ system host-disk-list compute-2
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| uuid | device_no | device_ | device_ | size_ | available_ | rpm | serial_id | device_path |
| | de | num | type | gib | gib | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB86b2b09b- | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| | | | | 968 | | | 32be8509 | |
| | | | | | | | | |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VBf454c46a- | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| | | | | | | | 62d4613b | |
| | | | | | | | | |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc | 2080 | HDD | 9.765 | 0.0 | Undetermined | VB5d1b89fd- | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
| | | | | | | | 3003aa5e | |
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|; here, for the sake of simplicity, only
   one |OSD| each will be added on compute-1 and compute-2:
.. code-block:: none
# system host-stor-add compute-1 #UUID
$ system host-stor-add compute-1 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| state | configuring-with-app |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 0fb88b8b-a134-4754-988a-382c10123fbb |
| ihost_uuid | 57a7a41e-7805-406d-b204-2736adc8391d |
| idisk_uuid | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:19:41.335302+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
# system host-stor-add compute-2 #UUID
$ system host-stor-add compute-2 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 1 |
| function | osd |
| state | configuring-with-app |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88 |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 13baee21-daad-4266-bfdd-b549837d8b88 |
| ihost_uuid | 51d26b14-412d-4bf8-b2b0-2fba69026459 |
| idisk_uuid | 1e36945e-e0fb-4a72-9f96-290f9bf57523 |
| tier_uuid | 23091432-bf36-4fc3-a314-72b70265e7b0 |
| tier_name | storage |
| created_at | 2024-06-24T14:18:28.107688+00:00 |
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the app. With a valid configuration of ``host-fs``
and |OSDs|, the app will apply automatically.
.. code-block:: none
$ system application-show rook-ceph
#or
$ system application-list
#. After the app is applied the pod list of the namespace ``rook-ceph`` should
look like this:
.. code-block:: none
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-c-12f4b58b1e-fzhk6 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-d-a6s4d6a8w4-4d64g 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mgr-b-wd12af64t4-dw62i 2/2 Running 0 11m
rook-ceph-mgr-c-s684gs86g4-62srg 2/2 Running 0 11m
rook-ceph-mgr-d-68r4864f64-8a4a6 2/2 Running 0 11m
rook-ceph-mgr-e-as5d4we6f4-6aef4 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-c4g6s 2/2 Running 0 11m
rook-ceph-mon-b-464fc6e8a3-fd864 2/2 Running 0 11m
rook-ceph-mon-c-468fc68e4c-6w8sa 2/2 Running 0 11m
rook-ceph-mon-d-8fc5686c4d-5v1w6 2/2 Running 0 11m
rook-ceph-mon-e-21f3c12e3a-6s7qq 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz 2/2 Running 0 11m
rook-ceph-osd-prepare-compute-1-8ge4z 0/1 Completed 0 11m
rook-ceph-osd-prepare-compute-2-s32sz 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s


@ -0,0 +1,115 @@
.. WARNING: Add no lines of text between the label immediately following
.. and the title.
.. _uninstall-rook-ceph-cbb046746782:
===================
Uninstall Rook Ceph
===================
.. rubric:: |context|
To completely remove Rook Ceph, you must remove the app and clear all the
environment configurations to prevent an automatic reinstall.
.. rubric:: |proc|
#. Remove the application by running the script:
.. code-block:: none
source /etc/platform/openrc
system application-remove rook-ceph --force
# Wait until the application returns to the "uploaded" state, then delete it.
retry_count=1
retries=200
while [ $retry_count -le $retries ]; do
    rookstatus=$(system application-list | grep rook-ceph | awk '{print $10}')
    echo $rookstatus
    if [[ "$rookstatus" == "uploaded" ]]; then
        system application-delete rook-ceph --force
        break
    fi
    echo "Retry #" $retry_count
    let retry_count++
    sleep 5
done
#. Remove the environment configurations completely.
#. Remove |OSDs|.
#. Lock the host.
.. code-block:: none
~(keystone_admin)$ system host-lock <hostname>
#. List all |OSDs| to get the uuid of each |OSD|.
.. code-block:: none
~(keystone_admin)$ system host-stor-list <hostname>
#. Remove each |OSD| using the uuid of all |OSDs|.
.. code-block:: none
~(keystone_admin)$ system host-stor-delete <uuid>
#. Remove the storage backend ceph-rook.
.. code-block:: none
~(keystone_admin)$ system storage-backend-delete ceph-rook-store --force
#. Remove ``host-fs``.
#. Check ``host-fs`` status.
.. code-block:: none
~(keystone_admin)$ system host-fs-list <hostname>
#. To remove a ``host-fs``, the filesystem needs to be in Ready state.
To release an In-Use ``host-fs``, remove all functions from it and
reapply the application:
.. code-block:: none
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=
~(keystone_admin)$ system application-apply rook-ceph
#. When the ``host-fs`` is in Ready state, remove the ``host-fs``:
.. code-block:: none
~(keystone_admin)$ system host-fs-delete <hostname> ceph
#. (|AIO-DX| Only) Remove ``controller-fs``.
#. Check ``controller-fs`` status.
.. code-block:: none
~(keystone_admin)$ system controllerfs-list
#. To remove a ``controller-fs``, the standby controller must be
locked and the ``controller-fs`` needs to be in Ready state.
.. code-block:: none
~(keystone_admin)$ system host-lock <hostname>
#. To release an In-Use ``controller-fs``, remove all functions from it
and reapply the application.
.. code-block:: none
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
~(keystone_admin)$ system application-apply rook-ceph
#. When the ``controller-fs`` is in Ready state, remove the
   ``controller-fs``.
.. code-block:: none
~(keystone_admin)$ system controllerfs-delete ceph-float
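After completing the removal, you can confirm that the application and its pods
are gone:
.. code-block:: none
~(keystone_admin)$ system application-list
~(keystone_admin)$ kubectl get pod -n rook-ceph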