Merge "Migration to ceph-csi for RBD/CephFS provisioners"

Zuul 2022-12-19 15:46:44 +00:00 committed by Gerrit Code Review
commit b1e67a3fc3
6 changed files with 357 additions and 315 deletions

View File

@ -37,14 +37,13 @@ utilized by a specific namespace.
.. code-block:: none
~(keystone_admin)$ system helm-override-list platform-integ-apps
~(keystone_admin)]$ system helm-override-list platform-integ-apps
+--------------------+----------------------+
| chart name | overrides namespaces |
+--------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| cephfs-provisioner | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
| ceph-pools-audit | ['kube-system'] |
| cephfs-provisioner | ['kube-system'] |
| rbd-provisioner | ['kube-system'] |
+--------------------+----------------------+
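If the overrides list looks empty or stale, it can help to first confirm that platform-integ-apps is fully applied; a minimal check (output abbreviated):

.. code-block:: none

~(keystone_admin)]$ system application-show platform-integ-apps | grep -E 'app_version|status'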
#. Review existing overrides for the rbd-provisioner chart. You will refer
@ -57,10 +56,11 @@ utilized by a specific namespace.
#. Create an overrides yaml file defining the new namespaces.
In this example we will create the file
/home/sysadmin/update-namespaces.yaml with the following content:
``/home/sysadmin/update-storageclass.yaml`` with the following content:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-storageclass.yaml
classes:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
@ -78,13 +78,13 @@ utilized by a specific namespace.
replication: 1
userId: ceph-pool-new-sc-app
userSecretName: ceph-pool-new-sc-app
EOF
#. Apply the overrides file to the chart.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml \
platform-integ-apps rbd-provisioner
~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/update-storageclass.yaml platform-integ-apps rbd-provisioner kube-system
+----------------+-----------------------------------------+
| Property | Value |
+----------------+-----------------------------------------+
@ -121,41 +121,42 @@ utilized by a specific namespace.
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+-----------------------------------------+
| Property | Value |
+--------------------+-----------------------------------------+
| combined_overrides | ... |
| | |
| name | |
| namespace | |
| system_overrides | ... |
| | |
| | |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | - additionalNamespaces: |
| | - new-sc-app |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: special-storage-class |
| | pool_name: new-sc-app-pool |
| | replication: 1 |
| | userId: ceph-pool-new-sc-app |
| | userSecretName: ceph-pool-new-sc-app |
+--------------------+-----------------------------------------+
~(keystone_admin)]$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+------------------------------------------------------+
| Property | Value |
+--------------------+------------------------------------------------------+
| attributes | enabled: true |
| | |
| combined_overrides | ... |
| | |
| name | |
| namespace | |
| system_overrides | ... |
| | |
| user_overrides | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | - new-app |
| | - new-app2 |
| | - new-app3 |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | - additionalNamespaces: |
| | - new-sc-app |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: special-storage-class |
| | pool_name: new-sc-app-pool |
| | replication: 1 |
| | userId: ceph-pool-new-sc-app |
| | userSecretName: ceph-pool-new-sc-app |
+--------------------+------------------------------------------------------+
#. Apply the overrides.
@ -163,33 +164,31 @@ utilized by a specific namespace.
.. code-block:: none
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-5 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2019-05-26T22:50:54.168114+00:00 |
+---------------+----------------------------------+
~(keystone_admin)]$ system application-apply platform-integ-apps
+---------------+--------------------------------------+
| Property | Value |
+---------------+--------------------------------------+
| active | True |
| app_version | 1.0-62 |
| created_at | 2022-12-14T04:14:08.878186+00:00 |
| manifest_file | fluxcd-manifests |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2022-12-14T04:45:09.204231+00:00 |
+---------------+--------------------------------------+
#. Monitor progress using the :command:`application-list` command.
.. code-block:: none
~(keystone_admin)$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-8 | platform- | manifest.yaml | applied | completed |
| integ-apps | | integration- | | | |
| | | manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
~(keystone_admin)]$ system application-list
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
| platform-integ-apps | 1.0-62 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
You can now create and mount persistent volumes from the new |RBD|
provisioner's **special** storage class from within the **new-sc-app** namespace.
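A claim against the new class verifies the result end to end; a minimal sketch, assuming a hypothetical claim name ``test-special-pvc`` and an arbitrary 1Gi size:

.. code-block:: none

~(keystone_admin)]$ cat <<EOF > ~/test-special-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-special-pvc
  namespace: new-sc-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: special-storage-class
EOF
~(keystone_admin)]$ kubectl apply -f ~/test-special-pvc.yaml
~(keystone_admin)]$ kubectl -n new-sc-app get pvc test-special-pvc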

View File

@ -34,10 +34,9 @@ application-specific namespaces to access the **cephfs-provisioner**
+--------------------+----------------------+
| chart name | overrides namespaces |
+--------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| cephfs-provisioner | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
| ceph-pools-audit | ['kube-system'] |
| cephfs-provisioner | ['kube-system'] |
| rbd-provisioner | ['kube-system'] |
+--------------------+----------------------+
#. Review existing overrides for the cephfs-provisioner chart. You will refer
@ -47,62 +46,77 @@ application-specific namespaces to access the **cephfs-provisioner**
~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+----------------------------------------------------------+
| Property | Value |
+--------------------+----------------------------------------------------------+
| attributes | enabled: true |
| | |
| combined_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: |
| | - 192.168.204.3:6789 |
| | - 192.168.204.1:6789 |
| | - 192.168.204.2:6789 |
| | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | chunk_size: 64 |
| | claim_root: /pvc-volumes |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | replication: 2 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | global: |
| | replicas: 2 |
| | |
| name | cephfs-provisioner |
| namespace | kube-system |
| system_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: ['192.168.204.3:6789', '192.168.204.1:6789', |
| | '192.168.204.2:6789'] |
| | classes: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | claim_root: /pvc-volumes |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | replication: 2 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | global: {replicas: 2} |
| | |
| user_overrides | None |
+--------------------+----------------------------------------------------------+
+--------------------+------------------------------------------------------+
| Property | Value |
+--------------------+------------------------------------------------------+
| attributes | enabled: true |
| | |
| combined_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: |
| | - 192.168.204.2:6789 |
| | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | chunk_size: 64 |
| | clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | controllerExpandSecret: ceph-pool-kube-cephfs-data |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | nodeStageSecret: ceph-pool-kube-cephfs-data |
| | provisionerSecret: ceph-pool-kube-cephfs-data |
| | replication: 1 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | volumeNamePrefix: pvc-volumes- |
| | csiConfig: |
| | - clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | monitors: |
| | - 192.168.204.2:6789 |
| | provisioner: |
| | replicaCount: 1 |
| | |
| name | cephfs-provisioner |
| namespace | kube-system |
| system_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: ['192.168.204.2:6789'] |
| | classes: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | controllerExpandSecret: ceph-pool-kube-cephfs-data |
| | crush_rule_name: storage_tier_ruleset |
| | data_pool_name: kube-cephfs-data |
| | fs_name: kube-cephfs |
| | metadata_pool_name: kube-cephfs-metadata |
| | name: cephfs |
| | nodeStageSecret: ceph-pool-kube-cephfs-data |
| | provisionerSecret: ceph-pool-kube-cephfs-data |
| | replication: 1 |
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | volumeNamePrefix: pvc-volumes- |
| | csiConfig: |
| | - clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | monitors: ['192.168.204.2:6789'] |
| | provisioner: {replicaCount: 1} |
| | |
| user_overrides | None |
+--------------------+------------------------------------------------------+
#. Create an overrides yaml file defining the new namespaces.
In this example, create the file /home/sysadmin/update-namespaces.yaml with the following content:
In this example, create the file ``/home/sysadmin/update-namespaces.yaml``
with the following content:
.. code-block:: none
@ -186,32 +200,30 @@ application-specific namespaces to access the **cephfs-provisioner**
.. code-block:: none
~(keystone_admin)]$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-24 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2019-05-26T22:27:26.547181+00:00 |
+---------------+----------------------------------+
+---------------+--------------------------------------+
| Property | Value |
+---------------+--------------------------------------+
| active | True |
| app_version | 1.0-62 |
| created_at | 2022-12-14T04:14:08.878186+00:00 |
| manifest_file | fluxcd-manifests |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2022-12-14T04:58:58.543295+00:00 |
+---------------+--------------------------------------+
#. Monitor progress using the :command:`application-list` command.
.. code-block:: none
~(keystone_admin)]$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-24 | platform | manifest.yaml | applied | completed |
| integ-apps | | -integration | | | |
| | | -manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
| platform-integ-apps | 1.0-62 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
You can now create and mount PVCs from the default CephFS provisioner's
**cephfs** storage class, from within these application-specific namespaces.
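As a quick check, a claim against the **cephfs** class can request ``ReadWriteMany``, which CephFS supports; a sketch, assuming ``new-app`` is a hypothetical namespace added through your overrides:

.. code-block:: none

~(keystone_admin)]$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-cephfs-pvc
  namespace: new-app
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs
EOF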

View File

@ -32,10 +32,9 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
+--------------------+----------------------+
| chart name | overrides namespaces |
+--------------------+----------------------+
| ceph-pools-audit | [u'kube-system'] |
| cephfs-provisioner | [u'kube-system'] |
| helm-toolkit | [] |
| rbd-provisioner | [u'kube-system'] |
| ceph-pools-audit | ['kube-system'] |
| cephfs-provisioner | ['kube-system'] |
| rbd-provisioner | ['kube-system'] |
+--------------------+----------------------+
#. Review existing overrides for the rbd-provisioner chart. You will refer
@ -44,71 +43,86 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
.. code-block:: none
~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
+--------------------+--------------------------------------------------+
| Property | Value |
+--------------------+--------------------------------------------------+
| combined_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-admin |
| | monitors: |
| | - 192.168.204.4:6789 |
| | - 192.168.204.2:6789 |
| | - 192.168.204.3:6789 |
| | - 192.168.204.60:6789 |
| | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 2 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | global: |
| | defaultStorageClass: general |
| | replicas: 2 |
| | |
| name | rbd-provisioner |
| namespace | kube-system |
| system_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-admin |
| | monitors: ['192.168.204.4:6789', |
| |'192.168.204.2:6789', '192.168.204.3:6789', |
| | '192.168.204.60:6789'] |
| | classes: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | pool_name: kube-rbd |
| | replication: 2 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | global: {defaultStorageClass: general, replicas: |
| | 2} |
| | |
| user_overrides | None |
+--------------------+--------------------------------------------------+
+--------------------+------------------------------------------------------+
| Property | Value |
+--------------------+------------------------------------------------------+
| attributes | enabled: true |
| | |
| combined_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-admin |
| | monitors: |
| | - 192.168.204.2:6789 |
| | storageClass: general |
| | classes: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
| | chunk_size: 64 |
| | clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | controllerExpandSecret: ceph-pool-kube-rbd |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | nodeStageSecret: ceph-pool-kube-rbd |
| | pool_name: kube-rbd |
| | provisionerSecret: ceph-pool-kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | csiConfig: |
| | - clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | monitors: |
| | - 192.168.204.2:6789 |
| | provisioner: |
| | replicaCount: 1 |
| | |
| name | rbd-provisioner |
| namespace | kube-system |
| system_overrides | classdefaults: |
| | adminId: admin |
| | adminSecretName: ceph-admin |
| | monitors: ['192.168.204.2:6789'] |
| | storageClass: general |
| | classes: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | controllerExpandSecret: ceph-pool-kube-rbd |
| | crush_rule_name: storage_tier_ruleset |
| | name: general |
| | nodeStageSecret: ceph-pool-kube-rbd |
| | pool_name: kube-rbd |
| | provisionerSecret: ceph-pool-kube-rbd |
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | csiConfig: |
| | - clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | monitors: ['192.168.204.2:6789'] |
| | provisioner: {replicaCount: 1} |
| | |
| user_overrides | None |
+--------------------+------------------------------------------------------+
#. Create an overrides yaml file defining the new namespaces. In this example we will create the file /home/sysadmin/update-namespaces.yaml with the following content:
#. Create an overrides yaml file defining the new namespaces. In this example
we will create the file ``/home/sysadmin/update-namespaces.yaml`` with the
following content:
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
classes:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: general
pool_name: kube-rbd
replication: 2
userId: ceph-pool-kube-rbd
userSecretName: ceph-pool-kube-rbd
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
name: general
pool_name: kube-rbd
replication: 2
userId: ceph-pool-kube-rbd
userSecretName: ceph-pool-kube-rbd
EOF
#. Apply the overrides file to the chart.
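For example, reusing the syntax from the rbd-provisioner procedure above, with the file created in the previous step:

.. code-block:: none

~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps rbd-provisioner kube-system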
@ -177,32 +191,31 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
.. code-block:: none
~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+----------------------------------+
| Property | Value |
+---------------+----------------------------------+
| active | True |
| app_version | 1.0-24 |
| created_at | 2019-05-26T06:22:20.711732+00:00 |
| manifest_file | manifest.yaml |
| manifest_name | platform-integration-manifest |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2019-05-26T22:27:26.547181+00:00 |
+---------------+----------------------------------+
+---------------+--------------------------------------+
| Property | Value |
+---------------+--------------------------------------+
| active | True |
| app_version | 1.0-62 |
| created_at | 2022-12-14T04:14:08.878186+00:00 |
| manifest_file | fluxcd-manifests |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name | platform-integ-apps |
| progress | None |
| status | applying |
| updated_at | 2022-12-14T04:16:33.197301+00:00 |
+---------------+--------------------------------------+
#. Monitor progress using the :command:`application-list` command.
.. code-block:: none
~(keystone_admin)$ system application-list
+-------------+---------+---------------+---------------+---------+-----------+
| application | version | manifest name | manifest file | status | progress |
+-------------+---------+---------------+---------------+---------+-----------+
| platform- | 1.0-24 | platform | manifest.yaml | applied | completed |
| integ-apps | | -integration | | | |
| | | -manifest | | | |
+-------------+---------+---------------+---------------+---------+-----------+
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
| platform-integ-apps | 1.0-62 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
+--------------------------+---------+-------------------------------------------+------------------+----------+-----------+
You can now create and mount PVCs from the default |RBD| provisioner's
**general storage class**, from within these application-specific namespaces.
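One way to spot-check a new namespace is to look for the propagated Ceph user secret; a sketch, assuming the chart still copies the ``userSecretName`` secret (here ``ceph-pool-kube-rbd``) into each additional namespace:

.. code-block:: none

~(keystone_admin)]$ kubectl get secret ceph-pool-kube-rbd -n new-app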

View File

@ -22,85 +22,107 @@ This procedure uses standard Helm mechanisms to install a second
.. rubric:: |proc|
#. Capture a list of monitors.
#. Capture a list of monitors and cluster ID.
This will be stored in the environment variable ``<MON_LIST>`` and used in
the following step.
.. code-block:: none
~(keystone_admin)$ MON_LIST=$(ceph mon dump 2>&1 | awk /^[0-2]:/'{print $2}' | awk -F'/' '{print " - "$1}')
~(keystone_admin)]$ MON_LIST=$(ceph mon dump 2>&1 | awk /^[0-2]:/'{print $2}' | grep -oP '(?<=v1:).*(?=/)' | awk -F' ' '{print " - "$1}')
This will be stored in the environment variable ``<CLUSTER_ID>`` and used
in the following step.
.. code-block:: none
~(keystone_admin)]$ CLUSTER_ID=$(ceph fsid)
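Both variables can be echoed to confirm they captured sensible values; the values below mirror the example cluster used elsewhere in this procedure and will differ on your system:

.. code-block:: none

~(keystone_admin)]$ echo "${MON_LIST}"
 - 192.168.204.2:6789
~(keystone_admin)]$ echo "${CLUSTER_ID}"
6d273112-f2a6-4aec-8727-76b690274c60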
#. Create an overrides yaml file defining the new provisioner.
In this example we will create the file
/home/sysadmin/my-second-provisioner-overrides.yaml.
``/home/sysadmin/my-second-provisioner-overrides.yaml``.
.. code-block:: none
~(keystone_admin)$ cat <<EOF > /home/sysadmin/my-second-provisioner-overrides.yaml
global:
adminId: admin
adminSecretName: ceph-admin
name: 2nd-provisioner
provisioner_name: "ceph.com/2nd-rbd"
~(keystone_admin)]$ cat <<EOF > ~/my-second-provisioner-overrides.yaml
classdefaults:
monitors:
${MON_LIST}
classes:
- name: 2nd-storage
pool_name: another-pool
- additionalNamespaces:
- default
- kube-public
chunk_size: 64
clusterID: ${CLUSTER_ID}
crush_rule_name: storage_tier_ruleset
name: 2nd-provisioner
pool_name: another-pool
replication: 1
userId: 2nd-user-secret
userSecretName: 2nd-user-secret
rbac:
clusterRole: 2nd-provisioner
clusterRoleBinding: 2nd-provisioner
role: 2nd-provisioner
roleBinding: 2nd-provisioner
serviceAccount: 2nd-provisioner
csiConfig:
- clusterID: ${CLUSTER_ID}
monitors:
${MON_LIST}
nodeplugin:
fullnameOverride: 2nd-nodeplugin
provisioner:
fullnameOverride: 2nd-provisioner
replicaCount: 1
driverName: cool-rbd-provisioner.csi.ceph.com
EOF
.. note::
The ``replicaCount`` parameter has the value 1 on AIO-SX (simplex) systems and 2 on all other configurations.
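Before installing, you can confirm the ceph-csi-rbd chart is available from the ``stx-platform`` repository used later in this procedure; a sketch (the version shown matches the deployment below):

.. code-block:: none

~(keystone_admin)]$ helm search repo stx-platform/ceph-csi-rbd
NAME                       CHART VERSION   APP VERSION   DESCRIPTION
stx-platform/ceph-csi-rbd  3.6.2           3.6.2         ...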
#. Get the directory where the rbd-provisioner static overrides file is
located.
This will be stored in the environment variable ``<RBD_STATIC_OVERRIDES>`` and
used in the following step.
.. code-block:: none
~(keystone_admin)]$ SW_VERSION=$(system show | awk /software_version/'{print $4}')
~(keystone_admin)]$ INTEG_APPS_VERSION=$(system application-show platform-integ-apps | awk /app_version/'{print $4}')
~(keystone_admin)]$ RBD_STATIC_OVERRIDES=/opt/platform/fluxcd/$SW_VERSION/platform-integ-apps/$INTEG_APPS_VERSION/platform-integ-apps-fluxcd-manifests/rbd-provisioner/rbd-provisioner-static-overrides.yaml
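A quick existence check catches a wrong version string before the install:

.. code-block:: none

~(keystone_admin)]$ ls -l ${RBD_STATIC_OVERRIDES}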
#. Install the chart.
.. code-block:: none
~(keystone_admin)$ helm upgrade --install my-2nd-provisioner stx-platform/rbd-provisioner --namespace=isolated-app --values=/home/sysadmin/my-second-provisioner-overrides.yaml
~(keystone_admin)]$ helm upgrade --install my-2nd-provisioner stx-platform/ceph-csi-rbd --namespace=isolated-app --create-namespace --values=$RBD_STATIC_OVERRIDES --values=/home/sysadmin/my-second-provisioner-overrides.yaml
Release "my-2nd-provisioner" does not exist. Installing it now.
NAME: my-2nd-provisioner
LAST DEPLOYED: Mon May 27 05:04:51 2019
NAME: my-2nd-provisioner
LAST DEPLOYED: Wed Dec 14 04:20:00 2022
NAMESPACE: isolated-app
STATUS: DEPLOYED
STATUS: deployed
...
.. note::
Helm automatically created the namespace **isolated-app** while
installing the chart, because the ``--create-namespace`` option was used.
#. Confirm that **my-2nd-provisioner** has been deployed.
#. Confirm that ``my-2nd-provisioner`` has been deployed.
.. code-block:: none
~(keystone_admin)$ helm list -a
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
my-2nd-provisioner 1 Mon May 27 05:04:51 2019 DEPLOYED rbd-provisioner-0.1.0 isolated-app
my-app3 1 Sun May 26 22:52:16 2019 DEPLOYED mysql-1.1.1 5.7.14 new-app3
my-new-sc-app 1 Sun May 26 23:11:37 2019 DEPLOYED mysql-1.1.1 5.7.14 new-sc-app
my-release 1 Sun May 26 22:31:08 2019 DEPLOYED mysql-1.1.1 5.7.14 default
...
~(keystone_admin)]$ helm list -n isolated-app
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-2nd-provisioner isolated-app 1 2022-12-14 04:20:00.345212618 +0000 UTC deployed ceph-csi-rbd-3.6.2 3.6.2
#. Confirm that the **2nd-storage** storage class was created.
#. Confirm that the ``2nd-provisioner`` storage class was created.
.. code-block:: none
~(keystone_admin)$ kubectl get sc --all-namespaces
NAME PROVISIONER AGE
2nd-storage ceph.com/2nd-rbd 61s
general (default) ceph.com/rbd 6h39m
special-storage-class ceph.com/rbd 5h58m
~(keystone_admin)]$ kubectl get sc -A
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
2nd-provisioner cool-rbd-provisioner.csi.ceph.com Delete Immediate true 6m4s
cephfs cephfs.csi.ceph.com Delete Immediate true 10m
general (default) rbd.csi.ceph.com Delete Immediate true 10m
You can now create and mount PVCs from the new |RBD| provisioner's
**2nd-storage** storage class, from within the **isolated-app**
``2nd-provisioner`` storage class, from within the ``isolated-app``
namespace.
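A claim against the new class confirms end-to-end provisioning; a minimal sketch using a hypothetical claim name ``test-2nd-pvc``:

.. code-block:: none

~(keystone_admin)]$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-2nd-pvc
  namespace: isolated-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: 2nd-provisioner
EOF
~(keystone_admin)]$ kubectl -n isolated-app get pvc test-2nd-pvc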

View File

@ -30,7 +30,7 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
.. code-block:: none
% cat <<EOF > wrx-busybox.yaml
~(keystone_admin)]$ cat <<EOF > wrx-busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@ -69,7 +69,7 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
.. code-block:: none
% kubectl apply -f wrx-busybox.yaml
~(keystone_admin)]$ kubectl apply -f wrx-busybox.yaml
deployment.apps/wrx-busybox created
#. Attach to the busybox and create files on the Persistent Volumes.
@ -79,16 +79,16 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
.. code-block:: none
% kubectl get pods
~(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
wrx-busybox-6455997c76-4kg8v 1/1 Running 0 108s
wrx-busybox-6455997c76-crmw6 1/1 Running 0 108s
wrx-busybox-767564b9cf-g8ln8 1/1 Running 0 49s
wrx-busybox-767564b9cf-jrk5z 1/1 Running 0 49s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach wrx-busybox-6455997c76-4kg8v -c busybox -i -t
~(keystone_admin)]$ kubectl attach wrx-busybox-767564b9cf-g8ln8 -c busybox -i -t
#. From the container's console, list the disks to verify that the Persistent Volume is attached.
@ -96,10 +96,11 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
% df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 1783748 29658172 6% /
overlay 31441920 9695488 21746432 31% /
tmpfs 65536 0 65536 0% /dev
tmpfs 5033188 0 5033188 0% /sys/fs/cgroup
ceph-fuse 516542464 643072 515899392 0% /mnt1
tmpfs 12295352 0 12295352 0% /sys/fs/cgroup
192.168.204.2:6789:/volumes/csi/pvc-volumes-565a1160-7b6c-11ed-84b8-0e99d59ed96d/cf39026c-06fc-413a-bce9-b13fb66254a3
1048576 0 1048576 0% /mnt1
The PVC is mounted as /mnt1.
@ -111,20 +112,20 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
# cd /mnt1
# touch i-was-here-${HOSTNAME}
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8vi
i-was-here-wrx-busybox-767564b9cf-g8ln8
#. End the container session.
.. code-block:: none
% exit
wrx-busybox-6455997c76-4kg8v -c busybox -i -t' command when the pod is running
Session ended, resume using 'kubectl attach wrx-busybox-767564b9cf-g8ln8 -c busybox -i -t' command when the pod is running
#. Connect to the other busybox container.
.. code-block:: none
% kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t
~(keystone_admin)]$ kubectl attach wrx-busybox-767564b9cf-jrk5z -i -t
#. Optional: From the container's console list the disks to verify that the PVC is attached.
@ -132,10 +133,11 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
% df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 1783888 29658032 6% /
overlay 31441920 9695512 21746408 31% /
tmpfs 65536 0 65536 0% /dev
tmpfs 5033188 0 5033188 0% /sys/fs/cgroup
ceph-fuse 516542464 643072 515899392 0% /mnt1
tmpfs 12295352 0 12295352 0% /sys/fs/cgroup
192.168.204.2:6789:/volumes/csi/pvc-volumes-565a1160-7b6c-11ed-84b8-0e99d59ed96d/cf39026c-06fc-413a-bce9-b13fb66254a3
1048576 0 1048576 0% /mnt1
#. Verify that the file created from the other container exists and that this container can also write to the Persistent Volume.
@ -143,27 +145,25 @@ configurations created in |prod| |stor-doc|: :ref:`Create ReadWriteMany Persiste
.. code-block:: none
# cd /mnt1
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v
# ls
i-was-here-wrx-busybox-767564b9cf-g8ln8
# echo ${HOSTNAME}
wrx-busybox-6455997c76-crmw6
wrx-busybox-767564b9cf-jrk5z
# touch i-was-here-${HOSTNAME}
# ls /mnt1
i-was-here-wrx-busybox-6455997c76-4kg8v i-was-here-wrx-busybox-6455997c76-crmw6
# ls
i-was-here-wrx-busybox-767564b9cf-g8ln8 i-was-here-wrx-busybox-767564b9cf-jrk5z
#. End the container session.
.. code-block:: none
% exit
Session ended, resume using 'kubectl attach wrx-busybox-6455997c76-crmw6 -c busybox -i -t' command when the pod is running
Session ended, resume using 'kubectl attach wrx-busybox-767564b9cf-jrk5z -c busybox -i -t' command when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f wrx-busybox.yaml
~(keystone_admin)]$ kubectl delete -f wrx-busybox.yaml
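Deleting the deployment leaves the PVC, and the data on it, in place; a quick check:

.. code-block:: none

~(keystone_admin)]$ kubectl get pvc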
For more information on Persistent Volume Support, see :ref:`About Persistent Volume Support <about-persistent-volume-support>`.

View File

@ -35,7 +35,7 @@ You should refer to the Volume Claim examples. For more information, see,
.. code-block:: none
% cat <<EOF > rwo-busybox.yaml
~(keystone_admin)]$ cat <<EOF > rwo-busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@ -80,8 +80,7 @@ You should refer to the Volume Claim examples. For more information, see,
.. code-block:: none
% kubectl apply -f rwo-busybox.yaml
deployment.apps/rwo-busybox created
~(keystone_admin)]$ kubectl apply -f rwo-busybox.yaml
#. Attach to the busybox and create files on the Persistent Volumes.
@ -91,15 +90,15 @@ You should refer to the Volume Claim examples. For more information, see,
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
rwo-busybox-5c4f877455-gkg2s 1/1 Running 0 19s
~(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rwo-busybox-5c84dd4dcd-xxb2b 1/1 Running 0 25s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach rwo-busybox-5c4f877455-gkg2s -c busybox -i -t
~(keystone_admin)]$ kubectl attach rwo-busybox-5c84dd4dcd-xxb2b -c busybox -i -t
#. From the container's console, list the disks to verify that the
Persistent Volumes are attached.
@ -108,15 +107,17 @@ You should refer to the Volume Claim examples. For more information, see,
# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 3239984 28201936 10% /
tmpfs 65536 0 65536 0% /dev
tmpfs 65900776 0 65900776 0% /sys/fs/cgroup
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
overlay 31441920 9694828 21747092 31% /
tmpfs 65536 0 65536 0% /dev
tmpfs 12295352 0 12295352 0% /sys/fs/cgroup
/dev/rbd1 996780 24 980372 0% /mnt1
/dev/rbd0 996780 24 980372 0% /mnt2
The PVCs are mounted as /mnt1 and /mnt2.
#.
#. Create files in the mounted volumes.
.. code-block:: none
@ -136,15 +137,13 @@ You should refer to the Volume Claim examples. For more information, see,
.. code-block:: none
# exit
Session ended, resume using
'kubectl attach busybox-5c4f877455-gkg2s -c busybox -i -t' command when
the pod is running
Session ended, resume using 'kubectl attach rwo-busybox-5c84dd4dcd-xxb2b -c busybox -i -t' command when the pod is running
#. Terminate the busybox container.
.. code-block:: none
% kubectl delete -f rwo-busybox.yaml
~(keystone_admin)]$ kubectl delete -f rwo-busybox.yaml
#. Recreate the busybox container, again attached to persistent volumes.
@ -152,22 +151,21 @@ You should refer to the Volume Claim examples. For more information, see,
.. code-block:: none
% kubectl apply -f rwo-busybox.yaml
deployment.apps/rwo-busybox created
~(keystone_admin)]$ kubectl apply -f rwo-busybox.yaml
#. List the available pods.
.. code-block:: none
% kubectl get pods
NAME READY STATUS RESTARTS AGE
rwo-busybox-5c4f877455-jgcc4 1/1 Running 0 19s
~(keystone_admin)]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rwo-busybox-5c84dd4dcd-pgcfw 1/1 Running 0 29s
#. Connect to the pod shell for CLI access.
.. code-block:: none
% kubectl attach busybox-5c4f877455-jgcc4 -c busybox -i -t
~(keystone_admin)]$ kubectl attach rwo-busybox-5c84dd4dcd-pgcfw -c busybox -i -t
#. From the container's console, list the disks to verify that the PVCs
are attached.
@ -176,13 +174,11 @@ You should refer to the Volume Claim examples. For more information, see,
# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 31441920 3239984 28201936 10% /
overlay 31441920 9694844 21747076 31% /
tmpfs 65536 0 65536 0% /dev
tmpfs 65900776 0 65900776 0% /sys/fs/cgroup
/dev/rbd0 999320 2564 980372 0% /mnt1
/dev/rbd1 999320 2564 980372 0% /mnt2
/dev/sda4 20027216 4952208 14034624 26%
...
tmpfs 12295352 0 12295352 0% /sys/fs/cgroup
/dev/rbd0 996780 24 980372 0% /mnt1
/dev/rbd1 996780 24 980372 0% /mnt2
#. Verify that the files created during the earlier container session