
Enable ReadWriteMany PVC Support in Additional Namespaces
The default cephfs storage class provided by cephfs-provisioner is enabled for the default, kube-system, and kube-public namespaces. To enable an additional namespace, for example an application-specific namespace, you must modify the configuration (Helm overrides) of the cephfs-provisioner service.
The following example illustrates the configuration of three additional application-specific namespaces to access the cephfs-provisioner cephfs storage class.
Note
Due to limitations with templating and merging of overrides, the entire storage class must be redefined in the override when updating specific values.
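If the application-specific namespaces do not already exist, create them before updating the overrides. The following is a minimal sketch; the namespace names new-app, new-app2, and new-app3 are the example names used throughout this procedure:

```yaml
# Example namespace definitions for the application-specific namespaces
# referenced in this procedure. Apply with:
#   kubectl apply -f namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: new-app
---
apiVersion: v1
kind: Namespace
metadata:
  name: new-app2
---
apiVersion: v1
kind: Namespace
metadata:
  name: new-app3
```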
List the installed Helm chart overrides for the platform-integ-apps application.
~(keystone_admin)]$ system helm-override-list platform-integ-apps
+--------------------+----------------------+
| chart name         | overrides namespaces |
+--------------------+----------------------+
| ceph-pools-audit   | ['kube-system']      |
| cephfs-provisioner | ['kube-system']      |
| rbd-provisioner    | ['kube-system']      |
+--------------------+----------------------+
Review existing overrides for the cephfs-provisioner chart. You will refer to this information in the following step.
~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+------------------------------------------------------+
| Property           | Value                                                |
+--------------------+------------------------------------------------------+
| attributes         | enabled: true                                        |
|                    |                                                      |
| combined_overrides | classdefaults:                                       |
|                    |   adminId: admin                                     |
|                    |   adminSecretName: ceph-secret-admin                 |
|                    |   monitors:                                          |
|                    |   - 192.168.204.2:6789                               |
|                    | csiConfig:                                           |
|                    | - cephFS:                                            |
|                    |     subvolumeGroup: csi                              |
|                    |   clusterID: 6d273112-f2a6-4aec-8727-76b690274c60    |
|                    |   monitors:                                          |
|                    |   - 192.168.204.2:6789                               |
|                    | provisioner:                                         |
|                    |   replicaCount: 1                                    |
|                    | snapshotter:                                         |
|                    |   enabled: true                                      |
|                    | snapshotClass:                                       |
|                    |   clusterID: 6d273112-f2a6-4aec-8727-76b690274c60    |
|                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
|                    | storageClasses:                                      |
|                    | - additionalNamespaces:                              |
|                    |   - default                                          |
|                    |   - kube-public                                      |
|                    |   chunk_size: 64                                     |
|                    |   clusterID: 6d273112-f2a6-4aec-8727-76b690274c60    |
|                    |   controllerExpandSecret: ceph-pool-kube-cephfs-data |
|                    |   crush_rule_name: storage_tier_ruleset              |
|                    |   data_pool_name: kube-cephfs-data                   |
|                    |   fs_name: kube-cephfs                               |
|                    |   metadata_pool_name: kube-cephfs-metadata           |
|                    |   name: cephfs                                       |
|                    |   nodeStageSecret: ceph-pool-kube-cephfs-data        |
|                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
|                    |   replication: 1                                     |
|                    |   userId: ceph-pool-kube-cephfs-data                 |
|                    |   userSecretName: ceph-pool-kube-cephfs-data         |
|                    |   volumeNamePrefix: pvc-volumes-                     |
|                    |                                                      |
| name               | cephfs-provisioner                                   |
| namespace          | kube-system                                          |
| system_overrides   | classdefaults:                                       |
|                    |   adminId: admin                                     |
|                    |   adminSecretName: ceph-secret-admin                 |
|                    |   monitors: ['192.168.204.2:6789']                   |
|                    | csiConfig:                                           |
|                    | - cephFS: {subvolumeGroup: csi}                      |
|                    |   clusterID: !!binary |                              |
|                    |     NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
|                    |   monitors: ['192.168.204.2:6789']                   |
|                    | provisioner:                                         |
|                    |   replicaCount: 1                                    |
|                    | snapshotter: {enabled: true}                         |
|                    | snapshotClass:                                       |
|                    |   clusterID: !!binary |                              |
|                    |     NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
|                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
|                    | storageClasses:                                      |
|                    | - additionalNamespaces: [default, kube-public]       |
|                    |   chunk_size: 64                                     |
|                    |   clusterID: !!binary |                              |
|                    |     NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
|                    |   controllerExpandSecret: ceph-pool-kube-cephfs-data |
|                    |   crush_rule_name: storage_tier_ruleset              |
|                    |   data_pool_name: kube-cephfs-data                   |
|                    |   fs_name: kube-cephfs                               |
|                    |   metadata_pool_name: kube-cephfs-metadata           |
|                    |   name: cephfs                                       |
|                    |   nodeStageSecret: ceph-pool-kube-cephfs-data        |
|                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
|                    |   replication: 1                                     |
|                    |   userId: ceph-pool-kube-cephfs-data                 |
|                    |   userSecretName: ceph-pool-kube-cephfs-data         |
|                    |   volumeNamePrefix: pvc-volumes-                     |
|                    |                                                      |
| user_overrides     | None                                                 |
+--------------------+------------------------------------------------------+
Create an overrides yaml file defining the new namespaces.
In this example, create the file /home/sysadmin/update-namespaces.yaml with the following content:

~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
storageClasses:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
  chunk_size: 64
  claim_root: /pvc-volumes
  crush_rule_name: storage_tier_ruleset
  data_pool_name: kube-cephfs-data
  fs_name: kube-cephfs
  metadata_pool_name: kube-cephfs-metadata
  name: cephfs
  replication: 2
  userId: ceph-pool-kube-cephfs-data
  userSecretName: ceph-pool-kube-cephfs-data
EOF
Apply the overrides file to the chart.
~(keystone_admin)]$ system helm-override-update --values /home/sysadmin/update-namespaces.yaml platform-integ-apps cephfs-provisioner kube-system
+----------------+----------------------------------------------+
| Property       | Value                                        |
+----------------+----------------------------------------------+
| name           | cephfs-provisioner                           |
| namespace      | kube-system                                  |
| user_overrides | storageClasses:                              |
|                | - additionalNamespaces:                      |
|                |   - default                                  |
|                |   - kube-public                              |
|                |   - new-app                                  |
|                |   - new-app2                                 |
|                |   - new-app3                                 |
|                |   chunk_size: 64                             |
|                |   claim_root: /pvc-volumes                   |
|                |   crush_rule_name: storage_tier_ruleset      |
|                |   data_pool_name: kube-cephfs-data           |
|                |   fs_name: kube-cephfs                       |
|                |   metadata_pool_name: kube-cephfs-metadata   |
|                |   name: cephfs                               |
|                |   replication: 2                             |
|                |   userId: ceph-pool-kube-cephfs-data         |
|                |   userSecretName: ceph-pool-kube-cephfs-data |
+----------------+----------------------------------------------+
Confirm that the new overrides have been applied to the chart.
The following output has been edited for brevity.
~(keystone_admin)]$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
+--------------------+----------------------------------------------+
| Property           | Value                                        |
+--------------------+----------------------------------------------+
| user_overrides     | storageClasses:                              |
|                    | - additionalNamespaces:                      |
|                    |   - default                                  |
|                    |   - kube-public                              |
|                    |   - new-app                                  |
|                    |   - new-app2                                 |
|                    |   - new-app3                                 |
|                    |   chunk_size: 64                             |
|                    |   claim_root: /pvc-volumes                   |
|                    |   crush_rule_name: storage_tier_ruleset      |
|                    |   data_pool_name: kube-cephfs-data           |
|                    |   fs_name: kube-cephfs                       |
|                    |   metadata_pool_name: kube-cephfs-metadata   |
|                    |   name: cephfs                               |
|                    |   replication: 2                             |
|                    |   userId: ceph-pool-kube-cephfs-data         |
|                    |   userSecretName: ceph-pool-kube-cephfs-data |
+--------------------+----------------------------------------------+
Apply the overrides.
Run the application-apply command.

~(keystone_admin)]$ system application-apply platform-integ-apps
+---------------+--------------------------------------+
| Property      | Value                                |
+---------------+--------------------------------------+
| active        | True                                 |
| app_version   | 1.0-62                               |
| created_at    | 2022-12-14T04:14:08.878186+00:00     |
| manifest_file | fluxcd-manifests                     |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name          | platform-integ-apps                  |
| progress      | None                                 |
| status        | applying                             |
| updated_at    | 2022-12-14T04:58:58.543295+00:00     |
+---------------+--------------------------------------+
Monitor progress using the application-list command.

~(keystone_admin)]$ system application-list
+---------------------+---------+--------------------------------------+------------------+---------+-----------+
| application         | version | manifest name                        | manifest file    | status  | progress  |
+---------------------+---------+--------------------------------------+------------------+---------+-----------+
| platform-integ-apps | 1.0-62  | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
+---------------------+---------+--------------------------------------+------------------+---------+-----------+
You can now create and mount PVCs from the cephfs-provisioner's cephfs storage class from within these application-specific namespaces.
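For illustration, a PVC in one of the newly enabled namespaces can now request ReadWriteMany access from the cephfs storage class. The following is a minimal sketch; the claim name rwx-claim and the 1Gi size are example values, and the storage class name cephfs matches the name field in the overrides above:

```yaml
# Example ReadWriteMany PVC in the new-app namespace, backed by the
# cephfs storage class enabled for that namespace in this procedure.
# Apply with:
#   kubectl apply -f rwx-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-claim
  namespace: new-app
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: cephfs
```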