.. rhb1561120463240

.. _about-persistent-volume-support:

===============================
About Persistent Volume Support
===============================

Persistent Volume Claims \(PVCs\) are requests for storage resources in your
cluster. By default, container images have an ephemeral file system. In order
for containers to persist files beyond the lifetime of the container, a
Persistent Volume Claim can be created to obtain a persistent volume, which
the container can mount and use to read and write files.

Management and customization tasks for Kubernetes |PVCs| can be accomplished
by using StorageClasses set up by two Helm charts: **rbd-provisioner** and
**cephfs-provisioner**. The **rbd-provisioner** and **cephfs-provisioner**
Helm charts are included in the **platform-integ-apps** system application,
which is automatically loaded and applied as part of the |prod| installation.
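
For example, you can confirm that **platform-integ-apps** has been applied
before creating |PVCs|. A minimal check, assuming the keystone_admin
credentials are sourced, is:

.. code-block:: none

    # Confirm that platform-integ-apps has been uploaded and applied.
    ~(keystone_admin)]$ system application-list | grep platform-integ-apps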

PVCs are supported with the following options \(example claim definitions for
both options are sketched after this list\):

-   with accessMode of ReadWriteOnce backed by Ceph |RBD|

    -   only one container can attach to these PVCs

    -   management and customization tasks related to these PVCs are done
        through the **rbd-provisioner** Helm chart provided by
        platform-integ-apps

-   with accessMode of ReadWriteMany backed by CephFS

    -   multiple containers can attach to these PVCs

    -   management and customization tasks related to these PVCs are done
        through the **cephfs-provisioner** Helm chart provided by
        platform-integ-apps
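
The following is a minimal sketch of one claim for each option. The claim
names \(**rbd-pvc** and **cephfs-pvc**\) and the requested sizes are
illustrative only; the **general** and **cephfs** StorageClasses are the ones
created by **platform-integ-apps**, as shown later in this section.

.. code-block:: none

    ~(keystone_admin)]$ kubectl apply -f - <<EOF
    # ReadWriteOnce claim backed by Ceph RBD (general StorageClass)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rbd-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: general
    ---
    # ReadWriteMany claim backed by CephFS (cephfs StorageClass)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cephfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      storageClassName: cephfs
    EOF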

After platform-integ-apps is applied, the following system configurations
are created:

-   **Ceph Pools**

    .. code-block:: none

        ~(keystone_admin)]$ ceph osd lspools
        kube-rbd
        kube-cephfs-data
        kube-cephfs-metadata

-   **CephFS**

    .. code-block:: none

        ~(keystone_admin)]$ ceph fs ls
        name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]

-   **Kubernetes StorageClasses**

    .. code-block:: none

        ~(keystone_admin)]$ kubectl get sc
        NAME                PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION
        cephfs              ceph.com/cephfs   Delete          Immediate           false
        general (default)   ceph.com/rbd      Delete          Immediate           false
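
As an illustration of how a workload consumes these claims, the following
minimal sketch mounts the ReadWriteMany claim from the earlier example into a
pod. The pod name, image, and mount path are hypothetical, and a claim named
**cephfs-pvc** is assumed to already exist.

.. code-block:: none

    ~(keystone_admin)]$ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pvc-test-pod
    spec:
      containers:
      - name: test
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        # Files written under /mnt/data persist beyond the container lifetime.
        - name: data
          mountPath: /mnt/data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: cephfs-pvc
    EOF

Because the claim uses accessMode ReadWriteMany, additional pods can mount
the same claim and read and write the same files.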