From 0cd68919facd78c22094c9baffce5000d1129819 Mon Sep 17 00:00:00 2001
From: Stephen Taylor
Date: Fri, 9 Aug 2024 13:12:09 -0600
Subject: [PATCH] [ceph] Remove references to the CephFS provisioner

The CephFS provisioner is being removed from openstack-helm-infra, so
references to it here are also being removed.

Change-Id: Ia01935f037bb6f91299757d8049db4d1c5dc4a10
---
 doc/source/testing/ceph-node-resiliency.rst | 12 ------------
 doc/source/testing/ceph-upgrade.rst         | 28 ----------------------------
 2 files changed, 40 deletions(-)

diff --git a/doc/source/testing/ceph-node-resiliency.rst b/doc/source/testing/ceph-node-resiliency.rst
index 05b98f34bd..9a607507e7 100644
--- a/doc/source/testing/ceph-node-resiliency.rst
+++ b/doc/source/testing/ceph-node-resiliency.rst
@@ -247,8 +247,6 @@ Step 1: Initial Ceph and OpenStack deployment
     ubuntu@mnode1:/opt/openstack-helm$ kubectl get pods -n ceph --show-all=false -o wide
     NAME                                       READY  STATUS   RESTARTS  AGE  IP              NODE
-    ceph-cephfs-provisioner-784c6f9d59-ndsgn   1/1    Running  0         1h   192.168.4.15    mnode2
-    ceph-cephfs-provisioner-784c6f9d59-vgzzx   1/1    Running  0         1h   192.168.3.17    mnode3
     ceph-mds-6f66956547-5x4ng                  1/1    Running  0         1h   192.168.4.14    mnode2
     ceph-mds-6f66956547-c25cx                  1/1    Running  0         1h   192.168.3.14    mnode3
     ceph-mgr-5746dd89db-9dbmv                  1/1    Running  0         1h   192.168.10.248  mnode3
@@ -314,7 +312,6 @@ In this test env, let's shutdown ``mnode3`` node.
 .. code-block:: console
 
-    ceph      ceph-cephfs-provisioner-784c6f9d59-vgzzx   0 (0%)  0 (0%)  0 (0%)  0 (0%)
     ceph      ceph-mds-6f66956547-c25cx                  0 (0%)  0 (0%)  0 (0%)  0 (0%)
     ceph      ceph-mgr-5746dd89db-9dbmv                  0 (0%)  0 (0%)  0 (0%)  0 (0%)
     ceph      ceph-mon-5qn68                             0 (0%)  0 (0%)  0 (0%)  0 (0%)
@@ -546,9 +543,6 @@ In this test env, let's shutdown ``mnode3`` node.
     ubuntu@mnode1:/opt/openstack-helm$ kubectl get pods -n ceph --show-all=false -o wide
     NAME                                       READY  STATUS   RESTARTS  AGE  IP              NODE
-    ceph-cephfs-provisioner-784c6f9d59-n92dx   1/1    Running  0         1m   192.168.0.206   mnode1
-    ceph-cephfs-provisioner-784c6f9d59-ndsgn   1/1    Running  0         1h   192.168.4.15    mnode2
-    ceph-cephfs-provisioner-784c6f9d59-vgzzx   1/1    Unknown  0         1h   192.168.3.17    mnode3
     ceph-mds-6f66956547-57tf9                  1/1    Running  0         1m   192.168.0.207   mnode1
     ceph-mds-6f66956547-5x4ng                  1/1    Running  0         1h   192.168.4.14    mnode2
     ceph-mds-6f66956547-c25cx                  1/1    Unknown  0         1h   192.168.3.14    mnode3
@@ -835,9 +829,6 @@ After applying labels, let's check status
     ubuntu@mnode1:~$ kubectl get pods -n ceph --show-all=false -o wide
     Flag --show-all has been deprecated, will be removed in an upcoming release
     NAME                                       READY  STATUS   RESTARTS  AGE  IP              NODE
-    ceph-cephfs-provisioner-784c6f9d59-n92dx   1/1    Running  0         10m  192.168.0.206   mnode1
-    ceph-cephfs-provisioner-784c6f9d59-ndsgn   1/1    Running  0         1h   192.168.4.15    mnode2
-    ceph-cephfs-provisioner-784c6f9d59-vgzzx   1/1    Unknown  0         1h   192.168.3.17    mnode3
     ceph-mds-6f66956547-57tf9                  1/1    Running  0         10m  192.168.0.207   mnode1
     ceph-mds-6f66956547-5x4ng                  1/1    Running  0         1h   192.168.4.14    mnode2
     ceph-mds-6f66956547-c25cx                  1/1    Unknown  0         1h   192.168.3.14    mnode3
@@ -1163,9 +1154,6 @@ Above output shows Ceph cluster in HEALTH_OK with all OSDs and MONs up and runni
     ubuntu@mnode1:~$ kubectl get pods -n ceph --show-all=false -o wide
     Flag --show-all has been deprecated, will be removed in an upcoming release
     NAME                                       READY  STATUS   RESTARTS  AGE  IP              NODE
-    ceph-cephfs-provisioner-784c6f9d59-n92dx   1/1    Running  0         25m  192.168.0.206   mnode1
-    ceph-cephfs-provisioner-784c6f9d59-ndsgn   1/1    Running  0         1h   192.168.4.15    mnode2
-    ceph-cephfs-provisioner-784c6f9d59-vgzzx   1/1    Unknown  0         1h   192.168.3.17    mnode3
     ceph-mds-6f66956547-57tf9                  1/1    Running  0         25m  192.168.0.207   mnode1
     ceph-mds-6f66956547-5x4ng                  1/1    Running  0         1h   192.168.4.14    mnode2
     ceph-mds-6f66956547-c25cx                  1/1    Unknown  0         1h   192.168.3.14    mnode3
diff --git a/doc/source/testing/ceph-upgrade.rst b/doc/source/testing/ceph-upgrade.rst
index cc646a481e..da84798f7a 100644
--- a/doc/source/testing/ceph-upgrade.rst
+++ b/doc/source/testing/ceph-upgrade.rst
@@ -135,8 +135,6 @@ Steps:
     NAME                                      READY  STATUS     RESTARTS  AGE
     ceph-bootstrap-s4jkx                      0/1    Completed  0         10m
     ceph-cephfs-client-key-generator-6bmzz    0/1    Completed  0         3m
-    ceph-cephfs-provisioner-784c6f9d59-2z6ww  1/1    Running    0         3m
-    ceph-cephfs-provisioner-784c6f9d59-sg8wv  1/1    Running    0         3m
     ceph-mds-745576757f-4vdn4                 1/1    Running    0         6m
     ceph-mds-745576757f-bxdcs                 1/1    Running    0         6m
     ceph-mds-keyring-generator-f5lxf          0/1    Completed  0         10m
@@ -190,15 +188,6 @@ Steps:
 Showing partial output from kubectl describe command to show which image is
 Docker container is using
 
-.. code-block:: console
-
-    ubuntu@mnode1:~$ kubectl describe pod -n ceph ceph-cephfs-provisioner-784c6f9d59-2z6ww
-
-    Containers:
-      ceph-cephfs-provisioner:
-        Container ID:  docker://98ed65617f6c4b60fe60d94b8707e52e0dd4c87791e7d72789a0cb603fa80e2c
-        Image:         quay.io/external_storage/cephfs-provisioner:v0.1.1
-
 .. code-block:: console
 
     ubuntu@mnode1:~$ kubectl describe pod -n ceph ceph-rbd-provisioner-84665cb84f-6s55r
@@ -229,8 +218,6 @@ Continue with OSH multinode guide to install other Openstack charts.
     NAME                                      READY  STATUS     RESTARTS  AGE
     ceph-bootstrap-s4jkx                      0/1    Completed  0         2h
     ceph-cephfs-client-key-generator-6bmzz    0/1    Completed  0         2h
-    ceph-cephfs-provisioner-784c6f9d59-2z6ww  1/1    Running    0         2h
-    ceph-cephfs-provisioner-784c6f9d59-sg8wv  1/1    Running    0         2h
     ceph-mds-745576757f-4vdn4                 1/1    Running    0         2h
     ceph-mds-745576757f-bxdcs                 1/1    Running    0         2h
     ceph-mds-keyring-generator-f5lxf          0/1    Completed  0         2h
@@ -425,16 +412,12 @@ No interruption to OSH pods.
 .. code-block:: console
 
-    ceph-cephfs-provisioner-784c6f9d59-2z6ww  0/1  Terminating  0  2h
-    ceph-cephfs-provisioner-784c6f9d59-sg8wv  0/1  Terminating  0  2h
     ceph-rbd-provisioner-84665cb84f-6s55r     0/1  Terminating  0  2h
     ceph-rbd-provisioner-84665cb84f-chwhd     0/1  Terminating  0  2h
 
 .. code-block:: console
 
-    ceph-cephfs-provisioner-65ffbd47c4-cl4hj  1/1  Running  0  1m
-    ceph-cephfs-provisioner-65ffbd47c4-qrtg2  1/1  Running  0  1m
     ceph-rbd-provisioner-5bfb577ffd-b7tkx     1/1  Running  0  1m
     ceph-rbd-provisioner-5bfb577ffd-m6gg6     1/1  Running  0  1m
@@ -447,8 +430,6 @@ pods are running. No interruption to OSH pods.
     ceph-bootstrap-s4jkx                      0/1    Completed  0  2h
     ceph-cephfs-client-key-generator-6bmzz    0/1    Completed  0  2h
-    ceph-cephfs-provisioner-65ffbd47c4-cl4hj  1/1    Running    0  2m
-    ceph-cephfs-provisioner-65ffbd47c4-qrtg2  1/1    Running    0  2m
     ceph-mds-5fdcb5c64c-c52xq                 1/1    Running    0  8m
     ceph-mds-5fdcb5c64c-t9nmb                 1/1    Running    0  8m
     ceph-mds-keyring-generator-f5lxf          0/1    Completed  0  2h
@@ -549,15 +530,6 @@ pods are running. No interruption to OSH pods.
 17) Check which images Provisionors and Mon-Check PODs are using
 
-.. code-block:: console
-
-    ubuntu@mnode1:/opt/openstack-helm$ kubectl describe pod -n ceph ceph-cephfs-provisioner-65ffbd47c4-cl4hj
-
-    Containers:
-      ceph-cephfs-provisioner:
-        Container ID:  docker://079f148c1fb9ba13ed6caa0ca9d1e63b455373020a565a212b5bd261cbaedb43
-        Image:         quay.io/external_storage/cephfs-provisioner:v0.1.2
-
 .. code-block:: console
 
     ubuntu@mnode1:/opt/openstack-helm$ kubectl describe pod -n ceph ceph-rbd-provisioner-5bfb577ffd-b7tkx
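The diffstat above claims this change is deletion-only (``40 deletions(-)``, no insertions). A minimal sketch of how a reviewer might sanity-check that before merging a docs-removal patch like this one: count ``+``/``-`` body lines in the hunks with ``grep``. The excerpt in the heredoc stands in for the full patch file; the variable name is illustrative, not part of this change.

```shell
# Sketch: verify a patch contains only deletions. A small excerpt of
# this patch is inlined below in place of the full patch file.
patch_excerpt=$(cat <<'EOF'
--- a/doc/source/testing/ceph-upgrade.rst
+++ b/doc/source/testing/ceph-upgrade.rst
@@ -135,8 +135,6 @@ Steps:
     ceph-bootstrap-s4jkx                      0/1  Completed  0  10m
-    ceph-cephfs-provisioner-784c6f9d59-2z6ww  1/1  Running    0  3m
-    ceph-cephfs-provisioner-784c6f9d59-sg8wv  1/1  Running    0  3m
     ceph-mds-745576757f-4vdn4                 1/1  Running    0  6m
EOF
)

# '^+[^+]' skips the '+++' file header; '^-[^-]' skips '---'. A bare '+'
# (blank added line) would be missed, so this is a quick check only.
added=$(printf '%s\n' "$patch_excerpt" | grep -c '^+[^+]' || true)
removed=$(printf '%s\n' "$patch_excerpt" | grep -c '^-[^-]')
echo "added=${added} removed=${removed}"   # added=0 removed=2
```

Running the same counts over the whole patch file should report 0 added lines and 40 removed lines, matching the diffstat.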