docs/doc/source/storage/kubernetes/replace-osds-on-an-aio-sx-single-disk-system-without-backup-951eefebd1f2.rst
Keane Lim cbc821d750 New disk replacement procedures
Change-Id: I703a61da792e59fdd19dfcaef5376b7f2f2ca975
Signed-off-by: Keane Lim <keane.lim@windriver.com>
2022-03-30 17:50:44 -04:00

Replace OSDs on an AIO-SX Single Disk System without Backup

  1. Get a list of all pools and their settings (size, min_size, pg_num, pgp_num).

    ~(keystone_admin)$ ceph osd lspools # list all pools
    ~(keystone_admin)$ ceph osd pool get $POOLNAME $SETTING

    Record the pool names and settings; they will be needed in step 12.
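
A minimal sketch of step 1 as a loop, saving every pool's settings to a file in a "pool setting value" format. The helper name, the output format, and the file location are illustrative assumptions, not part of the product CLI:

```shell
# Hypothetical helper: record each pool's settings before the rebuild.
save_pool_settings() {
  local outfile=$1 pool setting value
  : > "$outfile"                        # start with an empty file
  for pool in $(ceph osd pool ls); do   # one pool name per line
    for setting in size min_size pg_num pgp_num; do
      # "ceph osd pool get <pool> <setting>" prints "<setting>: <value>"
      value=$(ceph osd pool get "$pool" "$setting" | awk '{print $2}')
      echo "$pool $setting $value" >> "$outfile"
    done
  done
}
```

For example, `save_pool_settings /home/sysadmin/pool-settings.txt` — choose any location that will survive the disk replacement.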

  2. Lock the controller.

    ~(keystone_admin)$ system host-lock controller-0

  3. Remove all applications that use ceph pools.

    ~(keystone_admin)$ system application-list # list the applications
    ~(keystone_admin)$ system application-remove $APPLICATION_NAME # remove application

    Record the names of the removed applications; they will be needed in step 11.
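
A hypothetical wrapper around step 3 that records each application name as it is removed, so the same list can be re-applied in step 11; the record-file path is an assumption:

```shell
# Record-and-remove helper; invoke once per application from step 3's list.
removed_apps_file=/home/sysadmin/removed-apps.txt   # assumed location
remove_and_record() {
  local app=$1
  echo "$app" >> "$removed_apps_file"
  system application-remove "$app"
}
```

Run `remove_and_record <application name>` for each Ceph-backed application shown by `system application-list`.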

  4. Back up /etc/pmon.d/ceph.conf to a safe location, then remove ceph.conf from /etc/pmon.d.
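
Step 4 can be sketched as a small function so both paths are explicit; `stash_conf` is an illustrative name, and the backup directory you pass in should be one that survives the disk replacement:

```shell
# Copy the config to the backup directory, then remove the original so
# pmon stops monitoring the ceph processes.
stash_conf() {
  local conf=$1 backup_dir=$2
  cp "$conf" "$backup_dir/" && rm "$conf"
}
# e.g. stash_conf /etc/pmon.d/ceph.conf "$BACKUP_DIR"
```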

  5. Stop ceph-mds.

    ~(keystone_admin)$ /etc/init.d/ceph stop mds

  6. Mark the Ceph filesystem as failed, then delete it.

    ~(keystone_admin)$ ceph mds fail 0
    ~(keystone_admin)$ ceph fs rm <ceph fs name> --yes-i-really-mean-it

  7. Allow Ceph pools to be deleted.

    ~(keystone_admin)$ ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'

  8. Remove all the pools.

    ~(keystone_admin)$ ceph osd pool ls | xargs -i ceph osd pool delete {} {} --yes-i-really-really-mean-it

  9. Shut down the machine, replace the disk, power it on, and wait for the boot to finish.

  10. Move the backed-up ceph.conf from step 4 back to /etc/pmon.d, then unlock the controller.

  11. Re-apply the applications that were removed in step 3.

  12. Verify that all pools and settings listed in step 1 are recreated.
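
The verification in step 12 can be sketched as a loop, assuming the settings from step 1 were saved as "pool setting value" lines; `check_pool_settings` is an illustrative helper, not a product command:

```shell
# Compare each recorded pool/setting pair against the rebuilt cluster;
# returns non-zero and prints a MISMATCH line for any difference.
check_pool_settings() {
  local savefile=$1 pool setting expected actual rc=0
  while read -r pool setting expected; do
    actual=$(ceph osd pool get "$pool" "$setting" | awk '{print $2}')
    if [ "$actual" != "$expected" ]; then
      echo "MISMATCH: $pool $setting expected=$expected actual=$actual"
      rc=1
    fi
  done < "$savefile"
  return $rc
}
```

A silent exit status of 0 means every pool was recreated with the recorded settings.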