openstack-helm-infra/tools/deployment/tenant-ceph/019-setup-ceph-loopback-device.sh
Chinasubbareddy Mallavarapu 3bde9f5b90 [CEPH] OSH-INFRA: use loopback devices for ceph osds
- Use loopback devices for Ceph OSDs, since support for directory-backed
  OSDs is going to be deprecated.

- Move from filestore to bluestore for ceph-osds.
- Separate the DB and WAL partitions from the data device so that the gates
  validate the scenario where a fast storage disk backs the DB and WAL.

Change-Id: Ief6de17c53d6cb57ef604895fdc66dc6c604fd89
2020-06-29 14:09:32 +00:00
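For context, a minimal sketch of how these loop devices might later be wired into a ceph-osd values override. The file path and the DB/WAL sizes are illustrative assumptions, and the conf.storage.osd layout should be checked against the ceph-osd chart actually in use:

# Hypothetical override file, shown only to illustrate how /dev/loop0 and
# /dev/loop1 could be consumed as the bluestore data and DB/WAL devices.
tee /tmp/ceph-osd-loopback.yaml <<'EOF'
conf:
  storage:
    osd:
      - data:
          type: bluestore
          location: /dev/loop0
        block_db:
          location: /dev/loop1
          size: "5GB"
        block_wal:
          location: /dev/loop1
          size: "2GB"
EOF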

#!/bin/bash
set -xe
# Show the disk layout before creating the loop devices.
sudo df -lh
sudo lsblk
# Loop devices for the primary Ceph cluster: a 10G sparse file backs the OSD
# data device and an 8G sparse file backs the bluestore DB/WAL device.
sudo mkdir -p /var/lib/openstack-helm/ceph
sudo truncate -s 10G /var/lib/openstack-helm/ceph/ceph-osd-data-loopbackfile.img
sudo truncate -s 8G /var/lib/openstack-helm/ceph/ceph-osd-db-wal-loopbackfile.img
sudo losetup /dev/loop0 /var/lib/openstack-helm/ceph/ceph-osd-data-loopbackfile.img
sudo losetup /dev/loop1 /var/lib/openstack-helm/ceph/ceph-osd-db-wal-loopbackfile.img
# Second set of loop devices for the tenant Ceph cluster, laid out the same way.
sudo mkdir -p /var/lib/openstack-helm/tenant-ceph
sudo truncate -s 10G /var/lib/openstack-helm/tenant-ceph/ceph-osd-data-loopbackfile.img
sudo truncate -s 8G /var/lib/openstack-helm/tenant-ceph/ceph-osd-db-wal-loopbackfile.img
sudo losetup /dev/loop2 /var/lib/openstack-helm/tenant-ceph/ceph-osd-data-loopbackfile.img
sudo losetup /dev/loop3 /var/lib/openstack-helm/tenant-ceph/ceph-osd-db-wal-loopbackfile.img
# Verify the backing files and loop devices are visible.
sudo df -lh
sudo lsblk
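# Cleanup sketch (not part of the original change, left commented out so the
# setup behaviour is unchanged): detach the loop devices and remove the
# backing files when tearing the environment down.
# sudo losetup -d /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
# sudo rm -f /var/lib/openstack-helm/ceph/ceph-osd-data-loopbackfile.img \
#            /var/lib/openstack-helm/ceph/ceph-osd-db-wal-loopbackfile.img \
#            /var/lib/openstack-helm/tenant-ceph/ceph-osd-data-loopbackfile.img \
#            /var/lib/openstack-helm/tenant-ceph/ceph-osd-db-wal-loopbackfile.img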