integ/config/puppet-modules/openstack/puppet-ceph-2.4.1/debian/patches/0009-fix-ceph-osd-disk-partition-for-nvme-disks.patch
Dan Voiculeasa bac46cc0e0 debian: Replace puppet-module-ceph-3.1.1 with 2.4.1
This work is part of the Debian integration effort and only affects
Debian.

We package the same version of ceph for both CentOS and Debian.
Since the puppet-ceph module on CentOS is known to work, use it on
Debian as well to reduce testing and possible issues.

Patches were copied from CentOS and not touched.
Drop one patch to metadata.json; we know we have some work to do in
that area to clear puppet warnings, but that will be done as part of
a generic puppet warning cleanup effort.

The sources need to be patched to work with debhelper-compat 13,
which we don't care about for now.

There are some integration issues, but testing so far revealed that
during a puppet replay of the AIO manifest the ceph data and ceph
journal partitions were created.

Story: 2009101
Task: 43431
Signed-off-by: Dan Voiculeasa <dan.voiculeasa@windriver.com>
Change-Id: I90adc736ea52e6c4f9946520156f53e572c224cc
2022-03-11 11:38:20 +02:00
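
Background (not part of the original patch): the kernel names partitions on
NVMe devices with a "p" separator (/dev/nvme0n1 -> /dev/nvme0n1p1), while
SCSI/SATA disks use a bare number (/dev/sda -> /dev/sda1). Each hunk below
repeats the same guard to build the partition path; a minimal standalone
sketch of that logic, with an illustrative device path:

    #!/bin/bash
    set -e
    # Derive the first data partition name from an OSD data device.
    # NVMe block devices take a "p" separator (nvme0n1p1); others do not (sda1).
    disk=$(readlink -f "$1")   # e.g. /dev/sda or /dev/nvme0n1 (illustrative)
    part=${disk}
    if [[ ${part} == *nvme* ]]; then
      part=${part}p1
    else
      part=${part}1
    fi
    echo "data partition: ${part}"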


From b0dd34d2d580c817f9ef6eb62927ba63bebe73c3 Mon Sep 17 00:00:00 2001
From: Daniel Badea <daniel.badea@windriver.com>
Date: Thu, 25 Apr 2019 15:37:53 +0000
Subject: [PATCH] fix ceph osd disk partition for nvme disks
---
manifests/osd.pp | 38 +++++++++++++++++++++++++++++++-------
1 file changed, 31 insertions(+), 7 deletions(-)
diff --git a/manifests/osd.pp b/manifests/osd.pp
index c51a445..5bd30c5 100644
--- a/manifests/osd.pp
+++ b/manifests/osd.pp
@@ -138,10 +138,17 @@ test -z $(ceph-disk list $(readlink -f ${data}) | egrep -o '[0-9a-f]{8}-([0-9a-f
command => "/bin/true # comment to satisfy puppet syntax requirements
set -ex
-ceph-disk --verbose --log-stdout prepare --filestore ${cluster_uuid_option} ${uuid_option} ${osdid_option} --fs-type xfs --zap-disk $(readlink -f ${data}) $(readlink -f ${journal})
+disk=$(readlink -f ${data})
+ceph-disk --verbose --log-stdout prepare --filestore ${cluster_uuid_option} ${uuid_option} ${osdid_option} --fs-type xfs --zap-disk \${disk} $(readlink -f ${journal})
mkdir -p /var/lib/ceph/osd/ceph-${osdid}
ceph auth del osd.${osdid} || true
-mount $(readlink -f ${data})1 /var/lib/ceph/osd/ceph-${osdid}
+part=\${disk}
+if [[ \$part == *nvme* ]]; then
+ part=\${part}p1
+else
+ part=\${part}1
+fi
+mount $(readlink -f \${part}) /var/lib/ceph/osd/ceph-${osdid}
ceph-osd --id ${osdid} --mkfs --mkkey --mkjournal
ceph auth add osd.${osdid} osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-${osdid}/keyring
umount /var/lib/ceph/osd/ceph-${osdid}
@@ -183,12 +190,17 @@ if ! test -b \$disk ; then
chown -h ceph:ceph \$disk
fi
fi
-# activate happens via udev when using the entire device
+part=\${disk}
+if [[ \${part} == *nvme* ]]; then
+ part=\${part}p1
+else
+ part=\${part}1
+fi
if ! test -b \$disk || ! test -b \${disk}1 || ! test -b \${disk}p1 ; then
- ceph-disk activate \${disk}1 || true
+ ceph-disk activate \${part} || true
fi
if test -f ${udev_rules_file}.disabled && ( test -b \${disk}1 || test -b \${disk}p1 ); then
- ceph-disk activate \${disk}1 || true
+ ceph-disk activate \${part} || true
fi
",
unless => "/bin/true # comment to satisfy puppet syntax requirements
@@ -206,8 +218,14 @@ ls -ld /var/lib/ceph/osd/${cluster_name}-* | grep \" $(readlink -f ${data})\$\"
command => "/bin/true # comment to satisfy puppet syntax requirements
set -ex
disk=$(readlink -f ${data})
+part=\${disk}
+if [[ \${part} == *nvme* ]]; then
+ part=\${part}p1
+else
+ part=\${part}1
+fi
if [ -z \"\$id\" ] ; then
- id=$(ceph-disk list | sed -nEe \"s:^ *\${disk}1? .*(ceph data|mounted on).*osd\\.([0-9]+).*:\\2:p\")
+ id=$(ceph-disk list | sed -nEe \"s:^ *\${part}? .*(ceph data|mounted on).*osd\\.([0-9]+).*:\\2:p\")
fi
if [ -z \"\$id\" ] ; then
id=$(ls -ld /var/lib/ceph/osd/${cluster_name}-* | sed -nEe \"s:.*/${cluster_name}-([0-9]+) *-> *\${disk}\$:\\1:p\" || true)
@@ -227,8 +245,14 @@ fi
unless => "/bin/true # comment to satisfy puppet syntax requirements
set -ex
disk=$(readlink -f ${data})
+part=\${disk}
+if [[ \$part == *nvme* ]]; then
+ part=\${part}p1
+else
+ part=\${part}1
+fi
if [ -z \"\$id\" ] ; then
- id=$(ceph-disk list | sed -nEe \"s:^ *\${disk}1? .*(ceph data|mounted on).*osd\\.([0-9]+).*:\\2:p\")
+ id=$(ceph-disk list | sed -nEe \"s:^ *\${part}? .*(ceph data|mounted on).*osd\\.([0-9]+).*:\\2:p\")
fi
if [ -z \"\$id\" ] ; then
id=$(ls -ld /var/lib/ceph/osd/${cluster_name}-* | sed -nEe \"s:.*/${cluster_name}-([0-9]+) *-> *\${disk}\$:\\1:p\" || true)
--
1.8.3.1
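
Note (illustrative, not from the patch): the id-detection hunks above match
the data partition against ceph-disk list output. Assuming an output line of
the form "/dev/nvme0n1p1 ceph data, active, cluster ceph, osd.2, journal
/dev/nvme0n1p2", the changed sed expression can be exercised standalone with
a mocked line:

    # Mocked ceph-disk list line; not captured from a real node.
    part=/dev/nvme0n1p1
    printf ' %s ceph data, active, cluster ceph, osd.2, journal /dev/nvme0n1p2\n' "${part}" |
      sed -nEe "s:^ *${part}? .*(ceph data|mounted on).*osd\.([0-9]+).*:\2:p"
    # prints: 2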