Ceph: A script to check object replication across hosts

This script creates a test object and verifies whether the object is
replicated across different hosts.

Change-Id: Ic5056c1a07dc5d5b6a5d6fc24e3d9a75fa46458f
parent 23730808d4
commit 26991ad182

ceph-mon/templates/bin/utils/_checkObjectReplication.py.tpl | 30 (new executable file)
@@ -0,0 +1,30 @@
#!/usr/bin/python2

import subprocess
import json
import sys
import collections

if len(sys.argv) == 1:
    print "Please provide a pool name to test, for example: checkObjectReplication.py <pool name>"
    sys.exit(1)
else:
    poolName = sys.argv[1]

# Map a throwaway test object to its placement group and acting OSD set.
cmdRep = 'ceph osd map ' + str(poolName) + ' testreplication -f json-pretty'
objectRep = subprocess.check_output(cmdRep, shell=True)
repOut = json.loads(objectRep)
osdNumbers = repOut['up']
print "Test object got replicated on these osds: " + str(osdNumbers)

# Resolve each OSD to the host it runs on via its CRUSH location.
osdHosts = []
for osd in osdNumbers:
    cmdFind = 'ceph osd find ' + str(osd)
    osdFind = subprocess.check_output(cmdFind, shell=True)
    osdHost = json.loads(osdFind)
    osdHostLocation = osdHost['crush_location']
    osdHosts.append(osdHostLocation['host'])

print "Test object got replicated on these hosts: " + str(osdHosts)

# Any host that appears more than once holds multiple copies of the same PG.
print "Hosts hosting multiple copies of a placement group are: " + str(
    [item for item, count in collections.Counter(osdHosts).items() if count > 1])
sys.exit(0)
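Since the template above targets the Python 2 interpreter, a hedged Python 3 sketch of the same check may be useful; the ceph CLI calls mirror the template but are not verified against a live cluster here, so treat this as an outline rather than a drop-in replacement.

```python
"""Python 3 sketch of the object-replication check (assumed port, untested
against a live cluster)."""
import collections
import json
import subprocess


def duplicated_hosts(hosts):
    """Hosts appearing more than once, i.e. holding multiple copies of a PG."""
    return sorted(h for h, n in collections.Counter(hosts).items() if n > 1)


def check_pool(pool_name):
    """Map a throwaway object to its acting OSD set, then OSDs to hosts."""
    rep = json.loads(subprocess.check_output(
        ['ceph', 'osd', 'map', pool_name, 'testreplication', '-f', 'json']))
    osds = rep['up']
    hosts = []
    for osd in osds:
        find = json.loads(subprocess.check_output(
            ['ceph', 'osd', 'find', str(osd)]))
        hosts.append(find['crush_location']['host'])
    print('Test object got replicated on these osds:', osds)
    print('Test object got replicated on these hosts:', hosts)
    print('Hosts holding multiple copies of a placement group:',
          duplicated_hosts(hosts))


# The pure logic is testable without a cluster:
print(duplicated_hosts(['mnode1', 'mnode2', 'mnode1']))  # → ['mnode1']
```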
|
@@ -59,5 +59,7 @@ data:
  utils-checkPGs.sh: |
{{ tuple "bin/utils/_checkPGs.sh.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
  utils-checkObjectReplication.py: |
{{ tuple "bin/utils/_checkObjectReplication.py.tpl" . | include "helm-toolkit.utils.template" | indent 4 }}
{{- end }}
@@ -182,6 +182,10 @@ spec:
          mountPath: /tmp/utils-checkPGs.sh
          subPath: utils-checkPGs.sh
          readOnly: true
        - name: ceph-mon-bin
          mountPath: /tmp/checkObjectReplication.py
          subPath: utils-checkObjectReplication.py
          readOnly: true
        - name: ceph-mon-etc
          mountPath: /etc/ceph/ceph.conf
          subPath: ceph.conf
@@ -7,3 +7,4 @@ Ceph Resiliency

   README
   failure-domain
   validate-object-replication
@@ -0,0 +1,65 @@

===========================================
Ceph - Test object replication across hosts
===========================================

This document captures the steps to validate whether object replication is
happening across hosts.

Setup:
======

- Follow the OSH single node or multinode guide to bring up the OSH environment.

Step 1: Setup the OSH environment and check ceph cluster health
===============================================================

.. note::
  Make sure a healthy ceph cluster is running.
``Ceph status:``

.. code-block:: console

  ubuntu@mnode1:/opt/openstack-helm$ kubectl exec -n ceph ceph-mon-5qn68 -- ceph -s
    cluster:
      id:     54d9af7e-da6d-4980-9075-96bb145db65c
      health: HEALTH_OK

    services:
      mon: 3 daemons, quorum mnode1,mnode2,mnode3
      mgr: mnode2(active), standbys: mnode3
      mds: cephfs-1/1/1 up {0=mds-ceph-mds-6f66956547-c25cx=up:active}, 1 up:standby
      osd: 3 osds: 3 up, 3 in
      rgw: 2 daemons active

    data:
      pools:   19 pools, 101 pgs
      objects: 354 objects, 260 MB
      usage:   77807 MB used, 70106 MB / 144 GB avail
      pgs:     101 active+clean

    io:
      client: 48769 B/s wr, 0 op/s rd, 12 op/s wr

- The ceph cluster is in the HEALTH_OK state with 3 MONs and 3 OSDs.
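Rather than eyeballing the ``ceph -s`` output, the health check can be scripted by parsing ``ceph health -f json``. A minimal sketch follows; the JSON sample is a hypothetical, abridged payload, not captured from the cluster above.

```python
import json

# Hypothetical, abridged output in the shape of `ceph health -f json`
# (on a live cluster: kubectl exec -n ceph <mon-pod> -- ceph health -f json).
sample = '{"status": "HEALTH_OK", "checks": {}}'

health = json.loads(sample)
assert health['status'] == 'HEALTH_OK', 'cluster not healthy: %s' % health['status']
print('cluster healthy')
```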
Step 2: Run the validation script
=================================

.. note::
  Exec into a ceph-mon pod and run the validation script with the pool name as
  its first argument; in the example below, ``rbd`` is the pool name.
.. code-block:: console

  ubuntu@mnode1:/opt/openstack-helm$ /tmp/checkObjectReplication.py rbd
  Test object got replicated on these osds: [1, 0, 2]
  Test object got replicated on these hosts: [u'mnode1', u'mnode2', u'mnode3']
  Hosts hosting multiple copies of a placement groups are:[]

- If any copies land on the same host, those hosts appear in the last
  line of the script output.
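The last line of the script output comes from resolving each OSD's CRUSH location to a host and counting repeats. A standalone sketch of that parsing step, using abridged, hypothetical JSON payloads in the shape that ``ceph osd find <id>`` returns:

```python
import collections
import json

# Abridged, hypothetical payloads shaped like `ceph osd find <id>` output.
osd_find_outputs = [
    '{"osd": 0, "crush_location": {"host": "mnode2", "root": "default"}}',
    '{"osd": 1, "crush_location": {"host": "mnode1", "root": "default"}}',
    '{"osd": 2, "crush_location": {"host": "mnode1", "root": "default"}}',
]

# Extract the host behind each OSD, as the script does.
hosts = [json.loads(out)['crush_location']['host'] for out in osd_find_outputs]
print(hosts)  # → ['mnode2', 'mnode1', 'mnode1']

# Hosts appearing more than once hold multiple copies of the placement group.
dupes = [h for h, n in collections.Counter(hosts).items() if n > 1]
print(dupes)  # → ['mnode1']
```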