d29efccdbb
Add disk zap to OSD init forced repair case

There is a case with bluestore OSDs where the OSD init process detects that an OSD has already been initialized for the deployed Ceph cluster, but the cluster osdmap does not have an entry for it. This change zaps and reinitializes the disk in that case when OSD_FORCE_REPAIR is set to 1. It also clarifies the log message emitted in that case when OSD_FORCE_REPAIR is 0 to state that a manual repair is necessary.

Change-Id: I2f00fa655bf5359dcc80c36d6c2ce33e3ce33166
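For context, the behavior described above amounts to a single decision branch in the ceph-volume osd-init logic. The following is a minimal, hedged sketch of that decision, not the chart's actual script; OSD_DEVICE, disk_zap, and osd_missing_from_osdmap are assumed or stubbed names used only for illustration.

#!/bin/bash
# Illustrative sketch of the forced-repair decision described in the commit
# message. Stub names below are assumptions, not the chart's real code.

OSD_DEVICE="${OSD_DEVICE:-/dev/sdb}"       # device the init script is handling
OSD_FORCE_REPAIR="${OSD_FORCE_REPAIR:-0}"  # 1 = destructive repair allowed

disk_zap() {
  # Stub: the real chart helper wipes partitions, logical volumes, and
  # bluestore metadata from the device.
  echo "zapping ${1} (stub)"
}

osd_missing_from_osdmap() {
  # Stub: the real script compares the on-disk OSD identity against the
  # cluster osdmap. Always "true" here for illustration.
  return 0
}

if osd_missing_from_osdmap; then
  if [ "${OSD_FORCE_REPAIR}" -eq 1 ]; then
    echo "OSD on ${OSD_DEVICE} is initialized but missing from the osdmap;" \
         "zapping and reinitializing"
    disk_zap "${OSD_DEVICE}"
    # ...normal OSD preparation would continue here after the zap...
  else
    echo "OSD on ${OSD_DEVICE} is initialized but missing from the osdmap;" \
         "manual repair is necessary (or set OSD_FORCE_REPAIR=1)"
  fi
fi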
---
ceph-osd:
  - 0.1.0 Initial Chart
  - 0.1.1 Change helm-toolkit dependency to >= 0.1.0
  - 0.1.2 wait for only osd pods from post apply job
  - 0.1.3 Search for complete logical volume name for OSD data volumes
  - 0.1.4 Don't try to prepare OSD disks that are already deployed
  - 0.1.5 Fix the sync issue between osds when using shared disk for metadata
  - 0.1.6 Logic improvement for used osd disk detection
  - 0.1.7 Synchronization audit for the ceph-volume osd-init script
  - 0.1.8 Update post apply job
  - 0.1.9 Check inactive PGs multiple times
  - 0.1.10 Fix typo in check inactive PGs logic
  - 0.1.11 Fix post-apply job failure related to fault tolerance
  - 0.1.12 Add a check for misplaced objects to the post-apply job
  - 0.1.13 Remove default OSD configuration
  - 0.1.14 Alias synchronized commands and fix descriptor leak
  - 0.1.15 Correct naming convention for logical volumes in disk_zap()
  - 0.1.16 dmsetup remove logical devices using correct device names
  - 0.1.17 Fix a bug with DB orphan volume removal
  - 0.1.18 Uplift from Nautilus to Octopus release
  - 0.1.19 Update rbac api version
  - 0.1.20 Update directory-based OSD deployment for image changes
  - 0.1.21 Refactor Ceph OSD Init Scripts - First PS
  - 0.1.22 Refactor Ceph OSD Init Scripts - Second PS
  - 0.1.23 Use full image ref for docker official images
  - 0.1.24 Ceph OSD Init Improvements
  - 0.1.25 Export crash dumps when Ceph daemons crash
  - 0.1.26 Mount /var/crash inside ceph-osd pods
  - 0.1.27 Limit Ceph OSD Container Security Contexts
  - 0.1.28 Change var crash mount propagation to HostToContainer
  - 0.1.29 Fix Ceph checkDNS script
  - 0.1.30 Ceph OSD log-runner container should run as ceph user
  - 0.1.31 Helm 3 - Fix Job labels
  - 0.1.32 Update htk requirements
  - 0.1.33 Update log-runner container for MAC
  - 0.1.34 Remove wait for misplaced objects during OSD restarts
  - 0.1.35 Consolidate mon_endpoints discovery
  - 0.1.36 Add OSD device location pre-check
  - 0.1.37 Add a disruptive OSD restart to the post-apply job
  - 0.1.38 Skip pod wait in post-apply job when disruptive
  - 0.1.39 Allow for unconditional OSD restart
  - 0.1.40 Remove udev interactions from osd-init
  - 0.1.41 Remove ceph-mon dependency in ceph-osd liveness probe
  - 0.1.42 Added OCI registry authentication
  - 0.1.43 Update all Ceph images to Focal
  - 0.1.44 Update Ceph to 17.2.6
  - 0.1.45 Extend the ceph-osd post-apply job PG wait
  - 0.1.46 Use Helm toolkit functions for Ceph probes
  - 0.1.47 Add disk zap to OSD init forced repair case
...