a4cf508a38
When a DB gets deleted, we clear out its metadata. That included sysmeta such as the entry that tells shards the name of their root DB. Previously, this would cause deleted shards to pop back to life as roots that claimed to still have objects sitting in whatever container they shrank into.

Now, use the metadata if it's available, but when it's not, go by the state of the DB's "own shard range" -- deleted shards should be marked deleted, while roots never are. This allows us to actually clean up the database files. You can test this by doing something like:

* Run `nosetests test/probe/test_sharder.py:TestContainerSharding.test_shrinking`.
* Run `find /srv/*/*/containers -name '*.db'` to see how many databases are left on disk. There should be 15: 3 for the root container, 6 for the two shards on the first pass, and another 6 for the two shards on the second pass.
* Edit the container configs to decrease reclaim_age -- even 1 should be fine.
* Run `swift-init main start` to restart the servers.
* Run `swift-init container-sharder once` to have the shards get marked deleted.
* Run `swift-init container-updater once` to ensure all containers have reported.
* Run `swift-init container-replicator once` to clean up the containers.
* Run `find /srv/*/*/containers -name '*.db'` again to verify no containers remain on disk.

Change-Id: Icba98f1c9e17e8ade3f0e1b9a23360cf5ab8c86b
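The fallback described above can be sketched as a small decision function. This is a minimal illustration, not Swift's actual implementation: the function name, the sysmeta key, and the plain-dict shard range are all hypothetical stand-ins.

```python
# Hypothetical sketch of the root-vs-shard decision described in the
# commit message. The sysmeta key and data shapes are illustrative
# assumptions, not Swift's real API.
def is_root_container(metadata, own_shard_range, container_path):
    """Decide whether a container DB is a root or a shard.

    Prefer the sysmeta that records the root DB's name; when a deleted
    DB has had its metadata cleared, fall back to the state of its own
    shard range: deleted shards are marked deleted, roots never are.
    """
    root_path = metadata.get('X-Container-Sysmeta-Shard-Root')
    if root_path:
        # Metadata survived: a container is the root iff it names itself.
        return root_path == container_path
    # Metadata is gone (e.g. the DB was deleted). A root's own shard
    # range is never marked deleted, so a deleted own shard range means
    # this DB was a shard and should stay dead.
    return not own_shard_range.get('deleted', False)
```

With metadata present, the path comparison decides; with metadata cleared, only the own shard range's `deleted` flag is consulted, which is what stops deleted shards from reappearing as roots.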
Files in this directory:

* __init__.py
* test_auditor.py
* test_backend.py
* test_reconciler.py
* test_replicator.py
* test_server.py
* test_sharder.py
* test_sync_store.py
* test_sync.py
* test_updater.py