404ac092d1
As I understand it, db replication starts with a preflight sync request to the remote container server, whose response includes the last synced row_id it has on file for the sending node's database id. If the difference from the returned sync point is more than 50% of the local sending db's rows, it falls back to sending the whole db over rsync and letting the remote end merge items locally - but generally there are just a few rows missing, and they're shipped over the wire as json and stuffed into some rather normal looking merge_items calls.

The one thing that's a bit different about these remote merge_items calls (compared to your average run-of-the-mill eat-a-bunch-of-entries-out-of-a-.pending-file) is the source kwarg. When this optional kwarg comes into merge_items it's the remote sending db's uuid, and after we eat all the rows it sent us we update our local incoming_sync table for that uuid so that next time, when it makes its preflight sync request, we can tell it where it left off.

Normally the sending db will push out its rows from the returned sync_point in 1000-item diffs, up to 10 batches total (the per_diff and max_diffs options) - 10K rows. If that goes well, everything is in sync up to at least the point where it started, and the sending db will *also* ship over *its* incoming_sync rows to merge_syncs on the remote end. Since the sending db is in sync with those other dbs up to those points, so is the remote db now, by way of the transitive property. (Also note, through some weird artifact that I'm not entirely convinced isn't an unrelated and possibly benign bug, the incoming_sync table on the sending db will often also happen to include its own uuid - maybe it got pushed back to it from another node?)

Anyway, that seemed to work well enough until a sending db got diff capped (i.e. sent its 10K rows and wasn't finished). When this happens, the final merge_syncs call never gets sent, because the remote end is definitely *not* up to date with the other databases that the sending db is - it's not even up to date with the sending db yet! But the hope is certainly that on the next pass it'll be able to finish sending the remaining items. And since the remote end is the one that decides what the last successfully synced row from this local sending db was, it's super important that the incoming_sync table gets updated in merge_items when that source kwarg is there.

I observed that this simple and straightforward process wasn't working well in one case - which is weird, considering it didn't have much in the way of tests. After I had a test and started looking into it, it seemed the source kwarg handling got over-indented a bit in the bulk insert merge_items refactor. I think this is correct - maybe we could send someone up to the mountain temple to seek out gholt?

Change-Id: I4137388a97925814748ecc36b3ab5f1ac3309659
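The shape of the source-kwarg handling described above can be sketched roughly like this. This is a hedged, minimal toy (the table schemas, dict item format, and function signature here are illustrative assumptions, not Swift's actual implementation) - it just shows why the incoming_sync update must sit at loop level, not inside it:

```python
import sqlite3


def merge_items(conn, items, source=None):
    """Toy sketch of replication merge handling (schemas/names assumed).

    Insert the shipped rows, then - when ``source`` (the sending db's
    uuid) is present - record the highest row id consumed so the
    sender's next preflight sync request can resume from there.
    """
    max_row = -1
    for item in items:
        conn.execute(
            'INSERT OR REPLACE INTO object (ROWID, name) VALUES (?, ?)',
            (item['row_id'], item['name']))
        max_row = max(max_row, item['row_id'])
    # The bug described above amounts to this block being over-indented
    # into the loop/conditional during a refactor, so the sync point was
    # not reliably advanced for the sender.
    if source is not None and max_row >= 0:
        conn.execute(
            'INSERT OR REPLACE INTO incoming_sync (remote_id, sync_point) '
            'VALUES (?, ?)', (source, max_row))
    conn.commit()


conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE object (name TEXT)')
conn.execute('CREATE TABLE incoming_sync '
             '(remote_id TEXT PRIMARY KEY, sync_point INTEGER)')
merge_items(conn, [{'row_id': 1, 'name': 'a'},
                   {'row_id': 2, 'name': 'b'}], source='remote-uuid')
sync_point = conn.execute(
    'SELECT sync_point FROM incoming_sync WHERE remote_id = ?',
    ('remote-uuid',)).fetchone()[0]
```

After the call, sync_point is 2, so the remote end can tell this sender to resume from row 2 on its next preflight request.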
Swift
A distributed object storage system designed to scale from a single machine to thousands of servers. Swift is optimized for multi-tenancy and high concurrency. Swift is ideal for backups, web and mobile content, and any other unstructured data that can grow without bound.
Swift provides a simple, REST-based API fully documented at http://docs.openstack.org/.
Swift was originally developed as the basis for Rackspace's Cloud Files and was open-sourced in 2010 as part of the OpenStack project. It has since grown to include contributions from many companies and has spawned a thriving ecosystem of 3rd party tools. Swift's contributors are listed in the AUTHORS file.
Docs
To build documentation, install sphinx (`pip install sphinx`), run `python setup.py build_sphinx`, and then browse to /doc/build/html/index.html.
These docs are auto-generated after every commit and available online at
http://docs.openstack.org/developer/swift/.
For Developers
The best place to get started is the "SAIO - Swift All In One". This document will walk you through setting up a development cluster of Swift in a VM. The SAIO environment is ideal for running small-scale tests against swift and trying out new features and bug fixes.
You can run unit tests with `.unittests` and functional tests with `.functests`.
If you would like to start contributing, check out these notes to help you get started.
Code Organization
- bin/: Executable scripts that are the processes run by the deployer
- doc/: Documentation
- etc/: Sample config files
- swift/: Core code
- account/: account server
- common/: code shared by different modules
- middleware/: "standard", officially-supported middleware
- ring/: code implementing Swift's ring
- container/: container server
- obj/: object server
- proxy/: proxy server
- test/: Unit and functional tests
Data Flow
Swift is a WSGI application and uses eventlet's WSGI server. After the processes are running, the entry point for new requests is the `Application` class in `swift/proxy/server.py`. From there, a controller is chosen, and the request is processed. The proxy may choose to forward the request to a back-end server. For example, the entry point for requests to the object server is the `ObjectController` class in `swift/obj/server.py`.
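The dispatch shape described above can be sketched as a toy WSGI app. The routing and controller names here are illustrative only, not Swift's actual proxy code:

```python
from wsgiref.util import setup_testing_defaults


class Application:
    """Toy WSGI app mirroring the proxy dispatch shape: pick a
    controller from the request, let it process the request.
    (Illustrative only; Swift's real Application lives in
    swift/proxy/server.py.)"""

    def __call__(self, env, start_response):
        controller = self.get_controller(env.get('PATH_INFO', '/'))
        body = controller(env)
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [body.encode()]

    def get_controller(self, path):
        # Hypothetical routing; a real proxy parses account/container/
        # object segments and may forward to a back-end server.
        if path.startswith('/obj'):
            return lambda env: 'object controller'
        return lambda env: 'default controller'


env = {}
setup_testing_defaults(env)  # fill in a minimal valid WSGI environ
env['PATH_INFO'] = '/obj/o1'
statuses = []
response = Application()(env, lambda status, headers: statuses.append(status))
```

Here `response` is `[b'object controller']`, showing the request was routed to the object-style controller.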
For Deployers
Deployer docs are also available at http://docs.openstack.org/developer/swift/. A good starting point is http://docs.openstack.org/developer/swift/deployment_guide.html.
You can run functional tests against a swift cluster with `.functests`. These functional tests require `/etc/swift/test.conf` to run. A sample config file can be found in this source tree in `test/sample.conf`.
For Client Apps
For client applications, official Python language bindings are provided at http://github.com/openstack/python-swiftclient.
Complete API documentation is available at http://docs.openstack.org/api/openstack-object-storage/1.0/content/
For more information come hang out in #openstack-swift on freenode.
Thanks,
The Swift Development Team