The ring builder's placement algorithm has two goals: first, to ensure that each partition has its replicas as far apart as possible, and second, to ensure that partitions are fairly distributed according to device weight. In many cases it succeeds at both, but sometimes those goals conflict. When that happens, operators may want to relax the rules a little bit in order to reach a compromise solution.

Imagine a cluster of 3 nodes (A, B, C), each with 20 identical disks, and using 3 replicas. The ring builder will place 1 replica of each partition on each node, as you'd expect. Now imagine that one disk fails in node C and is removed from the ring. The operator would probably be okay with remaining at 1 replica per node (unless their disks are really close to full), but to accomplish that, they have to multiply the weights of the other disks in node C by 20/19 to make C's total weight stay the same. Otherwise, the ring builder will move partitions around such that some partitions have replicas only on nodes A and B. If 14 more disks failed in node C, the operator would probably be okay with some data not living on C, as a 4x increase in storage requirements is likely to fill disks.

This commit introduces the notion of "overload": how much extra partition space can be placed on each disk *over* what the weight dictates. For example, an overload of 0.1 means that a device can take up to 10% more partitions than its weight would imply in order to make the replica dispersion better. Overload only has an effect when replica dispersion and device weights come into conflict.

The overload is a single floating-point value for the builder file. Existing builders get an overload of 0.0, so there will be no behavior change on existing rings.

In the example above, imagine the operator sets an overload of 0.112 on their rings. If node C loses a drive, each other drive can take on up to 11.2% more data. Splitting the dead drive's partitions among the remaining 19 results in a 5.26% increase, so everything that was on node C stays on node C. If another disk dies, then we're up to an 11.1% increase, and so everything still stays on node C. If a third disk dies, then we've reached the limits of the overload, so some partitions will begin to reside solely on nodes A and B.

DocImpact

Change-Id: I3593a1defcd63b6ed8eae9c1c66b9d3428b33864
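The arithmetic in the example above can be sketched as a short calculation. This is not the ring builder's actual code; the function names here are hypothetical, and the sketch only computes the fractional extra load each surviving disk absorbs and compares it to the configured overload.

```python
# Sketch of the overload arithmetic from the example above.
# Not Swift's ring-builder code; function names are hypothetical.

def load_increase(total_disks, failed_disks):
    """Fractional extra load on each surviving disk when the failed
    disks' partitions are spread evenly across the survivors."""
    survivors = total_disks - failed_disks
    return total_disks / survivors - 1.0

def stays_dispersed(total_disks, failed_disks, overload):
    """True if the survivors can absorb the extra partitions without
    exceeding the configured overload, keeping replicas on this node."""
    return load_increase(total_disks, failed_disks) <= overload

OVERLOAD = 0.112  # the operator's setting from the example

for failed in (1, 2, 3):
    pct = load_increase(20, failed) * 100
    print("%d disk(s) failed: +%.1f%% per survivor, stays on node C: %s"
          % (failed, pct, stays_dispersed(20, failed, OVERLOAD)))
```

Running this reproduces the numbers in the example: one failure costs 5.3% per survivor, two failures cost 11.1% (just under the 11.2% overload), and a third failure exceeds the overload, so dispersion is no longer preserved.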
Swift
A distributed object storage system designed to scale from a single machine to thousands of servers. Swift is optimized for multi-tenancy and high concurrency. Swift is ideal for backups, web and mobile content, and any other unstructured data that can grow without bound.
Swift provides a simple, REST-based API fully documented at http://docs.openstack.org/.
Swift was originally developed as the basis for Rackspace's Cloud Files and was open-sourced in 2010 as part of the OpenStack project. It has since grown to include contributions from many companies and has spawned a thriving ecosystem of 3rd party tools. Swift's contributors are listed in the AUTHORS file.
Docs
To build documentation, install sphinx (pip install sphinx), run python setup.py build_sphinx, and then browse to doc/build/html/index.html.
These docs are auto-generated after every commit and available online at
http://docs.openstack.org/developer/swift/.
For Developers
The best place to get started is the "SAIO - Swift All In One". This document will walk you through setting up a development cluster of Swift in a VM. The SAIO environment is ideal for running small-scale tests against swift and trying out new features and bug fixes.
You can run unit tests with .unittests and functional tests with .functests.
If you would like to start contributing, check out these notes to help you get started.
Code Organization
- bin/: Executable scripts that are the processes run by the deployer
- doc/: Documentation
- etc/: Sample config files
- swift/: Core code
  - account/: account server
  - common/: code shared by different modules
    - middleware/: "standard", officially-supported middleware
    - ring/: code implementing Swift's ring
  - container/: container server
  - obj/: object server
  - proxy/: proxy server
- test/: Unit and functional tests
Data Flow
Swift is a WSGI application and uses eventlet's WSGI server. After the processes are running, the entry point for new requests is the Application class in swift/proxy/server.py. From there, a controller is chosen, and the request is processed. The proxy may choose to forward the request to a backend server. For example, the entry point for requests to the object server is the ObjectController class in swift/obj/server.py.
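The entry-point pattern described above can be illustrated with a minimal, self-contained WSGI sketch. This is not Swift's actual Application class (which lives in swift/proxy/server.py and runs under eventlet); it is a generic WSGI callable driven directly with the stdlib's wsgiref test helpers, and the dispatch logic shown is purely illustrative.

```python
# Minimal sketch of the WSGI entry-point pattern, illustrative only --
# Swift's real proxy Application runs under eventlet's WSGI server.
from wsgiref.util import setup_testing_defaults

class Application:
    """Handle each request based on its path, the way Swift's proxy
    picks an account, container, or object controller."""

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '/')
        # A real proxy would parse /account/container/object here and
        # dispatch to the matching controller.
        body = ('handled %s' % path).encode('utf-8')
        start_response('200 OK', [('Content-Type', 'text/plain'),
                                  ('Content-Length', str(len(body)))])
        return [body]

if __name__ == '__main__':
    # Drive the app directly, the way a WSGI server would.
    environ = {}
    setup_testing_defaults(environ)
    environ['PATH_INFO'] = '/v1/AUTH_test/photos/cat.jpg'
    statuses = []
    result = Application()(environ, lambda status, headers: statuses.append(status))
    print(statuses[0], b''.join(result).decode('utf-8'))
```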
For Deployers
Deployer docs are also available at http://docs.openstack.org/developer/swift/. A good starting point is http://docs.openstack.org/developer/swift/deployment_guide.html.
You can run functional tests against a swift cluster with .functests. These functional tests require /etc/swift/test.conf to run. A sample config file can be found in this source tree in test/sample.conf.
For Client Apps
For client applications, official Python language bindings are provided at http://github.com/openstack/python-swiftclient.
Complete API documentation is available at http://docs.openstack.org/api/openstack-object-storage/1.0/content/.
For more information come hang out in #openstack-swift on freenode.
Thanks,
The Swift Development Team