b7ea6c7150
SwiftService uploads large objects using a thread pool (the pool defaults to five workers, and we don't currently configure it larger or smaller). Instead of relying on that, spin up upload threads ourselves so that we can drop the swiftclient dependency.

A few notes:

- We're using the new async feature of the Adapter wrapper, which rate limits at the _start_ of a REST call. This seems sane as far as we can tell, but it might not be what someone expects.
- We skip the thread-pool uploader for objects smaller than the default maximum segment size.
- In splitting the file into segments, we'd like to avoid reading all of the segments into RAM when we don't need to, so there is a file-like wrapper class that can be passed to requests and implements a read-only view of a portion of the file. In a pathological case this could be slower due to disk seeking on the read side. However, let's revisit buffering when it actually becomes a problem; the REST upload will almost certainly be the bottleneck long before the overhead of interleaved disk seeks is.

Change-Id: Id9258980d2e0782e4e3c0ac26c7f11dc4db80354
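The segmented-upload approach described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual change: `FileSegment`, `iter_segments`, `upload_segments`, and `put_segment` are invented names, and the real code wires the PUT through a keystoneauth Adapter rather than the stand-in callable used here. The key ideas it shows are (a) a file-like read view over a byte range, so requests can stream each segment without the whole file being read into RAM, and (b) per-segment upload threads replacing SwiftService's internal pool.

```python
import os
from concurrent.futures import ThreadPoolExecutor


class FileSegment:
    """A read-only, file-like view onto a byte range of a larger file.

    Each instance opens its own handle, so concurrent segment uploads
    never contend over a shared file position. An object exposing
    read() like this can be handed to requests as a request body.
    """

    def __init__(self, path, offset, length):
        self._file = open(path, 'rb')
        self._file.seek(offset)
        self._remaining = length

    def read(self, size=-1):
        # Clamp reads to the segment boundary so we never spill into
        # the next segment's bytes.
        if self._remaining <= 0:
            return b''
        if size < 0 or size > self._remaining:
            size = self._remaining
        data = self._file.read(size)
        self._remaining -= len(data)
        return data

    def close(self):
        self._file.close()


def iter_segments(path, segment_size):
    """Yield (index, FileSegment) pairs covering the whole file."""
    total = os.path.getsize(path)
    for index, offset in enumerate(range(0, total, segment_size)):
        yield index, FileSegment(path, offset,
                                 min(segment_size, total - offset))


def upload_segments(path, segment_size, put_segment, max_workers=5):
    """Upload each segment on its own thread.

    put_segment is whatever actually PUTs one segment (in the real
    change, a call through the Adapter wrapper); it takes
    (index, segment) and may return a result to collect.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(put_segment, index, segment)
                   for index, segment in iter_segments(path, segment_size)]
        return [future.result() for future in futures]
```

Because each `FileSegment` seeks independently, interleaved reads from many threads can cause extra disk seeking, which is the pathological case the note above accepts for now.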
25 lines
445 B
Plaintext
pbr>=0.11,<2.0

munch
decorator
jmespath
jsonpatch
ipaddress
os-client-config>=1.22.0
requestsexceptions>=1.1.1
six

keystoneauth1>=2.11.0
netifaces>=0.10.4
python-novaclient>=2.21.0,!=2.27.0,!=2.32.0
python-keystoneclient>=0.11.0
python-cinderclient>=1.3.1
python-neutronclient>=2.3.10
python-troveclient>=1.2.0
python-ironicclient>=0.10.0
python-heatclient>=1.0.0
python-designateclient>=2.1.0
python-magnumclient>=2.1.0

dogpile.cache>=0.5.3