2f1111a436
During a PUT of an object, the proxy instantiates one Putter per object-server that will store data (either the full object or a fragment, depending on the storage policy). Each Putter owns a Queue that buffers data chunks before they are written to the socket connected to the object-server. The chunks are moved from the queue to the socket by a greenthread; there is one greenthread per Putter.

If the client uploads faster than the object-servers can handle, the Queue could grow and consume a lot of memory. To avoid that, the queue is bounded (default: 10). A bounded queue also ensures that all object-servers receive data at the same rate: if one queue is full, the greenthread reading from the client socket blocks when trying to write to it, so the overall rate is that of the slowest object-server.

However, every operating system already manages socket buffers for incoming and outgoing data. For the send buffer, a call to write() blocks if the buffer is full and returns immediately otherwise. This behaves much like the Putter's Queue, except that the buffer size is dynamic, so it adapts itself to the speed of the receiver. Managing a queue on top of the socket send buffer is therefore duplicate queueing/buffering that provides no benefit and is, as shown by profiling and benchmarks, very costly in CPU.

This patch removes the queueing mechanism. Instead, the greenthread reading data from the client writes directly to the socket. If an object-server becomes slow, its send buffer fills up and blocks the reader greenthread. Benchmarks show a CPU consumption reduction of more than 30%, while the observed upload rate increases by about 45%.
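For illustration only, here is a minimal Python sketch of the two backpressure strategies described above. It uses plain threads and a hypothetical slow_send() stub in place of Swift's eventlet greenthreads and the real object-server socket; none of these names come from the Swift code itself.

    import queue
    import threading
    import time

    CHUNK = b"x" * 65536


    def slow_send(chunk):
        """Stand-in for sendall() on a socket to a slow object-server."""
        time.sleep(0.01)


    def upload_via_queue(chunks, maxsize=10):
        """Old scheme: bounded per-Putter queue drained by a feeder thread."""
        q = queue.Queue(maxsize=maxsize)

        def feeder():
            while True:
                chunk = q.get()
                if chunk is None:        # sentinel marking end of upload
                    return
                slow_send(chunk)

        t = threading.Thread(target=feeder)
        t.start()
        for chunk in chunks:
            q.put(chunk)                 # blocks when the queue is full,
                                         # throttling the client reader
        q.put(None)
        t.join()


    def upload_direct(chunks):
        """New scheme: write straight to the socket; the kernel send buffer
        provides the same backpressure, with no extra queue or feeder."""
        for chunk in chunks:
            slow_send(chunk)             # blocks once the send buffer is full


    if __name__ == "__main__":
        data = [CHUNK] * 20
        upload_via_queue(data)
        upload_direct(data)

Change-Id: Icf8f800cb25096f93d3faa1e6ec091eb29500758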