Set default wsgi workers to cpu_count
Change the default value of wsgi workers from 1 to auto. The new default for workers in the proxy, container, account, and object wsgi servers spawns as many workers as the host has CPU cores. This will not be ideal for some configurations, but it is much more likely to produce a successful out-of-the-box deployment.

Inspect the number of CPU cores using python's multiprocessing when available. Multiprocessing was added in python 2.6, but it is possible to build python without it by accident. The cpu_count method is fairly system agnostic, but it is documented to raise NotImplementedError and can sometimes return 0.

Add a new utility method, 'config_auto_int_value', to pull an integer out of the config which has a dynamic default.

* drive-by s/container/proxy/ in proxy-server.conf.5
* fix misplaced max_clients in *-server.conf-sample
* update doc/development_saio to force workers = 1

DocImpact
Change-Id: Ifa563d22952c902ab8cbe1d339ba385413c54e95
parent 72faf7b86d
commit de3acec4bf
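
For orientation before the per-file hunks, here is a condensed, hedged sketch of the resolution logic this change introduces. It mirrors the code added to swift/common/utils.py and swift/common/wsgi.py below, but is not a drop-in copy of either module: `str` stands in for the original `basestring` check, and `conf` is a stand-in dict for the parsed [DEFAULT] section.

    # Sketch only: condensed from the diff below.
    try:
        import multiprocessing
        # cpu_count() may raise NotImplementedError or return 0 on some platforms.
        CPU_COUNT = multiprocessing.cpu_count() or 1
    except (ImportError, NotImplementedError):
        CPU_COUNT = 1

    def config_auto_int_value(value, default):
        """Return `default` for None or 'auto'; otherwise the value as an int."""
        if value is None or (isinstance(value, str) and value.lower() == 'auto'):
            return default
        try:
            return int(value)
        except (TypeError, ValueError):
            raise ValueError('Config option must be an integer or the '
                             'string "auto", not "%s".' % value)

    conf = {'workers': 'auto'}  # unset or "auto" -> CPU_COUNT; 0 means "do not fork"
    worker_count = config_auto_int_value(conf.get('workers'), CPU_COUNT)
    print(worker_count)  # number of pre-forked workers this host would run
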

doc/manpages/account-server.conf.5

@@ -60,7 +60,16 @@ TCP port the account server should bind to. The default is 6002.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
Number of account server workers to fork. The default is 1.
The number of pre-forked processes that will accept connections. Zero means
no fork. The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the account server will run as. The default is swift.
.IP \fBswift_dir\fR
@@ -114,12 +123,6 @@ Logging level. The default is INFO.
Enables request logging. The default is True.
.IP "\fB set log_address\fR
Logging address. The default is /dev/log.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. By increasing the
number of workers to a much higher value, one can reduce the impact of slow file system
operations in one request from negatively impacting other requests. The default is 1024.
.RE
.PD

doc/manpages/container-server.conf.5

@@ -60,7 +60,16 @@ TCP port the container server should bind to. The default is 6001.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
Number of container server workers to fork. The default is 1.
The number of pre-forked processes that will accept connections. Zero means
no fork. The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the container server will run as. The default is swift.
.IP \fBswift_dir\fR
@@ -120,12 +129,6 @@ Logging address. The default is /dev/log.
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. By increasing the
number of workers to a much higher value, one can reduce the impact of slow file system
operations in one request from negatively impacting other requests. The default is 1024.
.RE
.PD

doc/manpages/object-server.conf.5

@@ -60,7 +60,16 @@ TCP port the object server should bind to. The default is 6000.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
Number of object server workers to fork. The default is 1.
The number of pre-forked processes that will accept connections. Zero means
no fork. The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the object server will run as. The default is swift.
.IP \fBswift_dir\fR
@@ -120,12 +129,6 @@ Logging address. The default is /dev/log.
Request timeout to external services. The default is 3 seconds.
.IP \fBconn_timeout\fR
Connection timeout to external services. The default is 0.5 seconds.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. By increasing the
number of workers to a much higher value, one can reduce the impact of slow file system
operations in one request from negatively impacting other requests. The default is 1024.
.RE
.PD

doc/manpages/proxy-server.conf.5

@@ -59,9 +59,18 @@ TCP port the proxy server should bind to. The default is 80.
.IP \fBbacklog\fR
TCP backlog. Maximum number of allowed pending connections. The default value is 4096.
.IP \fBworkers\fR
Number of container server workers to fork. The default is 1.
The number of pre-forked processes that will accept connections. Zero means
no fork. The default is auto which will make the server try to match the
number of effective cpu cores if python multiprocessing is available (included
with most python distributions >= 2.6) or fallback to one. It's worth noting
that individual workers will use many eventlet co-routines to service multiple
concurrent requests.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. The default is 1024.
.IP \fBuser\fR
The system user that the container server will run as. The default is swift.
The system user that the proxy server will run as. The default is swift.
.IP \fBswift_dir\fR
Swift configuration directory. The default is /etc/swift.
.IP \fBcert_file\fR
@@ -537,12 +546,6 @@ object. The default is 10 segments.
.IP \fBrate_limit_segments_per_sec\fR
Once segment rate-limiting kicks in for an object, limit segments served to N
per second. The default is 1.
.IP \fBmax_clients\fR
Maximum number of clients one worker can process simultaneously (it will
actually accept(2) N + 1). Setting this to one (1) will only handle one request
at a time, without accepting another request concurrently. By increasing the
number of workers to a much higher value, one can reduce the impact of slow file system
operations in one request from negatively impacting other requests. The default is 1024.
.RE
.PD

doc/source/deployment_guide.rst

@@ -328,17 +328,23 @@ mount_check true Whether or not check if the devices are
bind_ip 0.0.0.0 IP Address for server to bind to
bind_port 6000 Port for server to bind to
bind_timeout 30 Seconds to attempt bind before giving up
workers 1 Number of workers to fork
workers auto Override the number of pre-forked workers
that will accept connections. If set it
should be an integer, zero means no fork. If
unset, it will try to default to the number
of effective cpu cores and fallback to one.
Increasing the number of workers may reduce
the possibility of slow file system
operations in one request from negatively
impacting other requests, but may not be as
efficient as tuning :ref:`threads_per_disk
<object-server-options>`
max_clients 1024 Maximum number of clients one worker can
process simultaneously (it will actually
accept(2) N + 1). Setting this to one (1)
will only handle one request at a time,
without accepting another request
concurrently. By increasing the number of
workers to a much higher value, one can
reduce the impact of slow file system
operations in one request from negatively
impacting other requests.
concurrently.
disable_fallocate false Disable "fast fail" fallocate checks if the
underlying filesystem does not support it.
log_custom_handlers None Comma-separated list of functions to call
@@ -353,6 +359,8 @@ fallocate_reserve 0 You can set fallocate_reserve to the number of
early.
=================== ========== =============================================

.. _object-server-options:

[object-server]

================== ============= ===========================================
@@ -462,17 +470,22 @@ mount_check true Whether or not check if the devices are
bind_ip 0.0.0.0 IP Address for server to bind to
bind_port 6001 Port for server to bind to
bind_timeout 30 Seconds to attempt bind before giving up
workers 1 Number of workers to fork
workers auto Override the number of pre-forked workers
that will accept connections. If set it
should be an integer, zero means no fork. If
unset, it will try to default to the number
of effective cpu cores and fallback to one.
Increasing the number of workers may reduce
the possibility of slow file system
operations in one request from negatively
impacting other requests. See
:ref:`general-service-tuning`
max_clients 1024 Maximum number of clients one worker can
process simultaneously (it will actually
accept(2) N + 1). Setting this to one (1)
will only handle one request at a time,
without accepting another request
concurrently. By increasing the number of
workers to a much higher value, one can
reduce the impact of slow file system
operations in one request from negatively
impacting other requests.
concurrently.
user swift User to run as
disable_fallocate false Disable "fast fail" fallocate checks if the
underlying filesystem does not support it.
@@ -582,17 +595,22 @@ mount_check true Whether or not check if the devices are
bind_ip 0.0.0.0 IP Address for server to bind to
bind_port 6002 Port for server to bind to
bind_timeout 30 Seconds to attempt bind before giving up
workers 1 Number of workers to fork
workers auto Override the number of pre-forked workers
that will accept connections. If set it
should be an integer, zero means no fork. If
unset, it will try to default to the number
of effective cpu cores and fallback to one.
Increasing the number of workers may reduce
the possibility of slow file system
operations in one request from negatively
impacting other requests. See
:ref:`general-service-tuning`
max_clients 1024 Maximum number of clients one worker can
process simultaneously (it will actually
accept(2) N + 1). Setting this to one (1)
will only handle one request at a time,
without accepting another request
concurrently. By increasing the number of
workers to a much higher value, one can
reduce the impact of slow file system
operations in one request from negatively
impacting other requests.
concurrently.
user swift User to run as
db_preallocation off If you don't mind the extra disk space usage in
overhead, you can turn this on to preallocate
@@ -696,7 +714,15 @@ bind_port 80 Port for server to bind to
bind_timeout 30 Seconds to attempt bind before
giving up
swift_dir /etc/swift Swift configuration directory
workers 1 Number of workers to fork
workers auto Override the number of
pre-forked workers that will
accept connections. If set it
should be an integer, zero
means no fork. If unset, it
will try to default to the
number of effective cpu cores
and fallback to one. See
:ref:`general-service-tuning`
max_clients 1024 Maximum number of clients one
worker can process
simultaneously (it will
@@ -705,13 +731,7 @@ max_clients 1024 Maximum number of clients one
will only handle one request at
a time, without accepting
another request
concurrently. By increasing the
number of workers to a much
higher value, one can reduce
the impact of slow file system
operations in one request from
negatively impacting other
requests.
concurrently.
user swift User to run as
cert_file Path to the ssl .crt. This
should be enabled for testing
@@ -921,6 +941,8 @@ it is a good idea for all the servers). At Rackspace, we use NTP with a local
NTP server to ensure that the system times are as close as possible. This
should also be monitored to ensure that the times do not vary too much.

.. _general-service-tuning:

----------------------
General Service Tuning
----------------------
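
As an operator-facing aside (not part of the diff), you can preview what "workers = auto" would resolve to on a given host with a quick check like the one below; the fallback mirrors the behaviour described in the tables above.

    # Preview what "workers = auto" resolves to on this host. cpu_count()
    # can raise NotImplementedError or return 0, hence the fallback to one.
    try:
        import multiprocessing
        print(multiprocessing.cpu_count() or 1)
    except (ImportError, NotImplementedError):
        print(1)
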

doc/source/development_saio.rst

@@ -300,6 +300,7 @@ Sample configuration files are provided with all defaults in line-by-line commen

[DEFAULT]
bind_port = 8080
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL1
eventlet_debug = true
@@ -342,6 +343,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6012
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
@@ -370,6 +372,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6022
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
@@ -398,6 +401,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6032
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
@@ -426,6 +430,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6042
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
@@ -454,6 +459,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6011
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
@@ -484,6 +490,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6021
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
@@ -514,6 +521,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6031
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
@@ -544,6 +552,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6041
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
@@ -575,6 +584,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6010
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
@@ -603,6 +613,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6020
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
@@ -631,6 +642,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6030
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
@@ -659,6 +671,7 @@ Sample configuration files are provided with all defaults in line-by-line commen
mount_check = false
disable_fallocate = true
bind_port = 6040
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4

etc/account-server.conf-sample

@@ -3,13 +3,19 @@
# bind_port = 6002
# bind_timeout = 30
# backlog = 4096
# workers = 1
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
@@ -55,7 +61,6 @@ use = egg:swift#account
# set log_address = /dev/log
#
# auto_create_account_prefix = .
# max_clients = 1024
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify

etc/container-server.conf-sample

@@ -3,13 +3,19 @@
# bind_port = 6001
# bind_timeout = 30
# backlog = 4096
# workers = 1
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# This is a comma separated list of hosts allowed in the X-Container-Sync-To
# field for containers.
# allowed_sync_hosts = 127.0.0.1
@@ -62,7 +68,6 @@ use = egg:swift#container
# conn_timeout = 0.5
# allow_versions = False
# auto_create_account_prefix = .
# max_clients = 1024
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify

etc/object-server.conf-sample

@@ -3,7 +3,6 @@
# bind_port = 6000
# bind_timeout = 30
# backlog = 4096
# workers = 1
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
@@ -11,6 +10,13 @@
# disable_fallocate = false
# expiring_objects_container_divisor = 86400
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
@@ -74,7 +80,6 @@ use = egg:swift#object
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object
#
# auto_create_account_prefix = .
# max_clients = 1024
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify

etc/proxy-server.conf-sample

@@ -4,9 +4,17 @@
# bind_timeout = 30
# backlog = 4096
# swift_dir = /etc/swift
# workers = 1
# user = swift
#
# Use an integer to override the number of pre-forked processes that will
# accept connections. Should default to the number of effective cpu
# cores in the system. It's worth noting that individual workers will
# use many eventlet co-routines to service multiple concurrent requests.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
@@ -46,7 +54,6 @@
#
# client_timeout = 60
# eventlet_debug = false
# max_clients = 1024

[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache slo ratelimit tempauth container-quotas account-quotas proxy-logging proxy-server

swift/common/utils.py

@@ -142,6 +142,22 @@ def config_true_value(value):
        (isinstance(value, basestring) and value.lower() in TRUE_VALUES)


def config_auto_int_value(value, default):
    """
    Returns default if value is None or 'auto'.
    Returns value as an int or raises ValueError otherwise.
    """
    if value is None or \
            (isinstance(value, basestring) and value.lower() == 'auto'):
        return default
    try:
        value = int(value)
    except (TypeError, ValueError):
        raise ValueError('Config option must be a integer or the '
                         'string "auto", not "%s".' % value)
    return value


def noop_libc_function(*args):
    return 0

swift/common/wsgi.py

@@ -34,7 +34,13 @@ from swift.common import utils
from swift.common.swob import Request
from swift.common.utils import capture_stdio, disable_fallocate, \
    drop_privileges, get_logger, NullLogger, config_true_value, \
    validate_configuration, get_hub
    validate_configuration, get_hub, config_auto_int_value

try:
    import multiprocessing
    CPU_COUNT = multiprocessing.cpu_count() or 1
except (ImportError, NotImplementedError):
    CPU_COUNT = 1


class NamedConfigLoader(loadwsgi.ConfigLoader):
@@ -255,7 +261,8 @@ def run_wsgi(conf_path, app_section, *args, **kwargs):
    # redirect errors to logger and close stdio
    capture_stdio(logger)

    worker_count = int(conf.get('workers', '1'))
    worker_count = config_auto_int_value(conf.get('workers'), CPU_COUNT)

    # Useful for profiling [no forks].
    if worker_count == 0:
        run_server(conf, logger, sock)

test/unit/common/test_utils.py

@@ -1121,6 +1121,26 @@ log_name = %(yarr)s'''
        finally:
            utils.TRUE_VALUES = orig_trues

    def test_config_auto_int_value(self):
        expectations = {
            # (value, default) : expected,
            ('1', 0): 1,
            (1, 0): 1,
            ('asdf', 0): ValueError,
            ('auto', 1): 1,
            ('AutO', 1): 1,
            ('Aut0', 1): ValueError,
            (None, 1): 1,
        }
        for (value, default), expected in expectations.items():
            try:
                rv = utils.config_auto_int_value(value, default)
            except Exception, e:
                if e.__class__ is not expected:
                    raise
            else:
                self.assertEquals(expected, rv)

    def test_streq_const_time(self):
        self.assertTrue(utils.streq_const_time('abc123', 'abc123'))
        self.assertFalse(utils.streq_const_time('a', 'aaaaa'))

test/unit/common/test_wsgi.py

@@ -300,6 +300,7 @@ class TestWSGI(unittest.TestCase):
        [DEFAULT]
        eventlet_debug = yes
        client_timeout = 30
        max_clients = 1000
        swift_dir = TEMPDIR

        [pipeline:main]
@@ -307,6 +308,10 @@ class TestWSGI(unittest.TestCase):

        [app:proxy-server]
        use = egg:swift#proxy
        # while "set" values normally override default
        set client_timeout = 20
        # this section is not in conf during run_server
        set max_clients = 10
        """

        contents = dedent(config)
@@ -333,8 +338,10 @@ class TestWSGI(unittest.TestCase):
        server_sock, server_app, server_logger = args
        self.assertEquals(sock, server_sock)
        self.assert_(isinstance(server_app, swift.proxy.server.Application))
        self.assertEquals(20, server_app.client_timeout)
        self.assert_(isinstance(server_logger, wsgi.NullLogger))
        self.assert_('custom_pool' in kwargs)
        self.assertEquals(1000, kwargs['custom_pool'].size)

    def test_run_server_conf_dir(self):
        config_dir = {