proxy: Add a chance to skip memcache for get_*_info calls
If you've got thousands of requests per second for objects in a single
container, you basically NEVER want that container's info to ever fall out
of memcache. If it *does*, all those clients are almost certainly going to
overload the container. Avoid this by allowing some small fraction of
requests to bypass and refresh the cache, pushing out the TTL as long as
there continue to be requests to the container. The likelihood of skipping
the cache is configurable, similar to what we did for shard range sets.

Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Closes-Bug: #1883324
Commit 5c6407bf59 (parent 24acc6e56b)
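The idea is small enough to sketch. The following is an illustrative, hedged
example of the approach rather than Swift's implementation: every name in it
(TinyCache, fetch_from_backend, skip_pct, get_info) is invented for the
sketch, and the real logic lands in _get_info_from_memcache further down.

import random
import time


class TinyCache(object):
    """Stand-in for memcached: values simply expire after a TTL."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        value, expires_at = self._data.get(key, (None, 0))
        return value if time.time() < expires_at else None

    def set(self, key, value, ttl):
        self._data[key] = (value, time.time() + ttl)


def get_info(cache, key, fetch_from_backend, skip_pct=0.1, ttl=60):
    # skip_pct is a percentage, so 0.1 means roughly 1 request in 1000
    skip_chance = skip_pct / 100.0
    info = None
    if not (skip_chance and random.random() < skip_chance):
        info = cache.get(key)           # the common case: serve from cache
    if info is None:                    # skipped, missed, or expired
        info = fetch_from_backend(key)  # only this small fraction hits the backend
        cache.set(key, info, ttl)       # ...and pushes the TTL out again
    return info

# usage: get_info(TinyCache(), 'container/a/c', lambda k: {'status': 200})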
@@ -156,203 +156,231 @@ ionice_priority None I/O scheduling p
 [proxy-server]
 **************

 ============================================== =============== =====================================
 Option                                         Default         Description
 ---------------------------------------------- --------------- -------------------------------------
 use                                                            Entry point for paste.deploy for
                                                                the proxy server. For most
                                                                cases, this should be
                                                                ``egg:swift#proxy``.
 set log_name                                   proxy-server    Label used when logging
 set log_facility                               LOG_LOCAL0      Syslog log facility
 set log_level                                  INFO            Log level
 set log_headers                                True            If True, log headers in each
                                                                request
 set log_handoffs                               True            If True, the proxy will log
                                                                whenever it has to failover to a
                                                                handoff node
 recheck_account_existence                      60              Cache timeout in seconds to
                                                                send memcached for account
                                                                existence
 recheck_container_existence                    60              Cache timeout in seconds to
                                                                send memcached for container
                                                                existence
+account_existence_skip_cache_pct               0.0             Periodically, bypass the cache
+                                                               for account info requests and
+                                                               goto disk to refresh the data
+                                                               in the cache. This is a percentage
+                                                               of requests should randomly skip.
+                                                               Values around 0.0 - 0.1 (1 in every
+                                                               1000) are recommended.
+container_existence_skip_cache_pct             0.0             Periodically, bypass the cache
+                                                               for container info requests and
+                                                               goto disk to refresh the data
+                                                               in the cache. This is a percentage
+                                                               of requests should randomly skip.
+                                                               Values around 0.0 - 0.1 (1 in every
+                                                               1000) are recommended.
+container_updating_shard_ranges_skip_cache_pct 0.0             Periodically, bypass the cache
+                                                               for shard_range update requests and
+                                                               goto disk to refresh the data
+                                                               in the cache. This is a percentage
+                                                               of requests should randomly skip.
+                                                               Values around 0.0 - 0.1 (1 in every
+                                                               1000) are recommended.
+container_listing_shard_ranges_skip_cache_pct  0.0             Periodically, bypass the cache
+                                                               for shard_range listing info requests
+                                                               and goto disk to refresh the data
+                                                               in the cache. This is a percentage
+                                                               of requests should randomly skip.
+                                                               Values around 0.0 - 0.1 (1 in every
+                                                               1000) are recommended.
 object_chunk_size                              65536           Chunk size to read from
                                                                object servers
 client_chunk_size                              65536           Chunk size to read from
                                                                clients
 memcache_servers                               127.0.0.1:11211 Comma separated list of
                                                                memcached servers
                                                                ip:port or [ipv6addr]:port
 memcache_max_connections                       2               Max number of connections to
                                                                each memcached server per
                                                                worker
 node_timeout                                   10              Request timeout to external
                                                                services
 recoverable_node_timeout                       node_timeout    Request timeout to external
                                                                services for requests that, on
                                                                failure, can be recovered
                                                                from. For example, object GET.
 client_timeout                                 60              Timeout to read one chunk
                                                                from a client
 conn_timeout                                   0.5             Connection timeout to
                                                                external services
 error_suppression_interval                     60              Time in seconds that must
                                                                elapse since the last error
                                                                for a node to be considered
                                                                no longer error limited
 error_suppression_limit                        10              Error count to consider a
                                                                node error limited
 allow_account_management                       false           Whether account PUTs and DELETEs
                                                                are even callable
 account_autocreate                             false           If set to 'true' authorized
                                                                accounts that do not yet exist
                                                                within the Swift cluster will
                                                                be automatically created.
 max_containers_per_account                     0               If set to a positive value,
                                                                trying to create a container
                                                                when the account already has at
                                                                least this maximum containers
                                                                will result in a 403 Forbidden.
                                                                Note: This is a soft limit,
                                                                meaning a user might exceed the
                                                                cap for
                                                                recheck_account_existence before
                                                                the 403s kick in.
 max_containers_whitelist                                       This is a comma separated list
                                                                of account names that ignore
                                                                the max_containers_per_account
                                                                cap.
 rate_limit_after_segment                       10              Rate limit the download of
                                                                large object segments after
                                                                this segment is downloaded.
 rate_limit_segments_per_sec                    1               Rate limit large object
                                                                downloads at this rate.
 request_node_count                             2 * replicas    Set to the number of nodes to
                                                                contact for a normal request.
                                                                You can use '* replicas' at the
                                                                end to have it use the number
                                                                given times the number of
                                                                replicas for the ring being used
                                                                for the request.
 swift_owner_headers                            <see the sample These are the headers whose
                                                conf file for   values will only be shown to
                                                the list of     swift_owners. The exact
                                                default         definition of a swift_owner is
                                                headers>        up to the auth system in use,
                                                                but usually indicates
                                                                administrative responsibilities.
 sorting_method                                 shuffle         Storage nodes can be chosen at
                                                                random (shuffle), by using timing
                                                                measurements (timing), or by using
                                                                an explicit match (affinity).
                                                                Using timing measurements may allow
                                                                for lower overall latency, while
                                                                using affinity allows for finer
                                                                control. In both the timing and
                                                                affinity cases, equally-sorting nodes
                                                                are still randomly chosen to spread
                                                                load. This option may be overridden
                                                                in a per-policy configuration
                                                                section.
 timing_expiry                                  300             If the "timing" sorting_method is
                                                                used, the timings will only be valid
                                                                for the number of seconds configured
                                                                by timing_expiry.
 concurrent_gets                                off             Use replica count number of
                                                                threads concurrently during a
                                                                GET/HEAD and return with the
                                                                first successful response. In
                                                                the EC case, this parameter only
                                                                affects an EC HEAD as an EC GET
                                                                behaves differently.
 concurrency_timeout                            conn_timeout    This parameter controls how long
                                                                to wait before firing off the
                                                                next concurrent_get thread. A
                                                                value of 0 would we fully concurrent,
                                                                any other number will stagger the
                                                                firing of the threads. This number
                                                                should be between 0 and node_timeout.
                                                                The default is conn_timeout (0.5).
 nice_priority                                  None            Scheduling priority of server
                                                                processes.
                                                                Niceness values range from -20 (most
                                                                favorable to the process) to 19 (least
                                                                favorable to the process). The default
                                                                does not modify priority.
 ionice_class                                   None            I/O scheduling class of server
                                                                processes. I/O niceness class values
                                                                are IOPRIO_CLASS_RT (realtime),
                                                                IOPRIO_CLASS_BE (best-effort),
                                                                and IOPRIO_CLASS_IDLE (idle).
                                                                The default does not modify class and
                                                                priority. Linux supports io scheduling
                                                                priorities and classes since 2.6.13
                                                                with the CFQ io scheduler.
                                                                Work only with ionice_priority.
 ionice_priority                                None            I/O scheduling priority of server
                                                                processes. I/O niceness priority is
                                                                a number which goes from 0 to 7.
                                                                The higher the value, the lower the
                                                                I/O priority of the process. Work
                                                                only with ionice_class.
                                                                Ignored if IOPRIO_CLASS_IDLE is set.
 read_affinity                                  None            Specifies which backend servers to
                                                                prefer on reads; used in conjunction
                                                                with the sorting_method option being
                                                                set to 'affinity'. Format is a comma
                                                                separated list of affinity descriptors
                                                                of the form <selection>=<priority>.
                                                                The <selection> may be r<N> for
                                                                selecting nodes in region N or
                                                                r<N>z<M> for selecting nodes in
                                                                region N, zone M. The <priority>
                                                                value should be a whole number
                                                                that represents the priority to
                                                                be given to the selection; lower
                                                                numbers are higher priority.
                                                                Default is empty, meaning no
                                                                preference. This option may be
                                                                overridden in a per-policy
                                                                configuration section.
 write_affinity                                 None            Specifies which backend servers to
                                                                prefer on writes. Format is a comma
                                                                separated list of affinity
                                                                descriptors of the form r<N> for
                                                                region N or r<N>z<M> for region N,
                                                                zone M. Default is empty, meaning no
                                                                preference. This option may be
                                                                overridden in a per-policy
                                                                configuration section.
 write_affinity_node_count                      2 * replicas    The number of local (as governed by
                                                                the write_affinity setting) nodes to
                                                                attempt to contact first on writes,
                                                                before any non-local ones. The value
                                                                should be an integer number, or use
                                                                '* replicas' at the end to have it
                                                                use the number given times the number
                                                                of replicas for the ring being used
                                                                for the request. This option may be
                                                                overridden in a per-policy
                                                                configuration section.
 write_affinity_handoff_delete_count            auto            The number of local (as governed by
                                                                the write_affinity setting) handoff
                                                                nodes to attempt to contact on
                                                                deletion, in addition to primary
                                                                nodes. Example: in geographically
                                                                distributed deployment, If replicas=3,
                                                                sometimes there may be 1 primary node
                                                                and 2 local handoff nodes in one region
                                                                holding the object after uploading but
                                                                before object replicated to the
                                                                appropriate locations in other regions.
                                                                In this case, include these handoff
                                                                nodes to send request when deleting
                                                                object could help make correct decision
                                                                for the response. The default value 'auto'
                                                                means Swift will calculate the number
                                                                automatically, the default value is
                                                                (replicas - len(local_primary_nodes)).
                                                                This option may be overridden in a
                                                                per-policy configuration section.
 ============================================== =============== =====================================
@@ -153,8 +153,10 @@ use = egg:swift#proxy
 # data is present in memcache, we can periodically refresh the data in memcache
 # without causing a thundering herd. Values around 0.0 - 0.1 (i.e., one in
 # every thousand requests skips cache, or fewer) are recommended.
+# container_existence_skip_cache_pct = 0.0
 # container_updating_shard_ranges_skip_cache_pct = 0.0
 # container_listing_shard_ranges_skip_cache_pct = 0.0
+# account_existence_skip_cache_pct = 0.0
 #
 # object_chunk_size = 65536
 # client_chunk_size = 65536
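Note that these options are percentages, not probabilities: the proxy parses
them with config_percent_value, so a setting of 0.1 works out to roughly one
request in a thousand bypassing the cache. A tiny sketch of that conversion,
with an invented helper name:

def percent_to_probability(pct):
    # illustrative stand-in for the parsing done by config_percent_value
    return float(pct) / 100.0

assert percent_to_probability('0.1') == 0.001  # ~1 in every 1000 requests
assert percent_to_probability('0') == 0.0      # the default: never skip the cache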
@@ -167,6 +167,9 @@ from swift.common.registry import register_swift_info, \
 class ListingEtagMiddleware(object):
     def __init__(self, app):
         self.app = app
+        # Pass this along so get_container_info will have the configured
+        # odds to skip cache
+        self._pipeline_final_app = app._pipeline_final_app

     def __call__(self, env, start_response):
         # a lot of this is cribbed from listing_formats / swob.Request
@@ -47,5 +47,8 @@ def filter_factory(global_conf, **local_conf):
            if 'symlink' not in get_swift_info():
                raise ValueError('object versioning requires symlinks')
            app = ObjectVersioningMiddleware(app, conf)
+           # Pass this along so get_container_info will have the configured
+           # odds to skip cache
+           app._pipeline_final_app = app.app._pipeline_final_app
        return VersionedWritesMiddleware(app, conf)
    return versioning_filter
@@ -3716,7 +3716,11 @@ class StreamingPile(GreenAsyncPile):

         # Keep populating the pile as greenthreads become available
         for args in args_iter:
-            yield next(self)
+            try:
+                to_yield = next(self)
+            except StopIteration:
+                break
+            yield to_yield
             self.spawn(func, *args)

         # Drain the pile
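For context on the StreamingPile change above: under PEP 479 (the default on
Python 3.7 and later), a StopIteration that escapes inside a generator body,
such as one raised by a bare next() call, is converted into a RuntimeError
rather than quietly ending the generator, so an exhausted pile has to be
caught and turned into a break explicitly. A standalone illustration of the
hazard, unrelated to Swift's classes:

def risky(inner):
    for _ in range(3):
        yield next(inner)      # StopIteration escapes the generator body

def safe(inner):
    for _ in range(3):
        try:
            item = next(inner)
        except StopIteration:
            break              # exhaustion just ends the loop
        yield item

print(list(safe(iter([1]))))   # [1]
try:
    list(risky(iter([1])))
except RuntimeError as exc:    # PEP 479: StopIteration becomes RuntimeError
    print('risky() raised: %s' % exc)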
@@ -750,7 +750,30 @@ def _get_info_from_memcache(app, env, account, container=None):
     cache_key = get_cache_key(account, container)
     memcache = cache_from_env(env, True)
     if memcache:
-        info = memcache.get(cache_key)
+        try:
+            proxy_app = app._pipeline_final_app
+        except AttributeError:
+            # Only the middleware entry-points get a reference to the
+            # proxy-server app; if a middleware composes itself as multiple
+            # filters, we'll just have to choose a reasonable default
+            skip_chance = 0.0
+            logger = None
+        else:
+            if container:
+                skip_chance = proxy_app.container_existence_skip_cache
+            else:
+                skip_chance = proxy_app.account_existence_skip_cache
+            logger = proxy_app.logger
+        info_type = 'container' if container else 'account'
+        if skip_chance and random.random() < skip_chance:
+            info = None
+            if logger:
+                logger.increment('%s.info.cache.skip' % info_type)
+        else:
+            info = memcache.get(cache_key)
+            if logger:
+                logger.increment('%s.info.cache.%s' % (
+                    info_type, 'hit' if info else 'miss'))
         if info and six.PY2:
             # Get back to native strings
             new_info = {}
@@ -193,6 +193,10 @@ class Application(object):

     def __init__(self, conf, logger=None, account_ring=None,
                  container_ring=None):
+        # This is for the sake of tests which instantiate an Application
+        # directly rather than via loadapp().
+        self._pipeline_final_app = self
+
         if conf is None:
             conf = {}
         if logger is None:
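The _pipeline_final_app reference set here is what lets middlewares earlier in
the pipeline (see the listing_formats and versioned_writes hunks above) find
the proxy's configured skip odds. A minimal, Swift-free sketch of the pattern,
with invented class names:

class FinalApp(object):
    """Plays the role of the proxy-server Application."""
    container_existence_skip_cache = 0.001  # e.g. container_existence_skip_cache_pct = 0.1

    def __init__(self):
        # the right-most app in the pipeline points at itself
        self._pipeline_final_app = self


class PassThroughMiddleware(object):
    def __init__(self, app):
        self.app = app
        # pass the reference along so helpers called on an outer middleware
        # can still read proxy-level configuration
        self._pipeline_final_app = app._pipeline_final_app


pipeline = PassThroughMiddleware(PassThroughMiddleware(FinalApp()))
print(pipeline._pipeline_final_app.container_existence_skip_cache)  # 0.001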
@@ -230,12 +234,16 @@ class Application(object):
         self.recheck_account_existence = \
             int(conf.get('recheck_account_existence',
                          DEFAULT_RECHECK_ACCOUNT_EXISTENCE))
+        self.container_existence_skip_cache = config_percent_value(
+            conf.get('container_existence_skip_cache_pct', 0))
         self.container_updating_shard_ranges_skip_cache = \
             config_percent_value(conf.get(
                 'container_updating_shard_ranges_skip_cache_pct', 0))
         self.container_listing_shard_ranges_skip_cache = \
             config_percent_value(conf.get(
                 'container_listing_shard_ranges_skip_cache_pct', 0))
+        self.account_existence_skip_cache = config_percent_value(
+            conf.get('account_existence_skip_cache_pct', 0))
         self.allow_account_management = \
             config_true_value(conf.get('allow_account_management', 'no'))
         self.container_ring = container_ring or Ring(swift_dir,
@@ -77,6 +77,8 @@ class FakeSwift(object):
     ALLOWED_METHODS = [
         'PUT', 'POST', 'DELETE', 'GET', 'HEAD', 'OPTIONS', 'REPLICATE',
         'SSYNC', 'UPDATE']
+    container_existence_skip_cache = 0.0
+    account_existence_skip_cache = 0.0

     def __init__(self):
         self._calls = []
@@ -29,8 +29,13 @@ from test.unit.common.middleware.s3api.helpers import FakeSwift


 class FakeApp(object):
+    container_existence_skip_cache = 0.0
+    account_existence_skip_cache = 0.0
+
     def __init__(self):
+        self._pipeline_final_app = self
         self.swift = FakeSwift()
+        self.logger = debug_logger()

     def _update_s3_path_info(self, env):
         """
@@ -2124,7 +2124,8 @@ class TestContainerController(TestRingBase):
             req.environ['swift.infocache']['shard-listing/a/c'])
         self.assertEqual(
             [x[0][0] for x in self.logger.logger.log_dict['increment']],
-            ['container.shard_listing.backend.200'])
+            ['container.info.cache.miss',
+             'container.shard_listing.backend.200'])

         # container is sharded and proxy has that state cached, but
         # no shard ranges cached; expect a cache miss and write-back
@@ -2161,7 +2162,8 @@ class TestContainerController(TestRingBase):
             req.environ['swift.infocache']['shard-listing/a/c'])
         self.assertEqual(
             [x[0][0] for x in self.logger.logger.log_dict['increment']],
-            ['container.shard_listing.cache.miss',
+            ['container.info.cache.hit',
+             'container.shard_listing.cache.miss',
              'container.shard_listing.backend.200'])

         # container is sharded and proxy does have that state cached and
@@ -2185,7 +2187,8 @@ class TestContainerController(TestRingBase):
             req.environ['swift.infocache']['shard-listing/a/c'])
         self.assertEqual(
             [x[0][0] for x in self.logger.logger.log_dict['increment']],
-            ['container.shard_listing.cache.hit'])
+            ['container.info.cache.hit',
+             'container.shard_listing.cache.hit'])

         # if there's a chance to skip cache, maybe we go to disk again...
         self.memcache.clear_calls()
@@ -2221,7 +2224,8 @@ class TestContainerController(TestRingBase):
             req.environ['swift.infocache']['shard-listing/a/c'])
         self.assertEqual(
             [x[0][0] for x in self.logger.logger.log_dict['increment']],
-            ['container.shard_listing.cache.skip',
+            ['container.info.cache.hit',
+             'container.shard_listing.cache.skip',
              'container.shard_listing.backend.200'])

         # ... or maybe we serve from cache
@@ -2245,8 +2249,8 @@ class TestContainerController(TestRingBase):
             req.environ['swift.infocache']['shard-listing/a/c'])
         self.assertEqual(
             [x[0][0] for x in self.logger.logger.log_dict['increment']],
-            ['container.shard_listing.cache.hit'])
+            ['container.info.cache.hit',
+             'container.shard_listing.cache.hit'])
         # put this back the way we found it for later subtests
         self.app.container_listing_shard_ranges_skip_cache = 0.0

@@ -2396,7 +2400,8 @@ class TestContainerController(TestRingBase):
         self.assertEqual(404, self.memcache.calls[2][1][1]['status'])
         self.assertEqual(b'', resp.body)
         self.assertEqual(404, resp.status_int)
-        self.assertEqual({'container.shard_listing.cache.miss': 1,
+        self.assertEqual({'container.info.cache.hit': 1,
+                          'container.shard_listing.cache.miss': 1,
                           'container.shard_listing.backend.404': 1},
                          self.logger.get_increment_counts())

@@ -2429,7 +2434,8 @@ class TestContainerController(TestRingBase):
         self.assertEqual(404, self.memcache.calls[2][1][1]['status'])
         self.assertEqual(b'', resp.body)
         self.assertEqual(404, resp.status_int)
-        self.assertEqual({'container.shard_listing.cache.error': 1,
+        self.assertEqual({'container.info.cache.hit': 1,
+                          'container.shard_listing.cache.error': 1,
                           'container.shard_listing.backend.404': 1},
                          self.logger.get_increment_counts())

@@ -2452,7 +2458,8 @@ class TestContainerController(TestRingBase):
             [mock.call.get('container/a/c'),
              mock.call.get('shard-listing/a/c', raise_on_error=True)],
             self.memcache.calls)
-        self.assertEqual({'container.shard_listing.cache.hit': 1},
+        self.assertEqual({'container.info.cache.hit': 1,
+                          'container.shard_listing.cache.hit': 1},
                          self.logger.get_increment_counts())
         return resp

@@ -2542,7 +2549,8 @@ class TestContainerController(TestRingBase):
         # shards were cached
         self.assertEqual('sharded',
                          self.memcache.calls[2][1][1]['sharding_state'])
-        self.assertEqual({'container.shard_listing.backend.200': 1},
+        self.assertEqual({'container.info.cache.miss': 1,
+                          'container.shard_listing.backend.200': 1},
                          self.logger.get_increment_counts())
         return resp

@@ -2635,7 +2643,8 @@ class TestContainerController(TestRingBase):
             self.memcache.calls)
         self.assertEqual('sharded',
                          self.memcache.calls[2][1][1]['sharding_state'])
-        self.assertEqual({'container.shard_listing.backend.200': 1},
+        self.assertEqual({'container.info.cache.miss': 1,
+                          'container.shard_listing.backend.200': 1},
                          self.logger.get_increment_counts())

     def _do_test_GET_shard_ranges_no_cache_write(self, resp_hdrs):
@@ -2807,7 +2816,8 @@ class TestContainerController(TestRingBase):
             self.memcache.calls)
         self.assertEqual(resp.headers.get('X-Backend-Sharding-State'),
                          self.memcache.calls[1][1][1]['sharding_state'])
-        self.assertEqual({'container.shard_listing.backend.200': 1},
+        self.assertEqual({'container.info.cache.miss': 1,
+                          'container.shard_listing.backend.200': 1},
                          self.logger.get_increment_counts())
         self.memcache.delete_all()

@@ -508,6 +508,7 @@ class TestController(unittest.TestCase):

     def test_get_account_info_returns_values_as_strings(self):
         app = mock.MagicMock()
+        app._pipeline_final_app.account_existence_skip_cache = 0.0
         memcache = mock.MagicMock()
         memcache.get = mock.MagicMock()
         memcache.get.return_value = {
@@ -533,6 +534,7 @@ class TestController(unittest.TestCase):

     def test_get_container_info_returns_values_as_strings(self):
         app = mock.MagicMock()
+        app._pipeline_final_app.container_existence_skip_cache = 0.0
         memcache = mock.MagicMock()
         memcache.get = mock.MagicMock()
         memcache.get.return_value = {
@@ -4134,9 +4136,10 @@ class TestReplicatedObjectController(

         self.assertEqual(resp.status_int, 202)
         stats = self.app.logger.get_increment_counts()
-        self.assertEqual({'object.shard_updating.cache.miss': 1,
+        self.assertEqual({'account.info.cache.miss': 1,
+                          'container.info.cache.miss': 1,
+                          'object.shard_updating.cache.miss': 1,
                           'object.shard_updating.backend.200': 1}, stats)
-        # verify statsd prefix is not mutated
         self.assertEqual([], self.app.logger.log_dict['set_statsd_prefix'])

         backend_requests = fake_conn.requests
|
|||||||
|
|
||||||
self.assertEqual(resp.status_int, 202)
|
self.assertEqual(resp.status_int, 202)
|
||||||
stats = self.app.logger.get_increment_counts()
|
stats = self.app.logger.get_increment_counts()
|
||||||
self.assertEqual({'object.shard_updating.cache.hit': 1}, stats)
|
self.assertEqual({'account.info.cache.miss': 1,
|
||||||
|
'container.info.cache.miss': 1,
|
||||||
|
'object.shard_updating.cache.hit': 1}, stats)
|
||||||
# verify statsd prefix is not mutated
|
# verify statsd prefix is not mutated
|
||||||
self.assertEqual([], self.app.logger.log_dict['set_statsd_prefix'])
|
self.assertEqual([], self.app.logger.log_dict['set_statsd_prefix'])
|
||||||
|
|
||||||
@@ -4328,7 +4333,9 @@ class TestReplicatedObjectController(

         self.assertEqual(resp.status_int, 202)
         stats = self.app.logger.get_increment_counts()
-        self.assertEqual({'object.shard_updating.cache.hit': 1}, stats)
+        self.assertEqual({'account.info.cache.miss': 1,
+                          'container.info.cache.miss': 1,
+                          'object.shard_updating.cache.hit': 1}, stats)

         # cached shard ranges are still there
         cache_key = 'shard-updating/a/c'
@@ -4366,7 +4373,11 @@ class TestReplicatedObjectController(

         self.assertEqual(resp.status_int, 202)
         stats = self.app.logger.get_increment_counts()
-        self.assertEqual({'object.shard_updating.cache.skip': 1,
+        self.assertEqual({'account.info.cache.miss': 1,
+                          'account.info.cache.hit': 1,
+                          'container.info.cache.miss': 1,
+                          'container.info.cache.hit': 1,
+                          'object.shard_updating.cache.skip': 1,
                           'object.shard_updating.cache.hit': 1,
                           'object.shard_updating.backend.200': 1}, stats)
         # verify statsd prefix is not mutated
@@ -4425,10 +4436,15 @@ class TestReplicatedObjectController(

         self.assertEqual(resp.status_int, 202)
         stats = self.app.logger.get_increment_counts()
-        self.assertEqual({'object.shard_updating.cache.skip': 1,
-                          'object.shard_updating.cache.hit': 1,
-                          'object.shard_updating.cache.error': 1,
-                          'object.shard_updating.backend.200': 2}, stats)
+        self.assertEqual(stats, {
+            'account.info.cache.hit': 2,
+            'account.info.cache.miss': 1,
+            'container.info.cache.hit': 2,
+            'container.info.cache.miss': 1,
+            'object.shard_updating.cache.skip': 1,
+            'object.shard_updating.cache.hit': 1,
+            'object.shard_updating.cache.error': 1,
+            'object.shard_updating.backend.200': 2})

         do_test('POST', 'sharding')
         do_test('POST', 'sharded')