Updated Mitaka Cache On results
It turned out that the first portion of results was collected on an environment with a sporadic connectivity issue between Keystone and Memcache. Valid results were collected and processed. Change-Id: I6c8f388ef380a19016c4bc9cf49582c8a8f433d1
parent 6cbe58d467
commit 71342f7520
@@ -1,3 +1,5 @@
+.. _keystone_performance:
+
 ============================
 Keystone Performance testing
 ============================
@@ -4,8 +4,8 @@ Keystone DB / cache operations analysis
 Environment description
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-The test plan (test case #1) is executed at the all-in-one virtual environment,
-created with Oracle VM VirtualBox Manager.
+The :ref:`keystone_performance` (test case #1) is executed on an all-in-one
+virtual environment created with Oracle VM VirtualBox Manager.
 
 Hardware
 ~~~~~~~~
@@ -60,26 +60,28 @@ Due to the collected profiling information, it can be clearly seen that
 Keystone was significantly changed during Mitaka timeframe, and lots of changes
 seem to be related to the following:
 * Federation support (introduced more complexity on the DB level)
-* Moving to oslo.cache instead of local `dogpile.cache` usage in Liberty.
+* Moving to oslo.cache instead of local `dogpile.cache` usage in Liberty and
+  introducing a local context cache layer for per-request caching.
 
 Federation support introduced multiple SQL JOINs usage in Keystone and made
-the database scheme a bit more complex. As for the caching layer usage, one
-specific issue is clearly seen in Mitaka keystone operations caching. Although
-all Liberty and Mitaka environments had identical caching layer configuration,
-from HTML reports it can be clearly seen that on Liberty all possible methods
-were successfully cached and while their calling cached copy was always used.
-On Mitaka OpenStack for some reason cached copy was not used sometimes and the
-DB requests were processed.
+the database schema a bit more complex. In further multinode research it is
+planned to check how this influences DB operation performance when, for
+instance, a Galera cluster is used.
+
+As for the caching layer usage, one specific issue is clearly seen in Mitaka
+Keystone operations caching. Although the local context cache should reduce
+the number of calls to Memcache by storing data already fetched for a specific
+API request in the local thread, this was not observed: the duplicated
+function calls still went to Memcache. The `Keystone bug`_ was filed to
+investigate this behaviour.
+
+One more interesting observation is also related to cache usage: even if
+caching is explicitly turned off in the Keystone configuration, profiling
+still shows `data being fetched`_ from Memcache.
+
+.. _Keystone bug: https://bugs.launchpad.net/keystone/+bug/1567403
+.. _data being fetched: https://bugs.launchpad.net/keystone/+bug/1567413
 
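To make the added observation concrete, the intended behaviour of a per-request
context cache in front of Memcache can be sketched as follows (the class and
names are illustrative only, not the oslo.cache implementation)::

    # Minimal sketch of a per-request, two-layer cache (illustrative only).
    class TwoLayerCache(object):
        def __init__(self, memcache_client):
            self.memcache = memcache_client   # shared cache (e.g. Memcached)
            self.context = {}                 # request-local context cache
            self.memcache_calls = 0

        def get(self, key, creator):
            if key in self.context:           # served from the local layer
                return self.context[key]
            self.memcache_calls += 1          # only now do we touch Memcache
            value = self.memcache.setdefault(key, creator())
            self.context[key] = value
            return value

    cache = TwoLayerCache(memcache_client={})
    cache.get("user:42", lambda: "payload")   # first call goes to Memcache
    cache.get("user:42", lambda: "payload")   # duplicated call: local hit expected
    assert cache.memcache_calls == 1          # Mitaka traces show a second Memcache call instead
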
 One more interesting moment is related to the `keystone_authtoken` middleware
 (and, more specifically, its cache) usage. Although all environments that took
 part in the research had exactly the same `keystone_authtoken` middleware
 configuration, described in the test plan and containing the `memcache_servers`
 parameter, profiling of the Mitaka environments shows that the OpenStack
 services using the `keystone_authtoken` middleware did not use the external
 Memcached token copy. All of them called the Keystone API every time a REST
 API request came to the OpenStack services. This behaviour needs to be
 investigated separately.
 
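One possible way to cross-check this observation is to watch the Memcached
counters while REST requests are sent; a sketch assuming the python-memcached
client and a placeholder server address (neither is taken from this change)::

    import memcache

    # Assumed Memcached endpoint from the keystone_authtoken configuration.
    mc = memcache.Client(["127.0.0.1:11211"])

    def memcached_counters():
        # get_stats() returns a list of (server, {counter: value}) tuples.
        _, counters = mc.get_stats()[0]
        return int(counters["get_hits"]), int(counters["curr_items"])

    # Sample the counters before and after a burst of authenticated requests;
    # if get_hits/curr_items stay flat, validated tokens are not being cached
    # and every request ends up in a round-trip to the Keystone API.
    print(memcached_counters())
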
 Reports
 ^^^^^^^
@@ -49,7 +49,8 @@ NODES_TEMPLATE = {
             }
         }
     },
-    "Cached operations": {},
+    "Memcache cached operations": {},
+    "Context cached operations": {},
     "Cached time spent": collections.OrderedDict(),
 }
 
@@ -61,7 +62,7 @@ def define_node(node):
     if node["info"]["project"] != "keystone":
         return
 
-    if node["info"]["name"] not in ["db", "cache"]:
+    if node["info"]["name"] not in ["db", "cache", "memoize"]:
         return
 
     time_spent = node["info"]["finished"] - node["info"]["started"]
@@ -70,21 +71,43 @@ def define_node(node):
         process_db_calls(node, time_spent)
     elif node["info"]["name"] == "cache":
         if not node["children"]:
-            process_cache_calls(node, time_spent)
+            process_cache_calls(node, time_spent, "cache")
         else:
             for child in node["children"]:
                 define_node(child)
+    elif node["info"]["name"] == "memoize":
+        if not node["children"] or len(node["children"]) == 1:
+            process_cache_calls(node, time_spent)
+        else:
+            for child in node["children"]:
+                define_node(child)
 
 
-def process_cache_calls(node, time_spent):
-    cache_info = node["info"]["meta.raw_payload.cache-start"][
+def process_cache_calls(node, time_spent, prefix="memoize"):
+    cache_info = node["info"]["meta.raw_payload.%s-start" % prefix][
         "info"]["fn_info"]
-    if not NODES["Cached operations"].get(cache_info):
-        NODES["Cached operations"][cache_info] = 0
-    NODES["Cached operations"][cache_info] += 1
-    if not NODES["Cached time spent"].get(time_spent):
-        NODES["Cached time spent"][time_spent] = []
-    NODES["Cached time spent"][time_spent].append(cache_info)
+
+    def add_info(pref):
+        if not NODES["%s cached operations" % pref].get(cache_info):
+            NODES["%s cached operations" % pref][cache_info] = 0
+        NODES["%s cached operations" % pref][cache_info] += 1
+        if not NODES["Cached time spent"].get(time_spent):
+            NODES["Cached time spent"][time_spent] = []
+        NODES["Cached time spent"][time_spent].append(cache_info)
+
+    # Liberty env, all cache is Memcached Cache
+    if not node["children"]:
+        add_info("Memcache")
+        return
+
+    # Mitaka env with nested 2-layer cache
+    ctxt_cache_info = node["children"][0]
+    if ctxt_cache_info["children"]:
+        memcache_cache_info = ctxt_cache_info["children"][0]
+        if not memcache_cache_info["children"]:
+            add_info("Memcache")
+    else:
+        add_info("Context")
 
 
 def process_db_calls(node, time_spent):
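As a usage sketch, the two trace shapes the updated `process_cache_calls`
distinguishes can be written down as trimmed, made-up nodes (field names follow
the script; the `fn_info` payload, timings and nesting are illustrative)::

    # A memoize node whose context-cache child had to fetch from Memcache.
    memcache_fetch = {
        "info": {
            "project": "keystone", "name": "memoize",
            "started": 0, "finished": 5,
            "meta.raw_payload.memoize-start": {
                "info": {"fn_info": "('get_user', ('<user_id>',), '{}')"},
            },
        },
        "children": [{"children": [{"children": []}]}],
    }
    # A memoize node answered by the local context cache (no nested fetch).
    context_hit = {
        "info": {
            "project": "keystone", "name": "memoize",
            "started": 0, "finished": 1,
            "meta.raw_payload.memoize-start": {
                "info": {"fn_info": "('get_user', ('<user_id>',), '{}')"},
            },
        },
        "children": [{"children": []}],
    }

    process_cache_calls(memcache_fetch, 5)  # counted as "Memcache cached operations"
    process_cache_calls(context_hit, 1)     # counted as "Context cached operations"
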
@@ -154,7 +177,7 @@ def process_selects(statement, time_spent):
 
 def define_nodes(data):
     for child in data["children"]:
-        if not child["children"]:
+        if not child["children"] or child["info"]["name"] == "memoize":
             define_node(child)
         else:
             define_nodes(child)
@@ -255,24 +278,33 @@ def prepare_tables(nodes):
                 "time_spent"][ts]:
             multi_join_queries.add_row([db_query_tmpl % query, ts])
 
-    # prepare table with cache info
-    cache_table = prettytable.PrettyTable(["**Cached operations**",
+    # prepare table(s) with cache info
+    cache_table = prettytable.PrettyTable(["**Cache**",
+                                           "**Cached operations**",
                                            "**args**",
                                            "**kwargs**",
                                            "**Times used**"])
-    cache_table.align["**Cached operations**"] = "l"
-    cache_table.align["**args**"] = "l"
-    cache_table.align["**kwargs**"] = "l"
-    cache_table.align["**Times used**"] = "l"
-    cache_table.max_width = 100
-    cache_table.header = True
-    cache_table.hrules = prettytable.ALL
-
-    for operation, times in nodes["Cached operations"].iteritems():
-        operation = operation[1:-1].split(", ")
-        cache_table.add_row([operation[0][1:-1],
-                             ", ".join(operation[1:-1])[1:-1],
-                             operation[-1][1:-1], times])
+    for cache in ["Memcache cached operations", "Context cached operations"]:
+        name_map = {
+            "Memcache cached operations": "Memcache",
+            "Context cached operations": "Local context",
+        }
+        cache_table.align["**Cache**"] = "l"
+        cache_table.align["**Cached operations**"] = "l"
+        cache_table.align["**args**"] = "l"
+        cache_table.align["**kwargs**"] = "l"
+        cache_table.align["**Times used**"] = "l"
+        cache_table.max_width = 100
+        cache_table.header = True
+        cache_table.hrules = prettytable.ALL
+
+        for operation, times in nodes[cache].iteritems():
+            operation = operation[1:-1].split(", ")
+            cache_table.add_row([name_map[cache],
+                                 operation[0][1:-1],
+                                 ", ".join(operation[1:-1])[1:-1],
+                                 operation[-1][1:-1], times])
 
     return common_info_table, multi_join_queries, outliers_table, cache_table
 
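The row building above assumes each cached-operation key is a stringified
function signature; with a hypothetical key the splitting works out as follows
(the sample value and count are made up)::

    fn_info = "('get_user', ('<user_id>',), '{}')"   # hypothetical key in NODES
    operation = fn_info[1:-1].split(", ")
    row = ["Memcache",                         # name_map[cache]
           operation[0][1:-1],                 # -> "get_user"
           ", ".join(operation[1:-1])[1:-1],   # -> "'<user_id>',"  (positional args)
           operation[-1][1:-1],                # -> "{}"            (keyword args)
           12]                                 # times the cached copy was used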