Scalpels
Scalpels is a distributed tracing and debugging system for OpenStack.
Background
OpenStack is made of multiple Python-based projects. Each project has a similar but distinct architecture. Scalpels integrates useful scripts and third-party tools to help operators track system status in their cloud environments.
Contribute
This project is currently a prototype and under active development. If you are interested in this work, please contact @kun_huang in the #openstack-chinese channel.
Mission
Scalpels is a kind of "debugfs" for OpenStack. It gathers data from a number of tracers, such as proc or SystemTap, which can be used to quantify performance under workloads.
Single Node Architecture
This type of deployment is used as a proof of concept in OpenStack community CI.
In the all-in-one scenario, the Scalpels client works as an RPC client and the Scalpels agent works as an RPC server. When the agent receives an RPC request to start a tracer, it starts a Unix process for that tracer, which writes its data to the database. The Scalpels client reads data from the database directly, as sketched below.
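The flow can be illustrated with a minimal, self-contained sketch. The database file, schema, and the use of SQLite here are illustrative assumptions, not the project's actual storage layer:

```python
# Single-node sketch (hypothetical names): the agent starts a tracer as a
# Unix process and stores its stdout lines in a local SQLite database; the
# client reads results straight from the same database.
import sqlite3
import subprocess

DB_PATH = "scalpels.db"  # assumed local result store


def agent_start_tracer(cmd):
    """Run a tracer command and persist each stdout line."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS results (tracer TEXT, line TEXT)")
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        conn.execute("INSERT INTO results VALUES (?, ?)", (cmd[0], line.strip()))
    proc.wait()
    conn.commit()
    conn.close()


def client_read_results():
    """The client queries the database directly, bypassing the agent."""
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute("SELECT tracer, line FROM results").fetchall()
    conn.close()
    return rows


if __name__ == "__main__":
    agent_start_tracer(["uname", "-a"])   # stand-in for a real tracer
    for tracer, line in client_read_results():
        print(tracer, line)
```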
Multiple Node Architecture
This is still under design:
In the multi-node scenario, Scalpels servers are stateless and distributed across multiple nodes. Each Scalpels server knows the location of every agent and can forward requests to start tracers.
A Scalpels agent is introduced to manage tracer processes; it may be merged with the Scalpels server during implementation.
Tracers can write data to a Redis bus instead of the database to keep the data consistent.
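A rough sketch of how a tracer could publish to such a bus is shown below. It assumes the redis-py package and a local Redis server; the key names and sample format are illustrative, not part of Scalpels today:

```python
# Proposed Redis bus sketch: tracers push samples onto per-tracer lists
# instead of writing to the database, and a Scalpels server drains them later.
import json
import time

import redis

bus = redis.Redis(host="localhost", port=6379)


def tracer_publish(tracer_name, value):
    """A tracer pushes one sample onto its list on the Redis bus."""
    sample = json.dumps({"tracer": tracer_name, "value": value, "ts": time.time()})
    bus.rpush("scalpels:%s" % tracer_name, sample)


def server_collect(tracer_name):
    """A stateless Scalpels server reads all samples for one tracer."""
    raw = bus.lrange("scalpels:%s" % tracer_name, 0, -1)
    return [json.loads(item) for item in raw]


tracer_publish("cpu", 0.42)
print(server_collect("cpu"))
```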
Agent-Tracer-Worker
The relationship is:
- Scalpels agent: manages tracers via start/stop signals.
- Tracer: starts a worker process and writes its stdout to the database.
- Worker: fetches data from the operating system.
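The split can be sketched with minimal classes. The class names, the in-memory record list standing in for the database, and the use of vmstat as a worker are all assumptions for illustration:

```python
# Agent/tracer/worker sketch: the agent manages tracers with start/stop
# calls, a tracer wraps a worker process and records its stdout, and the
# worker is just an OS command that emits data.
import subprocess
import time


class Tracer:
    """Starts a worker process and saves its stdout (here: into a list)."""

    def __init__(self, name, worker_cmd):
        self.name = name
        self.worker_cmd = worker_cmd
        self.proc = None
        self.records = []  # stand-in for a database table

    def start(self):
        self.proc = subprocess.Popen(self.worker_cmd,
                                     stdout=subprocess.PIPE, text=True)

    def stop(self):
        self.proc.terminate()
        out, _ = self.proc.communicate()
        self.records.extend(out.splitlines())


class Agent:
    """Manages tracers through start/stop signals."""

    def __init__(self):
        self.tracers = {}

    def start_tracer(self, name, worker_cmd):
        tracer = Tracer(name, worker_cmd)
        tracer.start()
        self.tracers[name] = tracer

    def stop_tracer(self, name):
        self.tracers[name].stop()
        return self.tracers[name].records


agent = Agent()
agent.start_tracer("vmstat", ["vmstat", "1"])  # worker fetches OS data
time.sleep(3)
print(agent.stop_tracer("vmstat"))
```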
Ideas
Each project will have scripts working (a minimal /proc-based example follows this list):
- on Python calls
- on SQL queries
- on filesystem I/O
- on RPC calls if needed
- on necessary system calls
- on common system statistics
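As one example of the "common system statistics" idea, a tracer could periodically sample /proc. The field names and output format below are assumptions for illustration, not an existing Scalpels script, and the code is Linux-only:

```python
# Illustrative system-statistics tracer: samples memory figures from
# /proc/meminfo and prints one dict per second.
import time


def read_meminfo(fields=("MemTotal", "MemFree", "MemAvailable")):
    """Return selected /proc/meminfo values in kB."""
    stats = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            if key in fields:
                stats[key] = int(value.strip().split()[0])
    return stats


if __name__ == "__main__":
    for _ in range(3):      # three one-second samples
        print(read_meminfo())
        time.sleep(1)
```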