Scalpels

Scalpels is a distributed tracing and debugging system for OpenStack.

Background

OpenStack is made of multiple Python-based projects. Each project has a similar but distinct architecture. Scalpels integrates useful scripts and third-party tools to help operators track system status in their cloud environments.

Contribute

This project is currently a prototype and is under development. If you are interested in this work, please contact @kun_huang in the #openstack-chinese channel.

Mission

Scalpels is a kind of "debugfs" for OpenStack. It gathers data from a number of tracers, such as proc or SystemTap, which can be used to quantify performance under workloads.
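
For example, a worker can be as simple as a loop that samples a proc file. The sketch below is an illustration only (the file, fields, and one-second interval are assumptions, not Scalpels' actual io tracer); it reads per-device read/write counts from /proc/diskstats:

    # Illustrative worker: sample per-device I/O counters from /proc/diskstats.
    import time


    def sample_diskstats():
        stats = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                # Field 2 is the device name, field 3 is reads completed,
                # field 7 is writes completed (see Documentation/iostats.txt).
                stats[fields[2]] = (int(fields[3]), int(fields[7]))
        return stats


    if __name__ == "__main__":
        # Emit one sample per second; a real tracer would persist these instead.
        while True:
            print(sample_diskstats())
            time.sleep(1)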

Single Node Architecture

This type of deployment is used as a proof of concept in the OpenStack community CI.

All-in-One deployment

In the All-In-One scenario, the Scalpels Client works as an RPC client and the Scalpels Agent works as an RPC server. When the Scalpels Agent receives an RPC request to start tracer 3, it starts a Unix process for tracer 3, which writes its data to the database. The Scalpels Client can then read the data from the database directly.
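
A rough sketch of the Agent side of this flow follows. The class and method names (TracerEndpoint, start_tracer) and the tracer script path are assumptions for illustration, not Scalpels' actual RPC interface; the point is that the endpoint spawns the tracer as its own Unix process when the call arrives:

    # Hypothetical RPC endpoint exposed by the Scalpels Agent.
    import subprocess


    class TracerEndpoint(object):

        def __init__(self):
            self._procs = {}

        def start_tracer(self, ctxt, tracer_id):
            # Spawn the tracer as a separate process; the tracer itself is
            # expected to write its results into the database.
            cmd = ["python", "scripts/tracer_%s.py" % tracer_id]  # hypothetical path
            proc = subprocess.Popen(cmd)
            self._procs[tracer_id] = proc
            return {"tracer": tracer_id, "pid": proc.pid}

        def stop_tracer(self, ctxt, tracer_id):
            proc = self._procs.pop(tracer_id, None)
            if proc is not None:
                proc.terminate()
                proc.wait()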

Multiple Node Architecture

This design is still in progress:

Multiple-node deployment

In the multiple-node scenario, Scalpels Servers are stateless and are distributed across multiple nodes. Each Scalpels Server knows the location of every agent and can forward requests to start tracers.

The Scalpels Agent is introduced to manage tracer processes; it can be combined with the Scalpels Server during implementation.

Tracers can write data to a Redis bus instead of the database to keep the data consistent.
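
A minimal sketch of that idea, assuming a plain redis-py client and an illustrative list key named scalpels.samples (neither is part of the current code base):

    # Illustrative tracer output path: push JSON samples onto a Redis list.
    import json
    import time

    import redis


    def push_sample(bus, tracer, payload):
        # A collector on the server side would pop and persist these in order,
        # so tracers never write to the database concurrently.
        bus.rpush("scalpels.samples", json.dumps({
            "tracer": tracer,
            "timestamp": time.time(),
            "data": payload,
        }))


    if __name__ == "__main__":
        bus = redis.StrictRedis(host="localhost", port=6379)
        push_sample(bus, "io_count", {"reads": 120, "writes": 45})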

Agent-Tracer-Worker

The relationship is:

agent-tracer-worker.png

  • Scalpels Agent: manages tracers via start/stop signals.
  • Tracer: starts a worker process and writes its stdout to the database.
  • Worker: fetches data from the operating system.
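
A hedged sketch of the Tracer role under these assumptions (the worker command, the use of sqlite, and the table layout are illustrative only):

    # Illustrative tracer: run a worker and store each stdout line in a database.
    import sqlite3
    import subprocess


    def run_tracer(worker_cmd, db_path="scalpels.db"):
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS samples (tracer TEXT, line TEXT)")

        # The worker fetches data from the operating system and prints it;
        # the tracer only relays that output into storage.
        worker = subprocess.Popen(worker_cmd, stdout=subprocess.PIPE,
                                  universal_newlines=True)
        try:
            for line in worker.stdout:
                conn.execute("INSERT INTO samples VALUES (?, ?)",
                             (worker_cmd[-1], line.strip()))
                conn.commit()
        finally:
            worker.terminate()
            conn.close()


    if __name__ == "__main__":
        run_tracer(["python", "worker_diskstats.py"])  # hypothetical worker script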

Ideas

Each project will have scripts working:

  • on python calls (a sketch follows this list)
  • on sql queries
  • on filesystem I/O
  • on RPC calls if needed
  • on necessary system calls
  • on common system statistics
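
As a sketch of the first idea, a Python call counter can be built on sys.settrace; the helper names below are assumptions, not existing Scalpels code:

    # Illustrative "python calls" tracer: count function calls via sys.settrace.
    import collections
    import sys

    call_counts = collections.Counter()


    def _count_calls(frame, event, arg):
        # The global trace function fires once per new Python frame ('call').
        if event == "call":
            call_counts[frame.f_code.co_name] += 1
        return None  # no per-line tracing inside the frame


    def traced(func, *args, **kwargs):
        """Run func with call counting enabled and return its result."""
        sys.settrace(_count_calls)
        try:
            return func(*args, **kwargs)
        finally:
            sys.settrace(None)


    def _square(x):
        return x * x


    def _work():
        return sum(_square(i) for i in range(5))


    if __name__ == "__main__":
        print(traced(_work))
        print(dict(call_counts))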