Tooling for converting subunit streams into a SQL DB

subunit2SQL README

subunit2SQL is a tool for storing test results data in a SQL database. As its name implies, it was originally designed around converting subunit streams into rows in a SQL database, and the packaged utilities assume a subunit stream as the input format. However, the data model used for the DB does not preclude using any other test result format, and the analysis tooling built on top of the database is format agnostic. If you choose to use a different result format as input for the database, additional tooling that uses the DB API would need to be created to parse that format. It is also worth pointing out that subunit has library bindings for several languages, so you could instead write a small filter that converts your format to subunit. Creating such a filter should be fairly easy, and then you don't have to write a subunit2sql-like tool for your format.
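
For instance, a minimal filter written against the python-subunit bindings might look like the sketch below. The hard-coded results list is a hypothetical stand-in for whatever parser your format needs; the emission side uses subunit's StreamResultToBytes to write a v2 stream to stdout that subunit2sql can consume. Treat it as an outline, not a finished tool.

# A minimal sketch of a filter that emits a subunit v2 stream from another
# result format. The results list is a hypothetical stand-in for a real
# parser of your format; the output side uses the python-subunit bindings.
import datetime
import sys

from subunit.v2 import StreamResultToBytes


def emit(results, stream):
    output = StreamResultToBytes(stream)
    output.startTestRun()
    for test_id, status, start, stop in results:
        # Each test is reported as an "inprogress" event followed by its
        # final status, both with timestamps so durations can be computed.
        output.status(test_id=test_id, test_status='inprogress',
                      timestamp=start)
        output.status(test_id=test_id, test_status=status, timestamp=stop)
    output.stopTestRun()


if __name__ == '__main__':
    now = datetime.datetime.now(datetime.timezone.utc)
    # Hypothetical example result: (test_id, status, start_time, stop_time)
    results = [('mysuite.tests.test_foo', 'success', now, now)]
    emit(results, sys.stdout.buffer)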

For multiple distributed test runs that generate subunit output, it is useful to store the results in a unified repository. This is the motivation for the testrepository project, which does a good job of centralizing the results from multiple test runs.

However, imagine something like the OpenStack CI system, where the same basic test suite is normally run several hundred times a day. To provide useful introspection of the data from those runs, and to build trends over time, the test results need to be stored in a format that allows for easy querying. A SQL database makes a lot of sense for this, which was the original motivation for the project.

At a high level, subunit2SQL uses alembic migrations to set up a DB schema, which the subunit2sql tool can then populate by parsing subunit streams. There are also tools for interacting with the stored data: the subunit2sql-graph command, and the sql2subunit command for creating a subunit stream from data in the database. Additionally, subunit2sql provides a Python DB API that can be used to query the stored data and build other tooling on top of it.
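
As a rough illustration of that Python DB API, the sketch below connects to a database and lists the tests stored in it. The connection string is an example, and the bootstrap and function names used (shell.parse_args, get_session, get_all_tests) are assumptions based on one version of the subunit2sql.shell and subunit2sql.db.api modules; check those modules in your installed version before relying on them.

# A rough sketch of querying stored data via the subunit2sql DB API.
from oslo_config import cfg

from subunit2sql.db import api as db_api
from subunit2sql import shell

CONF = cfg.CONF


def main():
    # Register the option groups, then point the DB layer at the database
    # (connection string is an example; use your own).
    shell.parse_args([])
    CONF.set_override('connection',
                      'mysql://subunit:pass@127.0.0.1/subunit',
                      group='database')
    session = db_api.get_session()
    for test in db_api.get_all_tests(session=session):
        print(test.test_id)


if __name__ == '__main__':
    main()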

Usage

DB Setup

The usage of subunit2sql is split into two stages. First you need to prepare a database with the proper schema; subunit2sql-db-manage should be used to do this. The utility requires DB connection info, which can be specified on the command line or with a config file. Obviously the SQL connector type, user, password, address, and database name should be specific to your environment. subunit2sql-db-manage will use alembic to set up the DB schema. You can run the DB migrations with the command:

subunit2sql-db-manage --database-connection mysql://subunit:pass@127.0.0.1/subunit upgrade head

or with a config file:

subunit2sql-db-manage --config-file subunit2sql.conf upgrade head

This will bring the DB schema up to the latest version for subunit2sql.
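
For reference, a minimal config file for the command above might look like the following. This assumes the connection string lives under a [database] section, which is the standard oslo.config layout behind the --database-connection option; see the sample config files shipped with the project for the full set of options.

[database]
connection = mysql://subunit:pass@127.0.0.1/subunit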

subunit2sql

Once you have a database set up with the proper schema, you can use the subunit2sql command to populate the database with data from your test runs. subunit2sql takes in a subunit v2 stream either through stdin or by passing file paths as positional arguments to the script. If only a subunit v1 stream is available, it can be converted to a subunit v2 stream using the subunit-1to2 utility.
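
For example, assuming results have been saved to a stream file (the file names and connection string below are placeholders), either of the following would load them:

subunit2sql --database-connection mysql://subunit:pass@127.0.0.1/subunit results.subunit

subunit2sql --database-connection mysql://subunit:pass@127.0.0.1/subunit < results.subunit

and a v1 stream can be converted and piped straight in:

subunit-1to2 < legacy_results.subunit | subunit2sql --database-connection mysql://subunit:pass@127.0.0.1/subunit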

There are several options for running subunit2sql; they can be listed with:

subunit2sql --help

The only required option is --database-connection. The options can either be passed on the CLI or put in a config file; if a config file is used, you need to specify its location on the CLI.

Most of the optional arguments deal with how subunit2sql interacts with the SQL DB. However, it is worth pointing out that the artifacts and run_meta options are used to pass additional metadata into the database for the run(s) being added. The artifacts option should be used to pass in a URL or path that points to any logs or other external test artifacts related to the run being added. The run_meta option takes a dictionary, which will be added to the database as key-value pairs associated with the run being added.
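
For example, a run with a log URL and a couple of metadata key-value pairs might be loaded like this. The values are placeholders, and the exact flag spellings and dictionary syntax follow oslo.config conventions, so check subunit2sql --help for the forms your version accepts:

subunit2sql --database-connection mysql://subunit:pass@127.0.0.1/subunit --artifacts http://logs.example.com/42/ --run_meta build_node:node-1,branch:master < results.subunit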

sql2subunit

The sql2subunit utility takes a run_id and creates a subunit v2 stream from the data stored in the DB about that run. To create a new subunit stream, run:

sql2subunit $RUN_ID

along with any options that you would normally use to specify a config file or the DB connection info. Running this command will print the subunit v2 stream for the run specified by $RUN_ID to stdout, unless the --out_path argument is specified to write it to a file instead.
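
For example, to export a run to a file (the connection string and file name are placeholders):

sql2subunit --database-connection mysql://subunit:pass@127.0.0.1/subunit --out_path my_run.subunit $RUN_ID

The resulting file can then be fed to any tooling that understands subunit v2 streams.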