Tempest plugin for whitebox testing: testing things not exposed through the REST APIs.

Whitebox Tempest plugin

This repo is a Tempest plugin that contains scenario tests run against TripleO/Director-based deployments.

Important

This is still a work in progress.

Requirements

The tests assume a TripleO/Director-based deployment with an undercloud and overcloud. The tests are run from the undercloud, so Tempest should be installed and configured on the undercloud node. It's assumed that the Unix user running the tests, generally stack, has SSH access to all the compute nodes running in the overcloud.

Most tests have specific hardware requirements. These are documented in the tests themselves and the tests should fast-fail if these hardware requirements are not met. You will require multiple nodes to run these tests and will need to manually specify which test to run on which node. For more information on our plans here, refer to roadmap.

For more information on TripleO/Director, refer to the Red Hat OpenStack Platform documentation.

Install, configure and run

  1. Install the plugin.

    This should be done from source:

    WORKSPACE=/some/directory
    cd $WORKSPACE
    git clone https://github.com/redhat-openstack/whitebox-tempest-plugin
    sudo pip install whitebox-tempest-plugin/
  2. Configure Tempest.

    Add the following lines at the end of your tempest.conf file. These determine how your undercloud node, which is running Tempest, should connect to the compute nodes in the overcloud and vice versa (a sketch of how these values are used follows these steps). For example:

    [whitebox]
    hypervisors = compute-0.localdomain:192.168.24.6,compute-1.localdomain:192.168.24.12
    # Only set the following if different from the defaults listed
    # ctlplane_ssh_username = heat-admin
    # ctlplane_ssh_private_key_path = /home/stack/.ssh/id_rsa
    containers = true
    max_compute_nodes = 2 # Some tests depend on there being a single
                          # (available) compute node
  3. Execute the tests:

    tempest run --regex whitebox_tempest_plugin.
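
To illustrate what the [whitebox] configuration above drives, here is a minimal sketch (not part of the plugin) of how a test running on the undercloud could SSH into a compute node using Tempest's stable SSH helper. The address, username and key path are the example values from step 2 and are assumptions about your deployment:

    from tempest.lib.common import ssh

    # Example values taken from the sample tempest.conf in step 2; adjust
    # them to match your own overcloud.
    client = ssh.Client('192.168.24.6', 'heat-admin',
                        key_filename='/home/stack/.ssh/id_rsa')

    # Run an arbitrary command on the compute node; whitebox tests use this
    # kind of access to inspect things not exposed through the REST APIs.
    print(client.exec_command('hostname'))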

How to add a new test

New tests should be added to the whitebox_tempest_plugin/tests directory.

According to the plugin interface doc, you should mainly import "stable" APIs which usually are:

  • tempest.lib.*
  • tempest.config
  • tempest.test_discover.plugins
  • tempest.common.credentials_factory
  • tempest.clients
  • tempest.test

Importing classes from tempest.api.* is risky, since future versions of Tempest could break them.
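
As a rough illustration, here is a minimal sketch of what a new test module under whitebox_tempest_plugin/tests could look like using only the stable interfaces listed above. The class name, test name and assertion are hypothetical, not an existing test:

    from tempest import config
    from tempest import test

    CONF = config.CONF


    class ExampleWhiteboxTest(test.BaseTestCase):
        """Hypothetical test case built only on stable Tempest interfaces."""

        def test_whitebox_options_are_registered(self):
            # Options registered by this plugin appear under the [whitebox]
            # group, e.g. the max_compute_nodes option from the sample
            # configuration earlier in this README.
            self.assertTrue(CONF.whitebox.max_compute_nodes >= 1)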

Roadmap

The different tests found here all have different hardware requirements, and these requirements often conflict. For example, a host configured without HyperThreading for one test cannot also be used for a test that requires HyperThreading. As a result, it's not possible to have one "master configuration" that can be used to run all tests. Instead, different tests must be run on different nodes.

At present, this plugin exists in isolation and the running of individual tests on nodes, along with the configuration of said nodes, remains a manual process. However, the end goal for this project is to be able to kick off a run of this test suite against N overcloud nodes, where each node has a different hardware configuration and N is the total number of different hardware configurations required (one for real-time, one for SR-IOV, etc.). Each node would have a different profile and host aggregates would likely be used to ensure each test runs on its preferred hardware. To get there, we should probably provide a recipe along with hardware configuration steps.

That said, the above is still a long way off. For now, we're focused on getting the tests in place so we can stop doing all this by hand.