Use nested-virt nodepool label

To avoid being 100% dependent on the OpenEdge (formerly Fort Nebula)
nodepool provider, this patch starts using the nested-virt label. In
addition, because we can no longer expect SMT (aka Hyper-Threading)
to be present with the new nodepool label, a new config option,
[whitebox-hardware]/smt_hosts, is introduced. It lists the hostnames
of compute hosts that support SMT.
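For example, with two SMT-capable computes (hostnames illustrative),
tempest.conf would carry:

    [whitebox-hardware]
    smt_hosts = compute-0,compute-1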

We do have tests that require multi-NUMA and/or SMT. Those are moved
to a new non-voting job that runs on the multi-numa label provided by
OpenEdge.
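In-test skipping is a plain testtools guard keyed on the new option, as
the test file diff below shows. A minimal self-contained sketch of the
pattern (the CONF stand-in and test body are made up for illustration;
the real tests read the option from the plugin's config):

    import testtools


    class FakeHardwareConf(object):
        # Stand-in for CONF.whitebox_hardware; in the real plugin this
        # is populated from [whitebox-hardware]/smt_hosts.
        smt_hosts = []


    class CPUThreadPolicySketch(testtools.TestCase):

        @testtools.skipUnless(len(FakeHardwareConf.smt_hosts) > 0,
                              'At least 1 SMT-capable compute host is required')
        def test_threads_require(self):
            # Runs only when at least one SMT-capable host is configured.
            self.assertTrue(True)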

Story: 2007395
Change-Id: If62d3a23044bf17f35a370e2f84fb459c166c9b2
Author: Artom Lifshitz, 2020-07-13 15:27:15 -04:00
Parent: 14b70736b9
Commit: b3a4ae122d
5 changed files with 75 additions and 6 deletions

.zuul.yaml

@@ -1,6 +1,3 @@
-# TODO(artom) Once https://review.opendev.org/#/c/679656/ merges, we can unify
-# the nodeset names and use the one in Nova. Until then, have a different name
-# here.
 - nodeset:
     name: multi-numa-multinode
     nodes:
@@ -31,12 +28,42 @@
         nodes:
           - compute
 
+- nodeset:
+    name: nested-virt-multinode
+    nodes:
+      - name: controller
+        label: nested-virt-ubuntu-bionic
+      - name: compute
+        label: nested-virt-ubuntu-bionic
+    groups:
+      # Node where tests are executed and test results collected
+      - name: tempest
+        nodes:
+          - controller
+      # Nodes running the compute service
+      - name: compute
+        nodes:
+          - controller
+          - compute
+      # Nodes that are not the controller
+      - name: subnode
+        nodes:
+          - compute
+      # Switch node for multinode networking setup
+      - name: switch
+        nodes:
+          - controller
+      # Peer nodes for multinode networking setup
+      - name: peers
+        nodes:
+          - compute
+
 - job:
-    name: whitebox-multinode-devstack
-    nodeset: multi-numa-multinode
+    name: whitebox-multinode-devstack-base
+    abstract: true
     parent: tempest-multinode-full-py3
     description: |
-      Devstack multinode job.
+      Base devstack multinode job.
     required-projects:
       openstack/whitebox-tempest-plugin
     pre-run: playbooks/whitebox/pre.yaml
@@ -69,6 +96,31 @@
       tempest:
         num_hugepages: 512
 
+- job:
+    name: whitebox-multinode-devstack
+    parent: whitebox-multinode-devstack-base
+    nodeset: nested-virt-multinode
+    description: |
+      Runs the entire test suite on single-NUMA, non-SMT, nested virt VMs.
+      Tests that need SMT and/or more than 1 NUMA node are skipped in the test
+      code itself.
+    vars:
+      tempest_test_regex: ^whitebox_tempest_plugin\.
+
+- job:
+    name: whitebox-multinode-multinuma-devstack
+    parent: whitebox-multinode-devstack-base
+    nodeset: multi-numa-multinode
+    voting: false
+    description: |
+      Runs specific tests that need SMT and/or more than 1 NUMA node.
+      Non-voting because there is currently only 1 provider of the `multi-numa`
+      label, and it's not super reliable.
+    vars:
+      tempest_test_regex: '(NUMALiveMigrationTest.test_cpu_pinning|CPUThreadPolicyTest)'
+      devstack_localrc:
+        SMT_HOSTS: "{{ hostvars['controller']['ansible_facts']['ansible_hostname'] }},{{ hostvars['compute']['ansible_facts']['ansible_hostname'] }}"
+
 - project:
     templates:
       - openstack-python-jobs
@@ -77,6 +129,8 @@
     check:
       jobs:
         - whitebox-multinode-devstack
+        - whitebox-multinode-multinuma-devstack
     gate:
      jobs:
        - whitebox-multinode-devstack
+        - whitebox-multinode-multinuma-devstack
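The SMT_HOSTS value in the multinuma job above is rendered by Ansible
at run time from the nodes' facts. On a run where the two nodes come up
as node-1 and node-2 (hostnames illustrative), devstack would receive:

    SMT_HOSTS="node-1,node-2"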

devstack/plugin.sh

@@ -9,6 +9,9 @@ function configure {
     # nodes are in the env
     iniset $TEMPEST_CONFIG whitebox max_compute_nodes $MAX_COMPUTE_NODES
     iniset $TEMPEST_CONFIG whitebox available_cinder_storage $WHITEBOX_AVAILABLE_CINDER_STORAGE
+    if [ -n "$SMT_HOSTS" ]; then
+        iniset $TEMPEST_CONFIG whitebox-hardware smt_hosts "$SMT_HOSTS"
+    fi
     iniset $TEMPEST_CONFIG whitebox-nova-compute config_path "$WHITEBOX_NOVA_COMPUTE_CONFIG_PATH"
     iniset $TEMPEST_CONFIG whitebox-nova-compute restart_command "$WHITEBOX_NOVA_COMPUTE_RESTART_COMMAND"
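With SMT_HOSTS set as in the jobs above (hostnames illustrative), the
new iniset call writes:

    [whitebox-hardware]
    smt_hosts = node-1,node-2

When SMT_HOSTS is empty, the option is simply not written and the
plugin falls back to its default of no SMT hosts.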

devstack/settings

@@ -1,6 +1,7 @@
 NOVA_FILTERS="$NOVA_FILTERS,NUMATopologyFilter"
 WHITEBOX_AVAILABLE_CINDER_STORAGE=${WHITEBOX_AVAILABLE_CINDER_STORAGE:-24}
+SMT_HOSTS=${SMT_HOSTS:-''}
 WHITEBOX_NOVA_COMPUTE_CONFIG_PATH=${WHITEBOX_NOVA_COMPUTE_CONFIG_PATH:-/etc/nova/nova-cpu.conf}
 WHITEBOX_NOVA_COMPUTE_RESTART_COMMAND=${WHITEBOX_NOVA_COMPUTE_RESTART_COMMAND:-'systemctl restart devstack@n-cpu'}

whitebox_tempest_plugin/api/compute/test_cpu_pinning.py

@@ -33,6 +33,7 @@ from tempest.common import compute
 from tempest.common import utils
 from tempest.common import waiters
 from tempest import config
+from tempest.lib import decorators
 from whitebox_tempest_plugin.api.compute import base
 from whitebox_tempest_plugin import exceptions
@@ -372,6 +373,8 @@ class CPUThreadPolicyTest(BasePinningTest):
             set(sib).isdisjoint(cpu_pinnings.values()),
             "vCPUs siblings should not have been used")
 
+    @testtools.skipUnless(len(CONF.whitebox_hardware.smt_hosts) > 0,
+                          'At least 1 SMT-capable compute host is required')
     def test_threads_prefer(self):
         """Ensure vCPUs *are* placed on thread siblings.
@@ -397,6 +400,8 @@ class CPUThreadPolicyTest(BasePinningTest):
             "vCPUs siblings were required but not used. Does this host "
             "have HyperThreading enabled?")
 
+    @testtools.skipUnless(len(CONF.whitebox_hardware.smt_hosts) > 0,
+                          'At least 1 SMT-capable compute host is required')
     def test_threads_require(self):
         """Ensure thread siblings are required and used.
@@ -483,6 +488,7 @@ class NUMALiveMigrationTest(BasePinningTest):
         """
         return set([len(cpu_list) for cpu_list in chain(*args)])
 
+    @decorators.skip_because(bug='2007395', bug_type='storyboard')
     def test_cpu_pinning(self):
         host1, host2 = [self.get_ctlplane_address(host) for host in
                         self.list_compute_hosts()]

whitebox_tempest_plugin/config.py

@@ -146,4 +146,9 @@ hardware_opts = [
         default=None,
         help='The vendor id of the underlying vgpu hardware of the compute. '
              'An example with Nvidia would be 10de'),
+    cfg.ListOpt(
+        'smt_hosts',
+        default=[],
+        help='List of compute hosts that have SMT (Hyper-Threading in Intel '
+             'parlance).'),
 ]
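
Since smt_hosts is a standard oslo.config ListOpt, a comma-separated
value in tempest.conf is parsed into a Python list. A minimal
standalone sketch of that behaviour (not part of this change):

    from oslo_config import cfg

    group = cfg.OptGroup('whitebox-hardware')
    opts = [cfg.ListOpt('smt_hosts', default=[],
                        help='Compute hosts that have SMT.')]

    conf = cfg.ConfigOpts()
    conf.register_group(group)
    conf.register_opts(opts, group=group)
    conf(args=[])  # no config file, so the [] default applies
    print(conf['whitebox-hardware'].smt_hosts)  # []
    # With "smt_hosts = compute-0,compute-1" under [whitebox-hardware]
    # in a parsed config file, this would print
    # ['compute-0', 'compute-1'].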