Add support for any local LDAP user to run collect
This update replaces the currently enforced sysadmin username,
allowing any local LDAP user that is a member of the sudo and
sys_protected groups to run collect.

This change introduced several challenging new failure modes
that necessitated some refactoring of collect's existing fault
reporting.

Enhancements were made to the detection, handling and reporting of
failures at 'all levels'; i.e. local host, remote host, subcloud
and subcloud remote host.

Specific attention was paid to the most probable failure modes,
including detection and handling of passwordless sudo, unsupported
sudo, unknown usernames, unreachable hosts, invalid passwords,
out-of-space errors, etc. 'at all levels'.

Generally, multi-host collect handling continues in the presence of
remote host collect errors or warnings, whereas local host failures
are typically treated as fatal.

Additionally, reporting of collect timeout cases was improved;
instead of printing only a timeout code number, a string-based
timeout cause is now included.

Improvements were also made to the handling and packaging of the
collect.log file. A collect.log is now included at the main bundle
and subcloud collect levels. Some failures to collect from remote
hosts are now logged in these files at the appropriate level.
If a user notices that a host or subcloud is missing from a bundle,
say because it was unreachable or had passwordless sudo enabled,
the collect log will contain a warning message to that effect.

A 5 second yield was added to subcloud collect monitoring to reduce
the CPU load that monitoring was inducing. This lines up with the
existing 5 second yield used for host collect monitoring.

In an attempt to improve the collect user experience, the following
additional improvements were made.

- The attempt to source the openrc file and query system inventory
  was moved after the password prompt. This allows the various
  password checks to be handled early, making the tool feel more
  responsive.

- The global collect timeout now starts only after the password is
  entered and the inventory is read, so these operations don't count
  towards the collection time.

- Improved how collect reports to the console and logs the hosts
  and/or subclouds that 'will be' and 'were successfully' collected
  from.

- Added expect segment debug logging tied to the --debug option.
  With debug enabled, each function's expect segment logs its
  execution output to /tmp with files of the following form.

      /tmp/collect_expect_<username>_<unit>_<function>

- Added a --password option to simplify collect test automation.

- Replaced the subcloud collect verbose option with debug.
  The verbose subcloud collect was known to cause issues.

All of the above changes warranted a collect tool up-version to 3.0.

Test Plan: A full collect regression was performed

PASS: Verify install and collect testing on the following systems
      - All-In-One SX
      - All-In-One DX
      - Standard DX with 1 worker and 1 storage
      - Simplex DC system with 2 subclouds ; 1 SX and 1 DX

Success Path Handling: both sysadmin and any other username

PASS: Verify collect handling at all levels
PASS: Verify dated collect all for system and subcloud
PASS: Verify all variations of collect host list handling
PASS: Verify collect clean at all levels
PASS: Verify system and subcloud collect --report handling
PASS: Verify collect all --skip-mask
PASS: Verify collect all --timeout
PASS: Verify collect all --inline
PASS: Verify collect all --subcloud
PASS: Verify collect all --verbose
PASS: Verify collect all --verbose --debug
PASS: Verify collect all --version
PASS: Verify collect all --subcloud --inline
PASS: Verify collect all on SX/DX standard and DC systems
PASS: Verify new collect --password option
PASS: Verify collect bundle content between sysadmin and other user.
PASS: Verify bundle includes collect.log at bundle and host levels
PASS: Verify collect.log content at each level.
PASS: Verify collect from remote host that does not have this update
PASS: Verify collect from subcloud that does not have this update
PASS: Verify system and subcloud collect using account password with
             special character(s).

Failure Path Handling: error response should clearly indicate the issue

PASS: Verify all level collect handling of unknown username
PASS: Verify all level collect handling with passwordless sudo enabled
PASS: Verify all level collect handling of unsupported sudo
PASS: Verify all level collect handling where hosts run out of scratch
PASS: Verify all level collect handling of a host whose scratch space
             is filled to 75% or more
PASS: Verify a successful collect following the cleanup of a previous
             out of space error.
PASS: Verify collect handling of all non-active controller cases
PASS: Verify collect handling of an invalid hostname
PASS: Verify collect handling of unreachable remote hosts at all levels
PASS: Verify collect handling of an invalid password at all levels
PASS: Verify collect host and subcloud collect timeout handling
PASS: Verify collect global timeout handling
PASS: Verify collect handling of failure to get the remote tarball
PASS: Verify collect debug option handling and data

Story: 2010533
Task: 50419
Change-Id: Ibd827e1c72190bcdcf710b32ad7903cfa397c394
Signed-off-by: Eric MacDonald <eric.macdonald@windriver.com>

utilities

This file serves as documentation for the components and features included in the utilities repository.

PCI IRQ Affinity Agent

While OpenStack makes it possible for instances to use PCI devices, the interrupts generated by these devices may be handled by host CPUs that are unrelated to the instance, which can lead to lower performance than if the device interrupts were handled by the instance's own CPUs.

The agent only acts on instances with dedicated vCPUs. For instances using shared vCPUs, no action is taken by the agent.

The expected outcome of the agent's operation is higher performance, achieved by assigning an instance's cores to handle the interrupts from the PCI devices used by that instance, and by avoiding interrupts consuming excessive cycles from the platform cores.

Agent operation

The agent operates by listening to RabbitMQ notifications from Nova. When an instance is created or moved to the host, the agent checks for a specific flavor spec (detailed below) and, if the spec is present, queries libvirt to map the instance's vCPUs to host pCPUs.

Once the agent has the CPU mapping, it determines the IRQs for each PCI device used by the instance. It then loops over all PCI devices, determines which host NUMA node is associated with each device and which pCPUs are associated with that NUMA node, and finally sets the CPU affinity for the PCI device's IRQs based on the pCPU list.
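
As a rough illustration, that final step amounts to writing a cpulist to the kernel's per-IRQ affinity interface. The helper below is a minimal sketch, assuming root privileges; set_irq_affinity is a hypothetical name for illustration, not the agent's actual code:

    def set_irq_affinity(irq, pcpus):
        """Pin an IRQ to the host pCPUs backing the instance's vCPUs."""
        # /proc/irq/<N>/smp_affinity_list accepts a cpulist, e.g. "10,11"
        cpulist = ",".join(str(c) for c in sorted(pcpus))
        with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
            f.write(cpulist)

    # e.g. pin IRQ 98 to pCPUs 10 and 11 (values are hypothetical):
    # set_irq_affinity(98, [10, 11])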

There is also a periodic audit that runs every minute and loops over the existing IRQs, so that any new IRQs that weren't previously mapped are mapped by the agent, and any PCI devices that are no longer associated with the instance they once were have their IRQ affinity reset to the default value.
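
The reset half of that audit can be pictured as in the sketch below; reset_irq_affinity and audit_pass are invented names, not the agent's real implementation, and the sketch assumes the system default mask is available from /proc/irq/default_smp_affinity:

    def reset_irq_affinity(irq):
        """Restore an IRQ to the system default affinity mask."""
        with open("/proc/irq/default_smp_affinity") as f:
            default_mask = f.read().strip()
        with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
            f.write(default_mask)

    def audit_pass(tracked_irqs, previously_tracked):
        """One audit pass: release IRQs whose PCI device left its instance."""
        for irq in previously_tracked - tracked_irqs:
            reset_irq_affinity(irq)

In the real agent, the tracked set would be derived from the instance and PCI device inventory, with the pass repeating on the one-minute audit timer.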

Flavor spec

The PCI IRQ Affinity Agent uses a specific flavor spec for PCI interrupt affining, which determines which of the vCPUs assigned to the instance must handle the interrupts from the PCI devices:

  • hw:pci_irq_affinity_mask=<vcpus_cpulist>

Where vcpus_cpulist is a comma-separated list of values, each of which can be expressed as:

  • int: the vCPU identified by int will be assigned to handle the interrupts from the PCI devices
  • int1-int2: the vCPUs between int1 and int2 (inclusive) will be used to handle the interrupts from the PCI devices
  • ^int: the vCPU identified by int will not be assigned to handle the interrupts from the PCI devices; this is used to exclude a vCPU that was included in a previous range

NOTE: int must be a value between 0 and flavor.vcpus - 1

Example: hw:pci_irq_affinity_mask=1-4,^3,6 means that the vCPUs with indexes 1, 2, 4 and 6 from the vCPU list that Nova allocates to the instance will be assigned to handle interrupts from the PCI devices.
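
For clarity, the documented syntax can be parsed as in the sketch below; parse_irq_affinity_mask is an illustrative reimplementation of the rules above, not the agent's actual parser:

    def parse_irq_affinity_mask(mask, vcpus):
        """Parse a cpulist such as '1-4,^3,6' into selected vCPU indexes."""
        selected = set()
        for token in mask.split(","):
            exclude = token.startswith("^")
            if exclude:
                token = token[1:]
            if "-" in token:
                lo, hi = (int(n) for n in token.split("-"))
                values = range(lo, hi + 1)
            else:
                values = [int(token)]
            for v in values:
                # NOTE: each value must lie between 0 and flavor.vcpus - 1
                if not 0 <= v < vcpus:
                    raise ValueError("vCPU %d outside 0..%d" % (v, vcpus - 1))
                if exclude:
                    selected.discard(v)
                else:
                    selected.add(v)
        return selected

    # parse_irq_affinity_mask("1-4,^3,6", vcpus=8) -> {1, 2, 4, 6}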

Limitations

  • No CPU affining is performed for instances using shared CPUs (i.e., when using the flavor spec hw:cpu_policy=shared)
  • No CPU affining is performed when invalid ranges are specified in the flavor spec; instead, the agent logs error messages indicating the problem

Agent packaging

The agent code resides in the starlingx/utilities repo, along with the spec and docker_image files that are used to build a CentOS image with the agent wheel installed on it.

The agent is deployed by Armada along with the other OpenStack helm charts; refer to the PCI IRQ Affinity Agent helm chart in the starlingx/openstack-armada-app repository.
