whitebox-tempest-plugin/playbooks/whitebox/pre.yaml
Artom Lifshitz 71c9c3eeb7 Switch to Ubuntu Noble image
Besides being a good thing in itself, moving to the latest LTS also
resolves an issue caused by differing watchdog device behaviour across
libvirt versions. The q35 machine type always includes an implicit
watchdog device, but that device didn't start showing up in the domain
XML with a configurable action until libvirt 9.1.0. On Jammy, the
libvirt version is 8.0.0, whereas in downstream RHOSO 18 CI on RHEL
9.4 it is 10.0.0. Moving to Noble unifies the libvirt versions and
makes the watchdog testing less complicated.

Changes necessary to make this work:

* Start installing crudini with `package` instead of `pip` - newer
  `pip` refuses to install into the externally managed environment
  (PEP 668), and tells us to either use pipx to install in a venv,
  pass --break-system-packages, or install using system packages.
  Let's try the latter (first sketch after this list).

* Start running our local download job on changes to .zuul.yaml as
  well.

* Disable the br-ex-tcpdump service. It appears to be useful only for
  debugging, and it's crashing on the compute host with Noble. Just
  disable it for now (second sketch after this list).
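
For illustration, the crudini change amounts to something like the
following; the `pip` version is a reconstruction of the old task, not
a verbatim quote of the previous playbook:

    # Before (sketch): newer pip fails with "externally-managed-environment"
    - name: crudini
      pip:
        name: crudini
      become: yes

    # After: install the distro package instead
    - name: crudini
      package:
        name: crudini
        state: present
      become: yes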
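
The br-ex-tcpdump change itself isn't in this file; a hypothetical
sketch of a task that disables the service, assuming it's a regular
systemd unit, would be:

    # Hypothetical sketch; the actual change may live elsewhere in the job
    - name: Disable the br-ex-tcpdump service
      systemd:
        name: br-ex-tcpdump
        state: stopped
        enabled: no
      become: yes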

Change-Id: I538f900953be2304baf8f63693699651fe763402
2024-06-29 14:04:35 -04:00


- hosts: all
  roles:
    - ensure-pip
  tasks:
    - name: crudini
      package:
        name: crudini
        state: present
      become: yes
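    # extra_packages is expected to be a list. The package module accepts
    # a list for `name`, so the single-item loop below hands the whole
    # list to one module invocation.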
    - name: Install additional packages needed by job environment
      package:
        name: "{{ item }}"
        state: present
      become: yes
      loop:
        - "{{ extra_packages }}"
      when: extra_packages is defined
    # NOTE(artom) The run-tempest role runs as the tempest user, so we need to give
    # the tempest user SSH access to all hosts. Devstack's orchestrate-devstack
    # role should have put a pubkey into the stack user's authorized_keys, so if we
    # put the corresponding private key in the tempest user's .ssh, things should
    # magically work.
    - name: Setup tempest SSH key
      include_role:
        name: copy-build-sshkey
      vars:
        ansible_become: yes
        copy_sshkey_target_user: 'tempest'
    - name: Create compute nodes file
      block:
        - name: Render compute_nodes.yaml template
          template:
            src: "../templates/{{ compute_node_template_name }}"
            dest: /home/zuul/compute_nodes.yaml
          run_once: true
          delegate_to: controller
        - name: Output the rendered file at /home/zuul/compute_nodes.yaml
          shell: |
            cat /home/zuul/compute_nodes.yaml
          run_once: true
          delegate_to: controller
      when: compute_node_template_name is defined

- hosts: compute
  tasks:
    - name: Create hugepages for computes
      block:
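        # lineinfile with backrefs only rewrites the line when the regexp
        # matches, and \1 re-inserts the existing arguments inside the
        # quotes, so the hugepage settings are appended to, rather than
        # replace, the current command line.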
        - name: Append to GRUB command line
          lineinfile:
            path: /etc/default/grub
            state: present
            backrefs: yes
            regexp: GRUB_CMDLINE_LINUX="([^"]*)"
            line: GRUB_CMDLINE_LINUX="\1 hugepagesz=2M hugepages={{ num_2M_pages }} hugepagesz=1G hugepages={{ num_1G_pages }} transparent_hugepage=never"
          become: yes
        - name: Update grub.cfg
          # NOTE(artom) This assumes an Ubuntu host
          command: update-grub2
          become: yes
        - name: Reboot
          reboot:
          become: yes
        - name: (Re-)start the Zuul console streamer after the reboot
          # NOTE(artom) The job will still work if we don't do this, but the
          # console will get spammed with 'Waiting on logger' messages. See
          # https://bugs.launchpad.net/openstack-gate/+bug/1806655 for more
          # info.
          import_role:
            name: start-zuul-console
        - name: Add 1G hugetlbfs mount
          # The 2M hugetlbfs is mounted automatically by the OS, but we need to
          # manually add the 1G mount.
          shell: |
            mkdir /dev/hugepages1G
            mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
          become: yes
      when: num_2M_pages is defined and num_1G_pages is defined
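      # To verify the pools after the reboot (informational, not a task):
      # `grep HugePages_Total /proc/meminfo` shows the default-size (2M)
      # pool, and /sys/kernel/mm/hugepages/ has nr_hugepages entries for
      # both the 2M and 1G pools.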