d4c46ecdef
This replaces hard-coding of the host "bridge.openstack.org" with hard-coding of the first (and only) host in the group "bastion". The idea is that, as much as possible, we can switch a single place to an alternative bastion hostname, such as "bridge.opendev.org", when we upgrade. This covers just the testing path for now; a follow-on will modify the production path (which doesn't really get speculatively tested).

The group needs to be defined in two places:

1) In the run jobs for Zuul, via the playbooks/zuul/run-*.yaml playbooks, since Zuul sets up and collects logs from the testing bastion host.

2) In the inventory at inventory/service/groups.yaml, which the nested Ansible run then uses.

Various other places are updated to use this abstracted group as the bastion host. Variables are moved into the bastion group (which has only one host, the actual bastion host), which means we only have to update the group mapping to point at a new host.

This is intended to be a no-op change; all the jobs should work the same, just using the new abstractions.

Change-Id: Iffb462371939989b03e5d6ac6c5df63aa7708513
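To illustrate the mechanism (the group name is real; the hostname and exact inventory-plugin syntax here are illustrative assumptions, not the change's actual diff), the group maps to the single bastion host in one place, e.g. in inventory/service/groups.yaml:

    bastion:
      - bridge.openstack.org

and playbooks and job definitions refer to the host indirectly via the standard Ansible idiom:

    {{ groups['bastion'][0] }}

so an upgrade only needs to change the group membership, not every consumer.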
- hosts: bastion:!disabled
  name: "Bridge: configure the bastion host"
  roles:
    - iptables
    - edit-secrets-script
    - install-docker
  tasks:
    # Skip as no arm64 support is available; this is only used for gate
    # testing, where we can't mix arm64 and x86 nodes, so we need a
    # minimally working bridge to drive the tests for mirrors/nodepool etc.
    - name: Install openshift/kubectl
      when: ansible_architecture != 'aarch64'
      block:
        - include_role:
            name: install-osc-container
        - include_role:
            name: install-kubectl
        - include_role:
            name: configure-kubectl

    - include_role:
        name: configure-openstacksdk
      vars:
        openstacksdk_config_template: clouds/bridge_all_clouds.yaml.j2

    - name: Get rid of all-clouds.yaml
      file:
        state: absent
        path: '/etc/openstack/all-clouds.yaml'
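
    # For context: configure-openstacksdk renders an openstacksdk
    # clouds.yaml from the template named above. A minimal sketch of the
    # general clouds.yaml shape (the cloud name, auth values and region are
    # illustrative assumptions, not the contents of
    # bridge_all_clouds.yaml.j2):
    #
    #   clouds:
    #     examplecloud:
    #       region_name: RegionOne
    #       auth:
    #         auth_url: https://keystone.example.com/v3
    #         username: service-user
    #         project_name: example-project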

    - name: Install rackspace DNS backup tool
      include_role:
        name: rax-dns-backup

    - name: Automated Zuul cluster reboots and updates
      # Note this is run via cron because a zuul job can't run this playbook
      # as the playbook relies on all jobs ending for graceful stops on the
      # executors.
      cron:
        name: "Zuul cluster restart"
        # Start Sundays at 00:01 UTC.
        # Estimated completion time Sunday at 18:00 UTC.
        minute: 1
        hour: 0
        weekday: 6
        job: "flock -n /var/run/zuul_reboot.lock /usr/local/bin/ansible-playbook -f 20 /home/zuul/src/opendev.org/opendev/system-config/playbooks/zuul_reboot.yaml >> /var/log/ansible/zuul_reboot.log 2>&1"

    - name: Rotate Zuul restart logs
      include_role:
        name: logrotate
      vars:
        logrotate_file_name: /var/log/ansible/zuul_reboot.log
        logrotate_frequency: weekly
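
    # For reference, with these variables a logrotate role would be
    # expected to manage a stanza roughly like the following (a sketch of
    # standard logrotate syntax; the retention and compression options are
    # assumptions, not the role's actual defaults):
    #
    #   /var/log/ansible/zuul_reboot.log {
    #       weekly
    #       rotate 4
    #       compress
    #       missingok
    #       notifempty
    #   }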