system-config/playbooks/zuul/run-production-playbook-post.yaml
Ian Wienand 0c90c128d7
Reference bastion through prod_bastion group
In thinking harder about the bootstrap process, it struck me that the
"bastion" group we have conflates two separate ideas, which becomes
confusing because they share a name.

We have the testing and production paths that need to find a single
bridge node so they can run their nested Ansible.  We've recently
merged changes to the setup playbooks to not hard-code the bridge node
and they now use groups["bastion"][0] to find the bastion host -- but
this group is actually orthogonal to the group of the same name
defined in inventory/service/groups.yaml.
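
As a sketch, the lookup on the executor side is just a group index;
the real setup-playbook tasks differ in detail:

  - name: Record the currently active bastion host
    set_fact:
      bastion_host: '{{ groups["bastion"][0] }}'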

The testing and production paths are running on the executor, and, as
mentioned, need to know the bridge node to log into.  For the testing
path this is happening via the group created in the job definition
from zuul.d/system-config-run.yaml.  For the production jobs, this
group is populated via the add-bastion-host role which dynamically
adds the bridge host and group.
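
A sketch of what that dynamic addition looks like (the real
add-bastion-host role differs; the "bastion_host" variable name here
is hypothetical):

  - name: Add the active bridge node to the prod_bastion group
    add_host:
      name: '{{ bastion_host }}'
      groups: prod_bastion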

Only the *nested* Ansible running on the bastion host reads
s-c:inventory/service/groups.yaml.  None of the nested-ansible
playbooks need to target only the currently active bastion host.  For
example, we can define as many bridge nodes as we like in the
inventory and run service-bridge.yaml against all of them.  It won't
matter, because the production jobs find the currently active bridge
host as described above.
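
For example, a nested play can simply target the whole group; this is
a schematic sketch, not the real service-bridge.yaml:

  - hosts: bastion
    tasks:
      - name: Touch every matching bridge node
        debug:
          msg: 'Configuring {{ inventory_hostname }}'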

So, instead of using the same group name in two contexts, rename the
testing/production group to "prod_bastion".  groups["prod_bastion"][0]
will be the host that the testing/production jobs use as the bastion
host -- references are updated in this change (i.e. the two places
this group is defined -- the group name in the system-config-run jobs,
and add-bastion-host for production).

We then can return the "bastion" group match to bridge*.opendev.org in
inventory/service/groups.yaml.
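
Schematically, that group entry is just a wildcard match (the exact
syntax of the inventory plugin used by system-config may differ):

  bastion:
    - bridge*.opendev.org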

This fixes a bootstrapping problem -- if you launch, say,
bridge03.opendev.org, the launch-node script will now apply the
base.yaml playbook against it, and correctly apply all variables from
the "bastion" group, which now matches this new host.  This is what we
want, ensuring that e.g. the zuul user and keys are correctly
populated.

The other thing we can do here is change the testing-path
"prod_bastion" hostname to "bridge99.opendev.org".  This ensures we
are not hard-coding the production bridge host anywhere (if both the
testing and production hosts were called bridge01.opendev.org, that
could hide problems).  This is a big advantage when we want to rotate
the production bridge host, as we can be certain there are no hidden
dependencies.
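
Schematically, the job-side group definition then looks something like
this (a sketch; the node label is illustrative -- see
zuul.d/system-config-run.yaml for the real definition):

  nodeset:
    nodes:
      - name: bridge99.opendev.org
        label: ubuntu-jammy
    groups:
      - name: prod_bastion
        nodes:
          - bridge99.opendev.org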

Change-Id: I137ab824b9a09ccb067b8d5f0bb2896192291883
2022-11-04 09:18:35 +11:00

- hosts: localhost
  roles:
    - add-bastion-host

- hosts: prod_bastion[0]
  tasks:

    - name: Encrypt log
      when: infra_prod_playbook_encrypt_log|default(False)
      block:

        - name: Create temporary staging area for encrypted logs
          tempfile:
            state: directory
          register: _encrypt_tempdir

        - name: Copy log to tempdir as Zuul user
          copy:
            src: '/var/log/ansible/{{ playbook_name }}.log'
            dest: '{{ _encrypt_tempdir.path }}'
            owner: zuul
            group: zuul
            mode: '0644'
            remote_src: yes
          become: yes

        - name: Encrypt logs
          include_role:
            name: encrypt-logs
          vars:
            encrypt_logs_files:
              - '{{ _encrypt_tempdir.path }}/{{ playbook_name }}.log'
            # Artifact URL should just point to root directory, so blank
            encrypt_logs_artifact_path: ''
            encrypt_logs_download_script_path: '{{ _encrypt_tempdir.path }}'

        - name: Return logs
          synchronize:
            src: '{{ item[0] }}'
            dest: '{{ item[1] }}'
            mode: pull
            verify_host: true
          loop:
            - ['{{ _encrypt_tempdir.path }}/{{ playbook_name }}.log.gpg', '{{ zuul.executor.log_root }}/{{ playbook_name }}.log.gpg']
            - ['{{ _encrypt_tempdir.path }}/download-logs.sh', '{{ zuul.executor.log_root }}/download-gpg-logs.sh']

      always:

        - name: Remove temporary staging
          file:
            path: '{{ _encrypt_tempdir.path }}'
            state: absent
          when: _encrypt_tempdir is defined

    # Not using normal zuul job roles as the bastion host is not a
    # test node with all the normal bits in place.
    - name: Collect log output
      synchronize:
        dest: "{{ zuul.executor.log_root }}/{{ playbook_name }}.log"
        mode: pull
        src: "/var/log/ansible/{{ playbook_name }}.log"
        verify_host: true
      when: infra_prod_playbook_collect_log

    - name: Return playbook log artifact to Zuul
      when: infra_prod_playbook_collect_log
      zuul_return:
        data:
          zuul:
            artifacts:
              - name: "Playbook Log"
                url: "{{ playbook_name }}.log"
                metadata:
                  type: text

    # Save files locally on bridge
    - name: Get original timestamp from file header
      shell: |
        head -1 /var/log/ansible/{{ playbook_name }}.log | sed -n 's/^Running \(.*\):.*$/\1/p'
      args:
        executable: /bin/bash
      register: _log_timestamp

    - name: Turn timestamp into a string
      set_fact:
        _log_timestamp: '{{ _log_timestamp.stdout | trim }}'

    - name: Rename playbook log on bridge
      when: not infra_prod_playbook_collect_log
      become: yes
      copy:
        remote_src: yes
        src: "/var/log/ansible/{{ playbook_name }}.log"
        dest: "/var/log/ansible/{{ playbook_name }}.log.{{ _log_timestamp }}"

    # Reset the access/modification time to the timestamp in the filename;
    # this makes lining things up more logical
    - name: Reset file time
      file:
        path: '/var/log/ansible/{{ playbook_name }}.log.{{ _log_timestamp }}'
        state: touch
        modification_time: '{{ _log_timestamp }}'
        modification_time_format: '%Y-%m-%dT%H:%M:%S'
        access_time: '{{ _log_timestamp }}'
        access_time_format: '%Y-%m-%dT%H:%M:%S'
      become: yes

    - name: Cleanup old playbook logs on bridge
      when: not infra_prod_playbook_collect_log
      become: yes
      shell: |
        find /var/log/ansible -name '{{ playbook_name }}.log.*' -type f -mtime +30 -delete