
Upload logs to S3

Before using this role, create at least one bucket and set up appropriate access controls and lifecycle policies. This role will not create buckets automatically.
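
For example, a bucket could be created ahead of time with a one-off Ansible task such as the following sketch; the bucket name and region are placeholders, and the amazon.aws collection is assumed to be available:

   # Hypothetical one-off play to pre-create a log bucket.
   # "example-zuul-logs" and "us-east-1" are placeholders.
   - hosts: localhost
     tasks:
       - name: Create the log bucket ahead of time
         amazon.aws.s3_bucket:
           name: example-zuul-logs
           region: us-east-1
           state: present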

This role requires the boto3 Python package to be installed in the Ansible environment on the Zuul executor.
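
If boto3 is not already present, one way to make it available is a pip task like this sketch (how the executor's Python environment is managed will vary between deployments):

   # Hypothetical task; installs boto3 into the Ansible environment.
   - name: Ensure boto3 is available
     ansible.builtin.pip:
       name: boto3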

Role Variables

This role will not create buckets which do not already exist. If partitioning is not enabled, the configured bucket name is used directly as the name of the bucket. If partitioning is enabled, the configured value is used as a prefix for the bucket name and is joined to the partition name with an underscore. For example, "logs_42" would be the bucket name for partition 42.

Note that you will want to set the bucket name to a value that uniquely identifies your Zuul installation.

The endpoint to use when uploading logs to an S3-compatible service. By default this is constructed automatically by boto3, but it should be set explicitly when working with an S3 service that is not hosted on AWS.
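
As a rough sketch, a post-run playbook might apply the role along these lines; the variable names shown here are assumptions for illustration, so check the role's defaults for the authoritative names:

   # Illustrative only: the variable names below are assumed, not
   # confirmed by this README; consult the role's defaults.
   - hosts: localhost
     roles:
       - role: upload-logs-s3
         vars:
           zuul_log_bucket: example-zuul-logs            # assumed variable name
           zuul_log_endpoint_url: https://s3.example.com  # assumed variable name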