b1a247e7f5
If labels are not specified on a Job, Kubernetes defaults them to include
the labels of the underlying Pod template. Helm 3 injects metadata into all
resources [0], including an `app.kubernetes.io/managed-by: Helm` label.
When Kubernetes sees a Job's labels, they are therefore no longer empty and
do not get defaulted to the underlying Pod template's labels.

This is a problem, since Job labels are depended on by:

- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

To retain the same labels that were present with Helm 2, this change adds
labels matching the underlying Pod template to each Job template, as
sketched below.

[0]: https://github.com/helm/helm/pull/7649

Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
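A minimal sketch of the resulting pattern, assuming the
`helm-toolkit.snippets.kubernetes_metadata_labels` snippet that these charts
already use for Pod template labels; the `s3-bucket` component name and the
container spec are illustrative only (loosely mirroring `job-s3-bucket.yaml`
in the listing below), not the exact rendered template:

```yaml
{{- $envAll := . }}
apiVersion: batch/v1
kind: Job
metadata:
  name: elasticsearch-s3-bucket
  labels:
    # Explicit Job labels mirroring the Pod template below, so Helm 3's
    # injected managed-by label does not suppress Kubernetes' defaulting.
{{ tuple $envAll "elasticsearch" "s3-bucket" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
spec:
  template:
    metadata:
      labels:
        # Pod template labels; under Helm 2 these were copied up to the
        # Job itself because the Job's own labels were empty.
{{ tuple $envAll "elasticsearch" "s3-bucket" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
    spec:
      restartPolicy: OnFailure
      containers:
        - name: s3-bucket
          image: "docker.io/library/busybox:latest"  # placeholder image
```

Without the explicit `metadata.labels` block, a Helm 3 render leaves the Job
carrying only the injected `app.kubernetes.io/managed-by: Helm` label, which
is what breaks the label-based Armada hooks, wait logic, and
kubernetes-entrypoint dependencies listed above.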
bin
monitoring/prometheus
certificates.yaml
configmap-bin-curator.yaml
configmap-bin-elasticsearch.yaml
configmap-etc-curator.yaml
configmap-etc-elasticsearch.yaml
cron-job-curator.yaml
cron-job-verify-repositories.yaml
deployment-client.yaml
deployment-gateway.yaml
ingress-elasticsearch.yaml
job-elasticsearch-template.yaml
job-image-repo-sync.yaml
job-s3-bucket.yaml
job-s3-user.yaml
network-policy.yaml
pod-helm-tests.yaml
secret-elasticsearch.yaml
secret-environment.yaml
secret-ingress-tls.yaml
secret-s3-user.yaml
service-data.yaml
service-discovery.yaml
service-gateway.yaml
service-ingress-elasticsearch.yaml
service-logging.yaml
statefulset-data.yaml
statefulset-master.yaml