commit b1a247e7f5
If labels are not specified on a Job, Kubernetes defaults them to the
labels of the underlying Pod template. Helm 3 injects metadata into all
resources [0], including an `app.kubernetes.io/managed-by: Helm` label.
As a result, a Job's labels are never empty under Helm 3, so they no
longer get defaulted from the Pod template's labels.

This is a problem because Job labels are depended on by:

- Armada pre-upgrade delete hooks
- Armada wait logic configurations
- kubernetes-entrypoint dependencies

This change therefore adds labels to each Job template matching those of
its underlying Pod template, retaining the same labels that were present
with Helm 2 (see the sketch below).

[0]: https://github.com/helm/helm/pull/7649

Change-Id: I3b6b25fcc6a1af4d56f3e2b335615074e2f04b6d
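A minimal sketch of the pattern, assuming the usual openstack-helm
conventions: the chart/component names ("grafana", "db-init"), the image
value path, and the use of the helm-toolkit
`kubernetes_metadata_labels` snippet are illustrative here, not copied
from the actual change. The point is that the Job's `metadata.labels`
now renders the same label set as the Pod template instead of being left
for Kubernetes to default:

```yaml
# Illustrative Job template. With Helm 2 the metadata.labels block on
# the Job could be omitted entirely and Kubernetes copied the Pod
# template's labels. Under Helm 3 the injected managed-by label makes
# the Job's labels non-empty, so the same snippet is rendered in both
# places explicitly.
{{- $envAll := . }}
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-db-init
  labels:
{{ tuple $envAll "grafana" "db-init" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 4 }}
spec:
  template:
    metadata:
      labels:
{{ tuple $envAll "grafana" "db-init" | include "helm-toolkit.snippets.kubernetes_metadata_labels" | indent 8 }}
    spec:
      restartPolicy: OnFailure
      containers:
        - name: grafana-db-init
          image: {{ .Values.images.tags.db_init }}
          command:
            - /tmp/db-init.sh
```

Because both blocks render from the same tuple, the Job keeps the labels
(typically `release_group`, `application`, and `component`) that Armada
and kubernetes-entrypoint select on, regardless of what extra metadata
Helm 3 injects.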
bin/
configmap-bin.yaml
configmap-dashboards-alertmanager.yaml
configmap-dashboards.yaml
configmap-etc.yaml
deployment.yaml
ingress-grafana.yaml
job-db-init-session.yaml
job-db-init.yaml
job-db-session-sync.yaml
job-image-repo-sync.yaml
job-set-admin-user.yaml
network_policy.yaml
pod-helm-tests.yaml
secret-admin-creds.yaml
secret-db-session.yaml
secret-db.yaml
secret-ingress-tls.yaml
secret-prom-creds.yaml
service-ingress.yaml
service.yaml