This change adds the ability to use the k8s (and friends) drivers to create pods with custom specs. This allows nodepool admins to define labels that create pods with options not otherwise supported by Nodepool, as well as pods with multiple containers. That can be used to implement the versatile sidecar pattern, which is useful for running jobs that depend on a long-running system process (such as a database server or container runtime) in environments where backgrounding such a process is difficult. A single resource is still returned to Zuul, so a single pod will be added to the inventory; the documented expectation is that it is possible to shell into the first container in the pod. Change-Id: I4a24a953a61239a8a52c9e7a2b68a7ec779f7a3d
Openshift Pods Driver
Selecting the openshift pods driver adds the following options to the providers section of the configuration.
providers.[openshiftpods]
The Openshift Pods driver is similar to the Openshift driver, but it only supports pod labels. This enables using an unprivileged service account that doesn't require the self-provisioner role.
Example:

  providers:
    - name: cluster
      driver: openshiftpods
      context: unprivileged-context-name
      pools:
        - name: main
          labels:
            - name: openshift-pod
              image: docker.io/fedora:28
context
Name of the context configured in kube/config. Before using the driver, Nodepool services need a kube/config file installed manually. Make sure the context is present in the output of the oc config get-contexts command.
launch-retries
The number of times to retry launching a pod before considering the job failed.
max-pods
An alias for max-servers.
max-cores
Maximum number of cores usable from this provider's pools by default. This can be used to limit usage of the openshift backend. If not defined, nodepool can use all cores up to the limit of the backend.
max-servers
Maximum number of pods spawnable from this provider's pools by default. This can be used to limit the number of pods. If not defined, nodepool can create as many servers as the openshift backend allows.
max-ram
Maximum RAM usable from this provider's pools by default. This can be used to limit the amount of RAM allocated by nodepool. If not defined, nodepool can use as much RAM as the openshift backend allows.
max-resources
A dictionary of other quota resource limits applicable to this provider's pools by default. Arbitrary limits may be supplied with the providers.[openshiftpods].pools.labels.extra-resources attribute.
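Taken together, the provider-level quota options might be combined as in the following sketch; all values are illustrative, and nvidia.com/gpu is just an example of an arbitrary quota resource name.

```yaml
providers:
  - name: cluster
    driver: openshiftpods
    context: unprivileged-context-name
    max-cores: 64          # total cores across this provider's pools
    max-servers: 20        # total pods
    max-ram: 65536         # MiB
    max-resources:
      nvidia.com/gpu: 2    # arbitrary resource, consumed via labels' extra-resources
    pools:
      - name: main
```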
pools
A pool defines a group of resources from an Openshift provider.
name
The name of the project (namespace) that will be used to create the pods.
priority
The priority of this provider pool (a lesser number is a higher priority). Nodepool launchers will yield requests to other provider pools with a higher priority as long as they are not paused. This means that in general, higher priority pools will reach quota first before lower priority pools begin to be used.
This setting may be specified at the provider level in order to apply to all pools within that provider, or it can be overridden here for a specific pool.
node-attributes
A dictionary of key-value pairs that will be stored with the node data in ZooKeeper. The keys and values can be any arbitrary string.
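For example, node-attributes might record arbitrary metadata to be stored with the node; the keys and values shown are illustrative.

```yaml
pools:
  - name: main
    node-attributes:
      key1: value1
      key2: value2
```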
max-cores
Maximum number of cores usable from this pool. This can be used to limit usage of the kubernetes backend. If not defined, nodepool can use all cores up to the limit of the backend.
max-servers
Maximum number of pods spawnable from this pool. This can be used to limit the number of pods. If not defined, nodepool can create as many servers as the kubernetes backend allows.
max-ram
Maximum RAM usable from this pool. This can be used to limit the amount of RAM allocated by nodepool. If not defined, nodepool can use as much RAM as the kubernetes backend allows.
max-resources
A dictionary of other quota resource limits applicable to this pool. Arbitrary limits may be supplied with the providers.[openshiftpods].pools.labels.extra-resources attribute.
default-label-cpu
Specifies a default value for providers.[openshiftpods].pools.labels.cpu for all labels of this pool that do not set their own value.
default-label-memory
Specifies a default value in MiB for providers.[openshiftpods].pools.labels.memory for all labels of this pool that do not set their own value.
default-label-storage
Specifies a default value in MB for providers.[openshiftpods].pools.labels.storage for all labels of this pool that do not set their own value.
default-label-cpu-limit
Specifies a default value for providers.[openshiftpods].pools.labels.cpu-limit for all labels of this pool that do not set their own value.
default-label-memory-limit
Specifies a default value in MiB for providers.[openshiftpods].pools.labels.memory-limit for all labels of this pool that do not set their own value.
default-label-storage-limit
Specifies a default value in MB for providers.[openshiftpods].pools.labels.storage-limit for all labels of this pool that do not set their own value.
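A pool that supplies defaults for its labels might be sketched as follows; the values are illustrative, and any label that sets its own cpu, memory, or storage overrides the pool default.

```yaml
pools:
  - name: main
    default-label-cpu: 2               # cores requested per pod
    default-label-memory: 1024         # MiB
    default-label-storage: 10000       # MB of ephemeral-storage
    default-label-cpu-limit: 4
    default-label-memory-limit: 2048   # MiB
    labels:
      - name: openshift-pod
        image: docker.io/fedora:28
```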
labels
Each entry in a pool's labels section indicates that the corresponding label is available for use in this pool.
Each entry is a dictionary with the following keys:
name
Identifier for this label; references an entry in the labels
section.
image
The image name.
image-pull
The imagePullPolicy; can be IfNotPresent, Always, or Never.
image-pull-secrets
The imagePullSecrets needed to pull container images from a private registry.
Example:

  labels:
    - name: openshift-pod
      type: pod
      image: docker.io/fedora:28
      image-pull-secrets:
        - name: registry-secret
labels
A dictionary of additional values to be added to the namespace or pod metadata. The value of this field is added to the metadata.labels field in OpenShift. Note that this field contains arbitrary key/value pairs and is unrelated to the concept of labels in Nodepool.
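For example, a label could attach OpenShift metadata labels to the pod; the key/value pairs shown are illustrative.

```yaml
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    labels:
      environment: ci
      team: qa
```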
dynamic-labels
Similar to providers.[openshiftpods].pools.labels.labels, but is interpreted as a format string with the following values available:
- request: Information about the request which prompted the creation of this node (note that the node may ultimately be used for a different request, and in that case this information will not be updated).
  - id: The request ID.
  - labels: The list of labels in the request.
  - requestor: The name of the requestor.
  - requestor_data: Key/value information from the requestor.
  - relative_priority: The relative priority of the request.
  - event_id: The external event ID of the request.
  - created_time: The creation time of the request.
  - tenant_name: The name of the tenant associated with the request.
For example:

  labels:
    - name: pod-fedora
      dynamic-labels:
        request_info: "{request.id}"
annotations
A dictionary of additional values to be added to the pod metadata. The value of this field is added to the metadata.annotations field in OpenShift. This field contains arbitrary key/value pairs that can be accessed by tools and libraries; e.g., custom schedulers can make use of this metadata.
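For example, an annotation could be added for a custom scheduler to read; the annotation key is illustrative.

```yaml
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    annotations:
      scheduler.example.com/group: ci-pods
```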
cpu
Specifies the number of cpu to request for the pod. If no limit is specified, this will also be used as the limit.
memory
Specifies the amount of memory in MiB to request for the pod. If no limit is specified, this will also be used as the limit.
storage
Specifies the amount of ephemeral-storage in MB to request for the pod. If no limit is specified, this will also be used as the limit.
extra-resources
Specifies any extra resources that Nodepool should consider in its quota calculation other than the resources described above (cpu, memory, storage).
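The resource requests above might be combined as in the following sketch; the values are illustrative, and since no explicit limits are set here, the requests also become the limits.

```yaml
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    cpu: 2             # cores; also used as the limit
    memory: 1024       # MiB; also used as the limit
    storage: 10000     # MB of ephemeral-storage; also used as the limit
    extra-resources:
      nvidia.com/gpu: 1   # counted against max-resources quota
```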
cpu-limit
Specifies the cpu limit for the pod.
memory-limit
Specifies the memory limit in MiB for the pod.
storage-limit
Specifies the ephemeral-storage limit in MB for the pod.
gpu
Specifies the amount of gpu allocated to the pod. This will be used to set both requests and limits to the same value, based on how kubernetes assigns gpu resources: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/.
gpu-resource
Specifies the custom schedulable resource associated with the installed gpu that is available in the cluster.
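A label requesting GPUs might be written as follows; the resource name depends on the cluster's device plugin, and nvidia.com/gpu is a common but illustrative example.

```yaml
labels:
  - name: gpu-pod
    image: docker.io/fedora:28
    gpu: 1                       # sets both request and limit
    gpu-resource: nvidia.com/gpu # schedulable resource exposed by the cluster
```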
python-path
The path of the default python interpreter. Used by Zuul to set ansible_python_interpreter. The special value auto will direct Zuul to use inbuilt Ansible logic to select the interpreter on Ansible >=2.8, and default to /usr/bin/python2 for earlier versions.
shell-type
The shell type of the node's default shell executable. Used by Zuul to set ansible_shell_type. This setting should only be used:
- For a windows pod with the experimental connection-type ssh, in which case cmd or powershell should be set and reflect the node's DefaultShell configuration.
- If the default shell is not Bourne compatible (sh), but instead e.g. csh or fish, and the user is aware that there is a long-standing issue with ansible_shell_type in combination with become.
A list of environment variables to pass to the Pod.
name
The name of the environment variable passed to the Pod.
value
The value of the environment variable passed to the Pod.
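For example, environment variables could be passed to the pod as follows; the variable name and value are illustrative.

```yaml
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    env:
      - name: HTTP_PROXY
        value: http://proxy.example.com:3128
```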
node-selector
A map of key-value pairs to ensure the OpenShift scheduler places the Pod on a node with specific node labels.
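For example, to steer pods onto nodes carrying a specific node label; the key and value shown are illustrative.

```yaml
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    node-selector:
      storageType: ssd
```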
scheduler-name
Sets the schedulerName field on the container. Normally left unset for the OpenShift default.
privileged
Sets the securityContext.privileged flag on the container. Normally left unset for the OpenShift default.
volumes
Sets the volumes field on the pod. If supplied, this should be a list of OpenShift Pod Volume definitions.
volume-mounts
Sets the volumeMounts flag on the container. If supplied, this should be a list of OpenShift Container VolumeMount definitions.
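Volumes and volume mounts follow the standard OpenShift Pod definitions; a sketch mounting an emptyDir volume into the container might look like this (the volume name and mount path are illustrative).

```yaml
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    volumes:
      - name: scratch
        emptyDir: {}
    volume-mounts:
      - name: scratch
        mountPath: /scratch
```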
spec
This attribute is exclusive with all other label attributes except providers.[openshiftpods].pools.labels.name, providers.[openshiftpods].pools.labels.annotations, providers.[openshiftpods].pools.labels.labels and providers.[openshiftpods].pools.labels.dynamic-labels. If a spec is provided, then Nodepool will supply the contents of this value verbatim to OpenShift as the spec attribute of the OpenShift Pod definition. No other Nodepool attributes are used, including any default values set at the provider level (such as default-label-cpu and similar).
This attribute allows for the creation of arbitrarily complex pod definitions, but the user is responsible for ensuring that they are suitable. The first container in the pod is expected to be a long-running container that hosts a shell environment for running commands. The following minimal definition matches what Nodepool itself normally creates and is recommended as a starting point:
  labels:
    - name: custom-pod
      spec:
        containers:
          - name: custom-pod
            image: ubuntu:jammy
            imagePullPolicy: IfNotPresent
            command: ["/bin/sh", "-c"]
            args: ["while true; do sleep 30; done;"]
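Building on the minimal definition, a spec with a sidecar container (for example a database server, as motivated at the top of this page) could be sketched as follows. The container names, images, and environment values are illustrative; note that the first container is the one Zuul is expected to shell into.

```yaml
labels:
  - name: custom-pod-with-db
    spec:
      containers:
        - name: custom-pod         # first container: hosts the job's shell
          image: ubuntu:jammy
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args: ["while true; do sleep 30; done;"]
        - name: db-sidecar         # sidecar: long-running service for the job
          image: docker.io/library/mariadb:10
          env:
            - name: MARIADB_ROOT_PASSWORD
              value: insecure      # illustrative only; use a secret in practice
```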