Openshift Driver
Selecting the openshift driver adds the following options to the providers section of the configuration.
providers.[openshift]
An Openshift provider's resources are partitioned into groups called pools (see providers.[openshift].pools for details), and within a pool, the node types which are to be made available are listed (see providers.[openshift].pools.labels for details).
Note
For documentation purposes the option names are prefixed providers.[openshift] to disambiguate from other drivers, but [openshift] is not required in the configuration (e.g. below providers.[openshift].pools refers to the pools key in the providers section when the openshift driver is selected).
Example:
providers:
  - name: cluster
    driver: openshift
    context: context-name
    pools:
      - name: main
        labels:
          - name: openshift-project
            type: project
          - name: openshift-pod
            type: pod
            image: docker.io/fedora:28
context
Name of the context configured in kube/config.
Before using the driver, the Nodepool services need a kube/config file manually installed with a self-provisioner context (the service account needs to be able to create projects). Make sure the context is present in the output of the oc config get-contexts command.
launch-retries
The number of times to retry launching a node before considering the job failed.
max-projects
An alias for max-servers. Note that using max-servers and max-projects at the same time in the configuration will result in an error.
max-cores
Maximum number of cores usable from this provider's pools by default. This can be used to limit usage of the openshift backend. If not defined, nodepool can use all cores up to the limit of the backend.
max-servers
Maximum number of projects spawnable from this provider's pools by default. This can be used to limit the number of projects. If not defined, nodepool can create as many servers as the openshift backend allows. Note that using max-servers and max-projects at the same time in the configuration will result in an error.
max-ram
Maximum ram usable from this provider's pools by default. This can be used to limit the amount of ram allocated by nodepool. If not defined, nodepool can use as much ram as the openshift backend allows.
max-resources
A dictionary of other quota resource limits applicable to this provider's pools by default. Arbitrary limits may be supplied with the providers.[openshift].pools.labels.extra-resources attribute.
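For example, the provider-level quota options can be combined as below. This is only a sketch: the numbers are illustrative rather than defaults, the max-resources key is just an example of an extended resource name, and since max-projects is an alias for max-servers only one of the two should be set:

providers:
  - name: cluster
    driver: openshift
    context: context-name
    max-projects: 10
    max-cores: 200
    max-ram: 16384
    max-resources:
      nvidia.com/gpu: 2
    pools:
      - name: main
        labels:
          - name: openshift-project
            type: project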
pools
A pool defines a group of resources from an Openshift provider.
name
Project names are prefixed with the pool's name.
priority
The priority of this provider pool (a lesser number is a higher priority). Nodepool launchers will yield requests to other provider pools with a higher priority as long as they are not paused. This means that in general, higher priority pools will reach quota first before lower priority pools begin to be used.
This setting may be specified at the provider level in order to apply to all pools within that provider, or it can be overridden here for a specific pool.
node-attributes
A dictionary of key-value pairs that will be stored with the node data in ZooKeeper. The keys and values can be any arbitrary string.
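For example, a pool might set a priority and attach arbitrary node metadata; the pool name, priority value and key/value pairs below are purely illustrative:

pools:
  - name: main
    priority: 1
    node-attributes:
      key1: value1
      key2: value2
    labels:
      - name: openshift-project
        type: project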
max-cores
Maximum number of cores usable from this pool. This can be used to limit usage of the openshift backend. If not defined, nodepool can use all cores up to the limit of the backend.
max-servers
Maximum number of pods spawnable from this pool. This can be used to limit the number of pods. If not defined, nodepool can create as many servers as the openshift backend allows.
max-ram
Maximum ram usable from this pool. This can be used to limit the amount of ram allocated by nodepool. If not defined, nodepool can use as much ram as the openshift backend allows.
max-resources
A dictionary of other quota resource limits applicable to this pool. Arbitrary limits may be supplied with the providers.[openshift].pools.labels.extra-resources attribute.
default-label-cpu
Only used by the providers.[openshift].pools.labels.type.pod label type; specifies a default value for providers.[openshift].pools.labels.cpu for all labels of this pool that do not set their own value.
default-label-memory
Only used by the providers.[openshift].pools.labels.type.pod label type; specifies a default value in MiB for providers.[openshift].pools.labels.memory for all labels of this pool that do not set their own value.
default-label-storage
Only used by the providers.[openshift].pools.labels.type.pod label type; specifies a default value in MB for providers.[openshift].pools.labels.storage for all labels of this pool that do not set their own value.
default-label-cpu-limit
Only used by the providers.[openshift].pools.labels.type.pod label type; specifies a default value for providers.[openshift].pools.labels.cpu-limit for all labels of this pool that do not set their own value.
default-label-memory-limit
Only used by the providers.[openshift].pools.labels.type.pod label type; specifies a default value in MiB for providers.[openshift].pools.labels.memory-limit for all labels of this pool that do not set their own value.
default-label-storage-limit
Only used by the providers.[openshift].pools.labels.type.pod label type; specifies a default value in MB for providers.[openshift].pools.labels.storage-limit for all labels of this pool that do not set their own value.
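A sketch of how the pool-level defaults interact with labels (the values are illustrative, not defaults): the pod label below sets no cpu, memory or storage of its own, so it inherits the pool defaults.

pools:
  - name: main
    default-label-cpu: 2
    default-label-memory: 1024
    default-label-storage: 10240
    default-label-cpu-limit: 4
    default-label-memory-limit: 2048
    default-label-storage-limit: 20480
    labels:
      - name: openshift-pod
        type: pod
        image: docker.io/fedora:28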
labels
Each entry in a pool's labels section indicates that the corresponding label is available for use in this pool.
Each entry is a dictionary with the following keys:
name
Identifier for this label; references an entry in the labels
section.
type
The Openshift provider supports two types of labels:
project
Project labels provide an empty project configured with a service account that can create pods, services, configmaps, etc.
pod
Pod labels provide a new dedicated project with a single pod created using the providers.[openshift].pools.labels.image parameter, and it is configured with a service account that can exec and get the logs of the pod.
image
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the image name used by the pod.
image-pull
The ImagePullPolicy; can be IfNotPresent, Always or Never.
image-pull-secrets
The imagePullSecrets needed to pull container images from a private registry.
Example:
labels:
  - name: openshift-pod
    image: docker.io/fedora:28
    image-pull-secrets:
      - name: registry-secret
labels
A dictionary of additional values to be added to the namespace or pod metadata. The value of this field is added to the metadata.labels field in OpenShift. Note that this field contains arbitrary key/value pairs and is unrelated to the concept of labels in Nodepool.
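For example, arbitrary metadata labels can be attached to the resources nodepool creates; the key/value pairs below are purely illustrative:

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    labels:
      environment: ci
      team: example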
dynamic-labels
Similar to providers.[openshift].pools.labels.labels, but is interpreted as a format string with the following values available:
- request: Information about the request which prompted the creation
of this node (note that the node may ultimately be used for a different
request and in that case this information will not be updated).
- id: The request ID.
- labels: The list of labels in the request.
- requestor: The name of the requestor.
- requestor_data: Key/value information from the requestor.
- relative_priority: The relative priority of the request.
- event_id: The external event ID of the request.
- created_time: The creation time of the request.
- tenant_name: The name of the tenant associated with the request.
For example:
labels:
  - name: pod-fedora
    dynamic-labels:
      request_info: "{request.id}"
annotations
A dictionary of additional values to be added to the pod metadata. The value of this field is added to the metadata.annotations field in OpenShift. This field contains arbitrary key/value pairs that can be accessed by tools and libraries, e.g. custom schedulers can make use of this metadata.
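For example (the annotation key and value below are purely illustrative):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    annotations:
      example.com/owner: ci-team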
python-path
The path of the default python interpreter. Used by Zuul to set ansible_python_interpreter. The special value auto will direct Zuul to use inbuilt Ansible logic to select the interpreter on Ansible >=2.8, and default to /usr/bin/python2 for earlier versions.
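For example, a label that pins the interpreter explicitly rather than relying on auto (the path shown is just one common choice and assumes the image actually provides it):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    python-path: /usr/bin/python3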
shell-type
The shell type of the node's default shell executable. Used by Zuul to set ansible_shell_type. This setting should only be used
- For a windows image with the experimental connection-type ssh, in which case cmd or powershell should be set and reflect the node's DefaultShell configuration.
- If the default shell is not Bourne compatible (sh), but instead e.g. csh or fish, and the user is aware that there is a long-standing issue with ansible_shell_type in combination with become
cpu
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the number of cpu to request for the pod. If no limit is
specified, this will also be used as the limit.
memory
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the amount of memory in MiB to request for the pod. If
no limit is specified, this will also be used as the limit.
storage
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the amount of ephemeral-storage in MB to request for the
pod. If no limit is specified, this will also be used as the limit.
extra-resources
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies any extra resources that Nodepool should consider in its
quota calculation other than the resources described above (cpu, memory,
storage).
cpu-limit
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the cpu limit for the pod.
memory-limit
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the memory limit in MiB for the pod.
storage-limit
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the ephemeral-storage limit in MB for the pod.
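A sketch combining requests and limits for a pod label (all numbers are illustrative; where a limit is omitted, the corresponding request value is also used as the limit):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    cpu: 2
    memory: 1024
    storage: 10240
    cpu-limit: 4
    memory-limit: 2048
    storage-limit: 20480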
gpu
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the amount of gpu allocated to the pod. This will be
used to set both requests and limits to the same value, based on how
kubernetes assigns gpu resources: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/.
gpu-resource
Only used by the providers.[openshift].pools.labels.type.pod
label
type; specifies the custom schedulable resource associated with the
installed gpu that is available in the cluster.
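For example, assuming the cluster exposes GPUs via the common nvidia.com/gpu schedulable resource (the resource name depends on the cluster and is only an assumption here):

labels:
  - name: gpu-pod
    type: pod
    image: docker.io/fedora:28
    gpu: 1
    gpu-resource: nvidia.com/gpu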
env
Only used by the providers.[openshift].pools.labels.type.pod
label
type; A list of environment variables to pass to the Pod.
name
The name of the environment variable passed to the Pod.
value
The value of the environment variable passed to the Pod.
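For example (the variable name and value are illustrative):

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    env:
      - name: HTTP_PROXY
        value: http://proxy.example.com:3128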
node-selector
Only used by the providers.[openshift].pools.labels.type.pod
label
type; A map of key-value pairs to ensure the OpenShift scheduler places
the Pod on a node with specific node labels.
scheduler-name
Only used by the providers.[openshift].pools.labels.type.pod
label
type. Sets the schedulerName field on the
container. Normally left unset for the OpenShift default.
privileged
Only used by the providers.[openshift].pools.labels.type.pod
label
type. Sets the securityContext.privileged
flag on the container. Normally left unset for the OpenShift
default.
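A sketch showing these scheduling-related options together; the node label, scheduler name and use of privileged are illustrative and depend on the cluster's setup:

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    node-selector:
      storageType: ssd
    scheduler-name: my-custom-scheduler
    privileged: true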
volumes
Only used by the providers.[openshift].pools.labels.type.pod
label
type. Sets the volumes field on the pod.
If supplied, this should be a list of OpenShift Pod Volume
definitions.
volume-mounts
Only used by the providers.[openshift].pools.labels.type.pod
label
type. Sets the volumeMounts flag on the
container. If supplied, this should be a list of OpenShift Container
VolumeMount definitions.
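A sketch mounting a ConfigMap into the pod; the entries follow the standard OpenShift Pod Volume and Container VolumeMount schemas, and the names and paths are illustrative:

labels:
  - name: openshift-pod
    type: pod
    image: docker.io/fedora:28
    volumes:
      - name: my-config
        configMap:
          name: my-configmap
    volume-mounts:
      - name: my-config
        mountPath: /etc/my-config
        readOnly: true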
spec
This attribute is exclusive with all other label attributes except providers.[openshift].pools.labels.name, providers.[openshift].pools.labels.type, providers.[openshift].pools.labels.annotations, providers.[openshift].pools.labels.labels and providers.[openshift].pools.labels.dynamic-labels. If a spec is provided, then Nodepool will supply the contents of this value verbatim to OpenShift as the spec attribute of the OpenShift Pod definition. No other Nodepool attributes are used, including any default values set at the provider level (such as default-label-cpu and similar).
This attribute allows for the creation of arbitrarily complex pod definitions, but the user is responsible for ensuring that they are suitable. The first container in the pod is expected to be a long-running container that hosts a shell environment for running commands. The following minimal definition matches what Nodepool itself normally creates and is recommended as a starting point:
labels:
  - name: custom-pod
    type: pod
    spec:
      containers:
        - name: custom-pod
          image: ubuntu:jammy
          imagePullPolicy: IfNotPresent
          command: ["/bin/sh", "-c"]
          args: ["while true; do sleep 30; done;"]