
Zuul Operator

This is a Kubernetes Operator for the Zuul Project Gating System.

Zuul has a number of components and dependencies, and this operator is designed to simplify creating and maintaining Zuul systems in Kubernetes.

Somewhat unusually, this operator offers the ability to completely manage Zuul's operational dependencies, to the point of even installing other operators upon which it relies. Be sure to read about deployment options if you want to perform some of these tasks yourself.

Simple Example

The quickest way to get a running Zuul is to allow the operator to manage all of the dependencies for you. In this case, the operator will:

  • Install cert-manager and set up a self-signed cluster issuer
  • Install the Percona XtraDB operator and create a three-node PXC database cluster
  • Create a three-node ZooKeeper cluster
  • And of course, create a Zuul system

Note

Installing other operators requires a high level of access, so when used in this manner, zuul-operator runs with cluster admin privileges. If you would like the operator to run with reduced privileges, see Managing Operator Dependencies.

From the root of the zuul-operator repo, run:

kubectl apply -f deploy/crds/zuul-ci_v1alpha1_zuul_crd.yaml
kubectl apply -f deploy/rbac-admin.yaml
kubectl apply -f deploy/operator.yaml

You probably want a namespace, so go ahead and create one with:

kubectl create namespace zuul

You will need to prepare two config files for Zuul: the Nodepool config file and the Zuul tenant config file. See the Zuul and Nodepool manuals for how to prepare those. When they are ready, add them to Kubernetes with commands like:

kubectl -n zuul create secret generic zuul-nodepool-config --from-file=nodepool.yaml
kubectl -n zuul create secret generic zuul-tenant-config --from-file=main.yaml
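
If you do not already have these files, minimal sketches might look like the following; the tenant, project, label, provider, and node names are purely illustrative, and the Zuul and Nodepool manuals describe the full schemas:

# main.yaml: one tenant sourcing a single project from the opendev connection
- tenant:
    name: example-tenant
    source:
      opendev:
        untrusted-projects:
          - zuul/zuul-jobs

# nodepool.yaml: one label served by a static provider
labels:
  - name: ubuntu-jammy
providers:
  - name: static-nodes
    driver: static
    pools:
      - name: main
        nodes:
          - name: node01.example.com
            labels:
              - ubuntu-jammy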

Then create a file called zuul.yaml which looks like:

---
apiVersion: operator.zuul-ci.org/v1alpha2
kind: Zuul
metadata:
  name: zuul
spec:
  executor:
    count: 1
    sshkey:
      secretName: gerrit-secrets
  scheduler:
    config:
      secretName: zuul-tenant-config
  launcher:
    config:
      secretName: zuul-nodepool-config
  web:
    count: 1
  connections:
    opendev:
      driver: git
      baseurl: https://opendev.org

This will create the most basic of Zuul installations, with one each of the zuul-executor, zuul-scheduler, and zuul-web processes. It will also create a Nodepool launcher for each of the providers listed in your nodepool.yaml. If your Zuul tenant config file requires more connections, be sure to add them here.
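
The example above also references an SSH key secret for the executor (gerrit-secrets); create it with the private key stored under the key name sshkey (see the executor sshkey reference below), then apply the resource. The key file path here is illustrative:

kubectl -n zuul create secret generic gerrit-secrets --from-file=sshkey=id_rsa
kubectl -n zuul apply -f zuul.yaml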

Managing Operator Dependencies

You may not want zuul-operator to install other operators (for example, if your cluster has other users and you don't want cert-manager or pxc-operator to be tied to a Zuul installation, or if you would prefer to avoid granting zuul-operator cluster admin privileges). In that case, you may install the other operators yourself and still allow zuul-operator to use them; it can still create a PXC cluster for you as long as the pxc-operator is present.

To use this mode of operation, make sure the following dependencies are installed before using zuul-operator:

  • Cert-manager (at least version 1.2.0)
  • Percona-xtradb-cluster-operator (at least version 1.7.0)

With these installed, you may install zuul-operator with reduced privileges:

kubectl apply -f deploy/crds/zuul-ci_v1alpha1_zuul_crd.yaml
kubectl apply -f deploy/rbac.yaml
kubectl apply -f deploy/operator.yaml

After this point, usage is the same as other methods.

Externally Managed Zuul Dependencies

If you want zuul-operator to do even less work, you can have it avoid managing either ZooKeeper or the SQL database.

Externally Managed ZooKeeper

If you already have a ZooKeeper instance you would like Zuul to use, add the following to the Zuul spec:

---
apiVersion: operator.zuul-ci.org/v1alpha2
kind: Zuul
spec:
  zookeeper:
     connectionString: ...
     secretName: ...

The connectionString field should be a standard ZooKeeper connection string, and the secretName field should be a Kubernetes TLS secret with the client cert for Zuul to use when connecting to ZooKeeper. TLS is required.
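
The connection string is the usual comma-separated list of host:port pairs. As a sketch, the client cert secret might be created from existing files like this (the secret and file names are illustrative; the expected key names are listed under the zookeeper secretName reference below):

kubectl -n zuul create secret generic zookeeper-client-tls \
  --from-file=ca.crt=ca.pem \
  --from-file=tls.crt=client.pem \
  --from-file=tls.key=client.key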

Externally Managed Database

If you would like to use an existing database, add the following to the Zuul spec:

---
apiVersion: operator.zuul-ci.org/v1alpha2
kind: Zuul
spec:
  database:
     dburi: ...

The dburi field should contain an SQLAlchemy-compatible database URI; it corresponds to the dburi entry in zuul.conf.
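
For example, a MySQL database might be specified with a URI of the following form (credentials, host, and database name are placeholders):

---
apiVersion: operator.zuul-ci.org/v1alpha2
kind: Zuul
spec:
  database:
     dburi: mysql+pymysql://zuul:secretpassword@db.example.com/zuul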

Secrets

The operator uses Kubernetes secrets as input values for several aspects of operation. There are some general rules about how secrets are used:

  • For configuration files, secret keys are expected to be the typical filename (for example, nodepool.yaml for the Nodepool config file).
  • For Zuul connection entries, secret keys correspond to the configuration file attributes for that section (e.g., app_key for the github driver).

See the reference documentation for the specific secretName entry for details.
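
For example, a sketch of a secret for a GitHub connection might look like this (the name and values are placeholders; the key names match github driver attributes such as app_key and webhook_token, and file-type values are supplied inline):

apiVersion: v1
kind: Secret
metadata:
  name: github-connection-secret
stringData:
  app_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
  webhook_token: "0000000000"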

Zuul Preview

The operator has optional support for deploying a zuul-preview service. This is an experimental add-on for Zuul to serve Zuul artifacts from the root of a domain (this can be useful for serving static HTML/Javascript sites). If you enable this, the operator will configure a zuul-preview service to which you may route an Ingress or LoadBalancer.

Zuul Registry

The operator has optional support for deploying a zuul-registry service. This is an experimental add-on for Zuul to act as an intermediate registry for the container image jobs in zuul-jobs.

If you enable this, the operator will, by default, configure a zuul-registry service in a manner appropriate for access from within the cluster only. If you need to access the registry from outside the cluster, you will need to additionally add an Ingress or LoadBalancer, as well as provide TLS certs with the appropriate hostname. Currently, zuul-registry performs its own TLS termination.

If you use this, you will also need to provide a registry.yaml config file in a secret. You only need to provide the users and, if you are accessing the registry from outside the cluster, the public-url setting (omit it if you are accessing it from within the cluster only).

registry:
  users:
    - name: testuser
      pass: testpass
      access: write
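
Add this file to Kubernetes as a secret in the same way as the other config files (the secret name is illustrative; reference it from the registry config secretName entry described below):

kubectl -n zuul create secret generic zuul-registry-config --from-file=registry.yaml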

Specification Reference

This is a fully populated example (with the exception of connection entries which can contain zuul.conf attributes passed through verbatim):

apiVersion: operator.zuul-ci.org/v1alpha2
kind: Zuul
spec:
  imagePrefix: docker.io/zuul
  imagePullSecrets:
    - name: my-docker-secret
  zuulImageVersion: latest
  zuulPreviewImageVersion: latest
  zuulRegistryImageVersion: latest
  nodepoolImageVersion: latest
  database:
    secretName: mariadbSecret
  zookeeper:
    hosts: zk.example.com:2282
    secretName: zookeeperTLS
  merger:
    count: 5
    git_user_email: zuul@example.org
    git_user_name: Example Zuul
  executor:
    count: 5
    manage_ansible: false
  web:
    count: 1
    status_url: https://zuul.example.org
  fingergw:
    count: 1
  scheduler:
    count: 1
  connections:
    gerrit:
      driver: gerrit
      server: gerrit.example.com
      secretName: gerritSecrets
      user: zuul
      baseurl: http://gerrit.example.com:8080
    github:
      driver: github
      secretName: githubSecrets
      rate_limit_logging: false
      app_id: 1234
  jobVolumes:
    - context: trusted
      access: ro
      path: /authdaemon/token
      volume:
        name: gcp-auth
        hostPath:
          path: /var/authdaemon/executor
          type: DirectoryOrCreate

Zuul

The Zuul kind is currently the only resource directly handled by the operator. It holds a complete description of a Zuul system (though parts of it are supplied via secrets referenced by this resource).

spec

imagePrefix

The prefix to use for images. The image names are fixed (zuul-executor, etc). However, changing the prefix will allow you to use custom images or private registries.

imagePullSecrets

If supplied, this value is passed through to Kubernetes. It should be a list of secrets.

name

The name of the image pull secret.

zuulImageVersion

The image tag to append to the Zuul images.

zuulPreviewImageVersion

The image tag to append to the Zuul Preview images.

zuulRegistryImageVersion

The image tag to append to the Zuul Registry images.

nodepoolImageVersion

The image tag to append to the Nodepool images.

database

This is not required unless you want to manage the database yourself. If you omit this section, zuul-operator will create a Percona XtraDB cluster for you.

You may add any attribute corresponding to the database section of zuul.conf here. The dburi attribute will come from the secret below.

secretName

The name of a secret containing connection information for the database.

The key name in the secret should be dburi.
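
A sketch of such a secret, with placeholder connection details:

apiVersion: v1
kind: Secret
metadata:
  name: zuul-db
stringData:
  dburi: mysql+pymysql://zuul:secretpassword@db.example.com/zuul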

allowUnsafeConfig

If you are running in a resource constrained environment (such as minikube), the requested resource values for the Percona XtraDB may be too large. Set this to True to override the default values and construct the cluster regardless. Only use this for testing.

zookeeper

This is not required unless you want to manage the ZooKeeper cluster yourself. If you omit this section, zuul-operator will create a ZooKeeper cluster for you.

You may add any attribute corresponding to the zookeeper section of zuul.conf here. The hosts and TLS attributes will come from the secret below.

hosts

A standard ZooKeeper connection string.

secretName

The name of a secret containing a TLS client certificate and key for ZooKeeper. This should be (or the format should match) a standard Kubernetes TLS secret.

The key names in the secret should be:

  • ca.crt
  • tls.crt
  • tls.key

env

A list of environment variables. This will be passed through to the Pod specifications for the scheduler, launcher, and web. This may be used to set http_proxy environment variables.
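
Since the list is passed through to the Pod specifications, the standard Kubernetes name/value form is assumed in this sketch; the proxy values are placeholders:

spec:
  env:
    - name: http_proxy
      value: http://proxy.example.com:3128
    - name: https_proxy
      value: http://proxy.example.com:3128
    - name: no_proxy
      value: localhost,127.0.0.1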

scheduler

config

secretName

The name of a secret containing the Zuul tenant config file.

The key name in the secret should be main.yaml.

launcher

config

secretName

The name of a secret containing the Nodepool config file.

The key name in the secret should be nodepool.yaml.

executor

count

How many executors to manage. This is a required component and should be at least 1.

sshkey

secretName

The name of a secret containing the SSH private key that executors should use when logging into Nodepool nodes. You will need to arrange for the public half of this key to be installed on those nodes via whatever mechanism your cloud provides.

The key name in the secret should be sshkey.

terminationGracePeriodSeconds

When performing a rolling restart of the executors, wait this long for jobs to finish normally. If an executor takes longer than this amount of time, it will be hard-stopped (and jobs will abort and retry). The default is 6 hours, but depending on the workload, a higher or lower value may be appropriate.
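
For example, to allow running jobs up to two hours to finish during a rolling restart:

executor:
  count: 2
  terminationGracePeriodSeconds: 7200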

merger

count

How many mergers to manage. Executors also act as mergers, so dedicated mergers are not required; they may be useful on a busy system.

web

count

How many Zuul webservers to manage. This is a required component and should be at least 1.

fingergw

count

How many Zuul finger gateway servers to manage.

connections

This is a mapping designed to match the connections entries in the main Zuul config file (zuul.conf). Each key in the mapping is the name of a connection (this is the name you will use in the tenant config file), and the values are key/value pairs that are added directly to that connection entry in zuul.conf. In the case of keys which are typically files (for example, SSH keys), the values will be written to disk for you (so you should include the full values here and not the path).

You may provide any of these values directly in this resource or using the secret described below. You may use both, and the values will be combined (for example, you may include all the values here except a private key which you include in a secret).

Example:

connections:
  opendev:
    driver: git
    baseurl: https://opendev.org
  gerrit:
    driver: git
    baseurl: https://gerrit.example.com
    secretName: gerrit-secrets

<name>

The name of the connection. You will use this in the tenant config file. All of the attributes describing this connection should be included underneath this key.

secretName

The name of a secret describing this connection. All of the keys and values in this secret will be merged with the keys and values described in this connection entry. If you need to provide a file (for example, the sshkey attribute of a Gerrit connection), include the contents as the value of the sshkey attribute in the secret.
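
A sketch of such a secret for a Gerrit connection, with the private key contents supplied inline under the sshkey attribute (the name and values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: gerrit-secrets
stringData:
  sshkey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----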

externalConfig

A mapping of secrets for specific Nodepool drivers. Some Nodepool drivers use external files for configuration (e.g., clouds.yaml for OpenStack). To provide these to Nodepool, add them to a secret and specify the name of that secret in an entry in externalConfig.

For example, a secret for OpenStack might look like:

apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret
stringData:
  clouds.yaml: "..."

To use that with Nodepool, add the following to the Operator resource definition:

externalConfig:
  openstack:
    secretName: openstack-secret

This will cause a clouds.yaml file to be created at /etc/openstack/clouds.yaml.

Some Nodepool drivers may need environment variables set in order to use these secrets. See Zuul.spec.env to add those.

The keys in this mapping will become directories under /etc/, and the secrets referenced will be mounted in those directories.

<name>

The directory to mount under /etc.

secretName

The name of a secret that should be mounted at /etc/<name>.

jobVolumes

A list of Kubernetes volumes to be bind-mounted into the executor's execution contexts. These correspond to the trusted_ro_paths, trusted_rw_paths, untrusted_ro_paths, and untrusted_rw_paths entries in zuul.conf.

The first part of each entry describes how the volume should appear in the executor, and the volume attribute describes the Kubernetes volume.

context

One of the following values:

trusted

To be mounted in the trusted execution context.

untrusted

To be mounted in the untrusted execution context.

access

One of the following values:

rw

To be mounted read/write.

ro

To be mounted read-only.

path

The mount point within the execution context.

volume

A mapping corresponding to a Kubernetes volume.

preview

count

How many Zuul Preview servers to manage.

registry

count

How many Zuul Registry servers to manage.

volumeSize

The requested size of the registry storage volume.

tls

secretName

The name of a secret containing a TLS certificate and key for Zuul Registry. This should be (or the format should match) a standard Kubernetes TLS secret.

If you omit this, the operator will create a secret for you.

config

secretName

The name of a secret containing a registry configuration file. The key in the secret should be registry.yaml. Only provide the users and, if exposing the registry outside the cluster, the public-url entries.