
ViNO Cluster Operator

A declarative operator for libvirt configuration

Overview

The lifecycle of the Virtual Machines and their relationship to the Kubernetes cluster will be managed using two operators: the vNode-Operator (ViNO) and the Support Infra Provider Operator (SIP).

Description

ViNO is responsible for setting up VM infrastructure, such as:

  • per-node vino pod:
    • libvirt init, e.g.
      • set up the vm-infra bridge
      • provision tftp/dhcp definitions
    • libvirt launch
    • sushy pod
  • libvirt domains
  • networking
  • BareMetalHost (BMH) objects, with labels (see the example after this list):
    • location - e.g. rack: 8 and node: rdm8r008c002 - should follow k8s semi-standard
    • vm role - e.g. node-type: worker
    • vm flavor - e.g. node-flavor: foobar
    • networks - e.g. networks: [foo, bar]; the networking details for ViNO can be found here
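
As a rough illustration of how these BMH labels might be consumed once ViNO has applied them (the label keys below simply mirror the examples above and may differ from the exact keys ViNO uses):

# kubectl get baremetalhosts --all-namespaces --show-labels
# kubectl get baremetalhosts --all-namespaces -l node-type=worker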

The Cluster Support Infrastructure Provider, or SIP, is responsible for:

  • identifying the correct BareMetalHost resources to label (or unlabel) based on scheduling constraints
  • extracting IP address information from BareMetalHost objects to use in the creation of supporting infrastructure
  • creating support infra for the tenant k8s cluster:
    • load balancers (for the tenant Kubernetes API)
    • a jump pod to access the cluster and nodes via ssh
    • an OIDC provider for the tenant cluster, e.g. Dex
    • potentially more in the future

Development Environment

Pre-requisites

Install Golang 1.15+

ViNO is a project written in Go, and the make targets used to deploy ViNO leverage both Go and Kustomize commands, which require Golang to be installed.

For detailed installation instructions, please see the Golang installation guide.
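
If you prefer a quick command-line install on Linux amd64, a minimal sketch looks like the following (the release shown is just an example of a 1.15+ version; adjust the version and platform as needed):

# curl -LO https://golang.org/dl/go1.15.7.linux-amd64.tar.gz
# tar -C /usr/local -xzf go1.15.7.linux-amd64.tar.gz
# export PATH=$PATH:/usr/local/go/bin
# go version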

Install Kustomize v3.2.3+

In order to apply manifests to your cluster via the provided Make targets, we suggest the use of Kustomize.

For detailed installation instructions, please see the Kustomize installation guide.
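
Once installed, a quick check confirms that the binary is on your PATH and meets the minimum version:

# kustomize version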

Proxy Setup

If your organization requires development behind a proxy server, you will need to define the following environment variables with your organization's information:

HTTP_PROXY=http://username:password@host:port
HTTPS_PROXY=http://username:password@host:port
NO_PROXY="localhost,127.0.0.1,10.96.0.0/12"
PROXY=http://username:password@host:port
USE_PROXY=true

10.96.0.0/12 is the Kubernetes service CIDR.
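
For example, you might export these variables in the shell session used to run the make targets (the values below are placeholders for your organization's proxy details):

# export USE_PROXY=true
# export HTTP_PROXY=http://username:password@host:port
# export HTTPS_PROXY=${HTTP_PROXY}
# export PROXY=${HTTP_PROXY}
# export NO_PROXY="localhost,127.0.0.1,10.96.0.0/12"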

Deploy ViNO

Airship projects often have to deploy Kubernetes, with common requirements such as supporting network policies or working behind corporate proxies. To that end, the community maintains a Kubernetes deployment script, which is the suggested way of deploying your Kubernetes cluster for development purposes.

Deploy Kubernetes

# curl -Lo deploy-k8s.sh https://opendev.org/airship/charts/raw/branch/master/tools/gate/deploy-k8s.sh
# chmod +x deploy-k8s.sh
# sudo ./deploy-k8s.sh
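
Once the script finishes, a quick sanity check (not part of the script itself) confirms that the node and core pods are up:

# kubectl get nodes
# kubectl get pods --all-namespaces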

Deploy ViNO

Once your cluster is up and running, you'll need to build the ViNO image and deploy the operator on your cluster:

# make docker-build-controller
# make deploy
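
If you want to wait for the controller to become ready before continuing, you can watch its rollout in the vino-system namespace (the deployment name below is inferred from the pod name shown in the listing that follows):

# kubectl rollout status -n vino-system deployment/vino-controller-manager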

Once these steps are completed, you should have a working cluster with ViNO deployed on top of it:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7985fc4dd6-6q5l4    1/1     Running   0          3h7m
kube-system   calico-node-lqzxp                           1/1     Running   0          3h7m
kube-system   coredns-f9fd979d6-gbdzl                     1/1     Running   0          3h7m
kube-system   etcd-ubuntu-virtualbox                      1/1     Running   0          3h8m
kube-system   kube-apiserver-ubuntu-virtualbox            1/1     Running   0          3h8m
kube-system   kube-controller-manager-ubuntu-virtualbox   1/1     Running   0          3h8m
kube-system   kube-proxy-ml4gd                            1/1     Running   0          3h7m
kube-system   kube-scheduler-ubuntu-virtualbox            1/1     Running   0          3h8m
kube-system   storage-provisioner                         1/1     Running   0          3h8m
vino-system   vino-controller-manager-788b994c74-sbf26    2/2     Running   0          25m

Test basic functionality

# kubectl apply -f config/samples/vino_cr.yaml
# kubectl get pods
# kubectl get ds
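
To inspect the custom resource itself, including any status conditions reported by the controller, you can describe it (the resource name matches the one deleted below):

# kubectl describe vino vino-test-cr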

Delete the ViNO CR and make sure the DaemonSet is deleted as well:

# kubectl delete vino vino-test-cr
# kubectl get ds
# kubectl get cm

Get in Touch

For any questions on ViNO or other Airship projects, we encourage you to join the community on Slack/IRC or to participate in the mailing list. Please see the Airship wiki for contact information and the community meeting schedules.