Merge "Integrate openstack provider(capo) with airshipctl"

Zuul 2020-10-09 21:47:44 +00:00 committed by Gerrit Code Review
commit 3866d358b8
64 changed files with 3175 additions and 0 deletions

(New binary image file added, 141 KiB; not shown.)


@ -33,6 +33,7 @@ Welcome to airshipctl's Documentation!
   hardware_profile
   Commands <cli/airshipctl>
   providers/cluster_api_docker
   providers/cluster_api_openstack

.. toctree::
   :caption: Airship Project Documentation


@ -0,0 +1,827 @@
# Airshipctl integration with Cluster API Openstack
## Overview
This document provides instructions on how to use airshipctl to perform the
following operations with OpenStack as the infrastructure provider:
- Initialize the management cluster with Cluster API and the Cluster API OpenStack
provider components
- Create a target workload cluster with control plane and worker machines on an
OpenStack cloud environment
## Workflow
A simple workflow that can be tested involves the following operations:
Initialize a management cluster with cluster api and openstack provider
components:
*`$ airshipctl cluster init`* or *`$ airshipctl cluster init --debug`*
Create a target workload cluster with control plane and worker nodes:
*`$ airshipctl phase apply controlplane`*
*`$ airshipctl phase apply workers`*
Note: `airshipctl phase apply initinfra` is not used because all the provider
components are initialized using `airshipctl cluster init`.
## Common Prerequisites
- Install [Docker](https://www.docker.com/)
- Install and setup [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- Install [Kind](https://kind.sigs.k8s.io/)
- Install [Kustomize](https://kubernetes-sigs.github.io/kustomize/installation/binaries/)
- Install [Airshipctl](https://docs.airshipit.org/airshipctl/developers.html)
## Openstack Prerequisites
### Credentials
In order to communicate with the OpenStack cloud environment, the following set of
credentials needs to be generated.
The [env.rc](<https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/docs/env.rc>) script
sets the required environment variables related to credentials.
`source env.rc <path/to/clouds.yaml> <cloud>`
The following variables are set.
```bash
OPENSTACK_CLOUD: The cloud name which is used as second argument in the command above
OPENSTACK_CLOUD_YAML_B64: The secret used by capo to access OpenStack cloud
OPENSTACK_CLOUD_PROVIDER_CONF_B64: The content of cloud.conf used by OpenStack cloud
OPENSTACK_CLOUD_CACERT_B64: (Optional) The content of custom CA file which can be specified in the clouds.yaml
```
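If you prefer not to use the script, the same variables can be produced by hand; a minimal
sketch, assuming `clouds.yaml` and `cloud.conf` sit in the current directory and the cloud is
named `devstack`:
```bash
# Sketch: encode the credential files manually instead of sourcing env.rc.
export OPENSTACK_CLOUD=devstack                                   # example cloud name
export OPENSTACK_CLOUD_YAML_B64=$(base64 -w0 clouds.yaml)
export OPENSTACK_CLOUD_PROVIDER_CONF_B64=$(base64 -w0 cloud.conf)
# Optional, only when clouds.yaml points at a custom CA bundle:
# export OPENSTACK_CLOUD_CACERT_B64=$(base64 -w0 cacert.pem)
```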
### SSH key pair
An ssh key-pair must be specified by setting the `OPENSTACK_SSH_KEY_NAME` environment variable.
A key-pair can be created by executing the following command
`openstack keypair create [--public-key <file> | --private-key <file>] <name>`
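For example (the key-pair name `capi-keypair` and the public-key path are illustrative):
```bash
# Register an existing public key and point the cluster templates at it.
openstack keypair create --public-key ~/.ssh/id_rsa.pub capi-keypair
export OPENSTACK_SSH_KEY_NAME=capi-keypair
```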
### Availability zone
The availability zone must be set as an environment variable `OPENSTACK_FAILURE_DOMAIN`.
### DNS server
The DNS servers must be set as an environment variable `OPENSTACK_DNS_NAMESERVERS`.
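Both of these are plain environment variables; for example (placeholder values, `nova` being a
common default zone name):
```bash
# Example values only; substitute the zone and nameservers of your cloud.
export OPENSTACK_FAILURE_DOMAIN=nova
export OPENSTACK_DNS_NAMESERVERS=8.8.8.8
```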
### External network
The OpenStack environment should have an external network already present.
The external network id can be specified by setting `spec.externalNetworkId` of the `OpenStackCluster` CRD in the cluster template.
The public network id can be obtained by using the following command:
```bash
openstack network list --external
```
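The ID can also be captured into a shell variable for later use in the site patches; a small
sketch (the variable name is illustrative, and the value is what ends up in
`spec.externalNetworkId`):
```bash
# Illustrative only: grab the ID of the first external network.
EXTERNAL_NETWORK_ID=$(openstack network list --external -f value -c ID | head -n1)
echo "${EXTERNAL_NETWORK_ID}"
```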
### Floating IP
A floating IP is automatically created and associated with the load balancer or controller node; however, a floating IP can also be
specified explicitly by setting `spec.apiServerLoadBalancerFloatingIP` of the `OpenStackCluster` CRD.
A floating IP can be created using the `openstack floating ip create <public network>` command.
Note: Only a user with the admin role can create a floating IP with a specific IP address.
### Operating system image
A cluster api compatible image is required for creating workload kubernetes clusters. The kubeadm bootstrap provider that capo uses
depends on some pre-installed software like a container runtime, kubelet, kubeadm and also on an up-to-date version of cloud-init.
The image can be referenced by setting an environment variable `OPENSTACK_IMAGE_NAME`.
#### Install Packer
```bash
$ mkdir packer
$ cd packer
$ wget https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_linux_amd64.zip
$ unzip packer_1.6.0_linux_amd64.zip
$ sudo mv packer /usr/local/bin/
```
#### Install Ansible
```bash
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt update
$ sudo apt install ansible
```
#### Build Cluster API Compliant VM Image
```bash
$ sudo -i
# apt install qemu-kvm libvirt-bin qemu-utils
$ sudo usermod -a -G kvm <yourusername>
$ sudo chown root:kvm /dev/kvm
```
Exit and log back in for the change to take effect.
```bash
git clone https://github.com/kubernetes-sigs/image-builder.git image-builder
cd image-builder/images/capi/
vim packer/qemu/qemu-ubuntu-1804.json
```
Update the `iso_url` to `http://cdimage.ubuntu.com/releases/18.04/release/ubuntu-18.04.5-server-amd64.iso`
and make sure to use the correct checksum value from
[ubuntu-releases](http://cdimage.ubuntu.com/releases/18.04.5/release/SHA256SUMS)
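For instance, the checksum can be fetched and written into the template with `jq`; a sketch,
assuming the template exposes the standard Packer `iso_url`/`iso_checksum` fields:
```bash
# Fetch the SHA256 for the 18.04.5 server ISO and patch the Packer template in place.
ISO_URL="http://cdimage.ubuntu.com/releases/18.04/release/ubuntu-18.04.5-server-amd64.iso"
ISO_SHA=$(curl -s http://cdimage.ubuntu.com/releases/18.04.5/release/SHA256SUMS \
  | grep server-amd64.iso | awk '{print $1}')
jq --arg url "$ISO_URL" --arg sum "$ISO_SHA" \
   '.iso_url = $url | .iso_checksum = $sum' \
   packer/qemu/qemu-ubuntu-1804.json > /tmp/qemu-ubuntu-1804.json \
  && mv /tmp/qemu-ubuntu-1804.json packer/qemu/qemu-ubuntu-1804.json
```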
$ make build-qemu-ubuntu-1804
#### Upload Images to Openstack
$ openstack image create --container-format bare --disk-format qcow2 --file ubuntu-1804-kube-v1.16.4 ubuntu-1804-kube-v1.16.4
``` bash
$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 10e31af1-5414-4bae-9500-922db677e695 | amphora-x64-haproxy | active |
| 61bf8071-5e00-4806-83e0-612f8da03bf8 | cirros-0.5.1-x86_64-disk | active |
| 4fd894c7-9964-461b-bc9f-2e90fdade505 | ubuntu-1804-kube-v1.16.4 | active |
+--------------------------------------+--------------------------+--------+
```
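Once the image shows as `active`, export the environment variable mentioned earlier so that the
cluster templates can reference it:
```bash
# Reference the uploaded Cluster API image by name.
export OPENSTACK_IMAGE_NAME=ubuntu-1804-kube-v1.16.4
```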
## Getting Started
Kind is used to set up a Kubernetes cluster that will later be transformed
into a management cluster using airshipctl. The kind cluster will be
initialized with Cluster API and Cluster API OpenStack (CAPO) provider components.
$ export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge
$ kind create cluster --name capi-openstack
```bash
Creating cluster "capi-openstack" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
✓ Ensuring node image (kindest/node:v1.18.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-capi-openstack"
You can now use your cluster with:
kubectl cluster-info --context kind-capi-openstack
```
Check if all the pods are up.
$ kubectl get pods -A
```bash
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-2thc2 1/1 Running 0 2m43s
kube-system coredns-66bff467f8-4qbvk 1/1 Running 0 2m43s
kube-system etcd-capi-openstack-control-plane 1/1 Running 0 2m58s
kube-system kindnet-xwp2x 1/1 Running 0 2m43s
kube-system kube-apiserver-capi-openstack-control-plane 1/1 Running 0 2m58s
kube-system kube-controller-manager-capi-openstack-control-plane 1/1 Running 0 2m58s
kube-system kube-proxy-khhvd 1/1 Running 0 2m43s
kube-system kube-scheduler-capi-openstack-control-plane 1/1 Running 0 2m58s
local-path-storage local-path-provisioner-bd4bb6b75-qnbjk 1/1 Running 0 2m43s
```
## Create airshipctl configuration
$ mkdir ~/.airship
$ airshipctl config init
Run the below command to configure the openstack manifest and add it to the airship config:
$ airshipctl config set-manifest openstack_manifest --repo primary --url \
https://opendev.org/airship/airshipctl --branch master --primary \
--sub-path manifests/site/openstack-test-site --target-path /tmp/airship/
$ airshipctl config set-context kind-capi-openstack --manifest openstack_manifest
```bash
Context "kind-capi-openstack" created.
```
$ cp ~/.kube/config ~/.airship/kubeconfig
$ airshipctl config get-context
```bash
Context: kind-capi-openstack
contextKubeconf: kind-capi-openstack_target
manifest: openstack_manifest
LocationOfOrigin: /home/stack/.airship/kubeconfig
cluster: kind-capi-openstack_target
user: kind-capi-openstack
```
$ airshipctl config use-context kind-capi-openstack
$ airshipctl document pull --debug
```bash
[airshipctl] 2020/09/10 23:19:32 Reading current context manifest information from /home/stack/.airship/config
[airshipctl] 2020/09/10 23:19:32 Downloading primary repository airshipctl from https://opendev.org/airship/airshipctl into /tmp/airship/
[airshipctl] 2020/09/10 23:19:32 Attempting to download the repository airshipctl
[airshipctl] 2020/09/10 23:19:32 Attempting to clone the repository airshipctl from https://opendev.org/airship/airshipctl
[airshipctl] 2020/09/10 23:19:32 Attempting to open repository airshipctl
[airshipctl] 2020/09/10 23:19:32 Attempting to checkout the repository airshipctl from branch refs/heads/master
```
$ airshipctl config set-manifest openstack_manifest --target-path /tmp/airship/airshipctl
## Initialize Management cluster
Execute the following command to initialize the Management cluster with CAPI and CAPO components.
$ airshipctl cluster init --debug
```bash
[airshipctl] 2020/09/10 23:36:23 Starting cluster-api initiation
Fetching providers
[airshipctl] 2020/09/10 23:36:23 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
[airshipctl] 2020/09/10 23:36:23 Setting up airshipctl provider Components client
Provider type: CoreProvider, name: cluster-api
[airshipctl] 2020/09/10 23:36:23 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: CoreProvider, name: cluster-api
...
```
Wait for all the pods to be up.
$ kubectl get pods -A
```bash
NAMESPACE NAME READY STATUS RESTARTS AGE
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-dfdf9877b-g44hd 2/2 Running 0 59s
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-76c847457b-z2jtr 2/2 Running 0 58s
capi-system capi-controller-manager-7c7978f565-rk7qk 2/2 Running 0 59s
capi-webhook-system capi-controller-manager-748c57d64d-wjbnj 2/2 Running 0 60s
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-65f979767f-bv6dr 2/2 Running 0 59s
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7f5d88dcf9-k6kpf 2/2 Running 1 58s
capi-webhook-system capo-controller-manager-7d76dc9ddc-b9xhw 2/2 Running 0 57s
capo-system capo-controller-manager-79445d5984-k9fmc 2/2 Running 0 57s
cert-manager cert-manager-77d8f4d85f-nkg58 1/1 Running 0 71s
cert-manager cert-manager-cainjector-75f88c9f56-fcrc6 1/1 Running 0 72s
cert-manager cert-manager-webhook-56669d7fcb-cbzfn 1/1 Running 1 71s
kube-system coredns-66bff467f8-2thc2 1/1 Running 0 29m
kube-system coredns-66bff467f8-4qbvk 1/1 Running 0 29m
kube-system etcd-capi-openstack-control-plane 1/1 Running 0 29m
kube-system kindnet-xwp2x 1/1 Running 0 29m
kube-system kube-apiserver-capi-openstack-control-plane 1/1 Running 0 29m
kube-system kube-controller-manager-capi-openstack-control-plane 1/1 Running 0 29m
kube-system kube-proxy-khhvd 1/1 Running 0 29m
kube-system kube-scheduler-capi-openstack-control-plane 1/1 Running 0 29m
local-path-storage local-path-provisioner-bd4bb6b75-qnbjk 1/1 Running 0 29m
```
At this point, the management cluster is initialized with cluster api and cluster api openstack provider components.
## Create control plane and worker nodes
$ airshipctl phase apply controlplane --debug
```bash
[airshipctl] 2020/09/11 00:11:44 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/openstack-test-site/target/controlplane
[airshipctl] 2020/09/11 00:11:44 Getting infos for bundle, inventory id is kind-capi-openstack-target-controlplane
[airshipctl] 2020/09/11 00:11:44 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/09/11 00:11:44 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-openstack-target-controlplane"},"name":"airshipit-kind-capi-openstack-target-controlplane","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/09/11 00:11:44 Making sure that inventory object namespace airshipit exists
secret/ostgt-cloud-config created
cluster.cluster.x-k8s.io/ostgt created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/ostgt-control-plane created
openstackcluster.infrastructure.cluster.x-k8s.io/ostgt created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/ostgt-control-plane created
5 resource(s) applied. 5 created, 0 unchanged, 0 configured
secret/ostgt-cloud-config is NotFound: Resource not found
cluster.cluster.x-k8s.io/ostgt is NotFound: Resource not found
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/ostgt-control-plane is NotFound: Resource not found
openstackcluster.infrastructure.cluster.x-k8s.io/ostgt is NotFound: Resource not found
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/ostgt-control-plane is NotFound: Resource not found
secret/ostgt-cloud-config is Current: Resource is always ready
cluster.cluster.x-k8s.io/ostgt is Current: Resource is current
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/ostgt-control-plane is Current: Resource is current
openstackcluster.infrastructure.cluster.x-k8s.io/ostgt is Current: Resource is current
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/ostgt-control-plane is Current: Resource is current
all resources has reached the Current status
```
$ airshipctl phase apply workers --debug
```bash
[airshipctl] 2020/09/11 00:12:19 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/openstack-test-site/target/workers
[airshipctl] 2020/09/11 00:12:19 Getting infos for bundle, inventory id is kind-capi-openstack-target-workers
[airshipctl] 2020/09/11 00:12:19 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/09/11 00:12:19 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-openstack-target-workers"},"name":"airshipit-kind-capi-openstack-target-workers","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/09/11 00:12:19 Making sure that inventory object namespace airshipit exists
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ostgt-md-0 created
machinedeployment.cluster.x-k8s.io/ostgt-md-0 created
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/ostgt-md-0 created
3 resource(s) applied. 3 created, 0 unchanged, 0 configured
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ostgt-md-0 is NotFound: Resource not found
machinedeployment.cluster.x-k8s.io/ostgt-md-0 is NotFound: Resource not found
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/ostgt-md-0 is NotFound: Resource not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/ostgt-md-0 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/ostgt-md-0 is Current: Resource is current
openstackmachinetemplate.infrastructure.cluster.x-k8s.io/ostgt-md-0 is Current: Resource is current
all resources has reached the Current status
```
$ kubectl get po -A
```bash
NAMESPACE NAME READY STATUS RESTARTS AGE
capi-kubeadm-bootstrap-system capi-kubeadm-bootstrap-controller-manager-dfdf9877b-g44hd 2/2 Running 0 36m
capi-kubeadm-control-plane-system capi-kubeadm-control-plane-controller-manager-76c847457b-z2jtr 2/2 Running 0 36m
capi-system capi-controller-manager-7c7978f565-rk7qk 2/2 Running 0 36m
capi-webhook-system capi-controller-manager-748c57d64d-wjbnj 2/2 Running 0 36m
capi-webhook-system capi-kubeadm-bootstrap-controller-manager-65f979767f-bv6dr 2/2 Running 0 36m
capi-webhook-system capi-kubeadm-control-plane-controller-manager-7f5d88dcf9-k6kpf 2/2 Running 1 36m
capi-webhook-system capo-controller-manager-7d76dc9ddc-b9xhw 2/2 Running 0 36m
capo-system capo-controller-manager-79445d5984-k9fmc 2/2 Running 0 36m
cert-manager cert-manager-77d8f4d85f-nkg58 1/1 Running 0 36m
cert-manager cert-manager-cainjector-75f88c9f56-fcrc6 1/1 Running 0 36m
cert-manager cert-manager-webhook-56669d7fcb-cbzfn 1/1 Running 1 36m
kube-system coredns-66bff467f8-2thc2 1/1 Running 0 64m
kube-system coredns-66bff467f8-4qbvk 1/1 Running 0 64m
kube-system etcd-capi-openstack-control-plane 1/1 Running 0 64m
kube-system kindnet-xwp2x 1/1 Running 0 64m
kube-system kube-apiserver-capi-openstack-control-plane 1/1 Running 0 64m
kube-system kube-controller-manager-capi-openstack-control-plane 1/1 Running 0 64m
kube-system kube-proxy-khhvd 1/1 Running 0 64m
kube-system kube-scheduler-capi-openstack-control-plane 1/1 Running 0 64m
local-path-storage local-path-provisioner-bd4bb6b75-qnbjk 1/1 Running 0 64m
```
To check the logs, run the below command:
$ kubectl logs capo-controller-manager-79445d5984-k9fmc -n capo-system --all-containers=true -f
```bash
I0910 23:36:54.768316 1 listener.go:44] controller-runtime/metrics "msg"="metrics server is starting to listen" "addr"="127.0.0.1:8080"
I0910 23:36:54.768890 1 main.go:235] setup "msg"="starting manager"
I0910 23:36:54.769149 1 leaderelection.go:242] attempting to acquire leader lease capo-system/controller-leader-election-capo...
I0910 23:36:54.769199 1 internal.go:356] controller-runtime/manager "msg"="starting metrics server" "path"="/metrics"
I0910 23:36:54.853723 1 leaderelection.go:252] successfully acquired lease capo-system/controller-leader-election-capo
I0910 23:36:54.854706 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="openstackcluster" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"cloudsSecret":null,"cloudName":"","network":{},"subnet":{},"managedAPIServerLoadBalancer":false,"managedSecurityGroups":false,"caKeyPair":{},"etcdCAKeyPair":{},"frontProxyCAKeyPair":{},"saKeyPair":{},"controlPlaneEndpoint":{"host":"","port":0}},"status":{"ready":false}}}
I0910 23:36:54.854962 1 controller.go:164] controller-runtime/controller "msg"="Starting EventSource" "controller"="openstackmachine" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"cloudsSecret":null,"cloudName":"","flavor":"","image":""},"status":{"ready":false}}}
```
$ kubectl get machines
```bash
NAME PROVIDERID PHASE
ostgt-control-plane-cggt7 openstack://a6da4363-9419-4e14-b67a-3ce86da198c4 Running
ostgt-md-0-6b564d74b8-8h8d8 openstack://23fd5b75-e3f4-4e89-b900-7a6873a146c2 Running
ostgt-md-0-6b564d74b8-pj4lm openstack://9b8323a2-757f-4905-8006-4514862fde75 Running
ostgt-md-0-6b564d74b8-wnw8l openstack://1a8f10da-5d12-4c50-a60d-f2e24a387611 Running
```
$ kubectl get secrets
```bash
NAME TYPE DATA AGE
default-token-vfcm7 kubernetes.io/service-account-token 3 114m
ostgt-ca Opaque 2 47m
ostgt-cloud-config Opaque 2 51m
ostgt-control-plane-gd2gq cluster.x-k8s.io/secret 1 47m
ostgt-etcd Opaque 2 47m
ostgt-kubeconfig Opaque 1 47m
ostgt-md-0-j76jg cluster.x-k8s.io/secret 1 44m
ostgt-md-0-kdjsv cluster.x-k8s.io/secret 1 44m
ostgt-md-0-q4vmn cluster.x-k8s.io/secret 1 44m
ostgt-proxy Opaque 2 47m
ostgt-sa Opaque 2 47m
```
$ kubectl --namespace=default get secret/ostgt-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./ostgt.kubeconfig
$ kubectl get pods -A --kubeconfig ~/ostgt.kubeconfig
```bash
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-7865ff46b6-8pbnq 1/1 Running 0 47m
kube-system calico-node-7kpjb 1/1 Running 0 44m
kube-system calico-node-d8dcc 1/1 Running 0 45m
kube-system calico-node-mdwnt 1/1 Running 0 47m
kube-system calico-node-n2qr8 1/1 Running 0 45m
kube-system coredns-6955765f44-dkvwq 1/1 Running 0 47m
kube-system coredns-6955765f44-p4mbh 1/1 Running 0 47m
kube-system etcd-ostgt-control-plane-vpmqg 1/1 Running 0 47m
kube-system kube-apiserver-ostgt-control-plane-vpmqg 1/1 Running 0 47m
kube-system kube-controller-manager-ostgt-control-plane-vpmqg 1/1 Running 0 47m
kube-system kube-proxy-j6msn 1/1 Running 0 44m
kube-system kube-proxy-kgxvq 1/1 Running 0 45m
kube-system kube-proxy-lfmlf 1/1 Running 0 45m
kube-system kube-proxy-zq26j 1/1 Running 0 47m
kube-system kube-scheduler-ostgt-control-plane-vpmqg 1/1 Running 0 47m
```
$ kubectl get nodes --kubeconfig ~/ostgt.kubeconfig
```bash
NAME STATUS ROLES AGE VERSION
ostgt-control-plane-vpmqg Ready master 49m v1.17.3
ostgt-md-0-6p2f9 Ready <none> 46m v1.17.3
ostgt-md-0-h8hn9 Ready <none> 47m v1.17.3
ostgt-md-0-k9k66 Ready <none> 46m v1.17.3
```
$ kubectl get cs --kubeconfig ~/ostgt.kubeconfig
```bash
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
```
Now, the control plane and worker nodes are created on OpenStack.
![Machines](../img/openstack-machines.png)
## Tear Down Clusters
In order to delete the cluster, run the below command. This will delete
the control plane, workers, and all other resources
associated with the cluster on OpenStack.
```bash
$ airshipctl phase render controlplane -k Cluster | kubectl delete -f -
cluster.cluster.x-k8s.io "ostgt" deleted
```
$ kind delete cluster --name capi-openstack
## Reference
### Installation Using Devstack
- Install [Devstack](https://docs.openstack.org/devstack/latest/guides/devstack-with-lbaas-v2.html)
- Create the `ubuntu-1910-kube-v1.17.3.qcow2` image in devstack.
Download a CAPI-compatible image for the Ubuntu OS and upload it:
```bash
wget https://github.com/sbueringer/image-builder/releases/download/v1.17.3-04/ubuntu-1910-kube-v1.17.3.qcow2
openstack image create --container-format bare --disk-format qcow2 --file ubuntu-1910-kube-v1.17.3.qcow2 ubuntu-1910-kube-v1.17.3
```
Check if the image status is `active`
```bash
stack@stackdev-ev:/opt/stack/devstack$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 83002c1d-436d-4007-bea1-3ffc94fa193b | amphora-x64-haproxy | active |
| a801c914-a0b9-485a-ba5f-246e912cb656 | cirros-0.5.1-x86_64-disk | active |
| 8e8fc7a8-cfe0-4251-bdde-8600838f2ed8 | ubuntu-1910-kube-v1.17.3 | active |
```
- Generate credentials
In a devstack environment, the `clouds.yaml` file is normally found at `/etc/openstack/`.
Execute the following command to generate the cloud credentials for devstack:
```bash
wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
source /tmp/env.rc /etc/openstack/clouds.yaml devstack
```
A snippet of a sample `clouds.yaml` file for the cloud `devstack` is shown below.
```yaml
clouds:
devstack:
auth:
auth_url: http://10.0.4.4/identity
project_id: f3deb6e94bee4addaed3ba42d6ffaeba
user_domain_name: Default
username: demo
password: pass
region_name: RegionOne
```
The list of project IDs can be retrieved with `openstack project list` in the devstack environment.
- Ensure that the `demo` user has `admin` rights so that floating IPs can be created at the time of
workload cluster deployment.
```bash
cd /opt/stack/devstack
export OS_USERNAME=admin
. ./openrc
openstack role add --project demo --user demo admin
```
- Create Floating IP
To create a floating IP, the following command can be used:
`openstack floating ip create public --floating-ip-address $FLOATING_IP_ADDRESS`
where `FLOATING_IP_ADDRESS` is the desired IP address and `public` is the name of
the external network in devstack.
The `openstack floating ip list` command shows the list of all floating IPs.
- Allow ssh access to control plane and worker nodes
Cluster API creates the following security groups if `spec.managedSecurityGroups` of the
`OpenStackCluster` CRD is set to true:
- k8s-cluster-default-`<CLUSTER-NAME>`-secgroup-controlplane (for control plane)
- k8s-cluster-default-`<CLUSTER-NAME>`-secgroup-worker (for worker nodes)
These security group rules only open the kubeadm
[required ports](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports),
so the nodes cannot be logged into through ssh by default.
If ssh access to the nodes is required, follow the steps below.
Create a security group allowing ssh access
```bash
openstack security group create --project demo --project-domain Default allow-ssh
openstack security group rule create allow-ssh --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
```
Add the security group to the `OpenStackMachineTemplate` CRD as below:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane
spec:
template:
spec:
securityGroups:
- name: allow-ssh
```
### Provider Manifests
The provider configuration for CAPO is referenced from the upstream
[Config](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/tree/master/config)
$ tree airshipctl/manifests/function/capo
```bash
└── v0.3.1
├── certmanager
│ ├── certificate.yaml
│ ├── kustomization.yaml
│ └── kustomizeconfig.yaml
├── crd
│ ├── bases
│ │ ├── infrastructure.cluster.x-k8s.io_openstackclusters.yaml
│ │ ├── infrastructure.cluster.x-k8s.io_openstackmachines.yaml
│ │ └── infrastructure.cluster.x-k8s.io_openstackmachinetemplates.yaml
│ ├── kustomization.yaml
│ ├── kustomizeconfig.yaml
│ └── patches
│ ├── cainjection_in_openstackclusters.yaml
│ ├── cainjection_in_openstackmachines.yaml
│ ├── cainjection_in_openstackmachinetemplates.yaml
│ ├── webhook_in_openstackclusters.yaml
│ ├── webhook_in_openstackmachines.yaml
│ └── webhook_in_openstackmachinetemplates.yaml
├── default
│ ├── kustomization.yaml
│ ├── manager_role_aggregation_patch.yaml
│ └── namespace.yaml
├── kustomization.yaml
├── manager
│ ├── kustomization.yaml
│ ├── manager.yaml
│ ├── manager_auth_proxy_patch.yaml
│ ├── manager_image_patch.yaml
│ └── manager_pull_policy.yaml
├── patch_crd_webhook_namespace.yaml
├── rbac
│ ├── auth_proxy_role.yaml
│ ├── auth_proxy_role_binding.yaml
│ ├── auth_proxy_service.yaml
│ ├── kustomization.yaml
│ ├── leader_election_role.yaml
│ ├── leader_election_role_binding.yaml
│ ├── role.yaml
│ └── role_binding.yaml
└── webhook
├── kustomization.yaml
├── kustomizeconfig.yaml
├── manager_webhook_patch.yaml
├── manifests.yaml
├── service.yaml
└── webhookcainjection_patch.yaml
```
### Cluster Templates
airshipctl/manifests/function/k8scontrol-capo contains the cluster.yaml and controlplane.yaml templates.
```bash
cluster.yaml: Contains CRDs Cluster, OpenstackCluster, Secret
controlplane.yaml: Contains CRDs KubeadmControlPlane, OpenstackMachineTemplate
```
$ tree airshipctl/manifests/function/k8scontrol-capo
```bash
airshipctl/manifests/function/k8scontrol-capo
├── cluster.yaml
├── controlplane.yaml
└── kustomization.yaml
```
airshipctl/manifests/function/workers-capo contains workers.yaml
```bash
workers.yaml: Contains CRDs KubeadmConfigTemplate, MachineDeployment, OpenstackMachineTemplate
```
$ tree airshipctl/manifests/function/workers-capo
```bash
airshipctl/manifests/function/workers-capo
.
├── kustomization.yaml
└── workers.yaml
```
### Test Site Manifests
#### openstack-test-site/shared
airshipctl cluster init uses
airshipctl/manifests/site/openstack-test-site/shared/clusterctl to initialize the
management cluster with the defined provider components and versions.
$ tree airshipctl/manifests/site/openstack-test-site/shared
```bash
└── clusterctl
├── clusterctl.yaml
└── kustomization.yaml
```
#### openstack-test-site/target
There are 3 phases currently available in openstack-test-site/target
```bash
controlplane - Patches templates in manifests/function/k8scontrol-capo
workers - Patches template in manifests/function/workers-capo
initinfra - Simply calls openstack-test-site/shared/clusterctl
```
Note: `airshipctl cluster init` initializes all the provider components
including the openstack infrastructure provider component. As a result, `airshipctl
phase apply initinfra` is not used.
#### Patch Merge Strategy
JSON and strategic merge patches are applied on templates in `manifests/function/k8scontrol-capo`
from `airshipctl/manifests/site/openstack-test-site/target/controlplane` when the
`airshipctl phase apply controlplane` command is executed.
JSON and strategic merge patches are applied on templates in `manifests/function/workers-capo`
from `airshipctl/manifests/site/openstack-test-site/target/workers` when the
`airshipctl phase apply workers` command is executed.
```bash
controlplane/control_plane_ip.json: patches control plane ip in template function/k8scontrol-capo/cluster.yaml
controlplane/dns_servers.json: patches dns servers in template function/k8scontrol-capo/cluster.yaml
controlplane/external_network_id.json: patches external network id in template function/k8scontrol-capo/cluster.yaml
cluster_clouds_yaml_patch.yaml: patches clouds.yaml configuration in template function/k8scontrol-capo/cluster.yaml
controlplane/control_plane_ip_patch.yaml: patches controlplane ip in template function/k8scontrol-capo/controlplane.yaml
controlplane/control_plane_config_patch.yaml: patches cloud configuration in template function/k8scontrol-capo/controlplane.yaml
controlplane/ssh_key_patch.yaml: patches ssh key in template function/k8scontrol-capo/controlplane.yaml
controlplane/control_plane_machine_count_patch.yaml: patches controlplane replica count in template function/k8scontrol-capo/controlplane.yaml
controlplane/control_plane_machine_flavor_patch.yaml: patches controlplane machine flavor in template function/k8scontrol-capo/controlplane.yaml
workers/workers_cloud_conf_patch.yaml: patches cloud configuration in template function/workers-capo/workers.yaml
workers/workers_machine_count_patch.yaml: patches worker replica count in template function/workers-capo/workers.yaml
workers/workers_machine_flavor_patch.yaml: patches worker machine flavor in template function/workers-capo/workers.yaml
workers/workers_ssh_key_patch.yaml: patches ssh key in template function/workers-capo/workers.yaml
```
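To illustrate the mechanics, a JSON patch such as `external_network_id.json` typically carries a
small RFC 6902 operation list that kustomize applies to the matching resource. The snippet below
is a hypothetical sketch, not the actual file shipped in the test site:
```bash
# Hypothetical sketch of a kustomize JSON 6902 patch similar to external_network_id.json;
# the value is a placeholder, not the real site content.
cat > external_network_id.json <<'EOF'
[
  {
    "op": "replace",
    "path": "/spec/externalNetworkId",
    "value": "<YOUR-EXTERNAL-NETWORK-ID>"
  }
]
EOF
```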
## Software Version Information
All the instructions provided in this document have been tested using the software
versions listed in this section.
### Docker
```bash
$ docker version
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:45:36 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:44:07 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
```
### Kind
```bash
$ kind version
kind v0.8.1 go1.14.2 linux/amd64
```
### Kubectl
```bash
$ kubectl version --short
Client Version: v1.17.4
Server Version: v1.18.2
```
### Go
```bash
$ go version
go version go1.10.4 linux/amd64
```
### Kustomize
```bash
$ kustomize version
{Version:kustomize/v3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T14:08:42Z GoOs:linux GoArch:amd64}
```
### OS
```bash
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```
## Virtual Machine Specification
All the instructions in this document were performed on a VM (with nested virtualization
enabled) with 16 vCPUs and 64 GB RAM.


@ -0,0 +1,24 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
name: selfsigned-issuer
namespace: system
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml
namespace: system
spec:
# $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
dnsNames:
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
- $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
issuerRef:
kind: Issuer
name: selfsigned-issuer
secretName: $(SERVICE_NAME)-cert # this secret will not be prefixed, since it's not managed by kustomize


@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- certificate.yaml
configurations:
- kustomizeconfig.yaml


@ -0,0 +1,19 @@
# This configuration is for teaching kustomize how to update name ref and var substitution
nameReference:
- kind: Issuer
group: cert-manager.io
fieldSpecs:
- kind: Certificate
group: cert-manager.io
path: spec/issuerRef/name
varReference:
- kind: Certificate
group: cert-manager.io
path: spec/commonName
- kind: Certificate
group: cert-manager.io
path: spec/dnsNames
- kind: Certificate
group: cert-manager.io
path: spec/secretName


@ -0,0 +1,558 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.9
creationTimestamp: null
name: openstackclusters.infrastructure.cluster.x-k8s.io
spec:
group: infrastructure.cluster.x-k8s.io
names:
categories:
- cluster-api
kind: OpenStackCluster
listKind: OpenStackClusterList
plural: openstackclusters
singular: openstackcluster
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Cluster to which this OpenStackCluster belongs
jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
name: Cluster
type: string
- description: Cluster infrastructure is ready for OpenStack instances
jsonPath: .status.ready
name: Ready
type: string
- description: Network the cluster is using
jsonPath: .status.network.id
name: Network
type: string
- description: Subnet the cluster is using
jsonPath: .status.network.subnet.id
name: Subnet
type: string
- description: API Endpoint
jsonPath: .status.network.apiServerLoadBalancer.ip
name: Endpoint
priority: 1
type: string
name: v1alpha3
schema:
openAPIV3Schema:
description: OpenStackCluster is the Schema for the openstackclusters API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: OpenStackClusterSpec defines the desired state of OpenStackCluster
properties:
apiServerLoadBalancerAdditionalPorts:
description: APIServerLoadBalancerAdditionalPorts adds additional
ports to the APIServerLoadBalancer
items:
type: integer
type: array
apiServerLoadBalancerFloatingIP:
description: APIServerLoadBalancerFloatingIP is the floatingIP which
will be associated to the APIServer loadbalancer. The floatingIP
will be created if it not already exists.
type: string
apiServerLoadBalancerPort:
description: APIServerLoadBalancerPort is the port on which the listener
on the APIServer loadbalancer will be created
type: integer
caKeyPair:
description: CAKeyPair is the key pair for ca certs.
properties:
cert:
description: base64 encoded cert and key
format: byte
type: string
key:
format: byte
type: string
type: object
cloudName:
description: The name of the cloud to use from the clouds secret
type: string
cloudsSecret:
description: The name of the secret containing the openstack credentials
properties:
name:
description: Name is unique within a namespace to reference a
secret resource.
type: string
namespace:
description: Namespace defines the space within which the secret
name must be unique.
type: string
type: object
controlPlaneAvailabilityZones:
description: ControlPlaneAvailabilityZones is the az to deploy control
plane to
items:
type: string
type: array
controlPlaneEndpoint:
description: ControlPlaneEndpoint represents the endpoint used to
communicate with the control plane.
properties:
host:
description: The hostname on which the API server is serving.
type: string
port:
description: The port on which the API server is serving.
format: int32
type: integer
required:
- host
- port
type: object
disablePortSecurity:
description: DisablePortSecurity disables the port security of the
network created for the Kubernetes cluster, which also disables
SecurityGroups
type: boolean
disableServerTags:
description: 'Default: True. In case of server tag errors, set to
False'
type: boolean
dnsNameservers:
description: DNSNameservers is the list of nameservers for OpenStack
Subnet being created. Set this value when you need create a new
network/subnet while the access through DNS is required.
items:
type: string
type: array
etcdCAKeyPair:
description: EtcdCAKeyPair is the key pair for etcd.
properties:
cert:
description: base64 encoded cert and key
format: byte
type: string
key:
format: byte
type: string
type: object
externalNetworkId:
description: ExternalNetworkID is the ID of an external OpenStack
Network. This is necessary to get public internet to the VMs.
type: string
externalRouterIPs:
description: ExternalRouterIPs is an array of externalIPs on the respective
subnets. This is necessary if the router needs a fixed ip in a specific
subnet.
items:
properties:
fixedIP:
description: The FixedIP in the corresponding subnet
type: string
subnet:
description: The subnet in which the FixedIP is used for the
Gateway of this router
properties:
filter:
description: Filters for optional network query
properties:
cidr:
type: string
description:
type: string
enableDhcp:
type: boolean
gateway_ip:
type: string
id:
type: string
ipVersion:
type: integer
ipv6AddressMode:
type: string
ipv6RaMode:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
networkId:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
sortDir:
type: string
sortKey:
type: string
subnetpoolId:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
uuid:
description: The UUID of the network. Required if you omit
the port attribute.
type: string
type: object
required:
- subnet
type: object
type: array
frontProxyCAKeyPair:
description: FrontProxyCAKeyPair is the key pair for FrontProxyKeyPair.
properties:
cert:
description: base64 encoded cert and key
format: byte
type: string
key:
format: byte
type: string
type: object
managedAPIServerLoadBalancer:
description: 'ManagedAPIServerLoadBalancer defines whether a LoadBalancer
for the APIServer should be created. If set to true the following
properties are mandatory: APIServerLoadBalancerFloatingIP, APIServerLoadBalancerPort'
type: boolean
managedSecurityGroups:
description: 'ManagedSecurityGroups defines that kubernetes manages
the OpenStack security groups for now, that means that we''ll create
security group allows traffic to/from machines belonging to that
group based on Calico CNI plugin default network requirements: BGP
and IP-in-IP for master node(s) and worker node(s) respectively.
In the future, we could make this more flexible.'
type: boolean
network:
description: If NodeCIDR cannot be set this can be used to detect
an existing network.
properties:
adminStateUp:
type: boolean
description:
type: string
id:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
shared:
type: boolean
sortDir:
type: string
sortKey:
type: string
status:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
nodeCidr:
description: NodeCIDR is the OpenStack Subnet to be created. Cluster
actuator will create a network, a subnet with NodeCIDR, and a router
connected to this subnet. If you leave this empty, no network will
be created.
type: string
saKeyPair:
description: SAKeyPair is the service account key pair.
properties:
cert:
description: base64 encoded cert and key
format: byte
type: string
key:
format: byte
type: string
type: object
subnet:
description: If NodeCIDR cannot be set this can be used to detect
an existing subnet.
properties:
cidr:
type: string
description:
type: string
enableDhcp:
type: boolean
gateway_ip:
type: string
id:
type: string
ipVersion:
type: integer
ipv6AddressMode:
type: string
ipv6RaMode:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
networkId:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
sortDir:
type: string
sortKey:
type: string
subnetpoolId:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
tags:
description: Tags for all resources in cluster
items:
type: string
type: array
useOctavia:
description: UseOctavia is weather LoadBalancer Service is Octavia
or not
type: boolean
type: object
status:
description: OpenStackClusterStatus defines the observed state of OpenStackCluster
properties:
controlPlaneSecurityGroup:
description: 'ControlPlaneSecurityGroups contains all the information
about the OpenStack Security Group that needs to be applied to control
plane nodes. TODO: Maybe instead of two properties, we add a property
to the group?'
properties:
id:
type: string
name:
type: string
rules:
items:
description: SecurityGroupRule represent the basic information
of the associated OpenStack Security Group Role.
properties:
description:
type: string
direction:
type: string
etherType:
type: string
name:
type: string
portRangeMax:
type: integer
portRangeMin:
type: integer
protocol:
type: string
remoteGroupID:
type: string
remoteIPPrefix:
type: string
securityGroupID:
type: string
required:
- description
- direction
- etherType
- name
- portRangeMax
- portRangeMin
- protocol
- remoteGroupID
- remoteIPPrefix
- securityGroupID
type: object
type: array
required:
- id
- name
- rules
type: object
failureDomains:
additionalProperties:
description: FailureDomainSpec is the Schema for Cluster API failure
domains. It allows controllers to understand how many failure
domains a cluster can optionally span across.
properties:
attributes:
additionalProperties:
type: string
description: Attributes is a free form map of attributes an
infrastructure provider might use or require.
type: object
controlPlane:
description: ControlPlane determines if this failure domain
is suitable for use by control plane machines.
type: boolean
type: object
description: FailureDomains represent OpenStack availability zones
type: object
network:
description: Network contains all information about the created OpenStack
Network. It includes Subnets and Router.
properties:
apiServerLoadBalancer:
description: Be careful when using APIServerLoadBalancer, because
this field is optional and therefore not set in all cases
properties:
id:
type: string
internalIP:
type: string
ip:
type: string
name:
type: string
required:
- id
- internalIP
- ip
- name
type: object
id:
type: string
name:
type: string
router:
description: Router represents basic information about the associated
OpenStack Neutron Router
properties:
id:
type: string
name:
type: string
required:
- id
- name
type: object
subnet:
description: Subnet represents basic information about the associated
OpenStack Neutron Subnet
properties:
cidr:
type: string
id:
type: string
name:
type: string
required:
- cidr
- id
- name
type: object
required:
- id
- name
type: object
ready:
type: boolean
workerSecurityGroup:
description: WorkerSecurityGroup contains all the information about
the OpenStack Security Group that needs to be applied to worker
nodes.
properties:
id:
type: string
name:
type: string
rules:
items:
description: SecurityGroupRule represent the basic information
of the associated OpenStack Security Group Role.
properties:
description:
type: string
direction:
type: string
etherType:
type: string
name:
type: string
portRangeMax:
type: integer
portRangeMin:
type: integer
protocol:
type: string
remoteGroupID:
type: string
remoteIPPrefix:
type: string
securityGroupID:
type: string
required:
- description
- direction
- etherType
- name
- portRangeMax
- portRangeMin
- protocol
- remoteGroupID
- remoteIPPrefix
- securityGroupID
type: object
type: array
required:
- id
- name
- rules
type: object
required:
- ready
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -0,0 +1,355 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.9
creationTimestamp: null
name: openstackmachines.infrastructure.cluster.x-k8s.io
spec:
group: infrastructure.cluster.x-k8s.io
names:
categories:
- cluster-api
kind: OpenStackMachine
listKind: OpenStackMachineList
plural: openstackmachines
singular: openstackmachine
scope: Namespaced
versions:
- additionalPrinterColumns:
- description: Cluster to which this OpenStackMachine belongs
jsonPath: .metadata.labels.cluster\.x-k8s\.io/cluster-name
name: Cluster
type: string
- description: OpenStack instance state
jsonPath: .status.instanceState
name: State
type: string
- description: Machine ready status
jsonPath: .status.ready
name: Ready
type: string
- description: OpenStack instance ID
jsonPath: .spec.providerID
name: InstanceID
type: string
- description: Machine object which owns with this OpenStackMachine
jsonPath: .metadata.ownerReferences[?(@.kind=="Machine")].name
name: Machine
type: string
name: v1alpha3
schema:
openAPIV3Schema:
description: OpenStackMachine is the Schema for the openstackmachines API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: OpenStackMachineSpec defines the desired state of OpenStackMachine
properties:
cloudName:
description: The name of the cloud to use from the clouds secret
type: string
cloudsSecret:
description: The name of the secret containing the openstack credentials
properties:
name:
description: Name is unique within a namespace to reference a
secret resource.
type: string
namespace:
description: Namespace defines the space within which the secret
name must be unique.
type: string
type: object
configDrive:
description: Config Drive support
type: boolean
flavor:
description: The flavor reference for the flavor for your server instance.
type: string
floatingIP:
description: The floatingIP which will be associated to the machine,
only used for master. The floatingIP should have been created and
haven't been associated.
type: string
image:
description: The name of the image to use for your server instance.
If the RootVolume is specified, this will be ignored and use rootVolume
directly.
type: string
networks:
description: A networks object. Required parameter when there are
multiple networks defined for the tenant. When you do not specify
the networks parameter, the server attaches to the only network
created for the current tenant.
items:
properties:
filter:
description: Filters for optional network query
properties:
adminStateUp:
type: boolean
description:
type: string
id:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
shared:
type: boolean
sortDir:
type: string
sortKey:
type: string
status:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
fixedIp:
description: A fixed IPv4 address for the NIC.
type: string
subnets:
description: Subnet within a network to use
items:
properties:
filter:
description: Filters for optional network query
properties:
cidr:
type: string
description:
type: string
enableDhcp:
type: boolean
gateway_ip:
type: string
id:
type: string
ipVersion:
type: integer
ipv6AddressMode:
type: string
ipv6RaMode:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
networkId:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
sortDir:
type: string
sortKey:
type: string
subnetpoolId:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
uuid:
description: The UUID of the network. Required if you
omit the port attribute.
type: string
type: object
type: array
uuid:
description: The UUID of the network. Required if you omit the
port attribute.
type: string
type: object
type: array
providerID:
description: ProviderID is the unique identifier as specified by the
cloud provider.
type: string
rootVolume:
description: The volume metadata to boot from
properties:
deviceType:
type: string
diskSize:
type: integer
sourceType:
type: string
sourceUUID:
type: string
type: object
securityGroups:
description: The names of the security groups to assign to the instance
items:
properties:
filter:
description: Filters used to query security groups in openstack
properties:
description:
type: string
id:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
sortDir:
type: string
sortKey:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
name:
description: Security Group name
type: string
uuid:
description: Security Group UID
type: string
type: object
type: array
serverGroupID:
description: The server group to assign the machine to
type: string
serverMetadata:
additionalProperties:
type: string
description: Metadata mapping. Allows you to create a map of key value
pairs to add to the server instance.
type: object
sshKeyName:
description: The ssh key to inject in the instance
type: string
tags:
description: Machine tags Requires Nova api 2.52 minimum!
items:
type: string
type: array
trunk:
description: Whether the server instance is created on a trunk port
or not.
type: boolean
userDataSecret:
description: The name of the secret containing the user data (startup
script in most cases)
properties:
name:
description: Name is unique within a namespace to reference a
secret resource.
type: string
namespace:
description: Namespace defines the space within which the secret
name must be unique.
type: string
type: object
required:
- flavor
- image
type: object
status:
description: OpenStackMachineStatus defines the observed state of OpenStackMachine
properties:
addresses:
description: Addresses contains the OpenStack instance associated
addresses.
items:
description: NodeAddress contains information for the node's address.
properties:
address:
description: The node address.
type: string
type:
description: Node address type, one of Hostname, ExternalIP
or InternalIP.
type: string
required:
- address
- type
type: object
type: array
errorMessage:
description: "FailureMessage will be set in the event that there is
a terminal problem reconciling the Machine and will contain a more
verbose string suitable for logging and human consumption. \n This
field should not be set for transitive errors that a controller
faces that are expected to be fixed automatically over time (like
service outages), but instead indicate that something is fundamentally
wrong with the Machine's spec or the configuration of the controller,
and that manual intervention is required. Examples of terminal errors
would be invalid combinations of settings in the spec, values that
are unsupported by the controller, or the responsible controller
itself being critically misconfigured. \n Any transient errors that
occur during the reconciliation of Machines can be added as events
to the Machine object and/or logged in the controller's output."
type: string
errorReason:
description: Constants aren't automatically generated for unversioned
packages. Instead share the same constant for all versioned packages
type: string
instanceState:
description: InstanceState is the state of the OpenStack instance
for this machine.
type: string
ready:
description: Ready is true when the provider resource is ready.
type: boolean
type: object
type: object
served: true
storage: true
subresources:
status: {}
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []


@ -0,0 +1,305 @@
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.9
creationTimestamp: null
name: openstackmachinetemplates.infrastructure.cluster.x-k8s.io
spec:
group: infrastructure.cluster.x-k8s.io
names:
categories:
- cluster-api
kind: OpenStackMachineTemplate
listKind: OpenStackMachineTemplateList
plural: openstackmachinetemplates
singular: openstackmachinetemplate
scope: Namespaced
versions:
- name: v1alpha3
schema:
openAPIV3Schema:
description: OpenStackMachineTemplate is the Schema for the openstackmachinetemplates
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: OpenStackMachineTemplateSpec defines the desired state of
OpenStackMachineTemplate
properties:
template:
description: OpenStackMachineTemplateResource describes the data needed
to create a OpenStackMachine from a template
properties:
spec:
description: Spec is the specification of the desired behavior
of the machine.
properties:
cloudName:
description: The name of the cloud to use from the clouds
secret
type: string
cloudsSecret:
description: The name of the secret containing the openstack
credentials
properties:
name:
description: Name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
type: object
configDrive:
description: Config Drive support
type: boolean
flavor:
description: The flavor reference for the flavor for your
server instance.
type: string
floatingIP:
description: The floatingIP which will be associated to the
machine, only used for master. The floatingIP should have
been created and haven't been associated.
type: string
image:
description: The name of the image to use for your server
instance. If the RootVolume is specified, this will be ignored
and use rootVolume directly.
type: string
networks:
description: A networks object. Required parameter when there
are multiple networks defined for the tenant. When you do
not specify the networks parameter, the server attaches
to the only network created for the current tenant.
items:
properties:
filter:
description: Filters for optional network query
properties:
adminStateUp:
type: boolean
description:
type: string
id:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
shared:
type: boolean
sortDir:
type: string
sortKey:
type: string
status:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
fixedIp:
description: A fixed IPv4 address for the NIC.
type: string
subnets:
description: Subnet within a network to use
items:
properties:
filter:
description: Filters for optional network query
properties:
cidr:
type: string
description:
type: string
enableDhcp:
type: boolean
gateway_ip:
type: string
id:
type: string
ipVersion:
type: integer
ipv6AddressMode:
type: string
ipv6RaMode:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
networkId:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
sortDir:
type: string
sortKey:
type: string
subnetpoolId:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
uuid:
description: The UUID of the network. Required
if you omit the port attribute.
type: string
type: object
type: array
uuid:
description: The UUID of the network. Required if you
omit the port attribute.
type: string
type: object
type: array
providerID:
description: ProviderID is the unique identifier as specified
by the cloud provider.
type: string
rootVolume:
description: The volume metadata to boot from
properties:
deviceType:
type: string
diskSize:
type: integer
sourceType:
type: string
sourceUUID:
type: string
type: object
securityGroups:
description: The names of the security groups to assign to
the instance
items:
properties:
filter:
description: Filters used to query security groups in
openstack
properties:
description:
type: string
id:
type: string
limit:
type: integer
marker:
type: string
name:
type: string
notTags:
type: string
notTagsAny:
type: string
projectId:
type: string
sortDir:
type: string
sortKey:
type: string
tags:
type: string
tagsAny:
type: string
tenantId:
type: string
type: object
name:
description: Security Group name
type: string
uuid:
description: Security Group UID
type: string
type: object
type: array
serverGroupID:
description: The server group to assign the machine to
type: string
serverMetadata:
additionalProperties:
type: string
description: Metadata mapping. Allows you to create a map
of key value pairs to add to the server instance.
type: object
sshKeyName:
description: The ssh key to inject in the instance
type: string
tags:
description: Machine tags Requires Nova api 2.52 minimum!
items:
type: string
type: array
trunk:
description: Whether the server instance is created on a trunk
port or not.
type: boolean
userDataSecret:
description: The name of the secret containing the user data
(startup script in most cases)
properties:
name:
description: Name is unique within a namespace to reference
a secret resource.
type: string
namespace:
description: Namespace defines the space within which
the secret name must be unique.
type: string
type: object
required:
- flavor
- image
type: object
required:
- spec
type: object
required:
- template
type: object
type: object
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []

View File

@ -0,0 +1,30 @@
commonLabels:
cluster.x-k8s.io/v1alpha3: v1alpha3
# This kustomization.yaml is not intended to be run by itself,
# since it depends on service name and namespace that are out of this kustomize package.
# It should be run by config/
resources:
- bases/infrastructure.cluster.x-k8s.io_openstackclusters.yaml
- bases/infrastructure.cluster.x-k8s.io_openstackmachines.yaml
- bases/infrastructure.cluster.x-k8s.io_openstackmachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizeresource
patchesStrategicMerge:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
- patches/webhook_in_openstackclusters.yaml
- patches/webhook_in_openstackmachines.yaml
- patches/webhook_in_openstackmachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizewebhookpatch
# [CERTMANAGER] To enable webhook, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
- patches/cainjection_in_openstackclusters.yaml
- patches/cainjection_in_openstackmachines.yaml
- patches/cainjection_in_openstackmachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizecainjectionpatch
# the following config is for teaching kustomize how to do kustomization for CRDs.
configurations:
- kustomizeconfig.yaml

View File

@ -0,0 +1,17 @@
# This file is for teaching kustomize how to substitute name and namespace reference in CRD
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: CustomResourceDefinition
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/name
namespace:
- kind: CustomResourceDefinition
group: apiextensions.k8s.io
path: spec/conversion/webhook/clientConfig/service/namespace
create: false
varReference:
- path: metadata/annotations

View File

@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: openstackclusters.infrastructure.cluster.x-k8s.io

View File

@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: openstackmachines.infrastructure.cluster.x-k8s.io

View File

@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
name: openstackmachinetemplates.infrastructure.cluster.x-k8s.io

View File

@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: openstackclusters.infrastructure.cluster.x-k8s.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1", "v1beta1"]
clientConfig:
# this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
# but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
caBundle: Cg==
service:
namespace: system
name: webhook-service
path: /convert

View File

@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: openstackmachines.infrastructure.cluster.x-k8s.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1", "v1beta1"]
clientConfig:
# this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
# but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
caBundle: Cg==
service:
namespace: system
name: webhook-service
path: /convert

View File

@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: openstackmachinetemplates.infrastructure.cluster.x-k8s.io
spec:
conversion:
strategy: Webhook
webhook:
conversionReviewVersions: ["v1", "v1beta1"]
clientConfig:
# this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
# but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
caBundle: Cg==
service:
namespace: system
name: webhook-service
path: /convert

View File

@ -0,0 +1,11 @@
namespace: capo-system
resources:
- namespace.yaml
bases:
- ../rbac
- ../manager
patchesStrategicMerge:
- manager_role_aggregation_patch.yaml

View File

@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: manager-role
labels:
cluster.x-k8s.io/aggregate-to-manager: "true"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: capi-system-capi-aggregated-manager-role

View File

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: system

View File

@ -0,0 +1,29 @@
namePrefix: capo-
commonLabels:
cluster.x-k8s.io/provider: "infrastructure-openstack"
bases:
- crd
- webhook # Disable this if you're not using the webhook functionality.
- default
patchesJson6902:
- target:
group: apiextensions.k8s.io
version: v1
kind: CustomResourceDefinition
name: openstackclusters.infrastructure.cluster.x-k8s.io
path: patch_crd_webhook_namespace.yaml
- target:
group: apiextensions.k8s.io
version: v1
kind: CustomResourceDefinition
name: openstackmachines.infrastructure.cluster.x-k8s.io
path: patch_crd_webhook_namespace.yaml
- target:
group: apiextensions.k8s.io
version: v1
kind: CustomResourceDefinition
name: openstackmachinetemplates.infrastructure.cluster.x-k8s.io
path: patch_crd_webhook_namespace.yaml

View File

@ -0,0 +1,7 @@
resources:
- manager.yaml
patchesStrategicMerge:
- manager_image_patch.yaml
- manager_pull_policy.yaml
- manager_auth_proxy_patch.yaml

View File

@ -0,0 +1,45 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
labels:
control-plane: capo-controller-manager
spec:
selector:
matchLabels:
control-plane: capo-controller-manager
replicas: 1
template:
metadata:
labels:
control-plane: capo-controller-manager
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- args:
- --enable-leader-election
image: controller:latest
imagePullPolicy: IfNotPresent
name: manager
ports:
- containerPort: 9440
name: healthz
protocol: TCP
- containerPort: 8080
name: metrics
protocol: TCP
readinessProbe:
httpGet:
path: /readyz
port: healthz
livenessProbe:
httpGet:
path: /healthz
port: healthz
terminationGracePeriodSeconds: 10
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master

View File

@ -0,0 +1,25 @@
# This patch injects a sidecar container which is an HTTP proxy for the controller manager;
# it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: kube-rbac-proxy
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
ports:
- containerPort: 8443
name: https
- name: manager
args:
- "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"

View File

@ -0,0 +1,12 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
# Change the value of image field below to your controller image URL
- image: gcr.io/k8s-staging-capi-openstack/capi-openstack-controller-amd64:v20200707-v0.3.1
name: manager

View File

@ -0,0 +1,11 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
imagePullPolicy: IfNotPresent

View File

@ -0,0 +1,3 @@
- op: replace
path: "/spec/conversion/webhook/clientConfig/service/namespace"
value: capi-webhook-system

View File

@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: proxy-role
rules:
- apiGroups: ["authentication.k8s.io"]
resources:
- tokenreviews
verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
resources:
- subjectaccessreviews
verbs: ["create"]

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: proxy-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: proxy-role
subjects:
- kind: ServiceAccount
name: default
namespace: system

View File

@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/port: "8443"
prometheus.io/scheme: https
prometheus.io/scrape: "true"
labels:
control-plane: capo-controller-manager
name: controller-manager-metrics-service
namespace: system
spec:
ports:
- name: https
port: 8443
targetPort: https
selector:
control-plane: capo-controller-manager

View File

@ -0,0 +1,10 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- role.yaml
- role_binding.yaml
- leader_election_role.yaml
- leader_election_role_binding.yaml
- auth_proxy_service.yaml
- auth_proxy_role.yaml
- auth_proxy_role_binding.yaml

View File

@ -0,0 +1,26 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: leader-election-role
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- update
- patch

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: leader-election-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election-role
subjects:
- kind: ServiceAccount
name: default
namespace: system

View File

@ -0,0 +1,85 @@
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: manager-role
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- cluster.x-k8s.io
resources:
- clusters
- clusters/status
verbs:
- get
- list
- watch
- apiGroups:
- cluster.x-k8s.io
resources:
- machines
- machines/status
verbs:
- get
- list
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- openstackclusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- openstackclusters/status
verbs:
- get
- patch
- update
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- openstackmachines
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- openstackmachines/status
verbs:
- get
- patch
- update

View File

@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: manager-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: manager-role
subjects:
- kind: ServiceAccount
name: default
namespace: system

View File

@ -0,0 +1,42 @@
namespace: capi-webhook-system
resources:
- manifests.yaml
- service.yaml
- ../certmanager
- ../manager
configurations:
- kustomizeconfig.yaml
patchesStrategicMerge:
- manager_webhook_patch.yaml
- webhookcainjection_patch.yaml # Disable this value if you don't have any defaulting or validation webhook. If you don't know, you can check if the manifests.yaml file in the same directory has any contents.
vars:
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
objref:
kind: Certificate
group: cert-manager.io
version: v1alpha2
name: serving-cert # this name should match the one in certificate.yaml
fieldref:
fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
objref:
kind: Certificate
group: cert-manager.io
version: v1alpha2
name: serving-cert # this name should match the one in certificate.yaml
- name: SERVICE_NAMESPACE # namespace of the service
objref:
kind: Service
version: v1
name: webhook-service
fieldref:
fieldpath: metadata.namespace
- name: SERVICE_NAME
objref:
kind: Service
version: v1
name: webhook-service

View File

@ -0,0 +1,27 @@
# the following config is for teaching kustomize where to look at when substituting vars.
# It requires kustomize v2.1.0 or newer to work properly.
nameReference:
- kind: Service
version: v1
fieldSpecs:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/name
namespace:
- kind: MutatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
- kind: ValidatingWebhookConfiguration
group: admissionregistration.k8s.io
path: webhooks/clientConfig/service/namespace
create: true
varReference:
- path: metadata/annotations
- kind: Deployment
path: spec/template/spec/volumes/secret/secretName

View File

@ -0,0 +1,26 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: controller-manager
namespace: system
spec:
template:
spec:
containers:
- name: manager
args:
- "--metrics-addr=127.0.0.1:8080"
- "--webhook-port=9443"
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
volumeMounts:
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
volumes:
- name: cert
secret:
defaultMode: 420
secretName: $(SERVICE_NAME)-cert

View File

@ -0,0 +1,46 @@
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: validating-webhook-configuration
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: webhook-service
namespace: system
path: /validate-infrastructure-cluster-x-k8s-io-v1alpha3-openstackmachine
failurePolicy: Fail
matchPolicy: Equivalent
name: validation.openstackmachine.infrastructure.cluster.x-k8s.io
rules:
- apiGroups:
- infrastructure.cluster.x-k8s.io
apiVersions:
- v1alpha3
operations:
- CREATE
- UPDATE
resources:
- openstackmachines
- clientConfig:
caBundle: Cg==
service:
name: webhook-service
namespace: system
path: /validate-infrastructure-cluster-x-k8s-io-v1alpha3-openstackmachinetemplate
failurePolicy: Fail
matchPolicy: Equivalent
name: validation.openstackmachinetemplate.infrastructure.x-k8s.io
rules:
- apiGroups:
- infrastructure.cluster.x-k8s.io
apiVersions:
- v1alpha3
operations:
- CREATE
- UPDATE
resources:
- openstackmachinetemplates

View File

@ -0,0 +1,9 @@
apiVersion: v1
kind: Service
metadata:
name: webhook-service
namespace: system
spec:
ports:
- port: 443
targetPort: webhook-server

View File

@ -0,0 +1,16 @@
# This patch adds annotations to the admission webhook config; the variables
# $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
# uncomment the following lines to enable mutating webhook
#apiVersion: admissionregistration.k8s.io/v1beta1
#kind: MutatingWebhookConfiguration
#metadata:
# name: mutating-webhook-configuration
# annotations:
# cert-manager.k8s.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)

View File

@ -0,0 +1,50 @@
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
name: ostgt
namespace: default
spec:
clusterNetwork:
pods:
cidrBlocks:
- 192.168.0.0/16
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
name: ostgt-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
name: ostgt
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
metadata:
name: ostgt
namespace: default
spec:
apiServerLoadBalancerFloatingIP: ${OPENSTACK_CONTROLPLANE_IP}
apiServerLoadBalancerPort: 6443
cloudName: devstack
cloudsSecret:
name: ostgt-cloud-config
namespace: default
disablePortSecurity: false
disableServerTags: true
dnsNameservers:
- "${OPENSTACK_DNS_NAMESERVERS}"
  externalNetworkId: "${OPENSTACK_EXTERNAL_NETWORK_ID}"
managedAPIServerLoadBalancer: true
managedSecurityGroups: true
nodeCidr: 10.6.0.0/24
useOctavia: true
---
apiVersion: v1
kind: Secret
metadata:
name: ostgt-cloud-config
namespace: default
data:
cacert: ${CLOUD_CERT_B64}
clouds.yaml: ${CLOUDS_YAML_B64}
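The `${CLOUDS_YAML_B64}` and `${CLOUD_CERT_B64}` placeholders in the Secret above expect base64-encoded file contents. A minimal sketch of producing them, assuming a local `clouds.yaml` and CA bundle (the file names here are illustrative, not part of this change):

```bash
# Encode the OpenStack client config and CA bundle for the ostgt-cloud-config Secret.
# -w0 (GNU coreutils) keeps the output on a single line.
export CLOUDS_YAML_B64=$(base64 -w0 ./clouds.yaml)
export CLOUD_CERT_B64=$(base64 -w0 ./cacert.pem)
```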

View File

@ -0,0 +1,85 @@
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
name: ostgt-control-plane
namespace: default
spec:
infrastructureTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
name: ostgt-control-plane
kubeadmConfigSpec:
clusterConfiguration:
apiServer:
extraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
extraVolumes:
- hostPath: /etc/kubernetes/cloud.conf
mountPath: /etc/kubernetes/cloud.conf
name: cloud
readOnly: true
controlPlaneEndpoint: ${OPENSTACK_CONTROL_PLANE_IP}:6443
controllerManager:
extraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
extraVolumes:
- hostPath: /etc/kubernetes/cloud.conf
mountPath: /etc/kubernetes/cloud.conf
name: cloud
readOnly: true
- hostPath: /etc/certs/cacert
mountPath: /etc/certs/cacert
name: cacerts
readOnly: true
imageRepository: k8s.gcr.io
files:
- content: ${CLOUD_CONF_B64}
encoding: base64
owner: root
path: /etc/kubernetes/cloud.conf
permissions: "0600"
- content: ${CLOUD_CERT_B64}
encoding: base64
owner: root
path: /etc/certs/cacert
permissions: "0600"
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
name: '{{ local_hostname }}'
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
name: '{{ local_hostname }}'
postKubeadmCommands:
- sudo kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml
ntp:
servers: []
users:
- name: capo
sshAuthorizedKeys:
- ${OPENSTACK_SSH_KEY}
sudo: ALL=(ALL) NOPASSWD:ALL
replicas: 1
version: v1.17.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
name: ostgt-control-plane
namespace: default
spec:
template:
spec:
cloudName: devstack
cloudsSecret:
name: ostgt-cloud-config
namespace: default
flavor: ${CONTROLPLANE_MACHINE_FLAVOR}
image: ubuntu-1910-kube-v1.17.3
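The template above boots control-plane machines from the `ubuntu-1910-kube-v1.17.3` image, so that image must already exist in the target cloud's image service. A quick pre-flight check, assuming the OpenStack CLI is configured against the same cloud:

```bash
# Verify the Kubernetes-enabled image referenced by the OpenStackMachineTemplate exists in Glance.
openstack image list --name ubuntu-1910-kube-v1.17.3
```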

View File

@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- cluster.yaml
- controlplane.yaml

View File

@ -0,0 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- workers.yaml

View File

@ -0,0 +1,72 @@
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
name: ostgt-md-0
namespace: default
spec:
clusterName: ostgt
replicas: 0
selector:
matchLabels: null
template:
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
name: ostgt-md-0
clusterName: ostgt
failureDomain: nova
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
name: ostgt-md-0
version: v1.17.3
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
name: ostgt-md-0
namespace: default
spec:
template:
spec:
cloudName: devstack
cloudsSecret:
name: ostgt-cloud-config
namespace: default
flavor: ${WORKER_MACHINE_FLAVOR}
image: ubuntu-1910-kube-v1.17.3
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
name: ostgt-md-0
namespace: default
spec:
template:
spec:
files:
- content: ${CLOUD_CONF_B64}
encoding: base64
owner: root
path: /etc/kubernetes/cloud.conf
permissions: "0600"
- content: ${CLOUD_CERT_B64}
encoding: base64
owner: root
path: /etc/certs/cacert
permissions: "0600"
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
cloud-config: /etc/kubernetes/cloud.conf
cloud-provider: openstack
name: '{{ local_hostname }}'
ntp:
servers: []
users:
- name: capo
sshAuthorizedKeys:
- ${OPENSTACK_SSH_KEY}
sudo: ALL=(ALL) NOPASSWD:ALL

View File

@ -0,0 +1,43 @@
apiVersion: airshipit.org/v1alpha1
kind: Clusterctl
metadata:
labels:
airshipit.org/deploy-k8s: "false"
name: clusterctl-v1
init-options:
core-provider: "cluster-api:v0.3.7"
bootstrap-providers:
- "kubeadm:v0.3.7"
infrastructure-providers:
- "openstack:v0.3.1"
control-plane-providers:
- "kubeadm:v0.3.7"
providers:
- name: "openstack"
type: "InfrastructureProvider"
versions:
v0.3.1: manifests/function/capo/v0.3.1
- name: "kubeadm"
type: "BootstrapProvider"
variable-substitution: true
versions:
v0.3.7: manifests/function/cabpk/v0.3.7
- name: "cluster-api"
type: "CoreProvider"
variable-substitution: true
versions:
v0.3.7: manifests/function/capi/v0.3.7
- name: "kubeadm"
type: "ControlPlaneProvider"
variable-substitution: true
versions:
v0.3.7: manifests/function/cacpk/v0.3.7
additional-vars:
CONTAINER_CAPM3_MANAGER: quay.io/metal3-io/cluster-api-provider-metal3:v0.3.2
CONTAINER_CACPK_MANAGER: us.gcr.io/k8s-artifacts-prod/cluster-api/kubeadm-control-plane-controller:v0.3.7
CONTAINER_CABPK_MANAGER: us.gcr.io/k8s-artifacts-prod/cluster-api/kubeadm-bootstrap-controller:v0.3.7
CONTAINER_CAPI_MANAGER: us.gcr.io/k8s-artifacts-prod/cluster-api/cluster-api-controller:v0.3.7
CONTAINER_CAPM3_AUTH_PROXY: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
CONTAINER_CACPK_AUTH_PROXY: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
CONTAINER_CABPK_AUTH_PROXY: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
CONTAINER_CAPI_AUTH_PROXY: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
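Once the providers listed in this Clusterctl document have been installed on the management cluster, a simple sanity check is to confirm their controller pods are running; a sketch, assuming kubectl points at the kind management cluster:

```bash
# The OpenStack provider runs in capo-system; the conversion/validation webhooks for the
# providers are deployed into capi-webhook-system (see the webhook kustomization above).
kubectl get pods -n capo-system
kubectl get pods -n capi-webhook-system
```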

View File

@ -0,0 +1,2 @@
resources:
- clusterctl.yaml

View File

@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: ostgt-cloud-config
namespace: default
data:
cacert: Cg==
clouds.yaml: Y2xvdWRzOgogIGRldnN0YWNrOgogICAgYXV0aDoKICAgICAgYXV0aF91cmw6IGh0dHA6Ly8xMC4wLjEuNC9pZGVudGl0eQogICAgICBwcm9qZWN0X2lkOiAyMThkMWNiYzMyOWM0YWUzYWNjODhjYTU5NTAwMTUwMQogICAgICB1c2VyX2RvbWFpbl9uYW1lOiBEZWZhdWx0CiAgICAgIHVzZXJuYW1lOiBkZW1vCiAgICAgIHBhc3N3b3JkOiBwYXNzCiAgICByZWdpb25fbmFtZTogUmVnaW9uT25lCg==
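The `clouds.yaml` value in this patch is a base64-encoded devstack client config with example credentials. Once the Secret has been applied, its effective contents can be reviewed in place; a sketch, assuming kubectl targets the cluster holding the `ostgt-cloud-config` Secret:

```bash
# Decode the embedded clouds.yaml to confirm the auth_url, project and credentials being shipped.
kubectl get secret ostgt-cloud-config -n default -o jsonpath='{.data.clouds\.yaml}' | base64 -d
```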

View File

@ -0,0 +1,24 @@
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
name: ostgt-control-plane
namespace: default
spec:
infrastructureTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
name: ostgt-control-plane
kubeadmConfigSpec:
files:
- path: /etc/kubernetes/cloud.conf
content: W0dsb2JhbF0KYXV0aC11cmw9aHR0cDovLzEwLjAuMS40L2lkZW50aXR5CnVzZXJuYW1lPSJkZW1vIgpwYXNzd29yZD0icGFzcyIKdGVuYW50LWlkPSIyMThkMWNiYzMyOWM0YWUzYWNjODhjYTU5NTAwMTUwMSIKZG9tYWluLW5hbWU9IkRlZmF1bHQiCnJlZ2lvbj0iUmVnaW9uT25lIgo=
encoding: base64
owner: root
permissions: "0600"
- path: /etc/certs/cacert
content: Cg==
encoding: base64
owner: root
permissions: "0600"
postKubeadmCommands:
- sudo kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://docs.projectcalico.org/v3.15/manifests/calico.yaml

View File

@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/apiServerLoadBalancerFloatingIP","value": "172.24.4.120" }
]

View File

@ -0,0 +1,9 @@
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
name: ostgt-control-plane
namespace: default
spec:
kubeadmConfigSpec:
clusterConfiguration:
controlPlaneEndpoint: 172.24.4.120:6443

View File

@ -0,0 +1,7 @@
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
name: ostgt-control-plane
namespace: default
spec:
replicas: 1

View File

@ -0,0 +1,9 @@
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
name: ostgt-control-plane
namespace: default
spec:
template:
spec:
flavor: ds4G

View File

@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/dnsNameservers/0","value": "8.8.8.8" }
]

View File

@ -0,0 +1,3 @@
[
{ "op": "replace","path": "/spec/externalNetworkId","value": "da57dbbe-c923-4641-b00a-0060d52f6f95" }
]

View File

@ -0,0 +1,35 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../function/k8scontrol-capo
patchesJson6902:
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: OpenStackCluster
name: "ostgt"
path: control_plane_ip.json
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: OpenStackCluster
name: "ostgt"
path: dns_servers.json
- target:
group: infrastructure.cluster.x-k8s.io
version: v1alpha3
kind: OpenStackCluster
name: "ostgt"
path: external_network_id.json
patchesStrategicMerge:
- cluster_clouds_yaml_patch.yaml
- control_plane_ip_patch.yaml
- control_plane_config_patch.yaml
- ssh_key_patch.yaml
- control_plane_machine_count_patch.yaml
- control_plane_machine_flavor_patch.yaml
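Because the control-plane documents are assembled from the shared `k8scontrol-capo` function plus these site-local JSON 6902 and strategic-merge patches, it can help to inspect the fully patched output before applying anything; a sketch using plain kustomize, with the path left as a placeholder for wherever this kustomization sits in the manifests tree:

```bash
# Render the patched Cluster, OpenStackCluster, KubeadmControlPlane and OpenStackMachineTemplate
# documents to stdout without touching the cluster.
kustomize build <path/to/this/controlplane/kustomization>
```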

View File

@ -0,0 +1,11 @@
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
name: ostgt-control-plane
namespace: default
spec:
kubeadmConfigSpec:
users:
- name: capo
sshAuthorizedKeys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5CeROtz81ZJ/2wCUBnK0X67dL01s5KgOQhqlvn2PG1onGoEwKuwVgnUH3CtGJj1wjq4GqWduBoLvPrSt6qGL/tX8ZiVp2XS8SkDcxo69zo4QMoBdjTwfXwASalcEjQr7nRW8eMlgeI8+bRkhDuCBQLTYHoe6jQh/sWKhj25cgeAkU8eqe+bB9C8d7C/DeWBN6AJaBGjX7F2Azm8Fg6ArtxabuEDw3DYvdm+Y4GvLDfJ3MQR/S2etk8lwWaxlDWIfTwVeCDlG098rRRKKtiaF9LFq08+wEPPEBsyX1q9aNNfdcoCZnuSoWZ5ceBMxZSobA3vBy2jHUGz+slR4ADM7P stack@devstack-magnum

View File

@ -0,0 +1,4 @@
resources:
- ../../shared/clusterctl
commonLabels:
airshipit.org/stage: initinfra

View File

@ -0,0 +1,10 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../function/workers-capo
patchesStrategicMerge:
- workers_machine_count_patch.yaml
- workers_machine_flavor_patch.yaml
- workers_cloud_conf_patch.yaml
- workers_ssh_key_patch.yaml

View File

@ -0,0 +1,19 @@
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
name: ostgt-md-0
namespace: default
spec:
template:
spec:
files:
- content: W0dsb2JhbF0KYXV0aC11cmw9aHR0cDovLzEwLjAuMS40L2lkZW50aXR5CnVzZXJuYW1lPSJkZW1vIgpwYXNzd29yZD0icGFzcyIKdGVuYW50LWlkPSIyMThkMWNiYzMyOWM0YWUzYWNjODhjYTU5NTAwMTUwMSIKZG9tYWluLW5hbWU9IkRlZmF1bHQiCnJlZ2lvbj0iUmVnaW9uT25lIgo=
encoding: base64
owner: root
path: /etc/kubernetes/cloud.conf
permissions: "0600"
- content: Cg==
encoding: base64
owner: root
path: /etc/certs/cacert
permissions: "0600"

View File

@ -0,0 +1,8 @@
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
name: ostgt-md-0
namespace: default
spec:
clusterName: ostgt
replicas: 3

View File

@ -0,0 +1,9 @@
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
name: ostgt-md-0
namespace: default
spec:
template:
spec:
flavor: m1.small

View File

@ -0,0 +1,12 @@
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
name: ostgt-md-0
namespace: default
spec:
template:
spec:
users:
- name: capo
sshAuthorizedKeys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5CeROtz81ZJ/2wCUBnK0X67dL01s5KgOQhqlvn2PG1onGoEwKuwVgnUH3CtGJj1wjq4GqWduBoLvPrSt6qGL/tX8ZiVp2XS8SkDcxo69zo4QMoBdjTwfXwASalcEjQr7nRW8eMlgeI8+bRkhDuCBQLTYHoe6jQh/sWKhj25cgeAkU8eqe+bB9C8d7C/DeWBN6AJaBGjX7F2Azm8Fg6ArtxabuEDw3DYvdm+Y4GvLDfJ3MQR/S2etk8lwWaxlDWIfTwVeCDlG098rRRKKtiaF9LFq08+wEPPEBsyX1q9aNNfdcoCZnuSoWZ5ceBMxZSobA3vBy2jHUGz+slR4ADM7P stack@devstack-magnum