Integrate docker provider (capd) with airshipctl

* add documentation for docker provider (capd)
* add manifests for docker provider (capd)
* add cluster templates for control plane and workers
* add site definition to use docker provider (capd) with control plane and workers

Change-Id: I36a643774ce2e468b70ab29568bface3da537d41
parent 99d37b3907
commit 809b6234a5
@@ -31,6 +31,7 @@ Welcome to airshipctl's Documentation!
    testing-guidelines
    virtual_redfish_bmc
    Commands <cli/airshipctl>
+   providers/cluster_api_docker

 .. toctree::
    :caption: Airship Project Documentation

docs/source/providers/cluster_api_docker.md (new executable file, 791 lines)

@@ -0,0 +1,791 @@

# Airshipctl and Cluster API Docker Integration

## Overview

The Airshipctl and Cluster API Docker integration enables `airshipctl` to
create Cluster API management and workload clusters using Docker as the
infrastructure provider.

## Workflow

A simple workflow that can be tested involves the following operations:

**Initialize the management cluster with cluster api and docker provider
components**

> airshipctl cluster init --debug

**Create a workload cluster, with control plane and worker nodes**

> airshipctl phase apply controlplane

> airshipctl phase apply workers

Note: `airshipctl phase apply initinfra` is not used because all the provider
components are initialized by `airshipctl cluster init`. The `initinfra` phase
is included only so that `validate docs` passes.

## Common Pre-requisites

* Install [Docker](https://www.docker.com/)
* Install [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
* Install [Kind](https://kind.sigs.k8s.io/)
* Install [Kustomize](https://kubernetes-sigs.github.io/kustomize/installation/binaries/)
* Install [Airshipctl](https://docs.airshipit.org/airshipctl/developers.html)

Also, check [Software Version Information](#Software-Version-Information),
[Special Instructions](#Special-Instructions) and [Virtual Machine
Specification](#Virtual-Machine-Specification).
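
A quick way to confirm the tools above are available before starting (a
minimal sketch; the `need` helper is ours, not part of airshipctl or any of
the listed tools):

```shell
#!/bin/sh
# Report any required tool that is not on PATH.
need() {
  command -v "$1" >/dev/null 2>&1 || echo "missing: $1"
}

for tool in docker kubectl kind kustomize airshipctl; do
  need "$tool"
done
```

If the script prints nothing, all prerequisites are installed.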

## Getting Started

Kind will be used to set up a Kubernetes cluster that will later be
transformed into a management cluster using airshipctl. The kind cluster will
be initialized with Cluster API and Cluster API Docker provider components.

Run the following command to create a kind configuration file that mounts
docker.sock from the host operating system into the kind cluster. This is
required by the management cluster to create machines on the host as docker
containers.

$ vim kind-cluster-with-extramounts.yaml

```
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /var/run/docker.sock
        containerPath: /var/run/docker.sock
```

Save and exit.

$ export KIND_EXPERIMENTAL_DOCKER_NETWORK=bridge

$ kind create cluster --name capi-docker --config ~/kind-cluster-with-extramounts.yaml

```
Creating cluster "capi-docker" ...
WARNING: Overriding docker network due to KIND_EXPERIMENTAL_DOCKER_NETWORK
WARNING: Here be dragons! This is not supported currently.
 ✓ Ensuring node image (kindest/node:v1.18.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-capi-docker"
You can now use your cluster with:

kubectl cluster-info --context kind-capi-docker
```

Check if all the pods are up.

$ kubectl get pods -A

```
NAMESPACE            NAME                                                READY   STATUS    RESTARTS   AGE
kube-system          coredns-6955765f44-fvg8p                            1/1     Running   0          72s
kube-system          coredns-6955765f44-gm96d                            1/1     Running   0          72s
kube-system          etcd-capi-docker-control-plane                      1/1     Running   0          83s
kube-system          kindnet-7fqv6                                       1/1     Running   0          72s
kube-system          kube-apiserver-capi-docker-control-plane            1/1     Running   0          83s
kube-system          kube-controller-manager-capi-docker-control-plane   1/1     Running   0          83s
kube-system          kube-proxy-gqlnm                                    1/1     Running   0          72s
kube-system          kube-scheduler-capi-docker-control-plane            1/1     Running   0          83s
local-path-storage   local-path-provisioner-7745554f7f-2qcv7             1/1     Running   0          72s
```
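
Rather than eyeballing the list, the same check can be scripted (a hedged
sketch; the `pods_not_ready` helper name is ours). It filters the output of
`kubectl get pods -A --no-headers` for pods whose STATUS column is not yet
Running or Completed:

```shell
#!/bin/sh
# Print namespace, name, and status of pods that are not Running/Completed.
# With -A and --no-headers, STATUS is the 4th column of kubectl's output.
# Usage: kubectl get pods -A --no-headers | pods_not_ready
pods_not_ready() {
  awk '$4 != "Running" && $4 != "Completed" { print $1, $2, $4 }'
}
```

An empty result means every pod in the cluster is up.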

## Create airshipctl configuration files

$ mkdir ~/.airship

$ cp ~/.kube/config ~/.airship/kubeconfig

$ airshipctl config init

$ airshipctl config use-context kind-capi-docker

$ airshipctl config set-manifest docker_manifest --repo primary --url \
  https://opendev.org/airship/airshipctl --branch master --primary \
  --sub-path manifests/site/docker-test-site --target-path /tmp/airship

```
Manifest "docker_manifest" created.
```

$ airshipctl config set-context kind-capi-docker --manifest docker_manifest

```
Context "kind-capi-docker" modified.
```

$ airshipctl config get-context

```
Context: kind-capi-docker
contextKubeconf: kind-capi-docker_target
manifest: docker_manifest

LocationOfOrigin: /home/rishabh/.airship/kubeconfig
cluster: kind-capi-docker_target
user: kind-capi-docker
```

$ airshipctl document pull --debug

```
[airshipctl] 2020/08/12 14:07:13 Reading current context manifest information from /home/rishabh/.airship/config
[airshipctl] 2020/08/12 14:07:13 Downloading primary repository airshipctl from https://review.opendev.org/airship/airshipctl into /tmp/airship
[airshipctl] 2020/08/12 14:07:13 Attempting to download the repository airshipctl
[airshipctl] 2020/08/12 14:07:13 Attempting to clone the repository airshipctl from https://review.opendev.org/airship/airshipctl
[airshipctl] 2020/08/12 14:07:23 Attempting to checkout the repository airshipctl from branch refs/heads/master
```

## Initialize the management cluster

$ airshipctl cluster init --debug

```
[airshipctl] 2020/08/12 14:08:23 Starting cluster-api initiation
Installing the clusterctl inventory CRD
Creating CustomResourceDefinition="providers.clusterctl.cluster.x-k8s.io"
Fetching providers
[airshipctl] 2020/08/12 14:08:23 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
[airshipctl] 2020/08/12 14:08:23 Setting up airshipctl provider Components client
Provider type: CoreProvider, name: cluster-api
[airshipctl] 2020/08/12 14:08:23 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: CoreProvider, name: cluster-api
Fetching File="components.yaml" Provider="cluster-api" Version="v0.3.3"
[airshipctl] 2020/08/12 14:08:23 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capi/v0.3.3
[airshipctl] 2020/08/12 14:08:24 Creating arishipctl repository implementation interface for provider kubeadm of type BootstrapProvider
[airshipctl] 2020/08/12 14:08:24 Setting up airshipctl provider Components client
Provider type: BootstrapProvider, name: kubeadm
[airshipctl] 2020/08/12 14:08:24 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: BootstrapProvider, name: kubeadm
Fetching File="components.yaml" Provider="bootstrap-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/12 14:08:24 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cabpk/v0.3.3
[airshipctl] 2020/08/12 14:08:24 Creating arishipctl repository implementation interface for provider kubeadm of type ControlPlaneProvider
[airshipctl] 2020/08/12 14:08:24 Setting up airshipctl provider Components client
Provider type: ControlPlaneProvider, name: kubeadm
[airshipctl] 2020/08/12 14:08:24 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: ControlPlaneProvider, name: kubeadm
Fetching File="components.yaml" Provider="control-plane-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/12 14:08:24 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cacpk/v0.3.3
[airshipctl] 2020/08/12 14:08:24 Creating arishipctl repository implementation interface for provider docker of type InfrastructureProvider
[airshipctl] 2020/08/12 14:08:24 Setting up airshipctl provider Components client
Provider type: InfrastructureProvider, name: docker
[airshipctl] 2020/08/12 14:08:24 Getting airshipctl provider components, skipping variable substitution: true.
Provider type: InfrastructureProvider, name: docker
Fetching File="components.yaml" Provider="infrastructure-docker" Version="v0.3.7"
[airshipctl] 2020/08/12 14:08:24 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capd/v0.3.7
[airshipctl] 2020/08/12 14:08:24 Creating arishipctl repository implementation interface for provider cluster-api of type CoreProvider
Fetching File="metadata.yaml" Provider="cluster-api" Version="v0.3.3"
[airshipctl] 2020/08/12 14:08:24 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capi/v0.3.3
[airshipctl] 2020/08/12 14:08:25 Creating arishipctl repository implementation interface for provider kubeadm of type BootstrapProvider
Fetching File="metadata.yaml" Provider="bootstrap-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/12 14:08:25 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cabpk/v0.3.3
[airshipctl] 2020/08/12 14:08:25 Creating arishipctl repository implementation interface for provider kubeadm of type ControlPlaneProvider
Fetching File="metadata.yaml" Provider="control-plane-kubeadm" Version="v0.3.3"
[airshipctl] 2020/08/12 14:08:25 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/cacpk/v0.3.3
[airshipctl] 2020/08/12 14:08:25 Creating arishipctl repository implementation interface for provider docker of type InfrastructureProvider
Fetching File="metadata.yaml" Provider="infrastructure-docker" Version="v0.3.7"
[airshipctl] 2020/08/12 14:08:25 Building cluster-api provider component documents from kustomize path at /tmp/airship/airshipctl/manifests/function/capd/v0.3.7
Installing cert-manager
Creating Namespace="cert-manager"
Creating CustomResourceDefinition="challenges.acme.cert-manager.io"
Creating CustomResourceDefinition="orders.acme.cert-manager.io"
Creating CustomResourceDefinition="certificaterequests.cert-manager.io"
Creating CustomResourceDefinition="certificates.cert-manager.io"
Creating CustomResourceDefinition="clusterissuers.cert-manager.io"
Creating CustomResourceDefinition="issuers.cert-manager.io"
Creating ServiceAccount="cert-manager-cainjector" Namespace="cert-manager"
Creating ServiceAccount="cert-manager" Namespace="cert-manager"
Creating ServiceAccount="cert-manager-webhook" Namespace="cert-manager"
Creating ClusterRole="cert-manager-cainjector"
Creating ClusterRoleBinding="cert-manager-cainjector"
Creating Role="cert-manager-cainjector:leaderelection" Namespace="kube-system"
Creating RoleBinding="cert-manager-cainjector:leaderelection" Namespace="kube-system"
Creating ClusterRoleBinding="cert-manager-webhook:auth-delegator"
Creating RoleBinding="cert-manager-webhook:webhook-authentication-reader" Namespace="kube-system"
Creating ClusterRole="cert-manager-webhook:webhook-requester"
Creating Role="cert-manager:leaderelection" Namespace="kube-system"
Creating RoleBinding="cert-manager:leaderelection" Namespace="kube-system"
Creating ClusterRole="cert-manager-controller-issuers"
Creating ClusterRole="cert-manager-controller-clusterissuers"
Creating ClusterRole="cert-manager-controller-certificates"
Creating ClusterRole="cert-manager-controller-orders"
Creating ClusterRole="cert-manager-controller-challenges"
Creating ClusterRole="cert-manager-controller-ingress-shim"
Creating ClusterRoleBinding="cert-manager-controller-issuers"
Creating ClusterRoleBinding="cert-manager-controller-clusterissuers"
Creating ClusterRoleBinding="cert-manager-controller-certificates"
Creating ClusterRoleBinding="cert-manager-controller-orders"
Creating ClusterRoleBinding="cert-manager-controller-challenges"
Creating ClusterRoleBinding="cert-manager-controller-ingress-shim"
Creating ClusterRole="cert-manager-view"
Creating ClusterRole="cert-manager-edit"
Creating Service="cert-manager" Namespace="cert-manager"
Creating Service="cert-manager-webhook" Namespace="cert-manager"
Creating Deployment="cert-manager-cainjector" Namespace="cert-manager"
Creating Deployment="cert-manager" Namespace="cert-manager"
Creating Deployment="cert-manager-webhook" Namespace="cert-manager"
Creating APIService="v1beta1.webhook.cert-manager.io"
Creating MutatingWebhookConfiguration="cert-manager-webhook"
Creating ValidatingWebhookConfiguration="cert-manager-webhook"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Creating shared objects Provider="cluster-api" Version="v0.3.3"
Creating Namespace="capi-webhook-system"
Creating CustomResourceDefinition="clusters.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinedeployments.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinehealthchecks.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinepools.exp.cluster.x-k8s.io"
Creating CustomResourceDefinition="machines.cluster.x-k8s.io"
Creating CustomResourceDefinition="machinesets.cluster.x-k8s.io"
Creating MutatingWebhookConfiguration="capi-mutating-webhook-configuration"
Creating Service="capi-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capi-validating-webhook-configuration"
Creating instance objects Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Creating Namespace="capi-system"
Creating Role="capi-leader-election-role" Namespace="capi-system"
Creating ClusterRole="capi-system-capi-aggregated-manager-role"
Creating ClusterRole="capi-system-capi-manager-role"
Creating ClusterRole="capi-system-capi-proxy-role"
Creating RoleBinding="capi-leader-election-rolebinding" Namespace="capi-system"
Creating ClusterRoleBinding="capi-system-capi-manager-rolebinding"
Creating ClusterRoleBinding="capi-system-capi-proxy-rolebinding"
Creating Service="capi-controller-manager-metrics-service" Namespace="capi-system"
Creating Deployment="capi-controller-manager" Namespace="capi-system"
Creating inventory entry Provider="cluster-api" Version="v0.3.3" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Creating shared objects Provider="bootstrap-kubeadm" Version="v0.3.3"
Creating CustomResourceDefinition="kubeadmconfigs.bootstrap.cluster.x-k8s.io"
Creating CustomResourceDefinition="kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io"
Creating Service="capi-kubeadm-bootstrap-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-kubeadm-bootstrap-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-kubeadm-bootstrap-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-kubeadm-bootstrap-selfsigned-issuer" Namespace="capi-webhook-system"
Creating instance objects Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Creating Namespace="capi-kubeadm-bootstrap-system"
Creating Role="capi-kubeadm-bootstrap-leader-election-role" Namespace="capi-kubeadm-bootstrap-system"
Creating ClusterRole="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-manager-role"
Creating ClusterRole="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-proxy-role"
Creating RoleBinding="capi-kubeadm-bootstrap-leader-election-rolebinding" Namespace="capi-kubeadm-bootstrap-system"
Creating ClusterRoleBinding="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-manager-rolebinding"
Creating ClusterRoleBinding="capi-kubeadm-bootstrap-system-capi-kubeadm-bootstrap-proxy-rolebinding"
Creating Service="capi-kubeadm-bootstrap-controller-manager-metrics-service" Namespace="capi-kubeadm-bootstrap-system"
Creating Deployment="capi-kubeadm-bootstrap-controller-manager" Namespace="capi-kubeadm-bootstrap-system"
Creating inventory entry Provider="bootstrap-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Creating shared objects Provider="control-plane-kubeadm" Version="v0.3.3"
Creating CustomResourceDefinition="kubeadmcontrolplanes.controlplane.cluster.x-k8s.io"
Creating MutatingWebhookConfiguration="capi-kubeadm-control-plane-mutating-webhook-configuration"
Creating Service="capi-kubeadm-control-plane-webhook-service" Namespace="capi-webhook-system"
Creating Deployment="capi-kubeadm-control-plane-controller-manager" Namespace="capi-webhook-system"
Creating Certificate="capi-kubeadm-control-plane-serving-cert" Namespace="capi-webhook-system"
Creating Issuer="capi-kubeadm-control-plane-selfsigned-issuer" Namespace="capi-webhook-system"
Creating ValidatingWebhookConfiguration="capi-kubeadm-control-plane-validating-webhook-configuration"
Creating instance objects Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Creating Namespace="capi-kubeadm-control-plane-system"
Creating Role="capi-kubeadm-control-plane-leader-election-role" Namespace="capi-kubeadm-control-plane-system"
Creating Role="capi-kubeadm-control-plane-manager-role" Namespace="capi-kubeadm-control-plane-system"
Creating ClusterRole="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-manager-role"
Creating ClusterRole="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-proxy-role"
Creating RoleBinding="capi-kubeadm-control-plane-leader-election-rolebinding" Namespace="capi-kubeadm-control-plane-system"
Creating ClusterRoleBinding="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-manager-rolebinding"
Creating ClusterRoleBinding="capi-kubeadm-control-plane-system-capi-kubeadm-control-plane-proxy-rolebinding"
Creating Service="capi-kubeadm-control-plane-controller-manager-metrics-service" Namespace="capi-kubeadm-control-plane-system"
Creating Deployment="capi-kubeadm-control-plane-controller-manager" Namespace="capi-kubeadm-control-plane-system"
Creating inventory entry Provider="control-plane-kubeadm" Version="v0.3.3" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v0.3.7" TargetNamespace="capd-system"
Creating shared objects Provider="infrastructure-docker" Version="v0.3.7"
Creating CustomResourceDefinition="dockerclusters.infrastructure.cluster.x-k8s.io"
Creating CustomResourceDefinition="dockermachines.infrastructure.cluster.x-k8s.io"
Creating CustomResourceDefinition="dockermachinetemplates.infrastructure.cluster.x-k8s.io"
Creating ValidatingWebhookConfiguration="capd-validating-webhook-configuration"
Creating instance objects Provider="infrastructure-docker" Version="v0.3.7" TargetNamespace="capd-system"
Creating Namespace="capd-system"
Creating Role="capd-leader-election-role" Namespace="capd-system"
Creating ClusterRole="capd-system-capd-manager-role"
Creating ClusterRole="capd-system-capd-proxy-role"
Creating RoleBinding="capd-leader-election-rolebinding" Namespace="capd-system"
Creating ClusterRoleBinding="capd-system-capd-manager-rolebinding"
Creating ClusterRoleBinding="capd-system-capd-proxy-rolebinding"
Creating Service="capd-controller-manager-metrics-service" Namespace="capd-system"
Creating Service="capd-webhook-service" Namespace="capd-system"
Creating Deployment="capd-controller-manager" Namespace="capd-system"
Creating Certificate="capd-serving-cert" Namespace="capd-system"
Creating Issuer="capd-selfsigned-issuer" Namespace="capd-system"
Creating inventory entry Provider="infrastructure-docker" Version="v0.3.7" TargetNamespace="capd-system"
```

Wait for all the pods to be up.

$ kubectl get pods -A

```
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-75f5d546d7-frrm5                         2/2     Running   0          77s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-5bb9bfdc46-mhbqz       2/2     Running   0          85s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-77466c7666-t69m5   2/2     Running   0          81s
capi-system                         capi-controller-manager-5798474d9f-tp2c2                         2/2     Running   0          89s
capi-webhook-system                 capi-controller-manager-5d64dd9dfb-r6mb2                         2/2     Running   1          91s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-7c78fff45-dmnlc        2/2     Running   0          88s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-58465bb88f-c6j5q   2/2     Running   0          84s
cert-manager                        cert-manager-69b4f77ffc-8vchm                                    1/1     Running   0          117s
cert-manager                        cert-manager-cainjector-576978ffc8-frsxg                         1/1     Running   0          117s
cert-manager                        cert-manager-webhook-c67fbc858-qxrcj                             1/1     Running   1          117s
kube-system                         coredns-6955765f44-f28p7                                         1/1     Running   0          3m12s
kube-system                         coredns-6955765f44-nq5qk                                         1/1     Running   0          3m12s
kube-system                         etcd-capi-docker-control-plane                                   1/1     Running   0          3m25s
kube-system                         kindnet-nxm6k                                                    1/1     Running   0          3m12s
kube-system                         kube-apiserver-capi-docker-control-plane                         1/1     Running   0          3m25s
kube-system                         kube-controller-manager-capi-docker-control-plane                1/1     Running   0          3m25s
kube-system                         kube-proxy-5jmc5                                                 1/1     Running   0          3m12s
kube-system                         kube-scheduler-capi-docker-control-plane                         1/1     Running   0          3m25s
local-path-storage                  local-path-provisioner-7745554f7f-ms989                          1/1     Running   0          3m12s
```

Now the management cluster is initialized with Cluster API and Cluster API
Docker provider components.

$ kubectl get providers -A

```
NAMESPACE                           NAME                    TYPE                     PROVIDER   VERSION   WATCH NAMESPACE
capd-system                         infrastructure-docker   InfrastructureProvider              v0.3.7
capi-kubeadm-bootstrap-system       bootstrap-kubeadm       BootstrapProvider                   v0.3.3
capi-kubeadm-control-plane-system   control-plane-kubeadm   ControlPlaneProvider                v0.3.3
capi-system                         cluster-api             CoreProvider                        v0.3.3
```
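
The same verification can be scripted (a sketch; `check_providers` is our
helper name, and the expected-provider list is taken from the table above).
It parses `kubectl get providers -A --no-headers` and reports any expected
provider that is absent:

```shell
#!/bin/sh
# Verify the four providers installed by `airshipctl cluster init` exist.
# With -A and --no-headers, NAME is the 2nd column of kubectl's output.
# Usage: kubectl get providers -A --no-headers | check_providers
check_providers() {
  found="$(awk '{ print $2 }')"
  for p in cluster-api bootstrap-kubeadm control-plane-kubeadm infrastructure-docker; do
    echo "$found" | grep -qx "$p" || echo "missing provider: $p"
  done
}
```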

$ docker ps

```
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                       NAMES
b9690cecdcf2        kindest/node:v1.18.2   "/usr/local/bin/entr…"   14 minutes ago      Up 14 minutes       127.0.0.1:32773->6443/tcp   capi-docker-control-plane
```

## Create your first workload cluster

$ airshipctl phase apply controlplane --debug

```
[airshipctl] 2020/08/12 14:10:12 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/docker-test-site/target/controlplane
[airshipctl] 2020/08/12 14:10:12 Applying bundle, inventory id: kind-capi-docker-target-controlplane
[airshipctl] 2020/08/12 14:10:12 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/08/12 14:10:12 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-docker-target-controlplane"},"name":"airshipit-kind-capi-docker-target-controlplane","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/08/12 14:10:12 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-docker-target-controlplane-87efb53a created
cluster.cluster.x-k8s.io/dtc created
machinehealthcheck.cluster.x-k8s.io/dtc-mhc-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/dtc-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/dtc created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-control-plane created
6 resource(s) applied. 6 created, 0 unchanged, 0 configured
machinehealthcheck.cluster.x-k8s.io/dtc-mhc-0 is NotFound: Resource not found
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/dtc-control-plane is NotFound: Resource not found
dockercluster.infrastructure.cluster.x-k8s.io/dtc is NotFound: Resource not found
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-control-plane is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-controlplane-87efb53a is NotFound: Resource not found
cluster.cluster.x-k8s.io/dtc is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-controlplane-87efb53a is Current: Resource is always ready
cluster.cluster.x-k8s.io/dtc is Current: Resource is current
machinehealthcheck.cluster.x-k8s.io/dtc-mhc-0 is Current: Resource is current
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/dtc-control-plane is Current: Resource is current
dockercluster.infrastructure.cluster.x-k8s.io/dtc is Current: Resource is current
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-control-plane is Current: Resource is current
all resources has reached the Current status
```

$ kubectl get pods -A

```
NAMESPACE                           NAME                                                             READY   STATUS    RESTARTS   AGE
capd-system                         capd-controller-manager-75f5d546d7-frrm5                         2/2     Running   0          77s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-5bb9bfdc46-mhbqz       2/2     Running   0          85s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-77466c7666-t69m5   2/2     Running   0          81s
capi-system                         capi-controller-manager-5798474d9f-tp2c2                         2/2     Running   0          89s
capi-webhook-system                 capi-controller-manager-5d64dd9dfb-r6mb2                         2/2     Running   1          91s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-7c78fff45-dmnlc        2/2     Running   0          88s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-58465bb88f-c6j5q   2/2     Running   0          84s
cert-manager                        cert-manager-69b4f77ffc-8vchm                                    1/1     Running   0          117s
cert-manager                        cert-manager-cainjector-576978ffc8-frsxg                         1/1     Running   0          117s
cert-manager                        cert-manager-webhook-c67fbc858-qxrcj                             1/1     Running   1          117s
kube-system                         coredns-6955765f44-f28p7                                         1/1     Running   0          3m12s
kube-system                         coredns-6955765f44-nq5qk                                         1/1     Running   0          3m12s
kube-system                         etcd-capi-docker-control-plane                                   1/1     Running   0          3m25s
kube-system                         kindnet-nxm6k                                                    1/1     Running   0          3m12s
kube-system                         kube-apiserver-capi-docker-control-plane                         1/1     Running   0          3m25s
kube-system                         kube-controller-manager-capi-docker-control-plane                1/1     Running   0          3m25s
kube-system                         kube-proxy-5jmc5                                                 1/1     Running   0          3m12s
kube-system                         kube-scheduler-capi-docker-control-plane                         1/1     Running   0          3m25s
local-path-storage                  local-path-provisioner-7745554f7f-ms989                          1/1     Running   0          3m12s
```

$ kubectl logs capd-controller-manager-75f5d546d7-frrm5 -n capd-system --all-containers=true -f

```
I0812 21:11:24.761608 1 controller.go:272] controller-runtime/controller "msg"="Successfully Reconciled" "controller"="dockermachine" "name"="dtc-control-plane-zc5bw" "namespace"="default"
I0812 21:11:25.189401 1 controller.go:272] controller-runtime/controller "msg"="Successfully Reconciled" "controller"="dockermachine" "name"="dtc-control-plane-zc5bw" "namespace"="default"
I0812 21:11:26.219320 1 generic_predicates.go:38] controllers/DockerMachine "msg"="One of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpausedAndInfrastructureReady" "predicateAggregation"="All"
I0812 21:11:26.219774 1 cluster_predicates.go:143] controllers/DockerMachine "msg"="Cluster was not unpaused, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateUnpaused"
I0812 21:11:26.222004 1 cluster_predicates.go:111] controllers/DockerMachine "msg"="Cluster infrastructure did not become ready, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateInfraReady"
I0812 21:11:26.223003 1 generic_predicates.go:89] controllers/DockerMachine "msg"="All of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpausedAndInfrastructureReady" "predicateAggregation"="Any"
I0812 21:11:26.223239 1 generic_predicates.go:89] controllers/DockerMachine "msg"="All of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpausedAndInfrastructureReady" "predicateAggregation"="Any"
I0812 21:11:26.219658 1 cluster_predicates.go:143] controllers/DockerCluster "msg"="Cluster was not unpaused, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateUnpaused"
I0812 21:11:26.229665 1 generic_predicates.go:89] controllers/DockerCluster "msg"="All of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpaused" "predicateAggregation"="Any"
```

$ kubectl get machines

```
NAME                      PROVIDERID                               PHASE
dtc-control-plane-p4fsx   docker:////dtc-dtc-control-plane-p4fsx   Running
```
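
Machine readiness can be checked the same way (a sketch; the
`machines_not_running` helper name is ours). The PHASE column is the last
field of `kubectl get machines --no-headers` output, even for machines whose
PROVIDERID is still empty:

```shell
#!/bin/sh
# Print name and phase of machines whose PHASE is not yet Running.
# Usage: kubectl get machines --no-headers | machines_not_running
machines_not_running() {
  awk '$NF != "Running" { print $1, $NF }'
}
```

An empty result means all machines have reached the Running phase.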

$ airshipctl phase apply workers --debug

```
[airshipctl] 2020/08/12 14:11:55 building bundle from kustomize path /tmp/airship/airshipctl/manifests/site/docker-test-site/target/worker
[airshipctl] 2020/08/12 14:11:55 Applying bundle, inventory id: kind-capi-docker-target-worker
[airshipctl] 2020/08/12 14:11:55 Inventory Object config Map not found, auto generating Invetory object
[airshipctl] 2020/08/12 14:11:55 Injecting Invetory Object: {"apiVersion":"v1","kind":"ConfigMap","metadata":{"creationTimestamp":null,"labels":{"cli-utils.sigs.k8s.io/inventory-id":"kind-capi-docker-target-worker"},"name":"airshipit-kind-capi-docker-target-worker","namespace":"airshipit"}}{nsfx:false,beh:unspecified} into bundle
[airshipctl] 2020/08/12 14:11:55 Making sure that inventory object namespace airshipit exists
configmap/airshipit-kind-capi-docker-target-worker-b56f83 created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/dtc-md-0 created
machinedeployment.cluster.x-k8s.io/dtc-md-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-md-0 created
4 resource(s) applied. 4 created, 0 unchanged, 0 configured
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-md-0 is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-worker-b56f83 is NotFound: Resource not found
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/dtc-md-0 is NotFound: Resource not found
machinedeployment.cluster.x-k8s.io/dtc-md-0 is NotFound: Resource not found
configmap/airshipit-kind-capi-docker-target-worker-b56f83 is Current: Resource is always ready
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/dtc-md-0 is Current: Resource is current
machinedeployment.cluster.x-k8s.io/dtc-md-0 is Current: Resource is current
dockermachinetemplate.infrastructure.cluster.x-k8s.io/dtc-md-0 is Current: Resource is current
```

$ kubectl get machines

```
NAME                       PROVIDERID                               PHASE
dtc-control-plane-p4fsx    docker:////dtc-dtc-control-plane-p4fsx   Running
dtc-md-0-94c79cf9c-8ct2g                                            Provisioning
```

$ kubectl logs capd-controller-manager-75f5d546d7-frrm5 -n capd-system --all-containers=true -f

```
I0812 21:10:14.071166       1 cluster_predicates.go:111] controllers/DockerMachine "msg"="Cluster infrastructure did not become ready, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateInfraReady"
I0812 21:10:14.071204       1 generic_predicates.go:89] controllers/DockerMachine "msg"="All of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpausedAndInfrastructureReady" "predicateAggregation"="Any"
I0812 21:10:14.071325       1 generic_predicates.go:89] controllers/DockerMachine "msg"="All of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpausedAndInfrastructureReady" "predicateAggregation"="Any"
I0812 21:10:14.082937       1 generic_predicates.go:38] controllers/DockerMachine "msg"="One of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpausedAndInfrastructureReady" "predicateAggregation"="All"
I0812 21:10:14.082981       1 cluster_predicates.go:143] controllers/DockerMachine "msg"="Cluster was not unpaused, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateUnpaused"
I0812 21:10:14.082994       1 cluster_predicates.go:143] controllers/DockerCluster "msg"="Cluster was not unpaused, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateUnpaused"
I0812 21:10:14.083012       1 cluster_predicates.go:111] controllers/DockerMachine "msg"="Cluster infrastructure did not become ready, blocking further processing" "cluster"="dtc" "eventType"="update" "namespace"="default" "predicate"="ClusterUpdateInfraReady"
I0812 21:10:14.083024       1 generic_predicates.go:89] controllers/DockerCluster "msg"="All of the provided predicates returned false, blocking further processing" "predicate"="ClusterUnpaused" "predicateAggregation"="Any"
I0812 21:10:14.083036       1 generic_predicates.go:89] controllers/DockerMachine "msg"="All of the provided predicates returned false, blocking further processing"
```

$ kubectl get machines

```
NAME                       PROVIDERID                                PHASE
dtc-control-plane-p4fsx    docker:////dtc-dtc-control-plane-p4fsx    Running
dtc-md-0-94c79cf9c-8ct2g   docker:////dtc-dtc-md-0-94c79cf9c-8ct2g   Running
```

$ kubectl --namespace=default get secret/dtc-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./dtc.kubeconfig

$ kubectl get nodes --kubeconfig dtc.kubeconfig

```
NAME                           STATUS   ROLES    AGE     VERSION
dtc-dtc-control-plane-p4fsx    Ready    master   5m45s   v1.18.6
dtc-dtc-md-0-94c79cf9c-8ct2g   Ready    <none>   4m45s   v1.18.6
```

$ kubectl get pods -A --kubeconfig dtc.kubeconfig

```
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-59b699859f-xp8dv              1/1     Running   0          5m40s
kube-system   calico-node-5drwf                                     1/1     Running   0          5m39s
kube-system   calico-node-bqw5j                                     1/1     Running   0          4m53s
kube-system   coredns-6955765f44-8kg27                              1/1     Running   0          5m40s
kube-system   coredns-6955765f44-lqzzq                              1/1     Running   0          5m40s
kube-system   etcd-dtc-dtc-control-plane-p4fsx                      1/1     Running   0          5m49s
kube-system   kube-apiserver-dtc-dtc-control-plane-p4fsx            1/1     Running   0          5m49s
kube-system   kube-controller-manager-dtc-dtc-control-plane-p4fsx   1/1     Running   0          5m49s
kube-system   kube-proxy-cjcls                                      1/1     Running   0          5m39s
kube-system   kube-proxy-fkvpc                                      1/1     Running   0          4m53s
kube-system   kube-scheduler-dtc-dtc-control-plane-p4fsx            1/1     Running   0          5m49s
```

## Reference

### Provider Manifests

Provider configuration is referenced from
[config](https://github.com/kubernetes-sigs/cluster-api/tree/master/test/infrastructure/docker/config).
Cluster API does not support Docker out of the box. Therefore, the metadata
information is added using files in `airshipctl/manifests/function/capd/data`.

$ tree airshipctl/manifests/function/capd

```
airshipctl/manifests/function/capd
└── v0.3.7
    ├── certmanager
    │   ├── certificate.yaml
    │   ├── kustomization.yaml
    │   └── kustomizeconfig.yaml
    ├── crd
    │   ├── bases
    │   │   ├── infrastructure.cluster.x-k8s.io_dockerclusters.yaml
    │   │   ├── infrastructure.cluster.x-k8s.io_dockermachines.yaml
    │   │   └── infrastructure.cluster.x-k8s.io_dockermachinetemplates.yaml
    │   ├── kustomization.yaml
    │   ├── kustomizeconfig.yaml
    │   └── patches
    │       ├── cainjection_in_dockerclusters.yaml
    │       ├── cainjection_in_dockermachines.yaml
    │       ├── webhook_in_dockerclusters.yaml
    │       └── webhook_in_dockermachines.yaml
    ├── data
    │   ├── kustomization.yaml
    │   └── metadata.yaml
    ├── default
    │   ├── kustomization.yaml
    │   └── namespace.yaml
    ├── kustomization.yaml
    ├── manager
    │   ├── kustomization.yaml
    │   ├── manager_auth_proxy_patch.yaml
    │   ├── manager_image_patch.yaml
    │   ├── manager_prometheus_metrics_patch.yaml
    │   ├── manager_pull_policy.yaml
    │   └── manager.yaml
    ├── rbac
    │   ├── auth_proxy_role_binding.yaml
    │   ├── auth_proxy_role.yaml
    │   ├── auth_proxy_service.yaml
    │   ├── kustomization.yaml
    │   ├── leader_election_role_binding.yaml
    │   ├── leader_election_role.yaml
    │   ├── role_binding.yaml
    │   └── role.yaml
    └── webhook
        ├── kustomization.yaml
        ├── kustomizeconfig.yaml
        ├── manager_webhook_patch.yaml
        ├── manifests.yaml
        ├── service.yaml
        └── webhookcainjection_patch.yaml

10 directories, 37 files
```

### Cluster Templates

`manifests/function/k8scontrol-capd` contains the `cluster.yaml` and
`controlplane.yaml` templates, referenced from
[cluster-template](https://github.com/kubernetes-sigs/cluster-api/blob/master/test/e2e/data/infrastructure-docker/cluster-template.yaml).

| Template Name     | CRDs |
| ----------------- | ---- |
| cluster.yaml      | Cluster, DockerCluster |
| controlplane.yaml | KubeadmControlPlane, DockerMachineTemplate, MachineHealthCheck |

$ tree airshipctl/manifests/function/k8scontrol-capd

```
airshipctl/manifests/function/k8scontrol-capd
├── cluster.yaml
├── controlplane.yaml
└── kustomization.yaml
```
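
For orientation, `cluster.yaml` pairs a `Cluster` with a `DockerCluster`. The
following is only a minimal sketch based on the upstream cluster-template; the
names, namespace, and pod CIDR here are illustrative assumptions, not the exact
values used by the site manifests:

```yaml
# Sketch only: names and CIDR are assumptions, not the site's exact values.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: dtc
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: dtc-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerCluster
    name: dtc
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: DockerCluster
metadata:
  name: dtc
  namespace: default
```

The `infrastructureRef` is what ties the generic `Cluster` object to the Docker
provider's `DockerCluster` resource.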

`airshipctl/manifests/function/workers-capd` contains `workers.yaml`, referenced
from
[cluster-template](https://github.com/kubernetes-sigs/cluster-api/blob/master/test/e2e/data/infrastructure-docker/cluster-template.yaml).

| Template Name | CRDs |
| ------------- | ---- |
| workers.yaml  | KubeadmConfigTemplate, MachineDeployment, DockerMachineTemplate |

$ tree airshipctl/manifests/function/workers-capd

```
airshipctl/manifests/function/workers-capd
├── kustomization.yaml
└── workers.yaml
```
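
As a sketch of what `workers.yaml` wires together (field values here are
illustrative assumptions based on the upstream cluster-template, not the exact
site values), the `MachineDeployment` references both the bootstrap and the
infrastructure templates:

```yaml
# Sketch only: names, replica count, and Kubernetes version are assumptions.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: dtc-md-0
  namespace: default
spec:
  clusterName: dtc
  replicas: 1
  selector:
    matchLabels: null
  template:
    spec:
      clusterName: dtc
      version: v1.18.6
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: dtc-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: DockerMachineTemplate
        name: dtc-md-0
```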

### Test Site Manifests

#### docker-test-site/shared

`airshipctl cluster init` uses `airshipctl/manifests/site/docker-test-site/shared/clusterctl`
to initialize the management cluster with the defined provider components and versions.

$ tree airshipctl/manifests/site/docker-test-site/shared

```
/tmp/airship/airshipctl/manifests/site/docker-test-site/shared
└── clusterctl
    ├── clusterctl.yaml
    └── kustomization.yaml

1 directory, 2 files
```

#### docker-test-site/target

There are 3 phases currently available in `docker-test-site/target`:

| Phase Name   | Purpose |
|--------------|---------|
| controlplane | Patches templates in manifests/function/k8scontrol-capd |
| workers      | Patches templates in manifests/function/workers-capd    |
| initinfra    | Simply calls `docker-test-site/shared/clusterctl`       |

Note: `airshipctl cluster init` initializes all the provider components,
including the docker infrastructure provider component. As a result,
`airshipctl phase apply initinfra` is not used.

At the moment, the `initinfra` phase is present for two reasons:

- `airshipctl` complains if the phase is not found
- it allows the `validate site docs` check to pass

#### Patch Merge Strategy

A JSON patch that sets the control plane machine count is applied to the
templates in `manifests/function/k8scontrol-capd` from
`airshipctl/manifests/site/docker-test-site/target/controlplane` when
`airshipctl phase apply controlplane` is executed.

A JSON patch that sets the workers machine count is applied to the templates in
`manifests/function/workers-capd` from
`airshipctl/manifests/site/docker-test-site/target/workers` when
`airshipctl phase apply workers` is executed.

| Patch Name                      | Purpose                                                                  |
| ------------------------------- | ------------------------------------------------------------------------ |
| controlplane/machine_count.json | patches control plane machine count in template function/k8scontrol-capd |
| workers/machine_count.json      | patches workers machine count in template function/workers-capd          |

$ tree airshipctl/manifests/site/docker-test-site/target/

```
airshipctl/manifests/site/docker-test-site/target/
├── controlplane
│   ├── kustomization.yaml
│   └── machine_count.json
├── initinfra
│   └── kustomization.yaml
└── workers
    ├── kustomization.yaml
    └── machine_count.json

3 directories, 7 files
```
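
The shape of such a `machine_count.json` can be sketched as an RFC 6902 JSON
patch; the exact `path` and `value` in the site manifests may differ:

```json
[
  { "op": "replace", "path": "/spec/replicas", "value": 1 }
]
```

Kustomize applies each operation in the list against the resource selected by
the phase's `kustomization.yaml`, so the template's replica count is overridden
without editing the shared function manifests.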

### Software Version Information

All the instructions provided in this document have been tested using the
software versions listed in this section.

#### Virtual Machine Specification

All the instructions in this document were performed on an Oracle VirtualBox
(6.1) VM running Ubuntu 18.04.4 LTS (Bionic Beaver) with 16 GB of memory and
4 vCPUs.

#### Docker

$ docker version

```
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d988398e7
 Built:             Fri May 15 00:25:18 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.9
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       9d988398e7
  Built:            Fri May 15 00:23:50 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
```

#### Kind

$ kind version

```
kind v0.8.1 go1.14.2 linux/amd64
```

#### Kubectl

$ kubectl version

```
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2020-01-14T00:09:19Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
```

#### Go

$ go version

```
go version go1.14.1 linux/amd64
```

#### Kustomize

$ kustomize version

```
{Version:kustomize/v3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T14:08:42Z GoOs:linux GoArch:amd64}
```

#### OS

$ cat /etc/os-release

```
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```

## Special Instructions

Swap was disabled on the VM using `sudo swapoff -a`.
24
manifests/function/capd/v0.3.7/certmanager/certificate.yaml
Normal file
@ -0,0 +1,24 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: serving-cert # this name should match the one appeared in kustomizeconfig.yaml
  namespace: system
spec:
  # $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
  dnsNames:
  - $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc
  - $(SERVICE_NAME).$(SERVICE_NAMESPACE).svc.cluster.local
  issuerRef:
    kind: Issuer
    name: selfsigned-issuer
  secretName: $(SERVICE_NAME)-cert # this secret will not be prefixed, since it's not managed by kustomize
@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- certificate.yaml

configurations:
- kustomizeconfig.yaml
@ -0,0 +1,19 @@
# This configuration is for teaching kustomize how to update name ref and var substitution
nameReference:
- kind: Issuer
  group: cert-manager.io
  fieldSpecs:
  - kind: Certificate
    group: cert-manager.io
    path: spec/issuerRef/name

varReference:
- kind: Certificate
  group: cert-manager.io
  path: spec/commonName
- kind: Certificate
  group: cert-manager.io
  path: spec/dnsNames
- kind: Certificate
  group: cert-manager.io
  path: spec/secretName
@ -0,0 +1,164 @@

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.2.9
  creationTimestamp: null
  name: dockerclusters.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: DockerCluster
    listKind: DockerClusterList
    plural: dockerclusters
    singular: dockercluster
  scope: Namespaced
  versions:
  - name: v1alpha3
    schema:
      openAPIV3Schema:
        description: DockerCluster is the Schema for the dockerclusters API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DockerClusterSpec defines the desired state of DockerCluster.
            properties:
              controlPlaneEndpoint:
                description: ControlPlaneEndpoint represents the endpoint used to
                  communicate with the control plane.
                properties:
                  host:
                    description: Host is the hostname on which the API server is serving.
                    type: string
                  port:
                    description: Port is the port on which the API server is serving.
                    type: integer
                required:
                - host
                - port
                type: object
              failureDomains:
                additionalProperties:
                  description: FailureDomainSpec is the Schema for Cluster API failure
                    domains. It allows controllers to understand how many failure
                    domains a cluster can optionally span across.
                  properties:
                    attributes:
                      additionalProperties:
                        type: string
                      description: Attributes is a free form map of attributes an
                        infrastructure provider might use or require.
                      type: object
                    controlPlane:
                      description: ControlPlane determines if this failure domain
                        is suitable for use by control plane machines.
                      type: boolean
                  type: object
                description: FailureDomains are not usulaly defined on the spec. The
                  docker provider is special since failure domains don't mean anything
                  in a local docker environment. Instead, the docker cluster controller
                  will simply copy these into the Status and allow the Cluster API
                  controllers to do what they will with the defined failure domains.
                type: object
            type: object
          status:
            description: DockerClusterStatus defines the observed state of DockerCluster.
            properties:
              conditions:
                description: Conditions defines current service state of the DockerCluster.
                items:
                  description: Condition defines an observation of a Cluster API resource
                    operational state.
                  properties:
                    lastTransitionTime:
                      description: Last time the condition transitioned from one status
                        to another. This should be when the underlying condition changed.
                        If that is not known, then using the time when the API field
                        changed is acceptable.
                      format: date-time
                      type: string
                    message:
                      description: A human readable message indicating details about
                        the transition. This field may be empty.
                      type: string
                    reason:
                      description: The reason for the condition's last transition
                        in CamelCase. The specific API may choose whether or not this
                        field is considered a guaranteed API. This field may not be
                        empty.
                      type: string
                    severity:
                      description: Severity provides an explicit classification of
                        Reason code, so the users or machines can immediately understand
                        the current situation and act accordingly. The Severity field
                        MUST be set only when Status=False.
                      type: string
                    status:
                      description: Status of the condition, one of True, False, Unknown.
                      type: string
                    type:
                      description: Type of condition in CamelCase or in foo.example.com/CamelCase.
                        Many .condition.type values are consistent across resources
                        like Available, but because arbitrary conditions can be useful
                        (see .node.status.conditions), the ability to deconflict is
                        important.
                      type: string
                  required:
                  - status
                  - type
                  type: object
                type: array
              failureDomains:
                additionalProperties:
                  description: FailureDomainSpec is the Schema for Cluster API failure
                    domains. It allows controllers to understand how many failure
                    domains a cluster can optionally span across.
                  properties:
                    attributes:
                      additionalProperties:
                        type: string
                      description: Attributes is a free form map of attributes an
                        infrastructure provider might use or require.
                      type: object
                    controlPlane:
                      description: ControlPlane determines if this failure domain
                        is suitable for use by control plane machines.
                      type: boolean
                  type: object
                description: FailureDomains don't mean much in CAPD since it's all
                  local, but we can see how the rest of cluster API will use this
                  if we populate it.
                type: object
              ready:
                description: Ready denotes that the docker cluster (infrastructure)
                  is ready.
                type: boolean
            required:
            - ready
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
@ -0,0 +1,167 @@

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.2.9
  creationTimestamp: null
  name: dockermachines.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: DockerMachine
    listKind: DockerMachineList
    plural: dockermachines
    singular: dockermachine
  scope: Namespaced
  versions:
  - name: v1alpha3
    schema:
      openAPIV3Schema:
        description: DockerMachine is the Schema for the dockermachines API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DockerMachineSpec defines the desired state of DockerMachine
            properties:
              bootstrapped:
                description: Bootstrapped is true when the kubeadm bootstrapping has
                  been run against this machine
                type: boolean
              customImage:
                description: CustomImage allows customizing the container image that
                  is used for running the machine
                type: string
              extraMounts:
                description: ExtraMounts describes additional mount points for the
                  node container These may be used to bind a hostPath
                items:
                  description: Mount specifies a host volume to mount into a container.
                    This is a simplified version of kind v1alpha4.Mount types
                  properties:
                    containerPath:
                      description: Path of the mount within the container.
                      type: string
                    hostPath:
                      description: Path of the mount on the host. If the hostPath
                        doesn't exist, then runtimes should report error. If the hostpath
                        is a symbolic link, runtimes should follow the symlink and
                        mount the real destination to container.
                      type: string
                    readOnly:
                      description: If set, the mount is read-only.
                      type: boolean
                  type: object
                type: array
              preLoadImages:
                description: PreLoadImages allows to pre-load images in a newly created
                  machine. This can be used to speed up tests by avoiding e.g. to
                  download CNI images on all the containers.
                items:
                  type: string
                type: array
              providerID:
                description: ProviderID will be the container name in ProviderID format
                  (docker:////<containername>)
                type: string
            type: object
          status:
            description: DockerMachineStatus defines the observed state of DockerMachine
            properties:
              addresses:
                description: Addresses contains the associated addresses for the docker
                  machine.
                items:
                  description: MachineAddress contains information for the node's
                    address.
                  properties:
                    address:
                      description: The machine address.
                      type: string
                    type:
                      description: Machine address type, one of Hostname, ExternalIP
                        or InternalIP.
                      type: string
                  required:
                  - address
                  - type
                  type: object
                type: array
              conditions:
                description: Conditions defines current service state of the DockerMachine.
                items:
                  description: Condition defines an observation of a Cluster API resource
                    operational state.
                  properties:
                    lastTransitionTime:
                      description: Last time the condition transitioned from one status
                        to another. This should be when the underlying condition changed.
                        If that is not known, then using the time when the API field
                        changed is acceptable.
                      format: date-time
                      type: string
                    message:
                      description: A human readable message indicating details about
                        the transition. This field may be empty.
                      type: string
                    reason:
                      description: The reason for the condition's last transition
                        in CamelCase. The specific API may choose whether or not this
                        field is considered a guaranteed API. This field may not be
                        empty.
                      type: string
                    severity:
                      description: Severity provides an explicit classification of
                        Reason code, so the users or machines can immediately understand
                        the current situation and act accordingly. The Severity field
                        MUST be set only when Status=False.
                      type: string
                    status:
                      description: Status of the condition, one of True, False, Unknown.
                      type: string
                    type:
                      description: Type of condition in CamelCase or in foo.example.com/CamelCase.
                        Many .condition.type values are consistent across resources
                        like Available, but because arbitrary conditions can be useful
                        (see .node.status.conditions), the ability to deconflict is
                        important.
                      type: string
                  required:
                  - status
                  - type
                  type: object
                type: array
              loadBalancerConfigured:
                description: LoadBalancerConfigured denotes that the machine has been
                  added to the load balancer
                type: boolean
              ready:
                description: Ready denotes that the machine (docker container) is
                  ready
                type: boolean
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
@ -0,0 +1,107 @@

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.2.9
  creationTimestamp: null
  name: dockermachinetemplates.infrastructure.cluster.x-k8s.io
spec:
  group: infrastructure.cluster.x-k8s.io
  names:
    categories:
    - cluster-api
    kind: DockerMachineTemplate
    listKind: DockerMachineTemplateList
    plural: dockermachinetemplates
    singular: dockermachinetemplate
  scope: Namespaced
  versions:
  - name: v1alpha3
    schema:
      openAPIV3Schema:
        description: DockerMachineTemplate is the Schema for the dockermachinetemplates
          API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: DockerMachineTemplateSpec defines the desired state of DockerMachineTemplate
            properties:
              template:
                description: DockerMachineTemplateResource describes the data needed
                  to create a DockerMachine from a template
                properties:
                  spec:
                    description: Spec is the specification of the desired behavior
                      of the machine.
                    properties:
                      bootstrapped:
                        description: Bootstrapped is true when the kubeadm bootstrapping
                          has been run against this machine
                        type: boolean
                      customImage:
                        description: CustomImage allows customizing the container
                          image that is used for running the machine
                        type: string
                      extraMounts:
                        description: ExtraMounts describes additional mount points
                          for the node container These may be used to bind a hostPath
                        items:
                          description: Mount specifies a host volume to mount into
                            a container. This is a simplified version of kind v1alpha4.Mount
                            types
                          properties:
                            containerPath:
                              description: Path of the mount within the container.
                              type: string
                            hostPath:
                              description: Path of the mount on the host. If the hostPath
                                doesn't exist, then runtimes should report error.
                                If the hostpath is a symbolic link, runtimes should
                                follow the symlink and mount the real destination
                                to container.
                              type: string
                            readOnly:
                              description: If set, the mount is read-only.
                              type: boolean
                          type: object
                        type: array
                      preLoadImages:
                        description: PreLoadImages allows to pre-load images in a
                          newly created machine. This can be used to speed up tests
                          by avoiding e.g. to download CNI images on all the containers.
                        items:
                          type: string
                        type: array
                      providerID:
                        description: ProviderID will be the container name in ProviderID
                          format (docker:////<containername>)
                        type: string
                    type: object
                required:
                - spec
                type: object
            required:
            - template
            type: object
        type: object
    served: true
    storage: true
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
manifests/function/capd/v0.3.7/crd/kustomization.yaml (new file)
@@ -0,0 +1,30 @@
commonLabels:
  cluster.x-k8s.io/v1alpha3: v1alpha3

# This kustomization.yaml is not intended to be run by itself,
# since it depends on service name and namespace that are out of this kustomize package.
# It should be run by config/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bases/infrastructure.cluster.x-k8s.io_dockermachines.yaml
- bases/infrastructure.cluster.x-k8s.io_dockerclusters.yaml
- bases/infrastructure.cluster.x-k8s.io_dockermachinetemplates.yaml
# +kubebuilder:scaffold:crdkustomizeresource

patchesStrategicMerge: []
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
#- patches/webhook_in_dockermachines.yaml
#- patches/webhook_in_dockerclusters.yaml
# +kubebuilder:scaffold:crdkustomizewebhookpatch

# [CERTMANAGER] To enable webhook, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
#- patches/cainjection_in_dockermachines.yaml
#- patches/cainjection_in_dockerclusters.yaml
# +kubebuilder:scaffold:crdkustomizecainjectionpatch

# the following config is for teaching kustomize how to do kustomization for CRDs.
configurations:
- kustomizeconfig.yaml
manifests/function/capd/v0.3.7/crd/kustomizeconfig.yaml (new file)
@@ -0,0 +1,17 @@
# This file is for teaching kustomize how to substitute name and namespace reference in CRD
nameReference:
- kind: Service
  version: v1
  fieldSpecs:
  - kind: CustomResourceDefinition
    group: apiextensions.k8s.io
    path: spec/conversion/webhook/clientConfig/service/name

namespace:
- kind: CustomResourceDefinition
  group: apiextensions.k8s.io
  path: spec/conversion/webhook/clientConfig/service/namespace
  create: false

varReference:
- path: metadata/annotations
@@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
  name: dockerclusters.infrastructure.cluster.x-k8s.io

@@ -0,0 +1,8 @@
# The following patch adds a directive for certmanager to inject CA into the CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
  name: dockermachines.infrastructure.cluster.x-k8s.io

@@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dockerclusters.infrastructure.cluster.x-k8s.io
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        # this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
        # but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
        caBundle: Cg==
        service:
          namespace: system
          name: webhook-service
          path: /convert

@@ -0,0 +1,19 @@
# The following patch enables conversion webhook for CRD
# CRD conversion requires k8s 1.13 or later.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: dockermachines.infrastructure.cluster.x-k8s.io
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        # this is "\n" used as a placeholder, otherwise it will be rejected by the apiserver for being blank,
        # but we're going to set it later using the cert-manager (or potentially a patch if not using cert-manager)
        caBundle: Cg==
        service:
          namespace: system
          name: webhook-service
          path: /convert
manifests/function/capd/v0.3.7/data/kustomization.yaml (new file)
@@ -0,0 +1,2 @@
resources:
- metadata.yaml

manifests/function/capd/v0.3.7/data/metadata.yaml (new file)
@@ -0,0 +1,14 @@
---
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Metadata
metadata:
  name: repository-metadata
  labels:
    airshipit.org/deploy-k8s: "false"
releaseSeries:
- major: 0
  minor: 3
  contract: v1alpha3
- major: 0
  minor: 2
  contract: v1alpha2
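The `releaseSeries` entries above tell clusterctl which cluster-api API contract each minor release of the provider implements. A minimal sketch of that lookup, under the assumption that matching is done on the major/minor pair of the version (function name and data layout here are illustrative, not clusterctl's actual code):

```python
# Illustrative model of a clusterctl release-series lookup (not the real implementation).
RELEASE_SERIES = [
    {"major": 0, "minor": 3, "contract": "v1alpha3"},
    {"major": 0, "minor": 2, "contract": "v1alpha2"},
]

def contract_for(version: str) -> str:
    """Return the API contract for a provider version string like 'v0.3.7'."""
    major, minor, _patch = version.lstrip("v").split(".")
    for series in RELEASE_SERIES:
        if series["major"] == int(major) and series["minor"] == int(minor):
            return series["contract"]
    raise ValueError(f"no release series for {version}")

print(contract_for("v0.3.7"))  # v1alpha3
```

So the `docker:v0.3.7` provider pinned elsewhere in this change resolves to the `v1alpha3` contract.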
@@ -0,0 +1,9 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: capd-system

resources:
- namespace.yaml

bases:
- ../rbac

manifests/function/capd/v0.3.7/default/namespace.yaml (new file)
@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
manifests/function/capd/v0.3.7/kustomization.yaml (new file)
@@ -0,0 +1,10 @@
namePrefix: capd-

commonLabels:
  cluster.x-k8s.io/provider: "infrastructure-docker"

resources:
- crd
- default
- webhook
- data

@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- manager.yaml

patchesStrategicMerge:
- manager_image_patch.yaml
- manager_auth_proxy_patch.yaml
manifests/function/capd/v0.3.7/manager/manager.yaml (new file)
@@ -0,0 +1,44 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
  labels:
    control-plane: controller-manager
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --enable-leader-election
        image: controller:latest
        name: manager
        ports:
        - containerPort: 9440
          name: healthz
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: healthz
        livenessProbe:
          httpGet:
            path: /healthz
            port: healthz
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: dockersock
        securityContext:
          privileged: true
      terminationGracePeriodSeconds: 10
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
@@ -0,0 +1,25 @@
# This patch inject a sidecar container which is a HTTP proxy for the controller manager,
# it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.0
        args:
        - "--secure-listen-address=0.0.0.0:8443"
        - "--upstream=http://127.0.0.1:8080/"
        - "--logtostderr=true"
        - "--v=10"
        ports:
        - containerPort: 8443
          name: https
      - name: manager
        args:
        - "--metrics-addr=0"
        - "-v=4"

@@ -0,0 +1,12 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      # Change the value of image field below to your controller image URL
      - image: gcr.io/k8s-staging-cluster-api/capd-manager:master
        name: manager

@@ -0,0 +1,19 @@
# This patch enables Prometheus scraping for the manager pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      # Expose the prometheus metrics on default port
      - name: manager
        ports:
        - containerPort: 8080
          name: metrics
          protocol: TCP

@@ -0,0 +1,11 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: manager
        imagePullPolicy: Always
manifests/function/capd/v0.3.7/rbac/auth_proxy_role.yaml (new file)
@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: proxy-role
rules:
- apiGroups: ["authentication.k8s.io"]
  resources:
  - tokenreviews
  verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
  resources:
  - subjectaccessreviews
  verbs: ["create"]

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: proxy-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: proxy-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: system

manifests/function/capd/v0.3.7/rbac/auth_proxy_service.yaml (new file)
@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "8443"
    prometheus.io/scheme: https
    prometheus.io/scrape: "true"
  labels:
    control-plane: controller-manager
  name: controller-manager-metrics-service
  namespace: system
spec:
  ports:
  - name: https
    port: 8443
    targetPort: https
  selector:
    control-plane: controller-manager
manifests/function/capd/v0.3.7/rbac/kustomization.yaml (new file)
@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- role.yaml
- role_binding.yaml
- leader_election_role.yaml
- leader_election_role_binding.yaml
# Comment the following 3 lines if you want to disable
# the auth proxy (https://github.com/brancz/kube-rbac-proxy)
# which protects your /metrics endpoint.
- auth_proxy_service.yaml
- auth_proxy_role.yaml
- auth_proxy_role_binding.yaml

@@ -0,0 +1,32 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-election-role
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - configmaps/status
  verbs:
  - get
  - update
  - patch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create

@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-election-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-election-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: system
manifests/function/capd/v0.3.7/rbac/role.yaml (new file)
@@ -0,0 +1,65 @@

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: manager-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - cluster.x-k8s.io
  resources:
  - clusters
  - machines
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - dockerclusters
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - dockerclusters/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - dockermachines
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - infrastructure.cluster.x-k8s.io
  resources:
  - dockermachines/status
  verbs:
  - get
  - patch
  - update

manifests/function/capd/v0.3.7/rbac/role_binding.yaml (new file)
@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: system
manifests/function/capd/v0.3.7/webhook/kustomization.yaml (new file)
@@ -0,0 +1,45 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: capd-system

resources:
- manifests.yaml
- service.yaml
- ../certmanager
- ../manager

patchesStrategicMerge:
- manager_webhook_patch.yaml
- webhookcainjection_patch.yaml

configurations:
- kustomizeconfig.yaml

vars:
- name: SERVICE_NAMESPACE # namespace of the service
  objref:
    kind: Service
    version: v1
    name: webhook-service
  fieldref:
    fieldpath: metadata.namespace
- name: SERVICE_NAME
  objref:
    kind: Service
    version: v1
    name: webhook-service
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
- name: CERTIFICATE_NAMESPACE # namespace of the certificate CR
  objref:
    kind: Certificate
    group: cert-manager.io
    version: v1alpha2
    name: serving-cert # this name should match the one in certificate.yaml
  fieldref:
    fieldpath: metadata.namespace
- name: CERTIFICATE_NAME
  objref:
    kind: Certificate
    group: cert-manager.io
    version: v1alpha2
    name: serving-cert # this name should match the one in certificate.yaml

manifests/function/capd/v0.3.7/webhook/kustomizeconfig.yaml (new file)
@@ -0,0 +1,20 @@
# the following config is for teaching kustomize where to look at when substituting vars.
# It requires kustomize v2.1.0 or newer to work properly.
nameReference:
- kind: Service
  version: v1
  fieldSpecs:
  - kind: ValidatingWebhookConfiguration
    group: admissionregistration.k8s.io
    path: webhooks/clientConfig/service/name

namespace:
- kind: ValidatingWebhookConfiguration
  group: admissionregistration.k8s.io
  path: webhooks/clientConfig/service/namespace
  create: true

varReference:
- path: metadata/annotations
- kind: Deployment
  path: spec/template/spec/volumes/secret/secretName
@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: manager
        ports:
        - containerPort: 443
          name: webhook-server
          protocol: TCP
        volumeMounts:
        - mountPath: /tmp/k8s-webhook-server/serving-certs
          name: cert
          readOnly: true
      volumes:
      - name: cert
        secret:
          defaultMode: 420
          secretName: $(SERVICE_NAME)-cert # this secret will not be prefixed, since it's not managed by kustomize

manifests/function/capd/v0.3.7/webhook/manifests.yaml (new file)
@@ -0,0 +1,27 @@
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: validating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: Cg==
    service:
      name: webhook-service
      namespace: system
      path: /validate-infrastructure-cluster-x-k8s-io-v1alpha3-dockermachinetemplate
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validation.dockermachinetemplate.infrastructure.cluster.x-k8s.io
  rules:
  - apiGroups:
    - infrastructure.cluster.x-k8s.io
    apiVersions:
    - v1alpha3
    operations:
    - CREATE
    - UPDATE
    resources:
    - dockermachinetemplates
  sideEffects: None

manifests/function/capd/v0.3.7/webhook/service.yaml (new file)
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: webhook-service
  namespace: system
spec:
  ports:
  - port: 443
    targetPort: 443
  selector:
    control-plane: controller-manager

@@ -0,0 +1,8 @@
# This patch add annotation to admission webhook config and
# the variables $(CERTIFICATE_NAMESPACE) and $(CERTIFICATE_NAME) will be substituted by kustomize.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: validating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: $(CERTIFICATE_NAMESPACE)/$(CERTIFICATE_NAME)
manifests/function/k8scontrol-capd/cluster.yaml (new file)
@@ -0,0 +1,26 @@
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: DockerCluster
metadata:
  name: "dtc"
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: "dtc"
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 172.17.0.0/16
    serviceDomain: cluster.local
    services:
      cidrBlocks:
      - 10.0.0.0/24
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: DockerCluster
    name: "dtc"
  controlPlaneRef:
    kind: KubeadmControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    name: "dtc-control-plane"
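The `clusterNetwork` block above declares disjoint address ranges for pods and services. A quick sanity check of those two CIDR blocks (a hedged sketch using Python's standard `ipaddress` module, not part of the manifests):

```python
# Verify that the pod and service CIDR blocks from the Cluster spec are disjoint.
import ipaddress

pods = ipaddress.ip_network("172.17.0.0/16")      # spec.clusterNetwork.pods.cidrBlocks[0]
services = ipaddress.ip_network("10.0.0.0/24")    # spec.clusterNetwork.services.cidrBlocks[0]

assert not pods.overlaps(services), "pod and service CIDRs must not overlap"
print(pods.num_addresses, services.num_addresses)  # 65536 256
```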
manifests/function/k8scontrol-capd/controlplane.yaml (new file)
@@ -0,0 +1,66 @@
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: DockerMachineTemplate
metadata:
  name: "dtc-control-plane"
spec:
  template:
    spec:
      extraMounts:
      - containerPath: "/var/run/docker.sock"
        hostPath: "/var/run/docker.sock"
---
kind: KubeadmControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
metadata:
  name: "dtc-control-plane"
spec:
  replicas: ${ CONTROL_PLANE_MACHINE_COUNT }
  infrastructureTemplate:
    kind: DockerMachineTemplate
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    name: "dtc-control-plane"
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
        - localhost
        - 127.0.0.1
      controllerManager:
        extraArgs:
          enable-hostpath-provisioner: "true"
    files:
    - path: /calico.sh
      owner: root:root
      permissions: "0755"
      content: |
        #!/bin/sh -x
        su - root -c "sleep 10; kubectl --kubeconfig /etc/kubernetes/admin.conf apply -f https://docs.projectcalico.org/v3.12/manifests/calico.yaml"
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          eviction-hard: nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%
    postKubeadmCommands:
    - sh /calico.sh
  version: "v1.18.6"
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: "dtc-mhc-0"
spec:
  clusterName: "dtc"
  maxUnhealthy: 100%
  selector:
    matchLabels:
      nodepool: "pool1"
  unhealthyConditions:
  - type: E2ENodeUnhealthy
    status: "True"
    timeout: 30s
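The MachineHealthCheck above sets `maxUnhealthy: 100%`, so remediation is never short-circuited no matter how many matching machines report unhealthy. The budget logic can be sketched roughly like this (an illustrative model, not the Cluster API controller source):

```python
# Illustrative model of the MachineHealthCheck maxUnhealthy budget:
# remediation proceeds only while the unhealthy count stays within the budget.
import math

def remediation_allowed(unhealthy: int, total: int, max_unhealthy: str) -> bool:
    """max_unhealthy is either an absolute count ('2') or a percentage ('40%')."""
    if max_unhealthy.endswith("%"):
        budget = math.floor(total * int(max_unhealthy[:-1]) / 100)
    else:
        budget = int(max_unhealthy)
    return unhealthy <= budget

print(remediation_allowed(3, 3, "100%"))  # True  -> never blocked, as configured above
print(remediation_allowed(2, 3, "40%"))   # False -> budget of 1 exceeded
```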
manifests/function/k8scontrol-capd/kustomization.yaml (new file)
@@ -0,0 +1,3 @@
resources:
- cluster.yaml
- controlplane.yaml

manifests/function/workers-capd/kustomization.yaml (new file)
@@ -0,0 +1,2 @@
resources:
- workers.yaml
manifests/function/workers-capd/workers.yaml (new file)
@@ -0,0 +1,49 @@
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: DockerMachineTemplate
metadata:
  name: "dtc-md-0"
spec:
  template:
    spec:
      extraMounts:
      - containerPath: "/var/run/docker.sock"
        hostPath: "/var/run/docker.sock"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
  name: "dtc-md-0"
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          criSocket: /var/run/containerd/containerd.sock
          kubeletExtraArgs: {eviction-hard: 'nodefs.available<0%,nodefs.inodesFree<0%,imagefs.available<0%'}
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: "dtc-md-0"
spec:
  clusterName: "dtc"
  replicas: ${ WORKER_MACHINE_COUNT }
  selector:
    matchLabels:
  template:
    metadata:
      labels:
        "nodepool": "pool1"
    spec:
      clusterName: "dtc"
      version: "v1.18.6"
      bootstrap:
        configRef:
          name: "dtc-md-0"
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: "dtc-md-0"
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: DockerMachineTemplate
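Placeholders such as `${ WORKER_MACHINE_COUNT }` in these cluster templates are filled in by clusterctl-style variable substitution before the resources are applied. A minimal sketch of that substitution step (an envsubst-like approximation, not the actual clusterctl templating code):

```python
# Envsubst-style sketch: replace ${ NAME } (with optional inner spaces)
# by a value looked up from a variables map.
import re

def substitute(template: str, values: dict) -> str:
    return re.sub(r"\$\{\s*(\w+)\s*\}", lambda m: str(values[m.group(1)]), template)

line = "  replicas: ${ WORKER_MACHINE_COUNT }"
print(substitute(line, {"WORKER_MACHINE_COUNT": 1}))  # "  replicas: 1"
```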
manifests/site/docker-test-site/shared/clusterctl/clusterctl.yaml (new executable file)
@@ -0,0 +1,31 @@
apiVersion: airshipit.org/v1alpha1
kind: Clusterctl
metadata:
  labels:
    airshipit.org/deploy-k8s: "false"
  name: clusterctl-v1
init-options:
  core-provider: "cluster-api:v0.3.3"
  bootstrap-providers:
  - "kubeadm:v0.3.3"
  infrastructure-providers:
  - "docker:v0.3.7"
  control-plane-providers:
  - "kubeadm:v0.3.3"
providers:
- name: "docker"
  type: "InfrastructureProvider"
  versions:
    v0.3.7: manifests/function/capd/v0.3.7
- name: "kubeadm"
  type: "BootstrapProvider"
  versions:
    v0.3.3: manifests/function/cabpk/v0.3.3
- name: "cluster-api"
  type: "CoreProvider"
  versions:
    v0.3.3: manifests/function/capi/v0.3.3
- name: "kubeadm"
  type: "ControlPlaneProvider"
  versions:
    v0.3.3: manifests/function/cacpk/v0.3.3

manifests/site/docker-test-site/shared/clusterctl/kustomization.yaml (new executable file)
@@ -0,0 +1,2 @@
resources:
- clusterctl.yaml
@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../function/k8scontrol-capd

patchesJson6902:
- target:
    group: controlplane.cluster.x-k8s.io
    version: v1alpha3
    kind: KubeadmControlPlane
    name: "dtc-control-plane"
  path: machine_count.json

@@ -0,0 +1,3 @@
[
  { "op": "replace","path": "/spec/replicas","value": 1 }
]

@@ -0,0 +1,4 @@
resources:
- ../../shared/clusterctl
commonLabels:
  airshipit.org/stage: initinfra

@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../../../function/workers-capd

patchesJson6902:
- target:
    group: cluster.x-k8s.io
    version: v1alpha3
    kind: MachineDeployment
    name: "dtc-md-0"
  path: machine_count.json

@@ -0,0 +1,3 @@
[
  { "op": "replace","path": "/spec/replicas","value": 1 }
]
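The `machine_count.json` patches are RFC 6902 (JSON 6902) patch documents that kustomize applies to the targeted resource; the single `replace` operation overwrites `/spec/replicas` with a concrete count. A minimal sketch of that operation's semantics (hand-rolled for illustration, with no external jsonpatch dependency):

```python
# Minimal model of a JSON 6902 "replace" op: walk the /-separated path
# through the document and overwrite the final key.
def apply_replace(doc: dict, path: str, value):
    keys = path.strip("/").split("/")
    target = doc
    for key in keys[:-1]:
        target = target[key]
    target[keys[-1]] = value
    return doc

machine_deployment = {"spec": {"replicas": "${ WORKER_MACHINE_COUNT }"}}
apply_replace(machine_deployment, "/spec/replicas", 1)
print(machine_deployment)  # {'spec': {'replicas': 1}}
```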