Refactor mariadb now that rbd PVCs can be leveraged

This large commit refactors mariadb and creates a utils and
openstack chart to facilitate installing all openstack
elements and supporting infrastructure.
This commit is contained in:
Alan Meadows 2016-11-23 13:26:08 -08:00
parent be8500e666
commit 564f9757fc
35 changed files with 395 additions and 70 deletions


@ -1,12 +1,18 @@
.PHONY: ceph all clean base64
.PHONY: ceph mariadb all clean base64
B64_DIRS := ceph/secrets
B64_EXCLUDE := $(wildcard ceph/secrets/*.b64)
B64_DIRS := utils/secrets
B64_EXCLUDE := $(wildcard utils/secrets/*.b64)
all: base64 ceph
all: base64 utils ceph mariadb openstack
utils: build-utils
ceph: build-ceph
mariadb: build-mariadb
openstack: build-openstack
clean:
$(shell find . -name '*.b64' -exec rm {} \;)
$(shell find . -name '_partials.tpl' -exec rm {} \;)


@ -2,6 +2,72 @@
This is a fully self-contained OpenStack deployment on Kubernetes. This collection is a work in progress so components will continue to be added over time.
## Requirements
The aic-helm project is fairly opinionated. We will work to generalize the configuration, but since we are targeting a fully functional end-to-end proof of concept, we will have to limit the plugin-like functionality within this project.
### helm
The entire aic-helm project is helm driven. All components should work with helm 2.0.0-rc2 or later.
### baremetal provisioning
The aic-helm project assumes Canonical's MaaS as the foundational bootstrap provider. We create the MaaS service inside Kubernetes for ease of deployment and upgrades. This has a few requirements for external network connectivity to provide bootstrapping noted in the maas chart README.
### dynamic volume provisioning
At the moment, dynamic volume provisioning is not optional. We will strive to provide a non-1.5.0 path in all charts using an alternative persistent storage approach, but currently all charts assume that dynamic volume provisioning is supported.
To support dynamic volume provisioning, the aic-helm project requires Kubernetes 1.5.0-beta1 in order to obtain rbd dynamic volume support. Although rbd volumes are supported in the stable v1.4 version, dynamic rbd volumes allowing PVCs are only supported in 1.5.0-beta.1 and beyond. Note that you can still use helm-2.0.0 with 1.5.0-beta.1, but you will not be able to use PetSets until the following helm [issue](https://github.com/kubernetes/helm/issues/1581) is resolved.
This can be accomplished with a [kubeadm](http://kubernetes.io/docs/getting-started-guides/kubeadm/) based cluster install:
```
kubeadm init --use-kubernetes-version v1.5.0-beta.1
```
Note that in addition to Kubernetes 1.5.0-beta.1, you will need to replace the kube-controller-manager container with one that supports the rbd utilities. We have made a convenient container that you can drop in as a replacement: an ubuntu based container with the ceph tools and the kube-controller-manager binary from the 1.5.0-beta.1 release. It is available as a [Dockerfile](https://github.com/att-comdev/dockerfiles/tree/master/kube-controller-manager) or as a quay.io image; you can update your kubeadm manifest ```/etc/kubernetes/manifests/kube-controller-manager.json``` directly with ```image: quay.io/attcomdev/kube-controller-manager```
The kubelet should pick up the change and restart the container.
Finally, for the kube-controller-manager to be able to talk to the ceph-mon instances, ensure it can resolve ceph-mon.ceph (assuming you install the ceph chart into the ceph namespace). This is done by ensuring that both the baremetal host running the kubelet process and the kube-controller-manager container have the SkyDNS address and the appropriate search string in /etc/resolv.conf. This is covered in more detail in the [ceph README](ceph/README.md), but a typical resolv.conf would look like this:
```
nameserver 10.32.0.2 ### skydns instance ip
nameserver 8.8.8.8
nameserver 8.8.4.4
search svc.cluster.local
```
## QuickStart
You can start aic-helm fairly quickly. Assuming the above requirements are met, you can install the charts in a layered approach. Today, the openstack chart is only tied to the mariadb sub-chart. We will continue to add other OpenStack components into the openstack parent chart as they are validated.
Note that the openstack parent chart should always be used, as it does some preparatory work in the openstack namespace for subcharts, such as ensuring ceph secrets are available to all subcharts.
```
# label all known nodes as candidates for pods
kubectl label nodes node-type=storage --all
kubectl label nodes openstack-control-plane=enabled --all
# build aic-helm
cd aic-helm
helm serve . &
make
# generate secrets (ceph, etc.)
export osd_cluster_network=10.32.0.0/12
export osd_public_network=10.32.0.0/12
cd utils/utils/generator
./generate_secrets.sh all `./generate_secrets.sh fsid`
cd ../../..
# install
helm install local/ceph --namespace=ceph
helm install local/openstack --namespace=openstack
```
## Control Plane Charts
The following charts form the foundation to help establish an OpenStack control plane, including shared storage and bare metal provisioning:
- [ceph](ceph/README.md)
@ -10,6 +76,17 @@ The following charts form the foundation to help establish an OpenStack control
These charts, unlike the OpenStack charts below, are designed to run directly. They form the foundational layers necessary to bootstrap an environment and may run in separate namespaces. The intention is to layer them. Please see the direct links above, as they become available, for README instructions crafted for each chart. Please walk through each of these, as some of them require build steps that should be done before running make.
The OpenStack charts under development will focus on container images leveraging the entrypoint model. This differs somewhat from the existing [openstack-helm](https://github.com/sapcc/openstack-helm) repository maintained by SAP right now although we have shamelessly "borrowed" many concepts from them. For these charts, we will be following the same region approach as openstack-helm, namely that these charts will not install and run directly. They are included in the "openstack" chart as requirements, the openstack chart is effectively an abstract region and is intended to be required by a concrete region chart. We will provide an example region chart as well as sample region specific settings and certificate generation instructions.
## Infrastructure Charts
- [mariadb](mariadb/README.md)
- rabbitmq (in progress)
- memcached (in progress)
## OpenStack Charts
- keystone (in progress)
Similar to openstack-helm, much of the 'make' complexity in this repository stems from the fact that helm does not support directory based config maps or secrets. This will continue to be the case until [this helm issue](https://github.com/kubernetes/helm/issues/950) receives more attention.

ceph/.gitignore vendored Normal file

@ -0,0 +1 @@
secrets/*


@ -84,8 +84,8 @@ kubectl label nodes node-type=storage --all
You will need to generate ceph keys and configuration. There is a simple-to-use utility that can do this quickly. Please note the generator utility (per ceph-docker) requires the sigil template framework (https://github.com/gliderlabs/sigil) to be installed and on the current path.
```
cd ceph/utils/generator
./generate_secrets.sh `./generate_secrets.sh fsid`
cd utils/utils/generator
./generate_secrets.sh all `./generate_secrets.sh fsid`
cd ../../..
```

ceph/requirements.lock Normal file

@ -0,0 +1,6 @@
dependencies:
- name: utils
repository: http://localhost:8879/charts
version: 0.1.0
digest: sha256:9054fd53dcc5ca45243141487390640dedd7d74aa773b814da975030fcb0e902
generated: 2016-11-23T10:08:51.239134703-08:00

ceph/requirements.yaml Normal file

@ -0,0 +1,13 @@
dependencies:
# - name: memcached
# repository: http://localhost:8879/charts
# version: 0.1.0
# - name: rabbitmq
# repository: http://localhost:8879/charts
# version: 0.1.0
# - name: keystone
# repository: http://localhost:8879/charts
# version: 0.1.0
- name: utils
repository: http://localhost:8879/charts
version: 0.1.0


@ -1,13 +1,18 @@
---
apiVersion: v1
kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-conf-combined-storageclass"
type: kubernetes.io/rbd
data:
key: {{ include "secrets/ceph-client-key.b64" . | quote }}
---
apiVersion: v1
kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-conf-combined"
# This declares the resource to be a hook. By convention, we also name the
# file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
annotations:
"helm.sh/hook": pre-install
type: Opaque
data:
ceph.conf: |
@ -22,10 +27,6 @@ kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-bootstrap-rgw-keyring"
# This declares the resource to be a hook. By convention, we also name the
# file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
annotations:
"helm.sh/hook": pre-install
type: Opaque
data:
ceph.keyring: |
@ -36,10 +37,6 @@ kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-bootstrap-mds-keyring"
# This declares the resource to be a hook. By convention, we also name the
# file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
annotations:
"helm.sh/hook": pre-install
type: Opaque
data:
ceph.keyring: |
@ -50,10 +47,6 @@ kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-bootstrap-osd-keyring"
# This declares the resource to be a hook. By convention, we also name the
# file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
annotations:
"helm.sh/hook": pre-install
type: Opaque
data:
ceph.keyring: |
@ -64,10 +57,6 @@ kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-client-key"
# This declares the resource to be a hook. By convention, we also name the
# file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
annotations:
"helm.sh/hook": pre-install
type: Opaque
data:
ceph-client-key: {{ include "secrets/ceph-client-key.b64" . | quote }}


@ -0,0 +1,14 @@
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
name: general
provisioner: kubernetes.io/rbd
parameters:
monitors: ceph-mon.ceph:6789
adminId: admin
adminSecretName: ceph-conf-combined-storageclass
adminSecretNamespace: ceph
pool: rbd
userId: admin
userSecretName: ceph-client-key
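A chart can then obtain a dynamically provisioned rbd volume by referencing this class from an ordinary PersistentVolumeClaim. A minimal sketch (the claim name and size here are illustrative, not part of this commit):
```
# hypothetical claim against the "general" rbd StorageClass above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-data
  annotations:
    volume.beta.kubernetes.io/storage-class: "general"
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
```
The annotation form (rather than a first-class storageClassName field) is what the 1.5.0-beta.1 era and the mariadb chart in this commit rely on.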


@ -1,2 +1,13 @@
Please remember to label nodes with control_node_label from values.yaml
And remember that number of control nodes should be odd.
# aic-helm/mariadb
By default, this chart creates a 3-member mariadb galera cluster.
PetSets would be ideal for this purpose, but as they are going through a transition in 1.5.0-beta.1 and are not supported by Helm 2.0.0 under their new name, StatefulSets, we have opted to leverage helm's template generation ability to create Values.replicas * POD+PVC+PV resources. Essentially, we create mariadb-0, mariadb-1, and mariadb-2 Pods and an associated unique PersistentVolumeClaim for each. This removes the daemonset limitations of previous mariadb approaches.
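The template mechanics behind this amount to a range over an integer sequence; a stripped-down sketch of the pattern used in this chart (resource bodies elided):
```
{{- $root := . -}}
{{ range $k, $v := until (atoi .Values.replicas) }}
---
# each pass emits a uniquely named resource: mariadb-0, mariadb-1, mariadb-2
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-{{$v}}
{{ end }}
```
Because ```atoi``` operates on strings, ```replicas``` must be quoted in values.yaml (```replicas: "3"```), and ```until``` yields 0 through replicas-1.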
You must ensure that your control nodes that should receive mariadb instances are labeled with openstack-control-plane=enabled:
```
kubectl label nodes openstack-control-plane=enabled --all
```
We will continue to refine our labeling so that it is consistent throughout the project.


@ -1,42 +1,94 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
---
apiVersion: v1
kind: Service
metadata:
name: mariadb
name: infra-db
spec:
ports:
- name: db
port: {{ .Values.network.port.mariadb }}
selector:
matchLabels:
galera: enabled
app: mariadb
{{- $root := . -}}
{{ range $k, $v := until (atoi .Values.replicas) }}
---
apiVersion: v1
kind: Service
metadata:
name: infra-db-{{$v}}
labels:
release: {{ $root.Release.Name | quote }}
chart: "{{ $root.Chart.Name }}-{{ $root.Chart.Version }}"
spec:
ports:
- name: db
port: {{ $root.Values.network.port.mariadb }}
- name: wsrep
port: {{ $root.Values.network.port.wsrep }}
selector:
app: mariadb
server-id: "{{$v}}"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: mariadb-{{$v}}
annotations:
volume.beta.kubernetes.io/storage-class: "general"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: {{ $root.Values.volume.size }}
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
app: mariadb
galera: enabled
server-id: "{{$v}}"
name: mariadb-{{$v}}
spec:
replicas: 1
template:
securityContext:
runAsUser: 0
metadata:
name: mariadb-{{$v}}
labels:
app: mariadb
galera: enabled
server-id: "{{$v}}"
annotations:
pod.beta.kubernetes.io/init-containers: '[
pod.beta.kubernetes.io/hostname: mariadb-{{$v}}
helm.sh/created: {{ $root.Release.Time.Seconds | quote }}
# alanmeadows: this soft requirement allows single
# host deployments to spawn several mariadb containers
# but in a larger environment, would attempt to spread
# them out
scheduler.alpha.kubernetes.io/affinity: >
{
"name": "init",
"image": "quay.io/stackanetes/kubernetes-entrypoint:v0.1.0",
"env": [
{
"name": "DEPENDENCY_JOBS",
"value": "mariadb-seed"
},
{
"name": "COMMAND",
"value": "echo Done"
}
]
}
]'
"podAntiAffinity": {
"preferredDuringSchedulingIgnoredDuringExecution": [{
"labelSelector": {
"matchExpressions": [{
"key": "app",
"operator": "In",
"values":["mariadb"]
}]
},
"topologyKey": "kubernetes.io/hostname",
"weight": 10
}]
}
}
spec:
nodeSelector:
{{ .Values.deployment.control_node_label }}: enabled
# TODO(DTadrzak): it must be removed in the future
securityContext:
runAsUser: 0
{{ $root.Values.labels.control_node_label }}: enabled
containers:
- name: mariadb
image: {{ .Values.deployment.image }}
- name: mariadb-{{$v}}
image: {{ $root.Values.images.mariadb }}
imagePullPolicy: Always
env:
- name: INTERFACE_NAME
@ -54,7 +106,8 @@ spec:
- name: DEPENDENCY_CONFIG
value: "/etc/my.cnf.d/wsrep.cnf"
ports:
- containerPort: {{ .Values.network.port.mariadb }}
- containerPort: {{ $root.Values.network.port.mariadb }}
- containerPort: {{ $root.Values.network.port.wsrep }}
readinessProbe:
exec:
command:
@ -95,14 +148,14 @@ spec:
subPath: tuning.cnf
- name: wsrep
mountPath: /configmaps/wsrep.cnf
- name: mysql
mountPath: /var/lib/mysql
- name: replicas
mountPath: /tmp/replicas.py
subPath: replicas.py
- name: readiness
mountPath: /mariadb-readiness.py
subPath: mariadb-readiness.py
- name: mysql-data
mountPath: /var/lib/mysql
volumes:
- name: mycnfd
emptyDir: {}
@ -145,6 +198,9 @@ spec:
- name: readiness
configMap:
name: mariadb-readiness
- name: mysql
hostPath:
path: /var/lib/mysql-openstack-{{ .Values.database.cluster_name }}
- name: mysql-data
persistentVolumeClaim:
matchLabels:
server-id: "{{$v}}"
claimName: mariadb-{{$v}}
{{ end }}


@ -9,10 +9,10 @@ spec:
app: mariadb
spec:
restartPolicy: Never
terminationGracePeriodSeconds: 10000
terminationGracePeriodSeconds: 30
containers:
- name: mariadb-init
image: {{ .Values.deployment.image }}
image: {{ .Values.images.mariadb }}
imagePullPolicy: Always
env:
- name: INTERFACE_NAME


@ -11,7 +11,7 @@ data:
import sys
import urllib2
URL = ('https://kubernetes.default.svc.{{ .Values.network.dns.kubernetes_domain }}/apis/extensions/v1beta1/daemonsets')
URL = ('https://kubernetes.default.svc.{{ .Values.network.dns.kubernetes_domain }}/apis/extensions/v1beta1/deployments')
TOKEN_FILE = '/var/run/secrets/kubernetes.io/serviceaccount/token'
def create_ctx():
@ -20,7 +20,7 @@ data:
ctx.verify_mode = ssl.CERT_NONE
return ctx
def get_daemonsets():
def get_deployments():
url = URL.format()
try:
token = file(TOKEN_FILE, 'r').read()
@ -36,11 +36,11 @@ data:
return output
def main():
reply = get_daemonsets()
reply = get_deployments()
name = "mariadb"
namespace = "default" if not os.environ["NAMESPACE"] else os.environ["NAMESPACE"]
mariadb = filter(lambda d: d["metadata"]["namespace"] == namespace and d["metadata"]["name"] == name, reply["items"])
print mariadb[0]["status"]['desiredNumberScheduled']
mariadb = filter(lambda d: d["metadata"]["namespace"] == namespace and d["metadata"]["name"].startswith(name), reply["items"])
print len(mariadb)
if __name__ == "__main__":
main()


@ -1,5 +1,10 @@
deployment:
image: quay.io/stackanetes/stackanetes-mariadb:newton
replicas: "3" # this must be quoted to deal with atoi
images:
mariadb: quay.io/stackanetes/stackanetes-mariadb:newton
ceph_rbd_job: quay.io/attcomdev/ceph-daemon:latest
volume:
size: 20Gi
labels:
control_node_label: openstack-control-plane
network:
port:

openstack/.helmignore Normal file

@ -0,0 +1,27 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
bin/
etc/
patches/
*.py
Makefile

openstack/Chart.yaml Executable file

@ -0,0 +1,4 @@
apiVersion: v1
description: A Helm chart for Kubernetes
name: openstack
version: 0.1.0


@ -0,0 +1,6 @@
dependencies:
- name: mariadb
repository: http://localhost:8879/charts
version: 0.1.0
digest: sha256:4a2c3cbe5841ba5b4cefeb9b9929b5ebf52d7779b279a45c9f1bb229b1e358da
generated: 2016-11-23T10:08:51.688995889-08:00


@ -0,0 +1,13 @@
dependencies:
# - name: memcached
# repository: http://localhost:8879/charts
# version: 0.1.0
# - name: rabbitmq
# repository: http://localhost:8879/charts
# version: 0.1.0
# - name: keystone
# repository: http://localhost:8879/charts
# version: 0.1.0
- name: mariadb
repository: http://localhost:8879/charts
version: 0.1.0


@ -0,0 +1,9 @@
---
apiVersion: v1
kind: Secret
metadata:
namespace: {{.Release.Namespace}}
name: "ceph-client-key"
type: kubernetes.io/rbd
data:
key: {{ include "secrets/ceph-client-key.b64" . | quote }}

openstack/values.yaml Normal file

@ -0,0 +1,14 @@
# Default values for openstack.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
global:
# (alanmeadows) NOTE: these two items are not easily changeable yet
region: cluster
tld: local
images:
ceph_rbd_job: quay.io/attcomdev/ceph-daemon:latest
labels:
control_node_label: openstack-control-plane

utils/.gitignore vendored Normal file

@ -0,0 +1 @@
secrets/*

utils/.helmignore Normal file

@ -0,0 +1,27 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
bin/
etc/
patches/
*.py
Makefile

utils/Chart.yaml Executable file

@ -0,0 +1,4 @@
apiVersion: v1
description: A Helm chart for Kubernetes
name: utils
version: 0.1.0

utils/Makefile Normal file

@ -0,0 +1,7 @@
EXCLUDE := templates/* charts/* Chart.yaml requirement* values.yaml Makefile utils/*
FILES := $(shell find * -type f $(foreach e,$(EXCLUDE), -not -path "$(e)") )
templates/_partials.tpl: Makefile $(FILES)
echo Generating $(CURDIR)/$@
rm -f $@
for i in $(FILES); do printf '{{ define "'$$i'" }}' >> $@; cat $$i >> $@; printf "{{ end }}\n" >> $@; done
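The loop wraps each non-excluded file in a named template, so the generated templates/_partials.tpl looks roughly like this (file names illustrative), letting charts pull raw file contents in with ```include```:
```
{{ define "secrets/ceph-client-key.b64" }}<file contents>{{ end }}
{{ define "common.sh" }}<file contents>{{ end }}
```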


@ -0,0 +1,22 @@
{{define "common.sh"}}
#!/usr/bin/env bash
function start_application {
if [ "$DEBUG_CONTAINER" = "true" ]
then
tail -f /dev/null
else
_start_application
fi
}
CLUSTER_SCRIPT_PATH=/openstack-kube/openstack-kube/scripts
CLUSTER_CONFIG_PATH=/openstack-kube/openstack-kube/etc
export MY_IP=$(ip route get 1 | awk '{print $NF;exit}')
{{end}}


@ -0,0 +1,9 @@
{{define "rabbitmq_host"}}rabbitmq.{{.Release.Namespace}}.svc.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "memcached_host"}}memcached.{{.Release.Namespace}}.svc.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "infra-db"}}infra-db.{{.Release.Namespace}}.svc.kubernetes.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "keystone_db_host"}}infra-db.{{.Release.Namespace}}.svc.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "keystone_api_endpoint_host_admin"}}keystone.{{.Release.Namespace}}.svc.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "keystone_api_endpoint_host_internal"}}keystone.{{.Release.Namespace}}.svc.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "keystone_api_endpoint_host_public"}}identity-3.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
{{define "keystone_api_endpoint_host_admin_ext"}}identity-admin-3.{{.Values.global.region}}.{{.Values.global.tld}}{{end}}
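With the defaults in openstack/values.yaml (region: cluster, tld: local) and a release installed into the openstack namespace, these partials render to service names such as:
```
keystone_db_host  -> infra-db.openstack.svc.cluster.local
memcached_host    -> memcached.openstack.svc.cluster.local
rabbitmq_host     -> rabbitmq.openstack.svc.cluster.local
```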

utils/values.yaml Normal file

@ -0,0 +1,4 @@
# Default values for utils.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value