docs: general updates to documentation (#247)

* doc mods and etcd-rabbitmq additions

* second pass doc updates

* removal of 0.1.0 references in minikube and gs docs
Brandon B. Jozsa 2017-03-13 09:44:09 -04:00 committed by Alan Meadows
parent c7b518b0e2
commit 3e62cd54e2
5 changed files with 318 additions and 252 deletions

View File

@ -16,9 +16,9 @@ Install a recent version of [Kubernetes/Helm](https://github.com/kubernetes/helm
Helm Installation Quickstart:
```
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
```

# TLDR;
@ -32,8 +32,9 @@ git clone https://github.com/att-comdev/openstack-helm.git && cd openstack-helm
# Get a list of the current tags:
git tag -l

# Checkout the tag you want to work with (use master for development):
# For stability and testing, checkout the latest stable branch.
git checkout 0.2.0

# Start a local Helm Server:
helm serve &
@ -64,6 +65,7 @@ kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
# Deploy each chart:
helm install --name mariadb --set development.enabled=true local/mariadb --namespace=openstack
helm install --name=memcached local/memcached --namespace=openstack
helm install --name=etcd-rabbitmq local/etcd --namespace=openstack
helm install --name=rabbitmq local/rabbitmq --namespace=openstack
helm install --name=keystone local/keystone --namespace=openstack
helm install --name=cinder local/cinder --namespace=openstack
@ -79,7 +81,7 @@ helm install --name=horizon local/horizon --namespace=openstack
After installation, start Minikube with the flags listed below. Ensure that you have supplied enough disk, memory, and the current version flag for Kubernetes during `minikube start`. More information can be found [HERE](https://github.com/kubernetes/minikube/blob/master/docs/minikube_start.md).
```
minikube start \
--network-plugin=cni \
--kubernetes-version v1.5.1 \
--disk-size 40g \
@ -89,71 +91,80 @@ $ minikube start \
Next, deploy the [Calico](http://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/hosted) manifest. This is not a requirement in cases where you want to use your own CNI-enabled SDN; however, you are doing so at your own risk. Note which versions of Calico are recommended for the project in our [Installation Guide](https://github.com/att-comdev/openstack-helm/blob/master/docs/installation/getting-started.md#overview).
```
kubectl create -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/calico.yaml
```
Wait for the environment to come up without error (like shown below).
```
kubectl get pods -o wide --all-namespaces -w

# NAMESPACE     NAME                                        READY   STATUS      RESTARTS   AGE   IP               NODE
# kube-system   calico-node-r9b9s                           2/2     Running     0          3m    192.168.99.100   minikube
# kube-system   calico-policy-controller-2974666449-hm0zr   1/1     Running     0          3m    192.168.99.100   minikube
# kube-system   configure-calico-r6lnw                      0/1     Completed   0          3m    192.168.99.100   minikube
# kube-system   kube-addon-manager-minikube                 1/1     Running     0          7m    192.168.99.100   minikube
# kube-system   kube-dns-v20-sh5gp                          3/3     Running     0          7m    192.168.120.64   minikube
# kube-system   kubernetes-dashboard-m24s8                  1/1     Running     0          7m    192.168.120.65   minikube
```
Next, initialize [Helm](https://github.com/kubernetes/helm/blob/master/docs/install.md#easy-in-cluster-installation) (which includes deploying tiller).
```
helm init

# Creating /Users/admin/.helm
# Creating /Users/admin/.helm/repository
# Creating /Users/admin/.helm/repository/cache
# Creating /Users/admin/.helm/repository/local
# Creating /Users/admin/.helm/plugins
# Creating /Users/admin/.helm/starters
# Creating /Users/admin/.helm/repository/repositories.yaml
# Creating /Users/admin/.helm/repository/local/index.yaml
# $HELM_HOME has been configured at $HOME/.helm.
# Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
# Happy Helming!
```
Ensure that Tiller is deployed successfully:
```
kubectl get pods -o wide --all-namespaces | grep tiller

# kube-system   tiller-deploy-3299276078-n98ct   1/1   Running   0   39s   192.168.120.66   minikube
```
With Helm installed, you will need to start a local [Helm server](https://github.com/kubernetes/helm/blob/7a15ad381eae794a36494084972e350306e498fd/docs/helm/helm_serve.md#helm-serve) (in the background), and point to a locally configured Helm [repository](https://github.com/kubernetes/helm/blob/7a15ad381eae794a36494084972e350306e498fd/docs/helm/helm_repo_index.md#helm-repo-index):
```
helm serve &
helm repo add local http://localhost:8879/charts

# "local" has been added to your repositories
```
Verify that the local repository is configured correctly:
```
helm repo list

# NAME      URL
# stable    https://kubernetes-charts.storage.googleapis.com/
# local     http://localhost:8879/charts
```
Download the latest release of the project, preferably from `master` since you are following the "developer" instructions.
```
git clone https://github.com/att-comdev/openstack-helm.git
```
Run `make` against the newly cloned project, which will automatically build secrets for the deployment and push the charts to your new local Helm repository:
```
cd openstack-helm
make
```
Perfect! You're ready to install, develop, deploy, destroy, and repeat (when necessary)!
@ -181,7 +192,7 @@ To deploy Openstack-Helm in development mode, ensure you've created a minikube-a
As a result of this guidance, we recommend creating the following directory for MariaDB, as shown below.
```
sudo mkdir -p /data/openstack-helm/mariadb
```
### Label Minikube Node
@ -189,7 +200,7 @@ $ sudo mkdir -p /data/openstack-helm/mariadb
Be sure to label your minikube node according to the documentation in our installation guide (this remains exactly the same).
```
kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
```
***NOTE:*** *You do not need to label your minikube cluster for `ceph-storage`, since development mode uses hostPath.*
@ -200,7 +211,7 @@ $ kubectl label nodes openstack-control-plane=enabled --all --namespace=openstac
Now you can deploy the MariaDB chart, which is required by all other child charts.
```
helm install --name mariadb --set development.enabled=true local/mariadb --namespace=openstack
```
***IMPORTANT:*** *MariaDB seeding tasks run for quite a while. This is expected behavior, as several checks are completed before the seeding finishes. Please wait a few minutes for these jobs to finish.*
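If you want to watch the seeding progress while you wait, one option (a sketch) is:
```
# Watch the MariaDB pod and seed job in the openstack namespace until they settle.
kubectl get pods -n openstack -o wide -w
```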
@ -210,15 +221,16 @@ $ helm install --name mariadb --set development.enabled=true local/mariadb --nam
Once the MariaDB deployment is complete, deploy the other charts as needed.
```
helm install --name=memcached local/memcached --namespace=openstack
helm install --name=etcd-rabbitmq local/etcd --namespace=openstack
helm install --name=rabbitmq local/rabbitmq --namespace=openstack
helm install --name=keystone local/keystone --namespace=openstack
helm install --name=horizon local/horizon --namespace=openstack
helm install --name=cinder local/cinder --namespace=openstack
helm install --name=glance local/glance --namespace=openstack
helm install --name=nova local/nova --namespace=openstack
helm install --name=neutron local/neutron --namespace=openstack
helm install --name=heat local/heat --namespace=openstack
```
# Horizon Management
@ -226,7 +238,7 @@ $ helm install --name=heat local/heat --namespace=openstack
After each chart is deployed, you may wish to change the typical service endpoint for Horizon to a `nodePort` service endpoint (this is unique to Minikube deployments). Use the `kubectl edit` command to edit this service manually.
```
sudo kubectl edit svc horizon -n openstack
```
With the deployed manifest in edit mode, you can enable `nodePort` by replicating some of the fields below (specifically, the `nodePort` lines).
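If you prefer a one-liner over manual editing, a patch like the following (a sketch of an equivalent approach, not the manifest fragment referenced above) switches the service type:
```
# Change the horizon service to NodePort in place.
kubectl patch svc horizon -n openstack -p '{"spec": {"type": "NodePort"}}'
```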
@ -267,5 +279,3 @@ If you have any questions, comments, or find any bugs, please submit an issue so
# Troubleshooting
* [Openstack-Helm Minikube Troubleshooting](../troubleshooting/ts-minikube.md)

View File

@ -6,10 +6,10 @@ In order to drive towards a production-ready Openstack solution, our goal is to
| | Version | Notes |
|--- |--- |--- |
| **Kubernetes** | [v1.5.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#v153) | [Custom Controller for RBD tools](https://quay.io/repository/attcomdev/kube-controller-manager?tab=tags) |
| **Helm** | [v2.2.1](https://github.com/kubernetes/helm/releases/tag/v2.2.1) | Planning for [v2.3.0](https://github.com/kubernetes/helm/milestone/30) |
| **Calico** | [v2.0](http://docs.projectcalico.org/v2.0/releases/) | [`calicoctl` v1.0](https://github.com/projectcalico/calicoctl/releases) |
| **Docker** | [v1.12.6](https://github.com/docker/docker/releases/tag/v1.12.6) | [Per kubeadm Instructions](http://kubernetes.io/docs/getting-started-guides/kubeadm/) |

Other versions and considerations (such as other CNI SDN providers), config map data, and value overrides will be included in other documentation as we explore these options further.
@ -42,7 +42,7 @@ admin@kubenode01:~$
After an initial `kubeadm` deployment has been scheduled, it is time to deploy a CNI-enabled SDN. We have selected **Calico**, but have also confirmed that this works for Weave and Romana. For Calico version v2.0, you can apply the provided [Kubeadm Hosted Install](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/) manifest:
```
kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
```
**PLEASE NOTE:** If you are using a 192.168.0.0/16 CIDR for your Kubernetes hosts, you will need to modify [line 42](https://gist.github.com/v1k0d3n/a152b1f5b8db5a8ae9c8c7da575a9694#file-calico-kubeadm-hosted-yml-L42) for the `cidr` declaration within the `ippool`. This must be a `/16` range or more, as the `kube-controller` will hand out `/24` ranges to each node. We have included a sample comparison of the changes [here](http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml) and [here](https://gist.githubusercontent.com/v1k0d3n/a152b1f5b8db5a8ae9c8c7da575a9694/raw/c950eef1123a7dcc4b0dedca1a202e0c06248e9e/calico-kubeadm-hosted.yml).
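As a sketch of that edit (assuming the manifest's default `192.168.0.0/16` pool and the `10.25.0.0/16` example range used later in this guide), you can adjust the manifest locally before applying it:
```
# Download the hosted manifest, change the ippool cidr, then apply it.
curl -o calico.yaml http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
sed -i 's|192.168.0.0/16|10.25.0.0/16|' calico.yaml
kubectl apply -f calico.yaml
```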
@ -76,7 +76,7 @@ Persistent storage is improving. Please check our current and/or resolved [issue
At some future point, we want to ensure that our solution is cloud-native, allowing installation on any host system without a package manager and only a container runtime (e.g. CoreOS). Until this happens, we will need to ensure that `ceph-common` is installed on each of our hosts. Using our Ubuntu example:
```
sudo apt-get install ceph-common -y
```
We will always attempt to keep host-specific requirements to a minimum, and we are working with the Ceph team (Sébastien Han) to quickly address this Ceph requirement.
@ -85,7 +85,7 @@ We will always attempt to keep host-specific requirements to a minimum, and we a
Another thing of interest is that our deployment assumes that you can generate secrets at the time of the container deployment. We require the [`sigil`](https://github.com/gliderlabs/sigil/releases/download/v0.4.0/sigil_0.4.0_Linux_x86_64.tgz) binary on your deployment host in order to perform this action.
```
curl -L https://github.com/gliderlabs/sigil/releases/download/v0.4.0/sigil_0.4.0_Linux_x86_64.tgz | tar -zxC /usr/local/bin
```
### Kubernetes Controller Manager
@ -94,8 +94,8 @@ Before deploying Ceph, you will need to re-deploy a custom Kubernetes Controller
To make these changes, export your Kubernetes version, and edit the `image` line of your `kube-controller-manager` json manifest on your Kubernetes Master:
```
export kube_version=v1.5.3
sed -i "s|gcr.io/google_containers/kube-controller-manager-amd64:$kube_version|quay.io/attcomdev/kube-controller-manager:$kube_version|g" /etc/kubernetes/manifests/kube-controller-manager.json
```
Now you will want to `restart` your Kubernetes master server to continue.
@ -142,42 +142,42 @@ nameserver 192.168.1.70
nameserver 8.8.8.8
search svc.cluster.local jinkit.com
EOF
root@kubenode01:/#
```
Now you can test your changes by deploying a service to your cluster, and resolving this from the controller. As an example, let's deploy something useful, like [Kubernetes dashboard](https://github.com/kubernetes/dashboard):
```
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```
Note the `IP` field:
```
admin@kubenode01:~$ kubectl describe svc kubernetes-dashboard -n kube-system
Name:                   kubernetes-dashboard
Namespace:              kube-system
Labels:                 app=kubernetes-dashboard
Selector:               app=kubernetes-dashboard
Type:                   NodePort
IP:                     10.110.207.144
Port:                   <unset> 80/TCP
NodePort:               <unset> 32739/TCP
Endpoints:              10.25.178.65:9090
Session Affinity:       None
No events.
admin@kubenode01:~$
```
Now you should be able to resolve the host `kubernetes-dashboard.kube-system.svc.cluster.local`:
```
admin@kubenode01:~$ kubectl exec kube-controller-manager-kubenode01 -it -n kube-system -- ping kubernetes-dashboard.kube-system.svc.cluster.local
PING kubernetes-dashboard.kube-system.svc.cluster.local (10.110.207.144) 56(84) bytes of data.
...
admin@kubenode01:~$
```
(Note: This host example above has `iputils-ping` installed)
### Kubernetes Node DNS Resolution
For each of the nodes to know exactly how to communicate with Ceph (and thus MariaDB) endpoints, each host must also have an entry for `kube-dns`. Since we are using Ubuntu for our example, place these changes in `/etc/network/interfaces` to ensure they remain after reboot.
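Here is a sketch of such an entry (the interface name, addresses, and the `10.96.0.10` kube-dns service IP are placeholders for a typical kubeadm deployment; substitute your own values, and note that the `dns-*` options rely on Ubuntu's standard `resolvconf` integration):
```
# /etc/network/interfaces fragment -- example values only
auto ens3
iface ens3 inet static
    address 192.168.3.21
    netmask 255.255.255.0
    gateway 192.168.3.1
    # List kube-dns first so cluster service names resolve from the node itself.
    dns-nameservers 10.96.0.10 192.168.1.70 8.8.8.8
    dns-search svc.cluster.local jinkit.com
```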
Now we are ready to continue with the Openstack-Helm installation.
@ -190,6 +190,7 @@ Please ensure that you have verified and completed the steps above to prevent is
Although Ceph is mentioned throughout this guide, our deployment is flexible enough to let you bring your own persistent storage. Most of these verification steps are the same, or very similar, so we will use Ceph as our example throughout this guide.
## Node Labels
First, we must label our nodes according to their role. Although we are labeling `all` nodes, you are free to label only the nodes you wish. You must have at least one, although a minimum of three are recommended. Nodes are labeled according to their Openstack roles:
**Storage Nodes:** `ceph-storage`
@ -197,259 +198,168 @@ First, we must label our nodes according to their role. Although we are labeling
**Compute Nodes:** `openvswitch`, `openstack-compute-node`
```
kubectl label nodes openstack-control-plane=enabled --all
kubectl label nodes ceph-storage=enabled --all
kubectl label nodes openvswitch=enabled --all
kubectl label nodes openstack-compute-node=enabled --all
```
## Obtaining the Project
Download the latest copy of Openstack-Helm:
```
git clone https://github.com/att-comdev/openstack-helm.git
cd openstack-helm
```
## Ceph Preparation and Installation
Ceph must be aware of the OSD cluster and public networks. These CIDR ranges are the exact same ranges you used earlier in your Calico deployment yaml (our example was 10.25.0.0/16 due to our 192.168.0.0/16 overlap). Export these variables for your deployment environment by issuing the following commands:
```
export osd_cluster_network=10.25.0.0/16
export osd_public_network=10.25.0.0/16
```
## Ceph Storage Volumes
Ceph must also have volumes to mount on each host labeled for `ceph-storage`. On each host that you labeled, create the following directory (the path can be overridden):
```
mkdir -p /var/lib/openstack-helm/ceph
```
*Repeat this step for each node labeled: `ceph-storage`*
## Ceph Secrets Generation
Although you can bring your own secrets, we have conveniently created a secret generation tool for you (for greenfield deployments). You can create secrets for your project by issuing the following:
```
cd helm-toolkit/utils/secret-generator
./generate_secrets.sh all `./generate_secrets.sh fsid`
cd ../../..
```
## Nova Compute Instance Storage
Nova Compute requires a place to store instances locally. Each node labeled `openstack-compute-node` needs to have the following directory:
```
mkdir -p /var/lib/nova/instances
```
*Repeat this step for each node labeled: `openstack-compute-node`*
## Helm Preparation
Now we need to install and prepare Helm, the core of our project. Please use the installation guide from the [Kubernetes/Helm](https://github.com/kubernetes/helm/blob/master/docs/install.md#from-the-binary-releases) repository. Please take note of our required versions above.
Once Helm is installed and initialized (`helm init`), you will need your local environment to serve Helm charts for use. You can do this by running:
```
helm serve &
helm repo add local http://localhost:8879/charts
```
# Openstack-Helm Installation
Now we are ready to deploy and verify our Openstack-Helm installation. The first requirement is to build out the deployment secrets, then lint and package each of the charts for the project. Do this by running `make` in the `openstack-helm` directory:
```
make
```
**Helpful Note:** If you need to make any changes to the deployment, you may run `make` again, delete your helm-deployed chart, and redeploy the chart (update). If you need to delete a chart for any reason, do the following:
```
helm list

# NAME            REVISION  UPDATED                   STATUS    CHART
# bootstrap       1         Fri Dec 23 13:37:35 2016  DEPLOYED  bootstrap-0.2.0
# bootstrap-ceph  1         Fri Dec 23 14:27:51 2016  DEPLOYED  bootstrap-0.2.0
# ceph            3         Fri Dec 23 14:18:49 2016  DEPLOYED  ceph-0.2.0
# keystone        1         Fri Dec 23 16:40:56 2016  DEPLOYED  keystone-0.2.0
# mariadb         1         Fri Dec 23 16:15:29 2016  DEPLOYED  mariadb-0.2.0
# memcached       1         Fri Dec 23 16:39:15 2016  DEPLOYED  memcached-0.2.0
# rabbitmq        1         Fri Dec 23 16:40:34 2016  DEPLOYED  rabbitmq-0.2.0

helm delete --purge keystone
```
Please ensure that you use `--purge` whenever deleting a project.
## Ceph Installation and Verification
Install the first service, which is Ceph. If all instructions have been followed as mentioned above, this installation should go smoothly. Use the following command to install Ceph:
```
helm install --set network.public=$osd_public_network --name=ceph local/ceph --namespace=ceph
```
## Bootstrap Installation
At this time (and before verification of Ceph) you'll need to install the `bootstrap` chart. The `bootstrap` chart will install secrets for both the `ceph` and `openstack` namespaces for the general StorageClass:
```
helm install --name=bootstrap-ceph local/bootstrap --namespace=ceph
helm install --name=bootstrap-openstack local/bootstrap --namespace=openstack
```
You may want to validate that Ceph is deployed successfully. For more information on this, please see the section entitled [Ceph Troubleshooting](../troubleshooting/ts-persistent-storage.md).
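As a quick sanity check before moving on (a sketch of one possible check, not a replacement for the troubleshooting guide), confirm that the Ceph pods are healthy:
```
# All Ceph pods should settle into Running (or Completed for one-shot jobs).
kubectl get pods -n ceph -o wide
```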
## MariaDB Installation and Verification
We are using Galera to cluster MariaDB and establish a quorum. To install MariaDB, issue the following command:
```
helm install --name=mariadb local/mariadb --namespace=openstack
```
## Installation of Other Services
Now you can easily install the other services simply by going in order:
**Install Memcached/Etcd/RabbitMQ:**
```
helm install --name=memcached local/memcached --namespace=openstack
helm install --name=etcd-rabbitmq local/etcd --namespace=openstack
helm install --name=rabbitmq local/rabbitmq --namespace=openstack
```
**Install Keystone:**
```
helm install --name=keystone local/keystone --set replicas=2 --namespace=openstack
```
**Install Horizon:**
```
helm install --name=horizon local/horizon --set network.enable_node_port=true --namespace=openstack
```
**Install Glance:**
```
helm install --name=glance local/glance --set replicas.api=2,replicas.registry=2 --namespace=openstack
```
**Install Heat:**
```
helm install --name=heat local/heat --namespace=openstack
```
**Install Neutron:**
```
helm install --name=neutron local/neutron --set replicas.server=2 --namespace=openstack
```
**Install Nova:**
```
helm install --name=nova local/nova --set control_replicas=2 --namespace=openstack
```
**Install Cinder:**
```
helm install --name=cinder local/cinder --set replicas.api=2 --namespace=openstack
```
## Final Checks
Now you can run through your final checks. Wait for all services to come up:
```
watch kubectl get all --namespace=openstack
```
Finally, you should now be able to access Horizon at `http://<horizon-svc-ip>` using admin/password.
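One way to find the address (a sketch; it assumes the `horizon` service in the `openstack` namespace created above) is to read the service entry and note the node port after the colon:
```
kubectl get svc horizon -n openstack
# Example output only; your CLUSTER-IP and node port will differ:
# NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# horizon   10.107.68.14   <nodes>       80:31732/TCP   5m
```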

View File

@ -1,10 +1,11 @@
# Troubleshooting
Sometimes things go wrong. These guides will help you solve many common issues with the following:
* [Database: Galera](ts-database.md#galera-cluster)
* [Development: Minikube](ts-minikube.md)
* [Networking: General](ts-networking.md)
* [Persistent Storage: Ceph](ts-persistent-storage.md#ceph)

## Getting Help

View File

@ -0,0 +1,66 @@
# Troubleshooting - Database Deployments
This guide is to help users debug any general database issues when deploying Charts in this repository.
# Galera Cluster
**CHART:** openstack-helm/mariadb (when `developer-mode: false`)
MariaDB is a `StatefulSet` (`PetSets` have been retired in Kubernetes v1.5.0). As such, it initiates a 'seed' which is used to deploy MariaDB members via [affinity/anti-affinity](http://kubernetes.io/docs/user-guide/node-selection/) features. Ceph uses this as well. So what you will notice is the following behavior:
```
openstack mariadb-0 0/1 Running 0 28s 10.25.49.199 kubenode05
openstack mariadb-seed-0ckf4 1/1 Running 0 48s 10.25.162.197 kubenode01
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
openstack mariadb-0 1/1 Running 0 1m 10.25.49.199 kubenode05
openstack mariadb-1 0/1 Pending 0 0s <none>
openstack mariadb-1 0/1 Pending 0 0s <none> kubenode04
openstack mariadb-1 0/1 ContainerCreating 0 0s <none> kubenode04
openstack mariadb-1 0/1 Running 0 3s 10.25.178.74 kubenode04
```
What you're seeing is the output of `kubectl get pods -o wide --all-namespaces`, which is used to monitor the seed host preparing each of the MariaDB/Galera members in order: mariadb-0, then mariadb-1, then mariadb-2. This process can take up to a few minutes, so be patient.
To test MariaDB, do the following:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl exec mariadb-0 -it -n openstack -- mysql -h mariadb.openstack -uroot -ppassword -e 'show databases;'
+--------------------+
| Database |
+--------------------+
| information_schema |
| keystone |
| mysql |
| performance_schema |
+--------------------+
admin@kubenode01:~/projects/openstack-helm$
```
Now you can see that MariaDB is loaded, with databases intact! If you're at this point, the rest of the installation is easy. You can run the following to check on Galera:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl describe po/mariadb-0 -n openstack
Name: mariadb-0
Namespace: openstack
Node: kubenode05/192.168.3.25
Start Time: Fri, 23 Dec 2016 16:15:49 -0500
Labels: app=mariadb
galera=enabled
Status: Running
IP: 10.25.49.199
Controllers: StatefulSet/mariadb
...
...
...
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5s 5s 1 {default-scheduler } Normal Scheduled Successfully assigned mariadb-0 to kubenode05
3s 3s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Pulling pulling image "quay.io/stackanetes/stackanetes-mariadb:newton"
2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Pulled Successfully pulled image "quay.io/stackanetes/stackanetes-mariadb:newton"
2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Created Created container with docker id f702bd7c11ef; Security:[seccomp=unconfined]
2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Started Started container with docker id f702bd7c11ef
```
So you can see that Galera is enabled.
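To go one step further, a quick cluster-size check (a sketch that reuses the root credentials from the example above) confirms that all Galera members have joined:
```
kubectl exec mariadb-0 -it -n openstack -- mysql -h mariadb.openstack -uroot -ppassword -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
# The Value should match the number of MariaDB members in your deployment, e.g.:
# +--------------------+-------+
# | Variable_name      | Value |
# +--------------------+-------+
# | wsrep_cluster_size | 3     |
# +--------------------+-------+
```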

View File

@ -2,4 +2,83 @@
This guide is to help users debug any general storage issues when deploying Charts in this repository.
# Ceph
**CHART:** openstack-helm/ceph
### Ceph Validating PVC
To validate persistent volume claim (PVC) creation, we've placed a test manifest in the `./test/` directory. Deploy this pvc and explore the deployment:
```
admin@kubenode01:~$ kubectl get pvc -o wide --all-namespaces -w
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph pvc-test Bound pvc-bc768dea-c93e-11e6-817f-001fc69c26d1 1Gi RWO 9h
admin@kubenode01:~$
```
The output above indicates that the PVC is 'bound' correctly. Now digging deeper:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl describe pvc pvc-test -n ceph
Name: pvc-test
Namespace: ceph
StorageClass: general
Status: Bound
Volume: pvc-bc768dea-c93e-11e6-817f-001fc69c26d1
Labels: <none>
Capacity: 1Gi
Access Modes: RWO
No events.
admin@kubenode01:~/projects/openstack-helm$
```
We can see that we have a VolumeID, and the 'capacity' is 1GB. It is a 'general' storage class. It is just a simple test. You can safely delete this test by issuing the following:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl delete pvc pvc-test -n ceph
persistentvolumeclaim "pvc-test" deleted
admin@kubenode01:~/projects/openstack-helm$
```
### Ceph Validating StorageClass
Next we can look at the storage class, to make sure that it was created correctly:
```
admin@kubenode01:~$ kubectl describe storageclass/general
Name: general
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/rbd
Parameters: adminId=admin,adminSecretName=pvc-ceph-conf-combined-storageclass,adminSecretNamespace=ceph,monitors=ceph-mon.ceph:6789,pool=rbd,userId=admin,userSecretName=pvc-ceph-client-key
No events.
admin@kubenode01:~$
```
The parameters are what we're looking for here. If we see parameters passed to the StorageClass correctly, we will see the `ceph-mon.ceph:6789` hostname/port, things like `userid`, and appropriate secrets used for volume claims. This all looks great, and it is time to check Ceph itself.
### Ceph Validation
Most commonly, we want to validate that Ceph is working correctly. This can be done with the following ceph command:
```
admin@kubenode01:~$ kubectl exec -t -i ceph-mon-0 -n ceph -- ceph status
cluster 046de582-f8ee-4352-9ed4-19de673deba0
health HEALTH_OK
monmap e3: 3 mons at {ceph-mon-392438295-6q04c=10.25.65.131:6789/0,ceph-mon-392438295-ksrb2=10.25.49.196:6789/0,ceph-mon-392438295-l0pzj=10.25.79.193:6789/0}
election epoch 6, quorum 0,1,2 ceph-mon-392438295-ksrb2,ceph-mon-392438295-6q04c,ceph-mon-392438295-l0pzj
fsmap e5: 1/1/1 up {0=mds-ceph-mds-2810413505-gtjgv=up:active}
osdmap e23: 5 osds: 5 up, 5 in
flags sortbitwise
pgmap v22012: 80 pgs, 3 pools, 12712 MB data, 3314 objects
101 GB used, 1973 GB / 2186 GB avail
80 active+clean
admin@kubenode01:~$
```
Use one of your Ceph Monitors to check the status of the cluster. A couple of things to note above: our health is 'HEALTH_OK', we have 3 mons, we've established a quorum, and we can see that our active mds is 'ceph-mds-2810413505-gtjgv'. We have a healthy environment.
For Glance and Cinder to operate, you will need to create some storage pools for these systems. Additionally, Nova can be configured to use a pool as well, but this is off by default.
```
kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create volumes 128
kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create images 128
```
Nova storage would be added like this:
```
kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create vms 128
```
Choosing the amount of storage is up to you; the placement group count of 128 can be changed to meet your needs.
We are now ready to install our next chart, MariaDB.