Convert guides-operations to RST

This commit converts the guides-operations pages into multiple .rst
files and adds them to the RST docs index.

Change-Id: I14b75a513499374b8d274364da616d47bda2018e
Partial-Implements: blueprint docs-to-rst
Michał Dulko 2017-04-27 17:00:59 +02:00
parent e6c7298baf
commit f549e958aa
35 changed files with 382 additions and 291 deletions


@ -1 +0,0 @@
# Table of Contents


@ -1 +0,0 @@
# Helm-Addons


@ -1 +0,0 @@
# Table of Contents


@ -1 +0,0 @@
# Kubernetes Init-Containers


@ -1 +0,0 @@
# Kubernetes Jobs


@ -1,5 +0,0 @@
# Openstack-Helm Operations: Kubernetes
## Conceptual Guides: Kubernetes
- [Init-Containers](kb-init-containers.md)
- [Jobs](kb-jobs.md)


@ -1 +0,0 @@
# Using Ingress


@ -1 +0,0 @@
# Using NodePorts


@ -1,9 +0,0 @@
# Table of Contents
###    4.1 Kubernetes Control Plane
####     4.1.1 CNI SDN Considerations
####     4.1.2 Calico Networking
###    4.2 Ingress Philosophy
###    4.3 Openstack Networking
####     4.3.1 Flat Networking
####     4.3.2 L2 Networking


@ -1,7 +0,0 @@
# Openstack-Helm Operations: Openstack
## Openstack-Helm Configuration Management
### Configuration Overrides
## Oslo Config Generation Tool


@ -1,3 +0,0 @@
# Openstack-Helm Operations: Openstack
## Overview


@ -1,6 +0,0 @@
# Table of Contents
- [Using Namespaces](sec-namespaces.md)
- [SELinux and SECCOMP](sec-appsec.md)
- [Role-Based Access Control](sec-rbac.md)


@ -1 +0,0 @@
# SECCOMP and SELinux


@ -1 +0,0 @@
# Namespace Isolation


@ -1 +0,0 @@
# Role-Based Access Controls


@ -1,21 +0,0 @@
# Openstack-Helm: Operations Guides
- [Helm Operations](ops-helm/readme.md) - Helm Operator Guides
- [Openstack-Helm Operations](ops-helm/osh-operations.md)
- [Addons and Plugins](ops-helm/osh-addons.md)
- [Kubernetes Operations](ops-kubernetes/readme.md)
- [Init-Containers](ops-kubernetes/kb-init-containers.md)
- [Jobs](ops-kubernetes/kb-jobs.md)
- [Openstack Operations](ops-openstack/readme.md)
- [Config Generation](ops-openstack/os-config/os-config-gen.md) - Openstack-Helm Configuration Management
- [Networking Guides](ops-network/readme.md) - Network Operations
- [Ingress](ops-network/net-ingress.md)
- [Nodeports](ops-network/net-nodeport.md)
- [Security Guides](ops-security/readme.md) - Security Operations
- [Using Namespaces](ops-security/sec-namespaces.md)
- [SELinux and SECCOMP](ops-security/sec-appsec.md)
- [Role-Based Access Control](ops-security/sec-rbac.md)
- [Troubleshooting Guides](troubleshooting/readme.md)
- [Database Issues](troubleshooting/ts-database.md)
- [Development Issues](troubleshooting/ts-development.md)
- [Networking Issues](troubleshooting/ts-networking.md)
- [Storage Issues](troubleshooting/ts-persistent-storage.md)


@ -1,18 +0,0 @@
# Troubleshooting
Sometimes things go wrong. These guides will help you solve many common issues with the following:
* [Database: Galera](ts-database.md#galera-cluster)
* [Development: Minikube](ts-minikube.md)
* [Networking: General](ts-networking.md)
* [Persistent Storage: Ceph](ts-persistent-storage.md#ceph)
## Getting Help
### Channels
[Contact Information](../../README.md#openstack-helm)
### Bugs and Feature requests
When discovering a new bug, please create a new issue in the [GitHub Tracking System](https://github.com/att-comdev/openstack-helm/issues/new)


@ -1,66 +0,0 @@
# Troubleshooting - Database Deployments
This guide is to help users debug any general database issues when deploying Charts in this repository.
# Galera Cluster
**CHART:** openstack-helm/mariadb (when `developer-mode: false`)
MariaDB is a `StatefulSet` (`PetSets` have been retired in Kubernetes v1.5.0). As such, it initiates a 'seed' which is used to deploy MariaDB members via [affinity/anti-affinity](http://kubernetes.io/docs/user-guide/node-selection/) features. Ceph uses this as well. So what you will notice is the following behavior:
```
openstack mariadb-0 0/1 Running 0 28s 10.25.49.199 kubenode05
openstack mariadb-seed-0ckf4 1/1 Running 0 48s 10.25.162.197 kubenode01
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
openstack mariadb-0 1/1 Running 0 1m 10.25.49.199 kubenode05
openstack mariadb-1 0/1 Pending 0 0s <none>
openstack mariadb-1 0/1 Pending 0 0s <none> kubenode04
openstack mariadb-1 0/1 ContainerCreating 0 0s <none> kubenode04
openstack mariadb-1 0/1 Running 0 3s 10.25.178.74 kubenode04
```
What you're seeing is the output of `kubectl get pods -o wide --all-namespaces`, which is used to monitor the seed host preparing each of the MariaDB/Galera members in order: mariadb-0, then mariadb-1, then mariadb-2. This process can take up to a few minutes, so be patient.
To test MariaDB, do the following:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl exec mariadb-0 -it -n openstack -- mysql -h mariadb.openstack -uroot -ppassword -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone |
| mysql |
| performance_schema |
+--------------------+
admin@kubenode01:~/projects/openstack-helm$
```
Now you can see that MariaDB is loaded, with databases intact! If you're at this point, the rest of the installation is easy. You can run the following to check on Galera:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl describe po/mariadb-0 -n openstack
Name: mariadb-0
Namespace: openstack
Node: kubenode05/192.168.3.25
Start Time: Fri, 23 Dec 2016 16:15:49 -0500
Labels: app=mariadb
galera=enabled
Status: Running
IP: 10.25.49.199
Controllers: StatefulSet/mariadb
...
...
...
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5s 5s 1 {default-scheduler } Normal Scheduled Successfully assigned mariadb-0 to kubenode05
3s 3s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Pulling pulling image "quay.io/stackanetes/stackanetes-mariadb:newton"
2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Pulled Successfully pulled image "quay.io/stackanetes/stackanetes-mariadb:newton"
2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Created Created container with docker id f702bd7c11ef; Security:[seccomp=unconfined]
2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Started Started container with docker id f702bd7c11ef
```
So you can see that galera is enabled.


@ -1,57 +0,0 @@
# Troubleshooting - Minikube
This troubleshooting guide is intended to assist users who are developing charts within this repository when using minikube.
If you discover any issues with Minikube itself, submit an issue to the Minikube repository.
## Diagnosing the problem
In order to protect your general sanity, we've included a curated list of verification and troubleshooting steps that may help you avoid some potential issues while developing Openstack-Helm.
**MariaDB**<br>
To verify the state of MariaDB, use the following command:
```
$ kubectl exec mariadb-0 -it -n openstack -- mysql -u root -ppassword -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
+--------------------+
$
```
**Helm Server/Repository**<br>
Sometimes you will run into Helm server or repository issues. For our purposes, it's mostly safe to whack these. If you are developing charts for other projects, use at your own risk (you most likely know how to resolve these issues already).
To check for a running instance of Helm Server:
```
$ ps -a | grep "helm serve"
29452 ttys004 0:00.23 helm serve .
35721 ttys004 0:00.00 grep --color=auto helm serve
```
Kill the running "helm serve" process:
```
$ kill 29452
```
To clear out previous Helm repositories, and reinstall a local repository:
```
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com/
local http://localhost:8879/charts
$
$ helm repo remove local
```
You can then re-add your local repository if you need it again:
```
$ helm repo add local http://localhost:8879/charts
```


@ -1,5 +0,0 @@
# Troubleshooting - Networking
This guide is to help users debug any networking issues when deploying Charts in this repository.
# Diagnosing the problem


@ -1,84 +0,0 @@
# Troubleshooting - Persistent Storage
This guide is to help users debug any general storage issues when deploying Charts in this repository.
# Ceph
**CHART:** openstack-helm/ceph
### Ceph Validating PVC
To validate persistent volume claim (PVC) creation, we've placed a test manifest in the `./test/` directory. Deploy this pvc and explore the deployment:
```
admin@kubenode01:~$ kubectl get pvc -o wide --all-namespaces -w
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph pvc-test Bound pvc-bc768dea-c93e-11e6-817f-001fc69c26d1 1Gi RWO 9h
admin@kubenode01:~$
```
The output above indicates that the PVC is 'bound' correctly. Now digging deeper:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl describe pvc pvc-test -n ceph
Name: pvc-test
Namespace: ceph
StorageClass: general
Status: Bound
Volume: pvc-bc768dea-c93e-11e6-817f-001fc69c26d1
Labels: <none>
Capacity: 1Gi
Access Modes: RWO
No events.
admin@kubenode01:~/projects/openstack-helm$
```
We can see that we have a VolumeID, and the 'capacity' is 1GB. It is a 'general' storage class. It is just a simple test. You can safely delete this test by issuing the following:
```
admin@kubenode01:~/projects/openstack-helm$ kubectl delete pvc pvc-test -n ceph
persistentvolumeclaim "pvc-test" deleted
admin@kubenode01:~/projects/openstack-helm$
```
### Ceph Validating StorageClass
Next we can look at the storage class, to make sure that it was created correctly:
```
admin@kubenode01:~$ kubectl describe storageclass/general
Name: general
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/rbd
Parameters: adminId=admin,adminSecretName=pvc-ceph-conf-combined-storageclass,adminSecretNamespace=ceph,monitors=ceph-mon.ceph:6789,pool=rbd,userId=admin,userSecretName=pvc-ceph-client-key
No events.
admin@kubenode01:~$
```
The parameters are what we're looking for here. If the parameters were passed to the StorageClass correctly, we will see the `ceph-mon.ceph:6789` hostname/port, settings like `userId`, and the appropriate secrets used for volume claims. This all looks good, and it is time to check Ceph itself.
### Ceph Validation
Most commonly, we want to validate that Ceph is working correctly. This can be done with the following ceph command:
```
admin@kubenode01:~$ kubectl exec -t -i ceph-mon-0 -n ceph -- ceph status
cluster 046de582-f8ee-4352-9ed4-19de673deba0
health HEALTH_OK
monmap e3: 3 mons at {ceph-mon-392438295-6q04c=10.25.65.131:6789/0,ceph-mon-392438295-ksrb2=10.25.49.196:6789/0,ceph-mon-392438295-l0pzj=10.25.79.193:6789/0}
election epoch 6, quorum 0,1,2 ceph-mon-392438295-ksrb2,ceph-mon-392438295-6q04c,ceph-mon-392438295-l0pzj
fsmap e5: 1/1/1 up {0=mds-ceph-mds-2810413505-gtjgv=up:active}
osdmap e23: 5 osds: 5 up, 5 in
flags sortbitwise
pgmap v22012: 80 pgs, 3 pools, 12712 MB data, 3314 objects
101 GB used, 1973 GB / 2186 GB avail
80 active+clean
admin@kubenode01:~$
```
Use one of your Ceph Monitors to check the status of the cluster. A couple of things to note above; our health is 'HEALTH_OK', we have 3 mons, we've established a quorum, and we can see that our active mds is 'ceph-mds-2810413505-gtjgv'. We have a healthy environment.
For Glance and Cinder to operate, you will need to create some storage pools for these systems. Additionally, Nova can be configured to use a pool as well, but this is off by default.
```
kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create volumes 128
kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create images 128
```
Nova storage would be added like this:
```
kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create vms 128
```
Choosing the number of placement groups is up to you; change the `128` above to meet your needs.
We are now ready to install our next chart, MariaDB.


@ -15,6 +15,7 @@ Contents:
   philosophy
   install/index
   devref/index
   operator/index
   contributing
Indices and tables


@ -0,0 +1,3 @@
===============
Getting started
===============


@ -0,0 +1,9 @@
===============
Helm Operations
===============
OpenStack-Helm Operations
=========================
Addons and Plugins
==================


@ -0,0 +1,15 @@
Operations Guides
=================
Contents:

.. toctree::
   :maxdepth: 2

   getting-started
   helm
   kubernetes
   network
   openstack
   security
   troubleshooting/index


@ -0,0 +1,9 @@
=====================
Kubernetes Operations
=====================
Init-Containers
===============
Jobs
====


@ -0,0 +1,9 @@
=================
Networking Guides
=================
Ingress
=======
Nodeports
=========


@ -0,0 +1,12 @@
====================
OpenStack Operations
====================
OpenStack-Helm Configuration Management
=======================================
Configuration overrides
-----------------------
Oslo Config Generation Tool
===========================


@ -0,0 +1,12 @@
===============
Security Guides
===============
Using namespaces
================
SELinux and SECCOMP
===================
Role-Based Access Control
=========================


@ -0,0 +1,81 @@
====================
Database Deployments
====================
This guide is to help users debug any general database issues when
deploying Charts in this repository.
Galera Cluster
==============
**CHART:** openstack-helm/mariadb (when ``developer-mode: false``)
MariaDB is a ``StatefulSet`` (``PetSets`` have been retired in
Kubernetes v1.5.0). As such, it initiates a 'seed' which is used to
deploy MariaDB members via `affinity/anti-affinity
<http://kubernetes.io/docs/user-guide/node-selection/>`__
features. Ceph uses this as well. So what you will notice is the
following behavior:

::

    openstack mariadb-0 0/1 Running 0 28s 10.25.49.199 kubenode05
    openstack mariadb-seed-0ckf4 1/1 Running 0 48s 10.25.162.197 kubenode01
    NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
    openstack mariadb-0 1/1 Running 0 1m 10.25.49.199 kubenode05
    openstack mariadb-1 0/1 Pending 0 0s <none>
    openstack mariadb-1 0/1 Pending 0 0s <none> kubenode04
    openstack mariadb-1 0/1 ContainerCreating 0 0s <none> kubenode04
    openstack mariadb-1 0/1 Running 0 3s 10.25.178.74 kubenode04

What you're seeing is the output of
``kubectl get pods -o wide --all-namespaces``, which is used to monitor
the seed host preparing each of the MariaDB/Galera members in order:
mariadb-0, then mariadb-1, then mariadb-2. This process can take up to a
few minutes, so be patient.
To test MariaDB, do the following:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl exec mariadb-0 -it -n openstack -- mysql -h mariadb.openstack -uroot -ppassword -e 'show databases;'
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | keystone           |
    | mysql              |
    | performance_schema |
    +--------------------+
    admin@kubenode01:~/projects/openstack-helm$

Now you can see that MariaDB is loaded, with databases intact! If you're
at this point, the rest of the installation is easy. You can run the
following to check on Galera:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl describe po/mariadb-0 -n openstack
    Name:           mariadb-0
    Namespace:      openstack
    Node:           kubenode05/192.168.3.25
    Start Time:     Fri, 23 Dec 2016 16:15:49 -0500
    Labels:         app=mariadb
                    galera=enabled
    Status:         Running
    IP:             10.25.49.199
    Controllers:    StatefulSet/mariadb
    ...
    ...
    ...
    FirstSeen LastSeen Count From SubObjectPath Type Reason Message
    --------- -------- ----- ---- ------------- -------- ------ -------
    5s 5s 1 {default-scheduler } Normal Scheduled Successfully assigned mariadb-0 to kubenode05
    3s 3s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Pulling pulling image "quay.io/stackanetes/stackanetes-mariadb:newton"
    2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Pulled Successfully pulled image "quay.io/stackanetes/stackanetes-mariadb:newton"
    2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Created Created container with docker id f702bd7c11ef; Security:[seccomp=unconfined]
    2s 2s 1 {kubelet kubenode05} spec.containers{mariadb} Normal Started Started container with docker id f702bd7c11ef

So you can see that galera is enabled.
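
To go one step further and confirm that the Galera cluster itself has
formed, you can query the wsrep status variables through the same
service. This is a minimal sketch, reusing the ``mariadb.openstack``
endpoint and credentials from above; the output shown is what a healthy
three-member cluster would report:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl exec mariadb-0 -it -n openstack -- mysql -h mariadb.openstack -uroot -ppassword -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | wsrep_cluster_size | 3     |
    +--------------------+-------+

A ``wsrep_cluster_size`` equal to the number of members you deployed
means every member has joined the cluster.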


@ -0,0 +1,72 @@
Minikube
========
This troubleshooting guide is intended to assist users who are
developing charts within this repository when using minikube. If you
discover any issues with Minikube itself, submit an issue to the
Minikube repository.
Diagnosing the problem
----------------------
In order to protect your general sanity, we've included a curated list
of verification and troubleshooting steps that may help you avoid some
potential issues while developing OpenStack-Helm.
MariaDB
~~~~~~~
To verify the state of MariaDB, use the following command:

::

    $ kubectl exec mariadb-0 -it -n openstack -- mysql -u root -ppassword -e 'show databases;'
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    +--------------------+
    $

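
If the command fails outright, first confirm the MariaDB pod is running
and ready; a quick check, filtering on the ``app=mariadb`` label shown
in the database guide:

::

    $ kubectl get pods -n openstack -l app=mariadb
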
Helm Server/Repository
~~~~~~~~~~~~~~~~~~~~~~
Sometimes you will run into Helm server or repository issues. For our
purposes, it's mostly safe to whack these. If you are developing
charts for other projects, use at your own risk (you most likely know
how to resolve these issues already).
To check for a running instance of Helm Server:

::

    $ ps -a | grep "helm serve"
    29452 ttys004 0:00.23 helm serve .
    35721 ttys004 0:00.00 grep --color=auto helm serve

Kill the running ``helm serve`` process:

::

    $ kill 29452

To clear out previous Helm repositories and reinstall a local
repository:

::

    $ helm repo list
    NAME    URL
    stable  https://kubernetes-charts.storage.googleapis.com/
    local   http://localhost:8879/charts
    $
    $ helm repo remove local

You can then re-add your local repository if you need it again:

::

    $ helm repo add local http://localhost:8879/charts

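
Note that ``helm serve`` must be running again before charts can be
fetched from the repository you just re-added. A rough sketch (the
search simply verifies that the repository answers):

::

    $ helm serve &
    $ helm search local/
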


@ -0,0 +1,29 @@
===============
Troubleshooting
===============
Sometimes things go wrong. These guides will help you solve many common issues with the following:

.. toctree::
   :maxdepth: 2

   database
   development
   networking
   persistent-storage

Getting help
============
Channels
--------
* Join us on `Slack <http://slack.k8s.io/>`_ - #openstack-helm
* Join us on `IRC <irc://chat.freenode.net:6697/openstack-helm>`_:
  #openstack-helm on freenode
Bugs and Feature requests
-------------------------
When discovering a new bug, please create a new issue in `Launchpad
<https://bugs.launchpad.net/openstack-helm>`_.


@ -0,0 +1,9 @@
==========
Networking
==========
This guide is to help users debug any networking issues when deploying
Charts in this repository.
Diagnosing the problem
======================
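
A common first check is whether in-cluster DNS resolves service names,
since charts address their dependencies by service name. A minimal
sketch, assuming you can schedule a throwaway ``busybox`` pod (the
addresses shown are illustrative):

::

    $ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup mariadb.openstack
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    Name:      mariadb.openstack
    Address 1: 10.25.49.199 mariadb.openstack.svc.cluster.local
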


@ -0,0 +1,121 @@
==================
Persistent Storage
==================
This guide is to help users debug any general storage issues when
deploying Charts in this repository.
Ceph
====
**CHART:** openstack-helm/ceph
Ceph Validating PVC
~~~~~~~~~~~~~~~~~~~
To validate persistent volume claim (PVC) creation, we've placed a test
manifest in the ``./test/`` directory. Deploy this PVC and explore the
deployment:

::

    admin@kubenode01:~$ kubectl get pvc -o wide --all-namespaces -w
    NAMESPACE NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
    ceph pvc-test Bound pvc-bc768dea-c93e-11e6-817f-001fc69c26d1 1Gi RWO 9h
    admin@kubenode01:~$

The output above indicates that the PVC is 'bound' correctly. Now
digging deeper:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl describe pvc pvc-test -n ceph
    Name:           pvc-test
    Namespace:      ceph
    StorageClass:   general
    Status:         Bound
    Volume:         pvc-bc768dea-c93e-11e6-817f-001fc69c26d1
    Labels:         <none>
    Capacity:       1Gi
    Access Modes:   RWO
    No events.
    admin@kubenode01:~/projects/openstack-helm$

We can see that we have a volume, the capacity is 1Gi, and it uses the
'general' storage class. It is just a simple test. You can safely delete
this test by issuing the following:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl delete pvc pvc-test -n ceph
    persistentvolumeclaim "pvc-test" deleted
    admin@kubenode01:~/projects/openstack-helm$

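
If you ever want to recreate an equivalent claim by hand instead of
using the repository's manifest, a minimal sketch looks like the
following (the beta annotation form is used here; newer Kubernetes
releases would set ``spec.storageClassName`` instead):

::

    cat <<EOF | kubectl create -n ceph -f -
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-test
      annotations:
        volume.beta.kubernetes.io/storage-class: general
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF
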
Ceph Validating StorageClass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Next we can look at the storage class, to make sure that it was created
correctly:

::

    admin@kubenode01:~$ kubectl describe storageclass/general
    Name:            general
    IsDefaultClass:  No
    Annotations:     <none>
    Provisioner:     kubernetes.io/rbd
    Parameters:      adminId=admin,adminSecretName=pvc-ceph-conf-combined-storageclass,adminSecretNamespace=ceph,monitors=ceph-mon.ceph:6789,pool=rbd,userId=admin,userSecretName=pvc-ceph-client-key
    No events.
    admin@kubenode01:~$

The parameters are what we're looking for here. If the parameters were
passed to the StorageClass correctly, we will see the
``ceph-mon.ceph:6789`` hostname/port, settings like ``userId``, and the
appropriate secrets used for volume claims. This all looks good, and it
is time to check Ceph itself.
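
Before doing so, it is worth confirming that the secrets referenced in
the parameters actually exist; a quick check (the secret names are taken
from the ``Parameters`` line above, and the user secret is looked up in
the claim's namespace):

::

    admin@kubenode01:~$ kubectl get secret pvc-ceph-conf-combined-storageclass -n ceph
    admin@kubenode01:~$ kubectl get secret pvc-ceph-client-key -n ceph
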
Ceph Validation
~~~~~~~~~~~~~~~
Most commonly, we want to validate that Ceph is working correctly. This
can be done with the following ceph command:

::

    admin@kubenode01:~$ kubectl exec -t -i ceph-mon-0 -n ceph -- ceph status
    cluster 046de582-f8ee-4352-9ed4-19de673deba0
    health HEALTH_OK
    monmap e3: 3 mons at {ceph-mon-392438295-6q04c=10.25.65.131:6789/0,ceph-mon-392438295-ksrb2=10.25.49.196:6789/0,ceph-mon-392438295-l0pzj=10.25.79.193:6789/0}
    election epoch 6, quorum 0,1,2 ceph-mon-392438295-ksrb2,ceph-mon-392438295-6q04c,ceph-mon-392438295-l0pzj
    fsmap e5: 1/1/1 up {0=mds-ceph-mds-2810413505-gtjgv=up:active}
    osdmap e23: 5 osds: 5 up, 5 in
    flags sortbitwise
    pgmap v22012: 80 pgs, 3 pools, 12712 MB data, 3314 objects
    101 GB used, 1973 GB / 2186 GB avail
    80 active+clean
    admin@kubenode01:~$

Use one of your Ceph Monitors to check the status of the cluster. A
couple of things to note above: our health is ``HEALTH_OK``, we have 3
mons, we've established a quorum, and we can see that our active mds is
``ceph-mds-2810413505-gtjgv``. We have a healthy environment.
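
If the cluster reports anything other than ``HEALTH_OK``, two standard
Ceph commands are worth running from the same monitor pod before digging
further: ``ceph health detail`` explains which checks are failing, and
``ceph osd tree`` shows which OSDs are down or out:

::

    admin@kubenode01:~$ kubectl exec -t -i ceph-mon-0 -n ceph -- ceph health detail
    admin@kubenode01:~$ kubectl exec -t -i ceph-mon-0 -n ceph -- ceph osd tree
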
For Glance and Cinder to operate, you will need to create some storage
pools for these systems. Nova can also be configured to use a pool, but
this is off by default.

::

    kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create volumes 128
    kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create images 128

Nova storage would be added like this:

::

    kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create vms 128

Choosing the number of placement groups is up to you; change the ``128``
above to meet your needs.
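
As a rough rule of thumb rather than an official sizing formula, a
common starting point is ``(number of OSDs * 100) / replica count``,
rounded to a nearby power of two. For the five-OSD cluster shown
earlier, assuming the default of three replicas:

::

    (5 * 100) / 3 = 166.67  ->  round down to 128 or up to 256
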
We are now ready to install our next chart, MariaDB.