Update and add new docs

Implements: blueprint enhance-documentation

Change-Id: I7a67b55bbb12fae9d0f2e5c693182254a2abc0e6
Harry Zhang 2017-08-23 16:44:51 +08:00
parent d49fd75750
commit 03929dc65b
9 changed files with 372 additions and 296 deletions


@ -5,6 +5,10 @@ fabric controller, to provision containers as the compute instance, along with o
services (e.g. Cinder, Neutron). It supports multiple container runtime technologies, e.g. Docker,
Hyper, and offers built-in soft / hard multi-tenancy (depending on the container runtime used).
# Stackube Authors
Stackube is an open source project with an active development community. The project was initiated by HyperHQ, and involves contributions from ARM, China Mobile, and others.
## Architecture
![Stackube Architecture](doc/images/StackubeArchitecture.png)


@ -12,6 +12,12 @@ fabric controller, to provision containers as the compute instance, along with o
services (e.g. Cinder, Neutron). It supports multiple container runtime technologies, e.g. Docker,
Hyper, and offers built-in soft/hard multi-tenancy (depending on the container runtime used).
============
Architecture
============
.. image:: ../images/StackubeArchitecture.png
===========
Components
===========
@ -38,3 +44,5 @@ Components
* Stackube-proxy: service discovery and load balancing, replacement of kube-proxy.
* Kubestack: the CNI network plugin, which connects containers to Neutron network.
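Kubelet discovers a CNI plugin such as kubestack through a network configuration file on each node. As a minimal hedged sketch (the directory, file name, and every field except the plugin name are illustrative assumptions, not taken from this document):

```shell
# Hypothetical example: write a minimal CNI network config that selects the
# kubestack plugin. Only "type" (the plugin binary name) comes from this doc;
# the path, file name, and other fields are assumptions for illustration.
mkdir -p /tmp/cni-demo
cat > /tmp/cni-demo/10-kubestack.conf <<'EOF'
{
    "cniVersion": "0.3.0",
    "name": "kubestack-demo",
    "type": "kubestack"
}
EOF
cat /tmp/cni-demo/10-kubestack.conf
```

On a real node this file would typically live under the kubelet's CNI config directory so the runtime can invoke the kubestack binary for pod networking.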


@ -1,9 +1,29 @@
Deployment Documentation
Developer Documentation
=====================================
This page describes how to set up a working development environment that can be used to develop Stackube on Ubuntu or
CentOS. These instructions assume you have already installed git, golang and python on your host.
===========
Design Tips
===========
The Stackube project is deliberately simple. Its main part is stackube-controller, which uses Kubernetes Custom Resource Definitions (CRDs, formerly TPRs) to:
1. Manage tenants based on namespace changes in k8s
2. Manage RBAC based on namespace changes in k8s
3. Manage networks based on tenant changes in k8s
The Tenant is a CRD which maps 1:1 to a Keystone tenant; the Network is a CRD which maps 1:1 to a Neutron network. We also have a kubestack binary, which is the CNI plugin for Neutron.
Stackube also has its own stackube-proxy to replace kube-proxy, because networks in Stackube are L2-isolated, so a multi-tenant-aware kube-proxy is needed.
We also replaced kube-dns in k8s for the same reason: because namespaces are isolated, a kube-dns instance needs to run in every namespace instead of one global DNS server.
You can see that: Stackube cluster = upstream Kubernetes + several of our own add-ons + standalone OpenStack components.
Please note: Cinder RBD based block device as volume is implemented in https://github.com/kubernetes/frakti; if you have ideas about it, please contribute there, then build a new stackube/flex-volume Docker image for Stackube to use.
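To make the tenant flow concrete, here is the Tenant manifest used in the Test section of this guide, written out as a runnable sketch (the kubectl step is shown as a comment since it needs a live Stackube cluster):

```shell
# Sketch: the Tenant custom resource that stackube-controller watches.
# Field names and values mirror the test-tenant.yaml example used later
# in this document.
cat > /tmp/test-tenant.yaml <<'EOF'
apiVersion: "stackube.kubernetes.io/v1"
kind: Tenant
metadata:
  name: test
spec:
  username: "test"
  password: "password"
EOF
# On a live cluster this would be submitted with:
#   kubectl create -f /tmp/test-tenant.yaml
cat /tmp/test-tenant.yaml
```

When the controller observes this object it creates the matching Keystone tenant, the k8s namespace, and the per-tenant Neutron network described above.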
=========
Build
=========
@ -36,12 +56,15 @@ Three docker images will be built:
    stackube/stackube-controller:v0.1
    stackube/kubestack:v0.1
=========
Start
=========
=============================
(Optional) Configure Stackube
=============================
If you deployed Stackube by following the official guide, you can skip this part.
Otherwise, the steps below are needed to make sure your Stackube cluster works.
Please note the following parts assume you have already deployed OpenStack and Kubernetes on the same baremetal host. And don't forget to set `--experimental-keystone-url` for kube-apiserver, e.g.
::
@ -73,9 +96,7 @@ Then create external network in Neutron if there is no one.
    neutron subnet-create --ip_version 4 --gateway 10.123.0.1 br-ex 10.123.0.0/16 --allocation-pool start=10.123.0.2,end=10.123.0.200 --name public-subnet
Now, we are ready to deploy Stackube components.
First, create the configuration file for Stackube.
::
@ -109,139 +130,7 @@ Then deploy stackube components:
    kubectl create -f stackube-configmap.yaml
    kubectl create -f deployment/stackube-proxy.yaml
    kubectl create -f deployment/stackube.yaml
    kubectl create -f deployment/flexvolume/flexvolume-ds.yaml
=========
Test
=========
1. Create a new tenant:

::

    $ cat test-tenant.yaml
    apiVersion: "stackube.kubernetes.io/v1"
    kind: Tenant
    metadata:
      name: test
    spec:
      username: "test"
      password: "password"

    $ kubectl create -f test-tenant.yaml
2. Check the auto-created namespace and network. Wait a while, the namespace and network for this tenant should be created automatically:

::

    $ kubectl get namespace test
    NAME      STATUS    AGE
    test      Active    58m

    $ kubectl -n test get network test -o yaml
    apiVersion: stackube.kubernetes.io/v1
    kind: Network
    metadata:
      clusterName: ""
      creationTimestamp: 2017-08-03T11:58:31Z
      generation: 0
      name: test
      namespace: test
      resourceVersion: "3992023"
      selfLink: /apis/stackube.kubernetes.io/v1/namespaces/test/networks/test
      uid: 11d452eb-7843-11e7-8319-68b599b7918c
    spec:
      cidr: 10.244.0.0/16
      gateway: 10.244.0.1
      networkID: ""
    status:
      state: Active
3. Check the Network and Tenant created in Neutron by the Stackube controller:

::

    $ source ~/keystonerc_admin
    $ neutron net-list
    +--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
    | id                                   | name                 | tenant_id                        | subnets                                                  |
    +--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
    | 421d913a-a269-408a-9765-2360e202ad5b | kube-test-test       | 915b36add7e34018b7241ab63a193530 | bb446a53-de4d-4546-81fc-8736a9a88e3a 10.244.0.0/16       |
    +--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
4. Check the kube-dns pods created in the new namespace:

::

    # kubectl -n test get pods
    NAME                        READY     STATUS    RESTARTS   AGE
    kube-dns-1476438210-37jv7   3/3       Running   0          1h
5. Create pods and services in the new namespace:

::

    # kubectl -n test run nginx --image=nginx
    deployment "nginx" created

    # kubectl -n test expose deployment nginx --port=80
    service "nginx" exposed

    # kubectl -n test get pods -o wide
    NAME                        READY     STATUS    RESTARTS   AGE   IP            NODE
    kube-dns-1476438210-37jv7   3/3       Running   0          1h    10.244.0.4    stackube
    nginx-4217019353-6gjxq      1/1       Running   0          27s   10.244.0.10   stackube

    # kubectl -n test run -i -t busybox --image=busybox sh
    If you don't see a command prompt, try pressing enter.

    / # nslookup nginx
    Server:    10.96.0.10
    Address 1: 10.96.0.10

    Name:      nginx
    Address 1: 10.108.57.129 nginx.test.svc.cluster.local

    / # wget -O- nginx
    Connecting to nginx (10.108.57.129:80)
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    -                    100% |*********************************************| 612   0:00:00 ETA
    / #
6. Finally, remove the tenant:

::

    $ kubectl delete tenant test
    tenant "test" deleted
7. Check that the Network in Neutron is also deleted by the Stackube controller:

::

    $ neutron net-list
    +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
    | id                                   | name    | tenant_id                        | subnets                                                  |
    +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+

Now, you are ready to try Stackube features.


@ -1,9 +1,16 @@
=============================================
Welcome to Stackube developer documentation!
Welcome to Stackube documentation!
=============================================
Stackube is a multi-tenant and secure Kubernetes deployment enabled by OpenStack
core components.
Stackube is a Kubernetes-centric OpenStack distro. It uses Kubernetes, instead of Nova, as the compute
fabric controller, to provision containers as the compute instance, along with other OpenStack
services (e.g. Cinder, Neutron). It supports multiple container runtime technologies, e.g. Docker,
Hyper, and offers built-in soft / hard multi-tenancy (depending on the container runtime used).
Stackube Authors
================
Stackube is an open source project with an active development community. The project is initiated by HyperHQ, and involves contribution from ARM, China Mobile, etc.
Introduction
==============
@ -12,9 +19,10 @@ Introduction
   :maxdepth: 2

   architecture
   stackube_scope_clarification
Developer Guide
Deployment Guide
================
.. toctree::
@ -22,7 +30,18 @@ Developer Guide
   setup
Developer Guide
================
.. toctree::
   :maxdepth: 2

   developer
   volume
User Guide
================
.. toctree::
   :maxdepth: 2

   user_guide


@ -0,0 +1,8 @@
Setting up a multi-node Stackube cluster
========================================
This page describes how to set up a multi-node Stackube cluster.
=================
TODO
=================


@ -1,69 +1,21 @@
Setting Up a Development Environment
=====================================
=============================================
Welcome to Stackube setup documentation!
=============================================
This page describes how to set up a working development environment that can be used to develop Stackube on Ubuntu or
CentOS. These instructions assume you have already installed git, golang and python on your host.
Stackube is a multi-tenant and secure Kubernetes deployment enabled by OpenStack
core components.
=================
Getting the code
=================
Single node devbox
==================
Grab the code:
::
This is a single node devbox
git clone git://git.openstack.org/openstack/stackube
.. toctree::
   :maxdepth: 2
==================================
Spawn up Kubernetes and OpenStack
==================================
   single_node
Devstack is used to spawn up a Kubernetes and OpenStack environment.
.. toctree::
   :maxdepth: 2
Create stack user:

::

    sudo useradd -s /bin/bash -d /opt/stack -m stack
    echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
    sudo su - stack
Grab the devstack:

::

    git clone https://git.openstack.org/openstack-dev/devstack -b stable/ocata
    cd devstack
Create a local.conf:

::

    curl -sSL https://raw.githubusercontent.com/openstack/stackube/master/devstack/local.conf.sample -o local.conf
Start installation:

::

    ./stack.sh
Setup environment variables for kubectl and openstack client:

::

    export KUBECONFIG=/opt/stack/admin.conf
    source /opt/stack/devstack/openrc admin admin
================
Add a new node
================

Same procedure as above, but create the local.conf with the following command:

::

    curl -sSL https://raw.githubusercontent.com/openstack/stackube/master/devstack/local.conf.node.sample -o local.conf

And configure local.conf:

- Set `HOST_IP` to the local host's IP
- Set `SERVICE_HOST` to the master's IP
- Set `KUBEADM_TOKEN` to the kubeadm token

Start installation:

::

    ./stack.sh
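The three local.conf edits above can also be scripted. This is a hedged sketch: the variable names come from this guide, while the sample file contents, IPs, and token below are placeholders, not real values.

```shell
# Sketch: apply the per-node settings non-interactively.
# The IPs and token are illustrative placeholders only.
cat > /tmp/local.conf <<'EOF'
[[local|localrc]]
HOST_IP=
SERVICE_HOST=
KUBEADM_TOKEN=
EOF
sed -i \
    -e 's/^HOST_IP=.*/HOST_IP=192.0.2.11/' \
    -e 's/^SERVICE_HOST=.*/SERVICE_HOST=192.0.2.10/' \
    -e 's/^KUBEADM_TOKEN=.*/KUBEADM_TOKEN=abcdef.0123456789abcdef/' \
    /tmp/local.conf
grep -E 'HOST_IP|SERVICE_HOST|KUBEADM_TOKEN' /tmp/local.conf
```

On a real node you would edit the local.conf downloaded into the devstack directory instead of /tmp, then run ./stack.sh.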
   multi_node


@ -0,0 +1,54 @@
Setting up a single node Stackube
=====================================
This page describes how to set up a working development environment that can be used to develop Stackube on Ubuntu or CentOS. These instructions assume you have already installed git, golang and python on your host.
=================
Getting the code
=================
Grab the code:

::

    git clone git://git.openstack.org/openstack/stackube
==================================
Spawn up Kubernetes and OpenStack
==================================
Devstack is used to spawn up a Kubernetes and OpenStack environment.
Create stack user:

::

    sudo useradd -s /bin/bash -d /opt/stack -m stack
    echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
    sudo su - stack
Grab the devstack:

::

    git clone https://git.openstack.org/openstack-dev/devstack -b stable/ocata
    cd devstack
Create a local.conf:

::

    curl -sSL https://raw.githubusercontent.com/openstack/stackube/master/devstack/local.conf.sample -o local.conf
Start installation:

::

    ./stack.sh
Setup environment variables for kubectl and openstack client:

::

    export KUBECONFIG=/opt/stack/admin.conf
    source /opt/stack/devstack/openrc admin admin

Depending on where your installation placed these files, they may instead be:

::

    export KUBECONFIG=/etc/kubernetes/admin.conf
    source openrc admin admin

doc/source/user_guide.rst Normal file

@ -0,0 +1,228 @@
Stackube User Guide
=====================================
=============================
Tenant and Network Management
=============================
In this part, we will introduce tenant management and networking in Stackube. A tenant, which is mapped 1:1 to a k8s namespace, is managed by using a k8s CRD (formerly TPR) to interact with Keystone. Each tenant is also automatically mapped 1:1 to a network, which is likewise implemented as a CRD backed by standalone Neutron.
1. Create a new tenant:

::

    $ cat test-tenant.yaml
    apiVersion: "stackube.kubernetes.io/v1"
    kind: Tenant
    metadata:
      name: test
    spec:
      username: "test"
      password: "password"

    $ kubectl create -f test-tenant.yaml
2. Check the auto-created namespace and network. Wait a while, the namespace and network for this tenant should be created automatically:

::

    $ kubectl get namespace test
    NAME      STATUS    AGE
    test      Active    58m

    $ kubectl -n test get network test -o yaml
    apiVersion: stackube.kubernetes.io/v1
    kind: Network
    metadata:
      clusterName: ""
      creationTimestamp: 2017-08-03T11:58:31Z
      generation: 0
      name: test
      namespace: test
      resourceVersion: "3992023"
      selfLink: /apis/stackube.kubernetes.io/v1/namespaces/test/networks/test
      uid: 11d452eb-7843-11e7-8319-68b599b7918c
    spec:
      cidr: 10.244.0.0/16
      gateway: 10.244.0.1
      networkID: ""
    status:
      state: Active
3. Check the Network and Tenant created in Neutron by the Stackube controller:

::

    $ source ~/keystonerc_admin
    $ neutron net-list
    +--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
    | id                                   | name                 | tenant_id                        | subnets                                                  |
    +--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
    | 421d913a-a269-408a-9765-2360e202ad5b | kube-test-test       | 915b36add7e34018b7241ab63a193530 | bb446a53-de4d-4546-81fc-8736a9a88e3a 10.244.0.0/16       |
    +--------------------------------------+----------------------+----------------------------------+----------------------------------------------------------+
4. Check the kube-dns pods created in the new namespace:

::

    # kubectl -n test get pods
    NAME                        READY     STATUS    RESTARTS   AGE
    kube-dns-1476438210-37jv7   3/3       Running   0          1h
5. Create pods and services in the new namespace:

::

    # kubectl -n test run nginx --image=nginx
    deployment "nginx" created

    # kubectl -n test expose deployment nginx --port=80
    service "nginx" exposed

    # kubectl -n test get pods -o wide
    NAME                        READY     STATUS    RESTARTS   AGE   IP            NODE
    kube-dns-1476438210-37jv7   3/3       Running   0          1h    10.244.0.4    stackube
    nginx-4217019353-6gjxq      1/1       Running   0          27s   10.244.0.10   stackube

    # kubectl -n test run -i -t busybox --image=busybox sh
    If you don't see a command prompt, try pressing enter.

    / # nslookup nginx
    Server:    10.96.0.10
    Address 1: 10.96.0.10

    Name:      nginx
    Address 1: 10.108.57.129 nginx.test.svc.cluster.local

    / # wget -O- nginx
    Connecting to nginx (10.108.57.129:80)
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
    -                    100% |*********************************************| 612   0:00:00 ETA
    / #
6. Finally, remove the tenant:

::

    $ kubectl delete tenant test
    tenant "test" deleted
7. Check that the Network in Neutron is also deleted by the Stackube controller:

::

    $ neutron net-list
    +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
    | id                                   | name    | tenant_id                        | subnets                                                  |
    +--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
=============================
Persistent volume
=============================
This part describes the persistent volume design and usage in Stackube.
==========================
Standard Kubernetes volume
==========================
Stackube is a standard upstream Kubernetes cluster, so any type of `Kubernetes volumes
<https://kubernetes.io/docs/concepts/storage/volumes/>`_ can be used here, for example:
::

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 1Mi
      accessModes:
        - ReadWriteMany
      nfs:
        # FIXME: use the right IP
        server: 10.244.1.4
        path: "/exports"
Please note that since Stackube is a baremetal k8s cluster, cloud-provider-backed volumes (GCE, AWS, etc.) are not supported by default.
However, unless you are using emptyDir or hostPath, we recommend the "Cinder RBD based block device as volume" approach described below, because it brings much higher performance.
=======================================
Cinder RBD based block device as volume
=======================================
The reason this volume type is preferred: by default Stackube runs most of your workloads in VM-based Pods, where the hypervisor-based runtime uses directory sharing for volume mounts, which has more I/O overhead than a bind mount.
On the other hand, a hypervisor Pod makes it possible to mount a block device directly into the VM-based Pod, which eliminates directory sharing.
In Stackube, we use a flexvolume to attach a Cinder RBD based block device directly as a Pod volume. The usage is very simple:
::

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-persistent-storage
              mountPath: /var/lib/nginx
      volumes:
        - name: nginx-persistent-storage
          flexVolume:
            driver: "cinder/flexvolume_driver"
            fsType: ext4
            options:
              volumeID: daa7b4e6-1792-462d-ad47-78e900fed429
Please note the name of the flexvolume is "cinder/flexvolume_driver". Users are expected to provide a valid volume ID created with Cinder beforehand; the related RBD device will then be attached to the VM-based Pod.
If your cluster is installed by stackube/devstack or by following another official Stackube guide, a /etc/kubernetes/cinder.conf file will be generated automatically on every node.
Otherwise, users are expected to write a /etc/kubernetes/cinder.conf on every node. The contents look like:
::

    [Global]
    auth-url = _AUTH_URL_
    username = _USERNAME_
    password = _PASSWORD_
    tenant-name = _TENANT_NAME_
    region = _REGION_

    [RBD]
    keyring = _KEYRING_
And also, users need to make sure the flexvolume_driver binary is in /usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/ on every node.
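As a sketch, the placeholder substitution for cinder.conf can be scripted. The section and key names come from the template above; all concrete values below are illustrative assumptions, and /tmp is used instead of /etc/kubernetes to keep the example self-contained.

```shell
# Sketch: fill in the cinder.conf template shown above.
# Key names come from this doc; the substituted values are placeholders.
cat > /tmp/cinder.conf <<'EOF'
[Global]
auth-url = _AUTH_URL_
username = _USERNAME_
password = _PASSWORD_
tenant-name = _TENANT_NAME_
region = _REGION_

[RBD]
keyring = _KEYRING_
EOF
sed -i \
    -e 's|_AUTH_URL_|http://keystone.example.com:5000/v3|' \
    -e 's|_USERNAME_|demo|' \
    -e 's|_PASSWORD_|secret|' \
    -e 's|_TENANT_NAME_|demo|' \
    -e 's|_REGION_|RegionOne|' \
    -e 's|_KEYRING_|/etc/ceph/ceph.client.cinder.keyring|' \
    /tmp/cinder.conf
cat /tmp/cinder.conf
```

On a real node, write the filled-in file to /etc/kubernetes/cinder.conf with your actual Keystone endpoint, credentials, and Ceph keyring path.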


@ -1,86 +0,0 @@
Persistent volume in Stackube
=====================================
This page describes the persistent volume design and usage in Stackube.
==============================
Use standard Kubernetes volume
==============================

Stackube is a standard upstream Kubernetes cluster, so any type of `Kubernetes volumes
<https://kubernetes.io/docs/concepts/storage/volumes/>`_ can be used here, for example:
::

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 1Mi
      accessModes:
        - ReadWriteMany
      nfs:
        # FIXME: use the right IP
        server: 10.244.1.4
        path: "/exports"
Please note that since Stackube is a baremetal k8s cluster, cloud-provider-backed volumes (GCE, AWS, etc.) are not supported by default.
However, unless you are using emptyDir or hostPath, we highly recommend the "Cinder RBD based block device as volume" approach described below, because this volume type brings much higher performance.
===========================================
Use Cinder RBD based block device as volume
===========================================

The reason this volume type is recommended: by default Stackube runs most of your workloads in VM-based Pods, where the hypervisor-based runtime uses directory sharing for volume mounts, which has more I/O overhead than a bind mount.
On the other hand, a hypervisor Pod makes it possible to mount a block device directly into the VM-based Pod, which eliminates directory sharing.
In Stackube, we use a flexvolume to attach a Cinder RBD based block device directly as a Pod volume. The usage is very simple:
::

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-persistent-storage
              mountPath: /var/lib/nginx
      volumes:
        - name: nginx-persistent-storage
          flexVolume:
            driver: "cinder/flexvolume_driver"
            fsType: ext4
            options:
              volumeID: daa7b4e6-1792-462d-ad47-78e900fed429
Please note the name of the flexvolume is "cinder/flexvolume_driver". Users are expected to provide a valid volume ID created with Cinder beforehand; the related RBD device will then be attached to the VM-based Pod.
If your cluster is installed by stackube/devstack or by following another official Stackube guide, a /etc/kubernetes/cinder.conf file will be generated automatically on every node.
Otherwise, users are expected to write a /etc/kubernetes/cinder.conf on every node. The contents look like:
::

    [Global]
    auth-url = _AUTH_URL_
    username = _USERNAME_
    password = _PASSWORD_
    tenant-name = _TENANT_NAME_
    region = _REGION_

    [RBD]
    keyring = _KEYRING_
And also, users need to make sure the flexvolume_driver binary is in /usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/ on every node.