
==============
Stackube Scope
==============

A multi-tenant and secure Kubernetes deployment enabled by OpenStack core components.

Not another “Kubernetes on OpenStack” project
---------------------------------------------

Stackube is a standard upstream Kubernetes deployment with:

1. Mixed container runtimes of Docker (Linux container) and HyperContainer (hypervisor-based container)
2. Keystone for tenant management
3. Neutron for container networking
4. Cinder for persistent volumes

The main difference between Stackube and the existing container service projects in the OpenStack foundation (e.g. Magnum) is that Stackube works alongside OpenStack, not on OpenStack.

This means:

1. Only standalone vanilla OpenStack components are required.
2. Traditional VMs are not required, because HyperContainer provides hypervisor-level isolation for containerized workloads.
3. All the components mentioned above are managed through the Kubernetes plugin API.

What's inside the Stackube repo?
--------------------------------

1. Keystone RBAC plugin
2. Neutron CNI plugin

   - With a Kubernetes Network object controller (see the sketch after this list)

3. Standard k8s upstream Cinder plugin with block-device mode
4. Deployment scripts and guide
5. Other documentation
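
For illustration, a tenant-scoped Network object handled by the controller
above might look like the sketch below. The API group/version and field
names here are assumptions based on common CRD conventions; check the
deployment guide for the authoritative schema::

  apiVersion: "stackube.kubernetes.io/v1"
  kind: Network
  metadata:
    name: example-net        # hypothetical network name
    namespace: demo-tenant   # Networks are scoped to a tenant namespace
  spec:
    cidr: 10.244.1.0/24      # subnet the controller would create in Neutron
    gateway: 10.244.1.1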

Please note:

1. The plugins above are deployed as system Pods and DaemonSets.
2. All other Kubernetes volume types are also supported in Stackube, but the k8s Cinder plugin in block-device mode provides better performance with mixed runtimes and is therefore preferred by default (see the sketch below).
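
As an illustration of the block-device mode, a standard upstream Kubernetes
Cinder PersistentVolume that references a pre-created Cinder volume looks
roughly like this (the volume ID is a placeholder)::

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: cinder-pv
  spec:
    capacity:
      storage: 1Gi
    accessModes:
      - ReadWriteOnce
    cinder:                              # upstream k8s Cinder volume source
      volumeID: "<cinder-volume-uuid>"   # placeholder: ID of an existing Cinder volume
      fsType: ext4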

What's the difference from other plugin projects?
--------------------------------------------------

1. Kuryr

   - This is a Neutron network plugin for the Docker network model, which is not directly supported in Kubernetes. Kuryr can provide a CNI interface, but Stackube also requires tenant-aware network management, which is not included in Kuryr.

2. Fuxi

   - This is a Cinder volume plugin for the Docker volume model, which is not supported in the latest CRI-based Kubernetes (which uses the k8s volume plugin for now, and soon CSI). Also, Stackube prefers a “block-device to Pod” mode in its volume plugin when the HyperContainer runtime is enabled, which is not supported by Fuxi.

3. K8s-cloud-provider

   - This is a “Kubernetes on OpenStack” integration, which requires a fully functioning OpenStack deployment.

4. Zun

   - This is an OpenStack API container service, while Stackube exposes the well-known Kubernetes API and does not require a full OpenStack deployment.

In summary, one distinguishing difference is that the plugins in Stackube are designed to enable hard multi-tenancy in Kubernetes as a whole solution, while the other OpenStack plugin projects do not address this and focus solely on integrating with Kubernetes/Docker as-is. There are many gaps to fill when using them to build a real multi-tenant cloud, for example, how tenants cooperate with networks in k8s.

Another difference is that Stackube uses the mixed container runtime mode of k8s to enable a secure runtime, which is not in the scope of the existing foundation projects. In fact, all plugins in Stackube should work well for both Docker and HyperContainer (a runtime-selection sketch follows).
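
As a sketch of how the mixed runtime mode surfaces to users: with a
frakti-style CRI shim, the runtime is typically selected per Pod through an
annotation. The annotation key below is an assumption modeled on frakti's
convention, not a confirmed Stackube API::

  apiVersion: v1
  kind: Pod
  metadata:
    name: trusted-workload
    annotations:
      # Assumed frakti-style switch: run this Pod as an OS (Docker) container
      # instead of the default hypervisor-based HyperContainer.
      runtime.frakti.alpha.kubernetes.io/OSContainer: "true"
  spec:
    containers:
      - name: nginx
        image: nginx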

The architecture of Stackube is fully decoupled, and it would be easy (and we'd like) to integrate it with any OpenStack-Kubernetes plugin. But right now, we hope to keep everything as simple as possible and focus on the core components.

Deployment workflow
-------------------

On control nodes
~~~~~~~~~~~~~~~~

Install standalone Keystone, Neutron, and Cinder (with Ceph RBD). This can be done with any existing tool, such as DevStack or RDO.

On other nodes
~~~~~~~~~~~~~~

1. Install Neutron L2 agents

   This can be done with any existing tool, such as DevStack or RDO.

2. Install Kubernetes

   - Including container runtimes, CRI shims, CNI, etc.
   - This can be done with any existing tool, such as kubeadm.

3. Deploy Stackube::

     kubectl create -f stackube-configmap.yaml
     kubectl create -f deployment/stackube-proxy.yaml
     kubectl create -f deployment/stackube.yaml

This will deploy all the Stackube plugins as Pods and DaemonSets in the cluster. You can also deploy all of these components on a single node.
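
The stackube-configmap.yaml above carries the OpenStack endpoints and
credentials that the plugins need. A minimal sketch, with assumed key names
and placeholder values (consult the deployment guide in this repo for the
authoritative keys)::

  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: stackube-config
    namespace: kube-system
  data:
    auth-url: "https://keystone.example.com:5000/v2.0"  # placeholder Keystone endpoint
    username: "admin"                                   # placeholder admin credentials
    password: "password"
    tenant-name: "admin"
    region: "RegionOne"
    ext-net-id: "<neutron-external-network-uuid>"       # placeholder external network ID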

After that, users can use the Kubernetes API to manage containers with hypervisor isolation, Neutron networking, Cinder volumes, and tenant awareness.
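
For example, a cloud administrator could onboard a new tenant entirely
through the Kubernetes API. The Tenant kind and its fields below are a
hypothetical sketch of the tenant-management design described above::

  apiVersion: "stackube.kubernetes.io/v1"
  kind: Tenant
  metadata:
    name: demo-tenant
  spec:
    username: "demo"    # assumed: mapped to a Keystone user
    password: "secret"  # assumed: Keystone credential for that user

Once the tenant (and a Network, as sketched earlier) exists, ordinary
kubectl commands issued in that tenant's namespace create Pods that are
isolated by HyperContainer and wired into the tenant's own Neutron network.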