Update remaining markdown docs to RST
There were a few remaining README.md files. This commit converts them to RST.

Change-Id: Ia0be0012fff33b9be5c9db3febc1e45a076701ec
parent fb11f693ab
commit cc243499ec
ceph/README.md (178 lines removed)
@@ -1,178 +0,0 @@
# openstack-helm/ceph

This chart installs a working version of ceph. It is based on the ceph-docker work and follows closely with the setup [examples](https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes) for kubernetes.

It attempts to simplify that process by wrapping up much of the setup into a helm-chart. A few items are still necessary, however, until they can be refined:

### SkyDNS Resolution

The Ceph MONs are what clients talk to when mounting Ceph storage. Because Ceph MON IPs can change, we need a Kubernetes service to front them. Otherwise your clients will eventually stop working over time as MONs are rescheduled.

To get SkyDNS resolution working, the resolv.conf on your nodes should look something like this:

```
domain <EXISTING_DOMAIN>
search <EXISTING_DOMAIN>

search svc.cluster.local #Your kubernetes cluster ip domain

nameserver 10.0.0.10 #The cluster IP of skyDNS
nameserver <EXISTING_RESOLVER_IP>
```

### Ceph and RBD utilities installed on the nodes

The Kubernetes kubelet shells out to system utilities to mount Ceph volumes. This means that every system must have these utilities installed. This requirement extends to the control plane, since there may be interactions between kube-controller-manager and the Ceph cluster.

For Debian-based distros:

```
apt-get install ceph-fs-common ceph-common
```

For Redhat-based distros:

```
yum install ceph
```

### Linux Kernel version 4.2.0 or newer

You'll need a newer kernel to use this. Kernel panics have been observed on older versions. Your kernel should also have RBD support.

This has been tested on:

- Ubuntu 15.10

This will not work on:

- Debian 8.5

### Override the default network settings

By default, `10.244.0.0/16` is used for the `cluster_network` and `public_network` in ceph.conf. To change these defaults, set the following environment variables according to your network requirements. These IPs should be set according to the range of your Pod IPs in your kubernetes cluster:

```
export osd_cluster_network=192.168.0.0/16
export osd_public_network=192.168.0.0/16
```

For a kubeadm-installed weave cluster, you will likely want to run:

```
export osd_cluster_network=10.32.0.0/12
export osd_public_network=10.32.0.0/12
```

### Label your storage nodes

You must label your storage nodes in order to run Ceph pods on them.

```
kubectl label node <nodename> node-type=storage
```

If you want all nodes in your Kubernetes cluster to be a part of your Ceph cluster, label them all.

```
kubectl label nodes node-type=storage --all
```

### Quickstart

You will need to generate ceph keys and configuration. There is a simple-to-use utility that can do this quickly. Please note the generator utility (per ceph-docker) requires the [sigil template framework](https://github.com/gliderlabs/sigil) to be installed and on the current path.

```
cd common/utils/secret-generator
./generate_secrets.sh all `./generate_secrets.sh fsid`
cd ../../..
```
At this point, you're ready to generate base64-encoded files based on the secrets generated above. This is done automatically if you run make, which rebuilds all charts.

```
make
```

You can also trigger it specifically:

```
make base64
make ceph
```

Finally, you can now deploy your ceph chart:

```
helm --debug install local/ceph --namespace=ceph
```

You should see a deployed/successful helm deployment:

```
# helm ls
NAME         REVISION    UPDATED                     STATUS      CHART
saucy-elk    1           Thu Nov 17 13:43:27 2016    DEPLOYED    ceph-0.1.0
```

as well as all kubernetes resources deployed into the ceph namespace:

```
# kubectl get all --namespace=ceph
NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
svc/ceph-mon   None            <none>        6789/TCP   1h
svc/ceph-rgw   100.76.18.187   <pending>     80/TCP     1h
NAME                                 READY     STATUS             RESTARTS   AGE
po/ceph-mds-840702866-0n24u          1/1       Running            3          1h
po/ceph-mon-1870970076-7h5zw         1/1       Running            2          1h
po/ceph-mon-1870970076-d4uu2         1/1       Running            3          1h
po/ceph-mon-1870970076-s6d2p         1/1       Running            1          1h
po/ceph-mon-check-4116985937-ggv4m   1/1       Running            0          1h
po/ceph-osd-2m2mf                    1/1       Running            2          1h
po/ceph-rgw-2085838073-02154         0/1       Pending            0          1h
po/ceph-rgw-2085838073-0d6z7         0/1       CrashLoopBackOff   21         1h
po/ceph-rgw-2085838073-3trec         0/1       Pending            0          1h
```

Note that the ceph-rgw image is crashing because of an issue processing the mon_host name 'ceph-mon' in ceph.conf. This is an upstream issue that needs to be worked, but it is not required to test ceph rbd or ceph filesystem functionality.

Finally, you can now test a ceph rbd volume:

```
export PODNAME=`kubectl get pods --selector="app=ceph,daemon=mon" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}" --namespace=ceph`
kubectl exec -it $PODNAME --namespace=ceph -- rbd create ceph-rbd-test --size 20G
kubectl exec -it $PODNAME --namespace=ceph -- rbd info ceph-rbd-test
```

If that works, you can create a container and attach it to that volume:

```
cd ceph/utils/test
kubectl create -f ceph-rbd-test.yaml --namespace=ceph
kubectl exec -it --namespace=ceph ceph-rbd-test -- df -h
```

### Cleanup

Always make sure to delete any test instances that have ceph volumes mounted before you delete your ceph cluster. Otherwise, kubelet may get stuck trying to unmount volumes, which can only be recovered with a reboot. If you ran the tests above, this can be done with:

```
kubectl delete pod ceph-rbd-test --namespace=ceph
```

The easiest way to delete your environment is to delete the helm install:

```
# helm ls
NAME         REVISION    UPDATED                     STATUS      CHART
saucy-elk    1           Thu Nov 17 13:43:27 2016    DEPLOYED    ceph-0.1.0

# helm delete saucy-elk
```

And finally, because helm does not appear to clean up all artifacts, you will want to delete the ceph namespace to remove any secrets helm installed:

```
kubectl delete namespace ceph
```

ceph/README.rst (new file, 219 lines)
@@ -0,0 +1,219 @@
openstack-helm/ceph
===================

This chart installs a working version of ceph. It is based on the
ceph-docker work and follows closely with the setup `examples
<https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes>`__
for kubernetes.

It attempts to simplify that process by wrapping up much of the setup
into a helm-chart. A few items are still necessary, however, until they
can be refined:

SkyDNS Resolution
~~~~~~~~~~~~~~~~~

The Ceph MONs are what clients talk to when mounting Ceph storage.
Because Ceph MON IPs can change, we need a Kubernetes service to front
them. Otherwise your clients will eventually stop working over time as
MONs are rescheduled.

To get SkyDNS resolution working, the resolv.conf on your nodes should
look something like this:

::

    domain <EXISTING_DOMAIN>
    search <EXISTING_DOMAIN>

    search svc.cluster.local #Your kubernetes cluster ip domain

    nameserver 10.0.0.10 #The cluster IP of skyDNS
    nameserver <EXISTING_RESOLVER_IP>

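
To sanity-check the resolution path, you can query the fronting service
through SkyDNS from any node. This is an optional check; the name below
assumes the chart is deployed into the ``ceph`` namespace as in the
quickstart later in this document, and ``10.0.0.10`` is the SkyDNS
cluster IP from the example above:

::

    # Should return the MON pod addresses behind the headless ceph-mon service
    nslookup ceph-mon.ceph.svc.cluster.local 10.0.0.10
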
Ceph and RBD utilities installed on the nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Kubernetes kubelet shells out to system utilities to mount Ceph
volumes. This means that every system must have these utilities
installed. This requirement extends to the control plane, since there
may be interactions between kube-controller-manager and the Ceph
cluster.

For Debian-based distros:

::

    apt-get install ceph-fs-common ceph-common

For Redhat-based distros:

::

    yum install ceph

Linux Kernel version 4.2.0 or newer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You'll need a newer kernel to use this. Kernel panics have been observed
on older versions. Your kernel should also have RBD support.

This has been tested on:

* Ubuntu 15.10

This will not work on:

* Debian 8.5

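
A minimal way to verify the kernel version and RBD support on a node
(this check assumes RBD was built as a loadable module rather than
compiled into the kernel):

::

    # Kernel must be 4.2.0 or newer
    uname -r
    # The rbd module should load without error
    sudo modprobe rbd && lsmod | grep rbd
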
Override the default network settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, ``10.244.0.0/16`` is used for the ``cluster_network`` and
``public_network`` in ceph.conf. To change these defaults, set the
following environment variables according to your network requirements.
These IPs should be set according to the range of your Pod IPs in your
kubernetes cluster:

::

    export osd_cluster_network=192.168.0.0/16
    export osd_public_network=192.168.0.0/16

For a kubeadm-installed weave cluster, you will likely want to run:

::

    export osd_cluster_network=10.32.0.0/12
    export osd_public_network=10.32.0.0/12

Label your storage nodes
~~~~~~~~~~~~~~~~~~~~~~~~

You must label your storage nodes in order to run Ceph pods on them.

::

    kubectl label node <nodename> node-type=storage

If you want all nodes in your Kubernetes cluster to be a part of your
Ceph cluster, label them all.

::

    kubectl label nodes node-type=storage --all

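
As a quick, optional check, you can list the nodes that carry the label
and will therefore be considered for Ceph pods:

::

    kubectl get nodes -l node-type=storage
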
Quickstart
~~~~~~~~~~

You will need to generate ceph keys and configuration. There is a
simple-to-use utility that can do this quickly. Please note the
generator utility (per ceph-docker) requires the `sigil template
framework <https://github.com/gliderlabs/sigil>`_ to be installed and on
the current path.

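
Before running the commands below, it can help to confirm that sigil is
actually resolvable from your shell (an optional check, not part of the
chart itself):

::

    command -v sigil || echo "sigil not found in PATH"
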
::

    cd common/utils/secret-generator
    ./generate_secrets.sh all `./generate_secrets.sh fsid`
    cd ../../..

At this point, you're ready to generate base64-encoded files based on
the secrets generated above. This is done automatically if you run make,
which rebuilds all charts.

::

    make

You can also trigger it specifically:

::

    make base64
    make ceph

Finally, you can now deploy your ceph chart:

::

    helm --debug install local/ceph --namespace=ceph

You should see a deployed/successful helm deployment:

::

    # helm ls
    NAME         REVISION    UPDATED                     STATUS      CHART
    saucy-elk    1           Thu Nov 17 13:43:27 2016    DEPLOYED    ceph-0.1.0

as well as all kubernetes resources deployed into the ceph namespace:

::

    # kubectl get all --namespace=ceph
    NAME           CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    svc/ceph-mon   None            <none>        6789/TCP   1h
    svc/ceph-rgw   100.76.18.187   <pending>     80/TCP     1h
    NAME                                 READY     STATUS             RESTARTS   AGE
    po/ceph-mds-840702866-0n24u          1/1       Running            3          1h
    po/ceph-mon-1870970076-7h5zw         1/1       Running            2          1h
    po/ceph-mon-1870970076-d4uu2         1/1       Running            3          1h
    po/ceph-mon-1870970076-s6d2p         1/1       Running            1          1h
    po/ceph-mon-check-4116985937-ggv4m   1/1       Running            0          1h
    po/ceph-osd-2m2mf                    1/1       Running            2          1h
    po/ceph-rgw-2085838073-02154         0/1       Pending            0          1h
    po/ceph-rgw-2085838073-0d6z7         0/1       CrashLoopBackOff   21         1h
    po/ceph-rgw-2085838073-3trec         0/1       Pending            0          1h

Note that the ceph-rgw image is crashing because of an issue processing
the mon_host name 'ceph-mon' in ceph.conf. This is an upstream issue
that needs to be worked, but it is not required to test ceph rbd or ceph
filesystem functionality.

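
If you want to confirm this for yourself, the logs of one of the
crashing rgw pods will show the error. The pod name below is taken from
the sample output above and will differ in your cluster:

::

    # Add --previous if the container has already restarted
    kubectl logs ceph-rgw-2085838073-0d6z7 --namespace=ceph
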
Finally, you can now test a ceph rbd volume:

::

    export PODNAME=`kubectl get pods --selector="app=ceph,daemon=mon" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}" --namespace=ceph`
    kubectl exec -it $PODNAME --namespace=ceph -- rbd create ceph-rbd-test --size 20G
    kubectl exec -it $PODNAME --namespace=ceph -- rbd info ceph-rbd-test

If that works, you can create a container and attach it to that volume:

::

    cd ceph/utils/test
    kubectl create -f ceph-rbd-test.yaml --namespace=ceph
    kubectl exec -it --namespace=ceph ceph-rbd-test -- df -h

Cleanup
~~~~~~~

Always make sure to delete any test instances that have ceph volumes
mounted before you delete your ceph cluster. Otherwise, kubelet may get
stuck trying to unmount volumes, which can only be recovered with a
reboot. If you ran the tests above, this can be done with:

::

    kubectl delete pod ceph-rbd-test --namespace=ceph

The easiest way to delete your environment is to delete the helm
install:

::

    # helm ls
    NAME         REVISION    UPDATED                     STATUS      CHART
    saucy-elk    1           Thu Nov 17 13:43:27 2016    DEPLOYED    ceph-0.1.0

    # helm delete saucy-elk

And finally, because helm does not appear to clean up all artifacts, you
will want to delete the ceph namespace to remove any secrets helm
installed:

::

    kubectl delete namespace ceph

@@ -1,43 +0,0 @@
# Development Environment Setup

## Requirements

* Hardware
  * 16GB RAM
  * 32GB HDD Space
* Software
  * Vagrant >= 1.8.0
  * VirtualBox >= 5.1.0
  * Kubectl
  * Helm
  * Git

## Deploy

* Make sure you are in the directory containing the Vagrantfile before running the following commands.

### Create VM

``` bash
vagrant up --provider virtualbox
```

### Deploy NFS Provisioner for development PVCs

``` bash
vagrant ssh --command "sudo docker exec kubeadm-aio kubectl create -R -f /opt/nfs-provisioner/"
```

### Setup Clients and deploy Helm's tiller

``` bash
./setup-dev-host.sh
```

### Label VM node(s) for OpenStack-Helm Deployment

``` bash
kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
kubectl label nodes openvswitch=enabled --all --namespace=openstack
kubectl label nodes openstack-compute-node=enabled --all --namespace=openstack
```

dev/README.rst (new file, 55 lines)
@@ -0,0 +1,55 @@
==================
Vagrant Deployment
==================

Requirements
------------

* Hardware:

  * 16GB RAM
  * 32GB HDD Space

* Software:

  * Vagrant >= 1.8.0
  * VirtualBox >= 5.1.0
  * Kubectl
  * Helm
  * Git

Deploy
------

Make sure you are in the directory containing the Vagrantfile before
running the following commands.

Create VM
---------

.. code:: bash

    vagrant up --provider virtualbox

Deploy NFS Provisioner for development PVCs
-------------------------------------------

.. code:: bash

    vagrant ssh --command "sudo docker exec kubeadm-aio kubectl create -R -f /opt/nfs-provisioner/"

Setup Clients and deploy Helm's tiller
--------------------------------------

.. code:: bash

    ./setup-dev-host.sh

Label VM node(s) for OpenStack-Helm Deployment
----------------------------------------------

.. code:: bash

    kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
    kubectl label nodes openvswitch=enabled --all --namespace=openstack
    kubectl label nodes openstack-compute-node=enabled --all --namespace=openstack

@@ -1,52 +1 @@
==================
Vagrant Deployment
==================

Requirements
------------

* Hardware

  * 16GB RAM
  * 32GB HDD Space

* Software

  * Vagrant >= 1.8.0
  * VirtualBox >= 5.1.0
  * Kubectl
  * Helm
  * Git

Deploy
------

Make sure you are in the directory containing the Vagrantfile before
running the following commands.

Create VM
---------

::

    vagrant up --provider virtualbox

Deploy NFS Provisioner for development PVCs
-------------------------------------------

::

    vagrant ssh --command "sudo docker exec kubeadm-aio kubectl create -R -f /opt/nfs-provisioner/"

Setup Clients and deploy Helm's tiller
--------------------------------------

::

    ./setup-dev-host.sh

Label VM node(s) for OpenStack-Helm Deployment
----------------------------------------------

::

    kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
    kubectl label nodes openvswitch=enabled --all --namespace=openstack
    kubectl label nodes openstack-compute-node=enabled --all --namespace=openstack

.. include:: ../../../../dev/README.rst

@@ -8,5 +8,4 @@ OpenStack-Helm Configuration Management
Configuration overrides
-----------------------

Oslo Config Generation Tool
===========================
.. include:: ../../../tools/gen-oslo-openstack-helm/README.rst

@@ -1,76 +0,0 @@
# Openstack-Helm

Welcome to the Openstack-Helm project!

## Mission Statement

Openstack-Helm is a project that provides a flexible, production-grade Kubernetes deployment of Openstack Services using Kubernetes/Helm deployment primitives. The charts contained within the Openstack-Helm project are designed to be a plenary framework of self-contained, service-level manifest templates for development, operations, and troubleshooting, leveraging upstream Kubernetes and Helm best practices, with third-party add-ons treated as a last resort.

## Table of Contents

The documentation for Openstack-Helm is provided in the following role-specific guides:

- [Welcome Guide](guides-welcome/readme.md)
- [Mission](#mission-statement) - Openstack-Helm Mission Statement
- [Project Overview](guides-welcome/welcome-overview.md)
- [Resiliency Philosophy](guides-welcome/welcome-resiliency.md)
- [Scalability Philosophy](guides-welcome/welcome-scaling.md)
- [Installation Guides](guides-install/readme.md) - Various Installation Options
- [Developer Installation](guides-install/developer/readme.md) - Environment for Openstack-Helm Development
- [Minikube](guides-install/developer/install-minikube.md)
- [Vagrant](guides-install/developer/install-vagrant.md)
- [All-in-One](guides-install/install-aio.md) - Evaluation of Openstack-Helm
- [Multinode](guides-install/install-multinode.md) - Multinode or Production Deployments
- [Developer Guides](guides-developer/readme.md) - Resources for Openstack-Helm Developers
- [Getting Started](guides-developer/getting-started/readme.md) - Development Philosophies
- [Default Values](guides-developer/getting-started/gs-values.md)
- [Chart Overrides](guides-developer/getting-started/gs-overrides.md)
- [Replica Guidelines](guides-developer/getting-started/gs-replicas.md)
- [Image Guidelines](guides-developer/getting-started/gs-images.md)
- [Resource Guidelines](guides-developer/getting-started/gs-resources.md)
- [Labeling Guidelines](guides-developer/getting-started/gs-labels.md)
- [Endpoint Considerations](guides-developer/getting-started/gs-endpoints.md)
- [Helm Upgrades Considerations](guides-developer/getting-started/gs-upgrades.md)
- [Using Conditionals](guides-developer/getting-started/gs-conditionals.md)
- [Helm Development Handbook](guides-developer/readme.md) - Hands-On Development Guide
- [Getting Started](guides-developer/getting-started/readme.md) - Development Philosophies
- [Default Values](guides-developer/getting-started/gs-values.md)
- [Chart Overrides](guides-developer/getting-started/gs-overrides.md)
- [Replica Guidelines](guides-developer/getting-started/gs-replicas.md)
- [Image Guidelines](guides-developer/getting-started/gs-images.md)
- [Resource Guidelines](guides-developer/getting-started/gs-resources.md)
- [Labeling Guidelines](guides-developer/getting-started/gs-labels.md)
- [Endpoint Considerations](guides-developer/getting-started/gs-endpoints.md)
- [Helm Upgrades Considerations](guides-developer/getting-started/gs-upgrades.md)
- [Using Conditionals](guides-developer/getting-started/gs-conditionals.md)
- [Helm-Toolkit Overview](guides-developer/dev-helm/helm-toolkit.md) - Overview of Helm-Toolkit
- [User Registration](guides-developer/dev-helm/registration-user.md)
- [Domain Registration](guides-developer/dev-helm/registration-domain.md)
- [Host Registration](guides-developer/dev-helm/registration-host.md)
- [Endpoint Registration](guides-developer/dev-helm/registration-endpoint.md)
- [Service Registration](guides-developer/dev-helm/registration-service.md)
- [Kubernetes Development Handbook](guides-developer/dev-kubernetes/readme.md)
- [Kubernetes Development Considerations](guides-developer/dev-kubernetes/considerations.md)
- [Operator Guides](guides-operator/readme.md) - Resources for Openstack-Helm Developers
- [Helm Operations](guides-operator/ops-helm/readme.md) - Helm Operator Guides
- [Openstack-Helm Operations](guides-operator/ops-helm/osh-operations.md)
- [Addons and Plugins](guides-operator/ops-helm/osh-addons.md)
- [Kubernetes Operations](guides-operator/ops-kubernetes/readme.md)
- [Init-Containers](guides-operator/ops-kubernetes/kb-init-containers.md)
- [Jobs](guides-operator/ops-kubernetes/kb-jobs.md)
- [Openstack Operations](guides-operator/readme.md)
- [Config Generation](guides-operator/ops-openstack/os-config/os-config-gen.md) - Openstack-Helm Configuration Management
- [Networking Guides](guides-operator/ops-network/readme.md) - Network Operations
- [Ingress](guides-operator/ops-network/net-ingress.md)
- [Nodeports](guides-operator/ops-network/net-nodeport.md)
- [Security Guides](guides-operator/readme.md) - Security Operations
- [Using Namespaces](guides-operator/ops-security/sec-namespaces.md)
- [SELinux and SECCOMP](guides-operator/ops-security/sec-appsec.md)
- [Role-Based Access Control](guides-operator/ops-security/sec-rbac.md)
- [Troubleshooting Guides](guides-operator/troubleshooting/readme.md)
- [Database Issues](guides-operator/troubleshooting/ts-database.md)
- [Development Issues](troubleshooting/ts-development.md)
- [Networking Issues](guides-operator/troubleshooting/ts-networking.md)
- [Storage Issues](guides-operator/troubleshooting/ts-persistent-storage.md)
- [Appendix A: Helm Resources](appendix/resources-helm.md) - Curated List of Helm Resources
- [Appendix B: Kubernetes Resources](appendix/resources-kubernetes.md) - Curated List of Kubernetes Resources

@@ -1,65 +0,0 @@
# Ceph Kubernetes Secret Generation

This script will generate ceph keyrings and configs as Kubernetes secrets.

Sigil is required for template handling and must be installed in the system PATH. Instructions can be found here: <https://github.com/gliderlabs/sigil>

The following functions are provided:

## Generate raw FSID (can be used for other functions)

```bash
./generate_secrets.sh fsid
```

## Generate raw ceph.conf (For verification)

```bash
./generate_secrets.sh ceph-conf-raw <fsid> "overridekey=value"
```

Take a look at `ceph/ceph.conf.tmpl` for the default values.

## Generate encoded ceph.conf secret

```bash
./generate_secrets.sh ceph-conf <fsid> "overridekey=value"
```

## Generate encoded admin keyring secret

```bash
./generate_secrets.sh admin-keyring
```

## Generate encoded mon keyring secret

```bash
./generate_secrets.sh mon-keyring
```

## Generate a combined secret

Contains ceph.conf, the admin keyring and the mon keyring. Useful for generating the `/etc/ceph` directory.

```bash
./generate_secrets.sh combined-conf
```

## Generate encoded bootstrap keyring secret

```bash
./generate_secrets.sh bootstrap-keyring <osd|mds|rgw>
```

# Kubernetes workflow

```bash
./generator/generate_secrets.sh all `./generate_secrets.sh fsid`

kubectl create secret generic ceph-conf-combined --from-file=ceph.conf --from-file=ceph.client.admin.keyring --from-file=ceph.mon.keyring --namespace=ceph
kubectl create secret generic ceph-bootstrap-rgw-keyring --from-file=ceph.keyring=ceph.rgw.keyring --namespace=ceph
kubectl create secret generic ceph-bootstrap-mds-keyring --from-file=ceph.keyring=ceph.mds.keyring --namespace=ceph
kubectl create secret generic ceph-bootstrap-osd-keyring --from-file=ceph.keyring=ceph.osd.keyring --namespace=ceph
kubectl create secret generic ceph-client-key --from-file=ceph-client-key --namespace=ceph
```

helm-toolkit/utils/secret-generator/README.rst (new file, 78 lines)
@@ -0,0 +1,78 @@
Ceph Kubernetes Secret Generation
=================================

This script will generate ceph keyrings and configs as Kubernetes
secrets.

Sigil is required for template handling and must be installed in the
system ``PATH``. Instructions can be found `here
<https://github.com/gliderlabs/sigil>`__.

The following functions are provided:

Generate raw FSID (can be used for other functions)
----------------------------------------------------

.. code:: bash

    ./generate_secrets.sh fsid

Generate raw ceph.conf (For verification)
------------------------------------------

.. code:: bash

    ./generate_secrets.sh ceph-conf-raw <fsid> "overridekey=value"

Take a look at ``ceph/ceph.conf.tmpl`` for the default values.

Generate encoded ceph.conf secret
----------------------------------

.. code:: bash

    ./generate_secrets.sh ceph-conf <fsid> "overridekey=value"

Generate encoded admin keyring secret
--------------------------------------

.. code:: bash

    ./generate_secrets.sh admin-keyring

Generate encoded mon keyring secret
------------------------------------

.. code:: bash

    ./generate_secrets.sh mon-keyring

Generate a combined secret
---------------------------

Contains ceph.conf, the admin keyring and the mon keyring. Useful for
generating the ``/etc/ceph`` directory.

.. code:: bash

    ./generate_secrets.sh combined-conf

Generate encoded bootstrap keyring secret
------------------------------------------

.. code:: bash

    ./generate_secrets.sh bootstrap-keyring <osd|mds|rgw>

Kubernetes workflow
===================

.. code:: bash

    ./generator/generate_secrets.sh all `./generate_secrets.sh fsid`

    kubectl create secret generic ceph-conf-combined --from-file=ceph.conf --from-file=ceph.client.admin.keyring --from-file=ceph.mon.keyring --namespace=ceph
    kubectl create secret generic ceph-bootstrap-rgw-keyring --from-file=ceph.keyring=ceph.rgw.keyring --namespace=ceph
    kubectl create secret generic ceph-bootstrap-mds-keyring --from-file=ceph.keyring=ceph.mds.keyring --namespace=ceph
    kubectl create secret generic ceph-bootstrap-osd-keyring --from-file=ceph.keyring=ceph.osd.keyring --namespace=ceph
    kubectl create secret generic ceph-client-key --from-file=ceph-client-key --namespace=ceph

@@ -1,19 +0,0 @@
# openstack-helm/mariadb

By default, this chart creates a 3-member mariadb galera cluster.

This chart leverages StatefulSets, with persistent storage.

It creates a job that acts as a temporary standalone galera cluster. This host is bootstrapped with authentication and then the WSREP bindings are exposed publicly. The cluster members, being StatefulSets, are provisioned one at a time. The first host must be marked as `Ready` before the next host will be provisioned. This is determined by the readinessProbes, which actually validate that MySQL is up and responsive.

The configuration leverages xtrabackup-v2 for synchronization. This may later be augmented to leverage rsync, which has some benefits.

The seed job completes only when galera reports that it is Synced and all cluster members are reporting in, so that the cluster count seen by the job matches the replica count in the helm values configuration; the job is then terminated. When the job is no longer active, future StatefulSets provisioned will leverage the existing cluster members as gcomm endpoints. It is only while the job is running that the cluster members leverage the seed job as their gcomm endpoint. This ensures you can restart members and scale the cluster.

The StatefulSets all leverage PVCs to provide stateful storage to `/var/lib/mysql`.

You must ensure that the control nodes that should receive mariadb instances are labeled with `openstack-control-plane=enabled`, or whatever you have configured in values.yaml for the label configuration:

```
kubectl label nodes openstack-control-plane=enabled --all
```

mariadb/README.rst (new file, 38 lines)
@@ -0,0 +1,38 @@
openstack-helm/mariadb
======================

By default, this chart creates a 3-member mariadb galera cluster.

This chart leverages StatefulSets, with persistent storage.

It creates a job that acts as a temporary standalone galera cluster.
This host is bootstrapped with authentication and then the WSREP
bindings are exposed publicly. The cluster members, being StatefulSets,
are provisioned one at a time. The first host must be marked as
``Ready`` before the next host will be provisioned. This is determined
by the readinessProbes, which actually validate that MySQL is up and
responsive.

The configuration leverages xtrabackup-v2 for synchronization. This may
later be augmented to leverage rsync, which has some benefits.

The seed job completes only when galera reports that it is Synced and
all cluster members are reporting in, so that the cluster count seen by
the job matches the replica count in the helm values configuration; the
job is then terminated. When the job is no longer active, future
StatefulSets provisioned will leverage the existing cluster members as
gcomm endpoints. It is only while the job is running that the cluster
members leverage the seed job as their gcomm endpoint. This ensures you
can restart members and scale the cluster.

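
A quick way to confirm that the cluster actually formed and synced is to
query the WSREP status from one of the members. The pod name, namespace,
and credential handling below are assumptions about a typical deployment
of this chart and may need adjusting:

::

    # wsrep_cluster_size should match the configured replica count (3 by default)
    kubectl exec mariadb-0 --namespace=openstack -- \
        mysql -u root -p"${MYSQL_ROOT_PASSWORD}" \
        -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
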
The StatefulSets all leverage PVCs to provide stateful storage to
``/var/lib/mysql``.

You must ensure that the control nodes that should receive mariadb
instances are labeled with ``openstack-control-plane=enabled``, or
whatever you have configured in values.yaml for the label
configuration:

::

    kubectl label nodes openstack-control-plane=enabled --all

@@ -1,21 +0,0 @@
# Openstack-Helm Gate Scripts

These scripts are used in the OpenStack-Helm gates and can also be run locally to aid development and for demonstration purposes. Please note that they assume full control of a machine and may be destructive in nature, so they should only be run on a dedicated host.

## Usage

The gate scripts use `setup_gate.sh` as an entrypoint and are controlled by environment variables; an example of running the basic integration test is below:

``` bash
export INTEGRATION=aio
export INTEGRATION_TYPE=basic
./tools/gate/setup_gate.sh
```

### Supported Platforms

Currently supported host platforms are:

* Ubuntu 16.04
* CentOS 7

With some preparation of docker, and with SELinux disabled, operation on Fedora 25 is also supported.

tools/gate/README.rst (new file, 28 lines)
@@ -0,0 +1,28 @@
Openstack-Helm Gate Scripts
===========================

These scripts are used in the OpenStack-Helm gates and can also be run
locally to aid development and for demonstration purposes. Please note
that they assume full control of a machine and may be destructive in
nature, so they should only be run on a dedicated host.

Usage
-----

The gate scripts use ``setup_gate.sh`` as an entrypoint and are
controlled by environment variables; an example of running the basic
integration test is below:

.. code:: bash

    export INTEGRATION=aio
    export INTEGRATION_TYPE=basic
    ./tools/gate/setup_gate.sh

Supported Platforms
~~~~~~~~~~~~~~~~~~~

Currently supported host platforms are:

* Ubuntu 16.04
* CentOS 7

With some preparation of docker, and with SELinux disabled, operation on
Fedora 25 is also supported.

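
Those preparation steps are not covered by the gate scripts themselves;
a rough sketch of what they might look like on Fedora 25 (an assumption,
adjust to your environment) is:

.. code:: bash

    # Put SELinux into permissive mode for the current boot
    sudo setenforce 0
    # Ensure docker is installed and running
    sudo dnf install -y docker
    sudo systemctl enable --now docker
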
@@ -1,23 +0,0 @@
# gen-oslo-openstack-helm

Oslo Config Generator Hack to Generate Helm Configs

## Usage

From this directory run the following commands, adjusting for the OpenStack project and/or branch desired as necessary.

``` bash
docker build . -t gen-oslo-openstack-helm
PROJECT=heat
sudo rm -rf /tmp/${PROJECT} || true
docker run -it --rm \
    -e PROJECT="${PROJECT}" \
    -e PROJECT_BRANCH="stable/newton" \
    -e PROJECT_REPO=https://git.openstack.org/openstack/${PROJECT}.git \
    -v /tmp:/tmp:rw \
    gen-oslo-openstack-helm
```

This container will then drop you into a shell, at the project root, with OpenStack-Helm formatted configuration files in the standard locations produced by genconfig for the project.

tools/gen-oslo-openstack-helm/README.rst (new file, 26 lines)
@@ -0,0 +1,26 @@
gen-oslo-openstack-helm
=======================

Oslo Config Generator Hack to Generate Helm Configs

Usage
-----

From this directory run the following commands, adjusting for the
OpenStack project and/or branch desired as necessary.

.. code:: bash

    docker build . -t gen-oslo-openstack-helm
    PROJECT=heat
    sudo rm -rf /tmp/${PROJECT} || true
    docker run -it --rm \
        -e PROJECT="${PROJECT}" \
        -e PROJECT_BRANCH="stable/newton" \
        -e PROJECT_REPO=https://git.openstack.org/openstack/${PROJECT}.git \
        -v /tmp:/tmp:rw \
        gen-oslo-openstack-helm

This container will then drop you into a shell, at the project root,
with OpenStack-Helm formatted configuration files in the standard
locations produced by genconfig for the project.

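
From that shell, a generic way to see what was produced is simply to
list the configuration files under the project tree; the filename
patterns below are assumptions, as the exact names depend on the
project:

.. code:: bash

    # List generated configuration files relative to the project root
    find . -name "*.conf" -o -name "*.conf.sample" | sort
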
@@ -1,92 +0,0 @@
# Kubeadm AIO Container

This container builds a small AIO Kubeadm based Kubernetes deployment for Development and Gating use.

## Instructions

### OS Specific Host setup

#### Ubuntu

From a freshly provisioned Ubuntu 16.04 LTS host run:

``` bash
sudo apt-get update -y
sudo apt-get install -y \
  docker.io \
  nfs-common \
  git \
  make
```

### OS Independent Host setup

You should install the `kubectl` and `helm` binaries:

``` bash
KUBE_VERSION=v1.6.0
HELM_VERSION=v2.3.0

TMP_DIR=$(mktemp -d)
curl -sSL https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl -o ${TMP_DIR}/kubectl
chmod +x ${TMP_DIR}/kubectl
sudo mv ${TMP_DIR}/kubectl /usr/local/bin/kubectl
curl -sSL https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar -zxv --strip-components=1 -C ${TMP_DIR}
sudo mv ${TMP_DIR}/helm /usr/local/bin/helm
rm -rf ${TMP_DIR}
```

And clone the OpenStack-Helm repo:

``` bash
git clone https://git.openstack.org/openstack/openstack-helm
```

### Build the AIO environment (Optional)

A known good image is published to dockerhub on a fairly regular basis, but if you wish to build your own image, run the following from the root directory of the OpenStack-Helm repo:

``` bash
export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6
sudo docker build --pull -t ${KUBEADM_IMAGE} tools/kubeadm-aio
```

### Deploy the AIO environment

To launch the environment, run:

``` bash
export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6
export KUBE_VERSION=v1.6.2
./tools/kubeadm-aio/kubeadm-aio-launcher.sh
export KUBECONFIG=${HOME}/.kubeadm-aio/admin.conf
```

Once this has run, you should have a single-node Kubernetes environment running, with Helm, Calico, an NFS PVC provisioner, and appropriate RBAC rules and node labels to get developing.

If you wish to use this environment as the primary Kubernetes environment on your host, you may run the following, but note that this will wipe any previous client configuration you may have.

``` bash
mkdir -p ${HOME}/.kube
cat ${HOME}/.kubeadm-aio/admin.conf > ${HOME}/.kube/config
```

If you wish to create dummy network devices for Neutron to manage, there is a helper script that can set them up for you:

``` bash
sudo docker exec kubelet /usr/bin/openstack-helm-aio-network-prep
```

### Logs

You can get the logs from your `kubeadm-aio` container by running:

``` bash
sudo docker logs -f kubeadm-aio
```

tools/kubeadm-aio/README.rst (new file, 102 lines)
@@ -0,0 +1,102 @@
Kubeadm AIO Container
=====================

This container builds a small AIO Kubeadm based Kubernetes deployment
for Development and Gating use.

Instructions
------------

OS Specific Host setup
~~~~~~~~~~~~~~~~~~~~~~~

Ubuntu
^^^^^^^

From a freshly provisioned Ubuntu 16.04 LTS host run:

.. code:: bash

    sudo apt-get update -y
    sudo apt-get install -y \
      docker.io \
      nfs-common \
      git \
      make

OS Independent Host setup
~~~~~~~~~~~~~~~~~~~~~~~~~~

You should install the ``kubectl`` and ``helm`` binaries:

.. code:: bash

    KUBE_VERSION=v1.6.0
    HELM_VERSION=v2.3.0

    TMP_DIR=$(mktemp -d)
    curl -sSL https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl -o ${TMP_DIR}/kubectl
    chmod +x ${TMP_DIR}/kubectl
    sudo mv ${TMP_DIR}/kubectl /usr/local/bin/kubectl
    curl -sSL https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VERSION}-linux-amd64.tar.gz | tar -zxv --strip-components=1 -C ${TMP_DIR}
    sudo mv ${TMP_DIR}/helm /usr/local/bin/helm
    rm -rf ${TMP_DIR}

And clone the OpenStack-Helm repo:

.. code:: bash

    git clone https://git.openstack.org/openstack/openstack-helm

Build the AIO environment (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A known good image is published to dockerhub on a fairly regular basis,
but if you wish to build your own image, run the following from the root
directory of the OpenStack-Helm repo:

.. code:: bash

    export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6
    sudo docker build --pull -t ${KUBEADM_IMAGE} tools/kubeadm-aio

Deploy the AIO environment
~~~~~~~~~~~~~~~~~~~~~~~~~~

To launch the environment, run:

.. code:: bash

    export KUBEADM_IMAGE=openstackhelm/kubeadm-aio:v1.6
    export KUBE_VERSION=v1.6.2
    ./tools/kubeadm-aio/kubeadm-aio-launcher.sh
    export KUBECONFIG=${HOME}/.kubeadm-aio/admin.conf

Once this has run, you should have a single-node Kubernetes environment
running, with Helm, Calico, an NFS PVC provisioner, and appropriate RBAC
rules and node labels to get developing.

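
A few quick checks can confirm the environment came up as described
(purely illustrative; the exact pod names will vary):

.. code:: bash

    # The single node should report Ready
    kubectl get nodes
    # Calico, DNS, the NFS provisioner and tiller should all be running
    kubectl get pods --all-namespaces
    # Both the helm client and tiller should respond
    helm version
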
If you wish to use this environment as the primary Kubernetes
environment on your host, you may run the following, but note that this
will wipe any previous client configuration you may have.

.. code:: bash

    mkdir -p ${HOME}/.kube
    cat ${HOME}/.kubeadm-aio/admin.conf > ${HOME}/.kube/config

If you wish to create dummy network devices for Neutron to manage, there
is a helper script that can set them up for you:

.. code:: bash

    sudo docker exec kubelet /usr/bin/openstack-helm-aio-network-prep

Logs
~~~~

You can get the logs from your ``kubeadm-aio`` container by running:

.. code:: bash

    sudo docker logs -f kubeadm-aio
