make minikube tldr section easier to copy/paste (#151)

* make minikube tldr section easier to copy/paste

* adding additional label for ovs

* small mods to dev and gs guides in docs
Brandon B. Jozsa 2017-01-29 13:43:10 -05:00 committed by Alan Meadows
parent 98b3d212ec
commit f4e3d82888
2 changed files with 25 additions and 20 deletions


@@ -27,20 +27,19 @@ If your environment meets all of the prerequisites above, you can simply use the
 ```
 # Clone the project:
-$ git clone https://github.com/att-comdev/openstack-helm.git && cd openstack-helm
+git clone https://github.com/att-comdev/openstack-helm.git && cd openstack-helm
 # Get a list of the current tags:
-$ git tag -l
-0.1.0
+git tag -l
 # Checkout the tag you want to work with (if desired, or use master for development):
-$ git checkout 0.1.0
+git checkout 0.1.0
 # Start a local Helm Server:
-$ helm serve &
+helm serve &
 # You may need to change these params for your environment. Look up use of --iso-url if needed:
-$ minikube start \
+minikube start \
   --network-plugin=cni \
   --kubernetes-version v1.5.1 \
   --disk-size 40g \
@@ -53,25 +52,25 @@ $ minikube start \
 kubectl create -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/calico.yaml
 # Initialize Helm/Deploy Tiller:
-$ helm init
+helm init
 # Package the Openstack-Helm Charts, and push them to your local Helm repository:
-$ make
+make
 # Label the Minikube as an Openstack Control Plane node:
-$ kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
+kubectl label nodes openstack-control-plane=enabled --all --namespace=openstack
 # Deploy each chart:
-$ helm install --name mariadb --set development.enabled=true local/mariadb --namespace=openstack
-$ helm install --name=memcached local/memcached --namespace=openstack
-$ helm install --name=rabbitmq local/rabbitmq --namespace=openstack
-$ helm install --name=keystone local/keystone --namespace=openstack
-$ helm install --name=cinder local/cinder --namespace=openstack
-$ helm install --name=glance local/glance --namespace=openstack
-$ helm install --name=heat local/heat --namespace=openstack
-$ helm install --name=nova local/nova --namespace=openstack
-$ helm install --name=neutron local/neutron --namespace=openstack
-$ helm install --name=horizon local/horizon --namespace=openstack
+helm install --name mariadb --set development.enabled=true local/mariadb --namespace=openstack
+helm install --name=memcached local/memcached --namespace=openstack
+helm install --name=rabbitmq local/rabbitmq --namespace=openstack
+helm install --name=keystone local/keystone --namespace=openstack
+helm install --name=cinder local/cinder --namespace=openstack
+helm install --name=glance local/glance --namespace=openstack
+helm install --name=heat local/heat --namespace=openstack
+helm install --name=nova local/nova --namespace=openstack
+helm install --name=neutron local/neutron --namespace=openstack
+helm install --name=horizon local/horizon --namespace=openstack
 ```
 # Getting Started
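
After the last chart is deployed, a quick sanity check is to confirm that every release registered with Tiller and that the pods are coming up. A minimal sketch, assuming the release names and the `openstack` namespace used in the commands above:

```
# List the Helm releases created above; each should report a DEPLOYED status:
helm list

# Watch the OpenStack pods until they all reach Running (or Completed):
kubectl get pods --namespace=openstack -o wide --watch
```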


@@ -190,11 +190,17 @@ Please ensure that you have verified and completed the steps above to prevent is
 Although Ceph is mentioned throughout this guide, our deployment is flexible to allow you the option of bringing any type of persistent storage. Although most of these verification steps are the same, if not very similar, we will use Ceph as our example throughout this guide.
 ## Node Labels
-First, we must label our nodes according to their role. Although we are labeling `all` nodes, you are free to label only the nodes you wish. You must have at least one, although a minimum of three are recommended.
+First, we must label our nodes according to their role. Although we are labeling `all` nodes, you are free to label only the nodes you wish. You must have at least one, although a minimum of three are recommended. Nodes are labeled according to their Openstack roles:
+**Storage Nodes:** `ceph-storage`
+**Control Plane:** `openstack-control-plane`
+**Compute Nodes:** `openvswitch`, `openstack-compute-node`
 ```
 admin@kubenode01:~$ kubectl label nodes openstack-control-plane=enabled --all
 admin@kubenode01:~$ kubectl label nodes ceph-storage=enabled --all
+admin@kubenode01:~$ kubectl label nodes openvswitch=enabled --all
+admin@kubenode01:~$ kubectl label nodes openstack-compute-node=enabled --all
 ```
 ## Obtaining the Project
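
Once the labels are applied, it is worth verifying that every node carries the roles you intended. A minimal sketch, assuming the four role labels introduced in this commit:

```
# Show all labels attached to each node:
kubectl get nodes --show-labels

# Or select nodes by a single role, e.g. only the compute nodes:
kubectl get nodes -l openstack-compute-node=enabled
```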