Dodda Prateek (pd2839) 69d9e6db4c Chart/Dockerfile for Openstack Utility Container
Added Support for rbac

Change-Id: I6644824776f7890c2475904ba3404e281e10e54e
Co-authored-by: Sreejith Punnapuzha <Sreejith.Punnapuzha@outlook.com>
2019-08-20 14:56:51 +00:00


OpenStack Utility Container
---------------------------
Prerequisites: Deploy OSH-AIO (OpenStack-Helm All-in-One)
Installation
------------
1. Add the following entries to /etc/sudoers:
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
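Editing /etc/sudoers directly is risky; a safer sketch stages the same rules as a sudoers drop-in and syntax-checks it first (the file name 90-osh-aio is an arbitrary choice, not part of OSH-AIO):

```shell
# Stage the NOPASSWD rules in a drop-in file rather than editing
# /etc/sudoers directly (the file name 90-osh-aio is arbitrary).
cat > 90-osh-aio <<'EOF'
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
EOF
# Syntax-check, then install under /etc/sudoers.d/ (shown as a comment so
# this sketch does not modify a live system by accident):
#   visudo -cf 90-osh-aio && sudo install -m 0440 90-osh-aio /etc/sudoers.d/90-osh-aio
```

Files under /etc/sudoers.d/ are read via the #includedir directive, so a bad drop-in is easier to back out than a broken /etc/sudoers.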
2. Install the latest versions of Git, CA certificates, Make, and the other required packages if necessary:
sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install --no-install-recommends -y \
  ca-certificates \
  git \
  make \
  jq \
  nmap \
  curl \
  uuid-runtime
3. Clone the OpenStack-Helm repositories:
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
4. Proxy Configuration
In order to deploy OpenStack-Helm behind corporate proxy servers, add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml.
proxy:
  http: http://username:password@host:port
  https: https://username:password@host:port
  noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local to your no_proxy and NO_PROXY environment variables.
export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
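The exports above append unconditionally, so re-running them duplicates entries. A small helper (hypothetical, not part of OpenStack-Helm) that adds entries only when missing:

```shell
# Hypothetical helper: append entries to no_proxy/NO_PROXY only when they
# are not already present, so the environment setup can be re-run safely.
add_no_proxy() {
  local entry
  for entry in "$@"; do
    case ",${no_proxy}," in
      *",${entry},"*) ;;  # already present, skip
      *) no_proxy="${no_proxy:+${no_proxy},}${entry}" ;;
    esac
  done
  export no_proxy
  export NO_PROXY="${no_proxy}"
}

add_no_proxy 127.0.0.1 localhost 172.17.0.1 .svc.cluster.local
```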
5. Deploy Kubernetes & Helm
cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh
Remove the cluster DNS nameserver (nameserver 10.96.0.10) from /etc/resolv.conf before this step, since the Python client setup would otherwise fail.
Set up the clients on the host and assemble the charts:
./tools/deployment/developer/common/020-setup-client.sh
Re-add the DNS nameservers to /etc/resolv.conf so that the Keystone URL resolves.
Deploy the ingress controller
./tools/deployment/developer/common/030-ingress.sh
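The remove-then-restore edit of /etc/resolv.conf in step 5 is easy to get wrong by hand. A sketch of two helpers (hypothetical names, not part of OpenStack-Helm) that drop and restore the cluster DNS entry using sed's backup file; prefix the calls with sudo when targeting the real /etc/resolv.conf:

```shell
# Hypothetical helpers: temporarily remove the cluster DNS entry
# (nameserver 10.96.0.10) from a resolv.conf-style file, keeping a .bak
# copy so it can be restored after the client setup completes.
drop_cluster_dns() {
  sed -i.bak '/^nameserver 10\.96\.0\.10$/d' "$1"
}

restore_cluster_dns() {
  mv "$1.bak" "$1"
}
```

Usage: run drop_cluster_dns /etc/resolv.conf (with sudo) before 020-setup-client.sh, then restore_cluster_dns /etc/resolv.conf afterwards.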
6. Deploy Ceph
./tools/deployment/developer/ceph/040-ceph.sh
Activate the namespace to be able to use Ceph
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
7. Deploy Keystone
./tools/deployment/developer/ceph/080-keystone.sh
8. Deploy Heat
./tools/deployment/developer/ceph/090-heat.sh
9. Deploy Horizon
./tools/deployment/developer/ceph/100-horizon.sh
10. Deploy Glance
./tools/deployment/developer/ceph/120-glance.sh
11. Deploy Cinder
./tools/deployment/developer/ceph/130-cinder.sh
12. Deploy LibVirt
./tools/deployment/developer/ceph/150-libvirt.sh
13. Deploy the Compute Kit (Nova and Neutron)
./tools/deployment/developer/ceph/160-compute-kit.sh
14. To run further commands from the CLI manually, execute the following to set up authentication credentials:
export OS_CLOUD=openstack_helm
15. Clone the Porthole repository and fetch the OpenStack utility change:
git clone https://review.opendev.org/openstack/airship-porthole
git pull ssh://pd2839@review.opendev.org:29418/airship/porthole refs/changes/70/674670/13
cd porthole
./install_openstack_utility.sh
Usage
-----
Get into the utility pod using kubectl exec and run operations with the examples below. Have the password ready, as the CLI commands below will prompt for it.
kubectl exec -it <POD_NAME> -n utility -- /bin/bash
Examples:
utilscli openstack server list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> --os-project-name <PROJECT_NAME>
utilscli openstack user list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> --os-project-name <PROJECT_NAME>