Utility Containers
Utility containers give Operations staff an interface to an Airship environment that enables them to perform routine operations and troubleshooting activities. Utility containers support Airship environments without exposing secrets and credentials, while at the same time restricting access to the containers themselves.
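For example, an operator might open a shell in a utility pod with `kubectl` instead of logging in to a node directly. A minimal sketch, assuming the utility pods run in a `utility` namespace and carry an `application` label (both names are assumptions, not fixed by this document):

```
# List the running utility pods (the "utility" namespace is an assumption).
kubectl get pods -n utility

# Open a shell in the first ceph-utility pod; the label selector is hypothetical.
POD=$(kubectl get pods -n utility -l application=ceph-utility \
      -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it -n utility "$POD" -- /bin/bash
```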
Prerequisites
Deploy OSH-AIO (OpenStack-Helm All-in-One).
System Requirements
The recommended minimum system requirements for a full deployment are:
- 16 GB RAM
- 8 Cores
- 48 GB HDD
Installation
- Add the below to `/etc/sudoers`:

  ```
  root ALL=(ALL) NOPASSWD: ALL
  ubuntu ALL=(ALL) NOPASSWD: ALL
  ```
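  After editing, you can validate the change with standard sudo tooling:

  ```
  # Check /etc/sudoers for syntax errors and list the rules now in effect.
  sudo visudo -c
  sudo -l
  ```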
- Install the latest versions of Git, CA certificates, and Make if necessary:

  ```
  sudo apt-get update
  sudo apt-get dist-upgrade -y
  sudo apt-get install --no-install-recommends -y \
    ca-certificates \
    git \
    make \
    jq \
    nmap \
    curl \
    uuid-runtime \
    bc
  ```
- Clone the OpenStack-Helm repositories:

  ```
  git clone https://git.openstack.org/openstack/openstack-helm-infra.git
  git clone https://git.openstack.org/openstack/openstack-helm.git
  ```
- Configure proxies.

  To deploy OpenStack-Helm behind corporate proxy servers, add the following entries to
  `openstack-helm-infra/tools/gate/devel/local-vars.yaml`:

  ```
  proxy:
    http: http://username:password@host:port
    https: https://username:password@host:port
    noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
  ```

  Add the address of the Kubernetes API, `172.17.0.1`, and `.svc.cluster.local` to your
  `no_proxy` and `NO_PROXY` environment variables:

  ```
  export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
  export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
  ```
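  A quick sanity check that both variants are exported in the current shell:

  ```
  # Both entries should include 172.17.0.1 and .svc.cluster.local.
  env | grep -i no_proxy
  ```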
- Deploy Kubernetes and Helm:

  ```
  cd openstack-helm
  ./tools/deployment/developer/common/010-deploy-k8s.sh
  ```

  Edit `/etc/resolv.conf` and remove the DNS nameserver entry (`nameserver 10.96.0.10`).
  The Python setup client fails if this nameserver entry is present.
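  One way to remove the entry with `sed`, keeping a backup of the original file:

  ```
  # Delete the cluster DNS line; the original is saved as /etc/resolv.conf.bak.
  sudo sed -i.bak '/^nameserver 10\.96\.0\.10/d' /etc/resolv.conf
  ```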
- Set up clients on the host and assemble the charts:

  ```
  ./tools/deployment/developer/common/020-setup-client.sh
  ```

  Re-add the DNS nameservers to `/etc/resolv.conf` so that the Keystone URLs will resolve.
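  If you used the `sed` sketch above, restoring the backup re-adds the entry:

  ```
  # Restore the resolv.conf saved before the nameserver entry was removed.
  sudo cp /etc/resolv.conf.bak /etc/resolv.conf
  ```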
- Deploy the ingress controller:

  ```
  ./tools/deployment/developer/common/030-ingress.sh
  ```
- Deploy Ceph:

  ```
  ./tools/deployment/developer/ceph/040-ceph.sh
  ```
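  Before continuing, it can help to confirm the Ceph pods settle. The `ceph` namespace is an assumption based on the chart defaults:

  ```
  # All Ceph pods should reach Running or Completed status.
  kubectl get pods -n ceph
  ```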
- Activate the namespace to be able to use Ceph:

  ```
  ./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
  ```
- Deploy Keystone:

  ```
  ./tools/deployment/developer/ceph/080-keystone.sh
  ```
- Deploy Heat:

  ```
  ./tools/deployment/developer/ceph/090-heat.sh
  ```
- Deploy Horizon:

  ```
  ./tools/deployment/developer/ceph/100-horizon.sh
  ```
- Deploy Glance:

  ```
  ./tools/deployment/developer/ceph/120-glance.sh
  ```
- Deploy Cinder:

  ```
  ./tools/deployment/developer/ceph/130-cinder.sh
  ```
- Deploy LibVirt:

  ```
  ./tools/deployment/developer/ceph/150-libvirt.sh
  ```
- Deploy the compute kit (Nova and Neutron):

  ```
  ./tools/deployment/developer/ceph/160-compute-kit.sh
  ```
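  Once the compute kit is up, a hedged check, assuming the charts deploy into the default `openstack` namespace:

  ```
  # Nova and Neutron pods should settle into Running or Completed status.
  kubectl get pods -n openstack
  ```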
- To run further commands from the CLI manually, execute the following to set up authentication credentials:

  ```
  export OS_CLOUD=openstack_helm
  ```
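  A quick smoke test that the credentials resolve against Keystone:

  ```
  # Both commands should succeed if OS_CLOUD points at a valid clouds.yaml entry.
  openstack token issue
  openstack endpoint list
  ```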
- Clone the Porthole repository to the openstack-helm project:

  ```
  git clone https://opendev.org/airship/porthole.git
  ```
To deploy utility pods
- Add and make the charts:

  ```
  cd porthole
  helm repo add <chartname> http://localhost:8879/charts
  make all
  ```
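  If the local chart repository is the classic Helm v2 `helm serve` endpoint (an assumption based on the `localhost:8879` address), you can confirm the utility charts were indexed:

  ```
  # The utility charts should appear in the search output.
  helm search | grep -i utility
  ```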
- Deploy `Ceph-utility`:

  ```
  ./tools/deployment/utilities/010-ceph-utility.sh
  ```
- Deploy `Compute-utility`:

  ```
  ./tools/deployment/utilities/020-compute-utility.sh
  ```
- Deploy `Etcdctl-utility`:

  ```
  ./tools/deployment/utilities/030-etcdctl-utility.sh
  ```
- Deploy `Mysqlclient-utility`:

  ```
  ./tools/deployment/utilities/040-Mysqlclient-utility.sh
  ```
- Deploy `Openstack-utility`:

  ```
  ./tools/deployment/utilities/050-openstack-utility.sh
  ```
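After the scripts complete, you can verify the result. A sketch, assuming the utility pods are created in a `utility` namespace (check the deployment scripts for the actual namespace):

```
# Each deployed utility should have at least one pod in Running state.
kubectl get pods -n utility
```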
NOTE

The PostgreSQL utility container is deployed as a part of Airship-in-a-Bottle (AIAB). To deploy and test `postgresql-utility`, see the PostgreSQL README.