Compute Utility Container
-------------------------

Utility containers act as an interface to an Airship environment and enable Operations personnel to perform routine operational and debugging activities. They allow Operations to support an Airship environment seamlessly, without exposing secrets and credentials and while restricting access to the actual service containers. The compute-utility container permits access to services running on each compute node, including ovs, libvirt, ipmi, perccli, numa, and sos.

Change-Id: I389b6f62f8abbd665960a2fd4de880f0f5380c2a
Prerequisites: Deploy OSH-AIO
Installation
------------
1. Add the following lines to /etc/sudoers

   root ALL=(ALL) NOPASSWD: ALL
   ubuntu ALL=(ALL) NOPASSWD: ALL
2. Install the latest versions of Git, CA certificates, Make, and the other required packages if necessary:

   #!/bin/bash
   set -xe

   sudo apt-get update
   sudo apt-get install --no-install-recommends -y \
     ca-certificates \
     git \
     make \
     jq \
     nmap \
     curl \
     uuid-runtime
3. Proxy Configuration

   Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local to your no_proxy and NO_PROXY environment variables:

   export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
   export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
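If the variables are edited repeatedly (for example in a shell profile), duplicate entries can creep in. The sketch below is a hypothetical POSIX-shell helper, not part of the OpenStack-Helm tooling, that appends an entry only when it is not already present:

```shell
# Idempotently append an entry to the comma-separated no_proxy list
# (hypothetical helper for illustration only).
append_no_proxy() {
  case ",${no_proxy}," in
    *",$1,"*) ;;                                  # already present, do nothing
    *) no_proxy="${no_proxy:+${no_proxy},}$1" ;;  # append, adding a comma only if non-empty
  esac
  export no_proxy
}

append_no_proxy 172.17.0.1
append_no_proxy .svc.cluster.local
```

Running the helper twice with the same entry leaves the list unchanged.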
4. Clone the OpenStack-Helm repositories

   git clone https://git.openstack.org/openstack/openstack-helm-infra.git
   git clone https://git.openstack.org/openstack/openstack-helm.git

   To deploy OpenStack-Helm behind corporate proxy servers, add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml:

   proxy:
     http: http://username:password@host:port
     https: https://username:password@host:port
     noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
5. Deploy Kubernetes and Helm

   cd openstack-helm
   ./tools/deployment/developer/common/010-deploy-k8s.sh
6. Install OpenStack-Helm

   Set up the clients on the host and assemble the charts:

   ./tools/deployment/developer/common/020-setup-client.sh

   Deploy the ingress controller:

   ./tools/deployment/developer/common/030-ingress.sh
7. Deploy Ceph

   ./tools/deployment/developer/ceph/040-ceph.sh

   Activate the namespace so it can use Ceph:

   ./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
8. Deploy Keystone

   ./tools/deployment/developer/ceph/080-keystone.sh

9. Deploy Heat

   ./tools/deployment/developer/ceph/090-heat.sh

10. Deploy Horizon

    ./tools/deployment/developer/ceph/100-horizon.sh

11. Deploy Glance

    ./tools/deployment/developer/ceph/120-glance.sh

12. Deploy Cinder

    ./tools/deployment/developer/ceph/130-cinder.sh

13. Deploy LibVirt (required if you want to test compute-utility functionality)

    ./tools/deployment/developer/ceph/150-libvirt.sh

14. Deploy the Compute Kit (Nova and Neutron)

    ./tools/deployment/developer/ceph/160-compute-kit.sh
15. To run further commands from the CLI manually, execute the following to set up authentication credentials:

    export OS_CLOUD=openstack_helm
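As background: OS_CLOUD tells the openstack CLI (via openstacksdk) which named entry to read from clouds.yaml; the client-setup script in step 6 typically writes an openstack_helm entry there. A minimal sketch of the effect:

```shell
# Select the openstack_helm cloud entry from clouds.yaml; every
# subsequent openstack CLI call picks up its credentials, e.g.:
#   openstack endpoint list
export OS_CLOUD=openstack_helm
echo "${OS_CLOUD}"
```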
16. Clone the Porthole repository, which contains the compute utility, and run the installer:

    git clone https://review.opendev.org/airship/porthole
    cd porthole
    ./install_compute_utility.sh
Usage
-----
Get into the utility pod using kubectl exec. To perform any operation, use the example below:

   kubectl exec -it <POD_NAME> -n utility -- /bin/bash

Run utilscli with commands formatted as: utilscli <client-name> <server-hostname> <command> <options>

Example:

   utilscli libvirt-client mtn16r001c002 virsh list
Accepted client-names are:

   libvirt-client
   ovs-client
   ipmi-client
   perccli-client
   numa-client
   sos-client

Commands for each client vary with the client.
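As an illustration of the invocation format above, the following hypothetical shell helper (not part of Porthole) assembles a utilscli command line and rejects client names outside the accepted list:

```shell
# Hypothetical helper: build a utilscli invocation string, validating
# the client name against the accepted list (illustrative only).
build_utilscli() {
  client="$1"; host="$2"; shift 2
  case "$client" in
    libvirt-client|ovs-client|ipmi-client|perccli-client|numa-client|sos-client)
      echo "utilscli $client $host $*" ;;
    *)
      echo "unknown client: $client" >&2; return 1 ;;
  esac
}

build_utilscli libvirt-client mtn16r001c002 virsh list
# -> utilscli libvirt-client mtn16r001c002 virsh list
```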