Airship utility CLI access
33ce528756
This patch set removes the porthole-images-build and porthole-images-upload Zuul jobs, as they appear to be unused; the parent job upload-docker-image does not currently support the quay.io registry. It adds Zuul jobs, modelled on those of other Airship projects, to build the mysqlclient-utility container image and upload it to quay.io, and changes the mysqlclient-utility Dockerfile to make it more consistent with other Airship Dockerfiles. mysqlclient-utility/build.sh is removed, as it appears to be unused.

The Makefile is rewritten to support a build-image-$(IMAGE_NAME) target, which builds images from different directories and allows each image to be based on a different base image. The Makefile targets pull-all-images, pull-images and dev-deploy are removed, since the scripts they refer to are absent and the targets could not be executed anyway; other unused Makefile targets are removed as well. The copyright (license) notice is adjusted in files that are largely sourced from Airship Apache 2.0 code.

The calicoctl-, ceph-, compute- and openstack-utility jobs are removed, as they only linted the same code multiple times, burdening the OpenStack infrastructure; a single linting job that runs against the complete code base is added. The unused helm_install.sh, helm_tk.sh and install_*_utility.sh scripts are removed.

Change-Id: I2d8f4b0cfaa6a8def0539ce6c40cc3c320f36a7d
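For illustration, a minimal Make sketch of how such a parameterized build target might look; the variable names, the FROM build argument, and the quay.io/airshipit registry path are assumptions, not the repository's actual Makefile::

    IMAGE_NAME ?= mysqlclient-utility
    BASE_IMAGE ?= ubuntu:18.04
    REGISTRY   ?= quay.io/airshipit

    images: build-image-$(IMAGE_NAME)

    # "make build-image-<dir>" builds the Dockerfile in directory <dir>,
    # passing the base image as a build argument so each image can be
    # based on a different base.
    build-image-%:
	docker build --build-arg FROM=$(BASE_IMAGE) \
	    -t $(REGISTRY)/porthole-$*:latest $*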
calicoctl-utility
ceph-utility
compute-utility
Dockerfiles
docs
mysqlclient-utility
ncct-utility
openstack-utility
tools
zuul.d
.gitignore
.gitreview
Compute_Utility_Readme
LICENSE
Makefile
Openstack_Utility_Readme
README
Utility Container
-----------------

1. Ceph Utility Container

Installation
------------

1. Add the below to /etc/sudoers::

     root ALL=(ALL) NOPASSWD: ALL
     ubuntu ALL=(ALL) NOPASSWD: ALL

2. Install the latest versions of Git, CA Certs and Make if necessary::

     #!/bin/bash
     set -xe
     sudo apt-get update
     sudo apt-get install --no-install-recommends -y \
       ca-certificates \
       git \
       make \
       jq \
       nmap \
       curl \
       uuid-runtime

3. Proxy Configuration

   In order to deploy OpenStack-Helm behind corporate proxy servers, add the
   following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml::

     proxy:
       http: http://username:password@host:port
       https: https://username:password@host:port
       noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local

   Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local
   to your no_proxy and NO_PROXY environment variables::

     export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
     export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local

4. Clone the OpenStack-Helm repositories::

     #!/bin/bash
     set -xe
     git clone https://git.openstack.org/openstack/openstack-helm-infra.git
     git clone https://git.openstack.org/openstack/openstack-helm.git

5. Deploy Kubernetes and Helm::

     cd openstack-helm
     ./tools/deployment/developer/common/010-deploy-k8s.sh

6. Install OpenStack-Helm

   Set up clients on the host and assemble the charts::

     ./tools/deployment/developer/common/020-setup-client.sh

   Deploy the ingress controller::

     ./tools/deployment/developer/common/030-ingress.sh

7. Deploy Ceph::

     ./tools/deployment/developer/ceph/040-ceph.sh

   Activate the OpenStack namespace to be able to use Ceph::

     ./tools/deployment/developer/ceph/045-ceph-ns-activate.sh

8. Deploy Porthole::

     git clone https://github.com/att-comdev/porthole.git
     cd porthole
     ./install_utility.sh

Usage
-----

Get into the utility pod using kubectl exec, then perform operations on the
Ceph cluster through the utilscli wrapper, as shown below and in the sketch
after this section, for example::

    utilscli ceph osd tree
    utilscli rbd ls
    utilscli rados lspools

TODO
----

1. Customize oslo filters to restrict commands.
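As a supplement to the Usage section above, a minimal sketch of reaching the
utility pod with kubectl exec; the utility namespace and the
application=ceph-utility label selector are assumptions and may differ in an
actual deployment::

    # Hypothetical namespace and label selector; adjust to your deployment.
    POD=$(kubectl get pods -n utility -l application=ceph-utility \
          -o jsonpath='{.items[0].metadata.name}')

    # Open a shell in the utility pod, then run utilscli commands from it.
    kubectl exec -it -n utility "$POD" -- /bin/bash
    utilscli ceph osd tree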