Adjust code to project conventions

1. Cleaning up old Makefiles and scripts that are no longer used

2. Renaming files to use the .md extension so that they render correctly

3. Moving README* files so that they are stored in the same location

4. Adding installation scripts for all the utilities

Change-Id: Id8475cc7323f6905a0ea6df330197c61688a2a90
Rahul Khiyani 2019-10-30 13:18:00 -05:00
parent 5e7492b94c
commit 55703d7563
20 changed files with 426 additions and 422 deletions


@@ -1,109 +0,0 @@
Openstack Utility Container
---------------------------
Prerequisites: Deploy OSH-AIO
Installation
------------
1. Add the below to /etc/sudoers
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
2. Install the latest versions of Git, CA Certs & Make if necessary
sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install --no-install-recommends -y \
ca-certificates \
git \
make \
jq \
nmap \
curl \
uuid-runtime
3. Clone the OpenStack-Helm Repos
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
4. Proxy Configuration
In order to deploy OpenStack-Helm behind corporate proxy servers, add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml.
proxy:
http: http://username:password@host:port
https: https://username:password@host:port
noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local to your no_proxy and NO_PROXY environment variables.
export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
5. Deploy Kubernetes & Helm
cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh
Please remove the DNS nameserver (nameserver 10.96.0.10) from /etc/resolv.conf, since the Python setup client would fail with it present.
Setup Clients on the host and assemble the charts
./tools/deployment/developer/common/020-setup-client.sh
Re-add the DNS nameservers to /etc/resolv.conf so that the Keystone URLs resolve.
Deploy the ingress controller
./tools/deployment/developer/common/030-ingress.sh
6. Deploy Ceph
./tools/deployment/developer/ceph/040-ceph.sh
Activate the namespace to be able to use Ceph
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
7. Deploy Keystone
./tools/deployment/developer/ceph/080-keystone.sh
8. Deploy Heat
./tools/deployment/developer/ceph/090-heat.sh
9. Deploy Horizon
./tools/deployment/developer/ceph/100-horizon.sh
10. Deploy Glance
./tools/deployment/developer/ceph/120-glance.sh
11. Deploy Cinder
./tools/deployment/developer/ceph/130-cinder.sh
12. Deploy LibVirt
./tools/deployment/developer/ceph/150-libvirt.sh
13. Deploy Compute Kit (Nova and Neutron)
./tools/deployment/developer/ceph/160-compute-kit.sh
14. To run further commands from the CLI manually, execute the following to set up authentication credentials
export OS_CLOUD=openstack_helm
15. Clone the Porthole and openstack utility repos as well.
git clone https://review.opendev.org/openstack/airship-porthole
git pull ssh://pd2839@review.opendev.org:29418/airship/porthole refs/changes/70/674670/13
cd porthole
./install_openstack_utility.sh
Usage
-----
Get into the utility pod using kubectl exec. To perform any operation, use the example below. Please have the password ready for the CLI commands below.
kubectl exec -it <POD_NAME> -n utility /bin/bash
example:
utilscli openstack server list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> --os-project-name <PROJECT_NAME>
utilscli openstack user list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> --os-project-name <PROJECT_NAME>

README

@@ -1,88 +0,0 @@
Utility Container
-----------------
1. Ceph utility Container
Installation
------------
1. Add the below to /etc/sudoers
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
2. Install the latest versions of Git, CA Certs & Make if necessary
#!/bin/bash
set -xe
sudo apt-get update
sudo apt-get install --no-install-recommends -y \
ca-certificates \
git \
make \
jq \
nmap \
curl \
uuid-runtime
3. Proxy Configuration
In order to deploy OpenStack-Helm behind corporate proxy servers, add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml.
proxy:
http: http://username:password@host:port
https: https://username:password@host:port
noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local to your no_proxy and NO_PROXY environment variables.
export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
4. Clone the OpenStack-Helm Repos
#!/bin/bash
set -xe
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
5. Deploy Kubernetes & Helm
cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh
6. Install OpenStack-Helm
Setup Clients on the host and assemble the charts
./tools/deployment/developer/common/020-setup-client.sh
Deploy the ingress controller
./tools/deployment/developer/common/030-ingress.sh
7. Deploy Ceph
./tools/deployment/developer/ceph/040-ceph.sh
Activate the OpenStack namespace to be able to use Ceph
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
8. Deploy Porthole
git clone https://github.com/att-comdev/porthole.git
cd porthole
./install_utility.sh
Usage
-----
Get into the utility pod using kubectl exec. To perform any operation on the Ceph cluster, use the example below.
example:
utilscli ceph osd tree
utilscli rbd ls
utilscli rados lspools
TODO
----
1. Customize oslo filters to restrict commands.

README.md

@@ -0,0 +1,145 @@
# Utility Containers
Utility containers provide a consolidated, component-level view of the
containers running within Network Cloud infrastructure to members of the
Operations team. This gives Operations team members a way to check the
state of the various services running within the component pods of
Network Cloud.
## Prerequisites
Deploy OSH-AIO
## System Requirements
The recommended minimum system requirements for a full deployment are:
* 16GB of RAM
* 8 Cores
* 48GB HDD
## Installation
1. Add the below to /etc/sudoers
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
2. Install the latest versions of Git, CA Certs & Make if necessary
sudo apt-get update
sudo apt-get dist-upgrade -y
sudo apt-get install --no-install-recommends -y \
ca-certificates \
git \
make \
jq \
nmap \
curl \
uuid-runtime \
bc
3. Clone the OpenStack-Helm Repos
git clone https://git.openstack.org/openstack/openstack-helm-infra.git
git clone https://git.openstack.org/openstack/openstack-helm.git
4. Proxy Configuration
In order to deploy OpenStack-Helm behind corporate proxy servers,
add the following entries to openstack-helm-infra/tools/gate/devel/local-vars.yaml.
proxy:
http: http://username:password@host:port
https: https://username:password@host:port
noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local
Add the address of the Kubernetes API, 172.17.0.1, and .svc.cluster.local to
your no_proxy and NO_PROXY environment variables.
export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local
5. Deploy Kubernetes & Helm
cd openstack-helm
./tools/deployment/developer/common/010-deploy-k8s.sh
Please remove the DNS nameserver (nameserver 10.96.0.10) from /etc/resolv.conf,
since the Python setup client would fail with it present.
6. Setup Clients on the host and assemble the charts
./tools/deployment/developer/common/020-setup-client.sh
Re-add the DNS nameservers to /etc/resolv.conf so that the Keystone URLs resolve.
7. Deploy the ingress controller
./tools/deployment/developer/common/030-ingress.sh
8. Deploy Ceph
./tools/deployment/developer/ceph/040-ceph.sh
9. Activate the namespace to be able to use Ceph
./tools/deployment/developer/ceph/045-ceph-ns-activate.sh
10. Deploy Keystone
./tools/deployment/developer/ceph/080-keystone.sh
11. Deploy Heat
./tools/deployment/developer/ceph/090-heat.sh
12. Deploy Horizon
./tools/deployment/developer/ceph/100-horizon.sh
13. Deploy Glance
./tools/deployment/developer/ceph/120-glance.sh
14. Deploy Cinder
./tools/deployment/developer/ceph/130-cinder.sh
15. Deploy LibVirt
./tools/deployment/developer/ceph/150-libvirt.sh
16. Deploy Compute Kit (Nova and Neutron)
./tools/deployment/developer/ceph/160-compute-kit.sh
17. To run further commands from the CLI manually, execute the following
to set up authentication credentials
export OS_CLOUD=openstack_helm
18. Clone the Porthole repo into the openstack-helm project
git clone https://opendev.org/airship/porthole.git
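With OS_CLOUD exported as in step 17, a quick check that the credentials resolve
(the same check the openstack-utility deployment script performs later in this change):
export OS_CLOUD=openstack_helm
openstack endpoint list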
## To deploy utility pods
1. cd porthole
2. helm repo add <chartname> http://localhost:8879/charts
3. make all
4. Deploy Ceph-utility
./tools/deployment/utilities/010-ceph-utility.sh
5. Deploy Compute-utility
./tools/deployment/utilities/020-compute-utility.sh
6. Deploy Etcdctl-utility
./tools/deployment/utilities/030-etcdctl-utility.sh
7. Deploy Mysqlclient-utility
./tools/deployment/utilities/040-Mysqlclient-utility.sh
8. Deploy Openstack-utility
./tools/deployment/utilities/050-openstack-utility.sh
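The eight steps above can also be chained in one pass; a minimal sketch, assuming the
porthole checkout sits next to openstack-helm and a local chart repository is already
serving on localhost:8879 (the repository name "porthole" below is a placeholder):
#!/bin/bash
set -xe
cd porthole
# placeholder repo name; point it at whatever serves the packaged charts
helm repo add porthole http://localhost:8879/charts
make all
# run the utility deployment scripts in order
for util in 010-ceph-utility 020-compute-utility 030-etcdctl-utility \
            040-Mysqlclient-utility 050-openstack-utility; do
    ./tools/deployment/utilities/${util}.sh
done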
## NOTE
For postgresql-utility, please refer to the URL below; as per validation, postgresql-utility is deployed in AIAB:
https://opendev.org/airship/porthole/src/branch/master/images/postgresql-utility/README.md


@@ -1,70 +0,0 @@
# MySqlClient Utility Container
This container allows users to access MariaDB pods remotely to perform DB
functions. Authorized users in UCP Keystone RBAC will be able to run queries
through the 'utilscli' helper.
## Prerequisites
1. Internet access
2. Successfully deploy [Openstack Helm Chart](https://docs.openstack.org/openstack-helm/latest/install/index.html) sandbox
3. Have access to a jump host where the user's k8s profile has already been set up
## Installation
1. Clone the OpenStack-Helm and Porthole repos
$ git clone https://git.openstack.org/openstack/openstack-helm-infra.git
$ git clone https://git.openstack.org/openstack/openstack-helm.git
$ git clone https://review.opendev.org/airship/porthole
2. Pull PatchSet (optional)
$ cd porthole
$ git pull https://review.opendev.org/airship/porthole refs/changes/[patchset number]/[latest change set]
## Validation
Execute into the pod by using **kubectl** command line:
### Case 1 - Execute into the pod
$ kubectl exec -it <POD_NAME> -n utility /bin/bash
This should provide a shell prompt.
### Case 2 - Test connectivity to Mariadb (optional)
Find mariadb pod and its corresponding IP
kubectl get pods --all-namespaces -o wide |grep -i mariadb-server|awk '{print $1,$2,$7}'
The output should look similar to the below:
openstack mariadb-server-0 192.168.207.19
Now connect to the pod as illustrated in Case 1, providing the CLI arguments accordingly.
CLI syntax:
$ kubectl exec <POD_NAME> -it -n utility -- mysql -h <IP> -u root -p<PASSWORD> -e 'show databases;'
You should see output similar to the following:
+--------------------+
| Database           |
+--------------------+
| cinder             |
| glance             |
| heat               |
| horizon            |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
+--------------------+


@@ -1,15 +0,0 @@
Generic Docker Makefile
-----------------------
This is a generic Makefile and Dockerfile for the calicoctl utility container, which
can be used to create Docker images using different Calico releases.
Usage:
make IMAGE_TAG=<calicoctl_version>
eg:
1. Create docker image for calicoctl release v3.4.0
make IMAGE_TAG=v3.4.0


@@ -0,0 +1,20 @@
# Calicoctl-utility Container
This container shall allow access to the calico pods running on every node.
Support personnel should be able to get the appropriate data from this utility container
by specifying the node and the respective service command within the local cluster.
## Generic Docker Makefile
This is a generic Makefile and Dockerfile for the calicoctl utility container, which
can be used to create Docker images using different Calico releases.
## Usage
make IMAGE_TAG=<calicoctl_version>
Example:
1. Create docker image for calicoctl release v3.4.0
make IMAGE_TAG=v3.4.0
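Once the image is built and the utility pod is deployed, queries should go through the
same utilscli wrapper used by the other utility containers; a hypothetical invocation
(the wrapper pass-through is an assumption, while the sub-commands themselves are
standard calicoctl syntax):
kubectl exec -it <POD_NAME> -n utility /bin/bash
utilscli calicoctl get nodes
utilscli calicoctl node status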


@@ -1,47 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ifndef CEPH_RELEASE
$(error The CEPH_RELEASE variable is missing.)
endif
ifndef UBUNTU_RELEASE
$(error The UBUNTU_RELEASE variable is missing.)
endif
SHELL := /bin/bash
DOCKER_REGISTRY ?= quay.io
IMAGE_NAME ?= ceph-utility
IMAGE_PREFIX ?= airship/porthole
IMAGE_TAG ?= latest
LABEL ?= mimic
IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
# Build ceph-utility Docker image for this project
.PHONY: images
images: build_$(IMAGE_NAME)
# Make targets intended for use by the primary targets above.
.PHONY: build_$(IMAGE_NAME)
build_$(IMAGE_NAME):
	docker build -f Dockerfile.ubuntu \
		--network host \
		--build-arg CEPH_RELEASE=$(CEPH_RELEASE) \
		--build-arg UBUNTU_RELEASE=$(UBUNTU_RELEASE) \
		$(EXTRA_BUILD_ARGS) \
		-t $(IMAGE) \
		--label $(LABEL) --label CEPH_RELEASE=$(CEPH_RELEASE) \
		.


@@ -1,26 +0,0 @@
Generic Docker Makefile
-----------------------
This is a generic Makefile and Dockerfile for the ceph utility container. It can be used to create Docker images using different Ceph and Ubuntu releases.
Usage:
make CEPH_RELEASE=<release_name> UBUNTU_RELEASE=<release_name>
eg:
1. Create docker image for ceph luminous release on ubuntu xenial (16.04)
make CEPH_RELEASE=luminous UBUNTU_RELEASE=xenial
2. Create docker image for ceph mimic release on ubuntu xenial (16.04)
make CEPH_RELEASE=mimic UBUNTU_RELEASE=xenial
3. Create docker image for ceph luminous release on ubuntu bionic (18.04)
make CEPH_RELEASE=luminous UBUNTU_RELEASE=bionic
4. Create docker image for ceph mimic release on ubuntu bionic (18.04)
make CEPH_RELEASE=mimic UBUNTU_RELEASE=bionic


@@ -0,0 +1,42 @@
# Ceph-utility Container
This Ceph utility container helps the Operations user check the state/stats
of Ceph resources in the K8s cluster. It helps perform restricted admin-level
activities without exposing credentials/keyrings to the user in the
utility container.
## Generic Docker Makefile
This is a generic Makefile and Dockerfile for the ceph utility container.
It can be used to create Docker images using different Ceph and Ubuntu releases.
## Usage
make CEPH_RELEASE=<release_name> UBUNTU_RELEASE=<release_name>
example:
1. Create docker image for ceph luminous release on ubuntu xenial (16.04)
make CEPH_RELEASE=luminous UBUNTU_RELEASE=xenial
2. Create docker image for ceph mimic release on ubuntu xenial (16.04)
make CEPH_RELEASE=mimic UBUNTU_RELEASE=xenial
3. Create docker image for ceph luminous release on ubuntu bionic (18.04)
make CEPH_RELEASE=luminous UBUNTU_RELEASE=bionic
4. Create docker image for ceph mimic release on ubuntu bionic (18.04)
make CEPH_RELEASE=mimic UBUNTU_RELEASE=bionic
5. Get into the utility pod using kubectl exec.
To perform any operation on the Ceph cluster, use the example below.
example:
utilscli ceph osd tree
utilscli rbd ls
utilscli rados lspools
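Other read-only checks follow the same pattern (standard Ceph CLI sub-commands,
assuming the wrapper passes them through):
utilscli ceph status
utilscli ceph df
utilscli ceph osd pool ls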


@@ -1,37 +0,0 @@
# Copyright 2017 The Openstack-Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
SHELL := /bin/bash
DOCKER_REGISTRY ?= quay.io
IMAGE_NAME ?= compute-utility
IMAGE_PREFIX ?= attcomdev
IMAGE_TAG ?= latest
LABEL ?= mimic
IMAGE := ${DOCKER_REGISTRY}/${IMAGE_PREFIX}/${IMAGE_NAME}:${IMAGE_TAG}
# Build compute-utility Docker image for this project
.PHONY: images
images: build_$(IMAGE_NAME)
# Make targets intended for use by the primary targets above.
.PHONY: build_$(IMAGE_NAME)
build_$(IMAGE_NAME):
	docker build -f Dockerfile.${DISTRO} \
		--network host \
		-t $(IMAGE) \
		--label $(LABEL) \
		.


@@ -0,0 +1,30 @@
# Compute-utility Container
This container shall allow access to services running on each compute node.
Support personnel should be able to get the appropriate data from this utility container
by specifying the node and respective service command within the local cluster.
## Usage
1. Get into the utility pod using kubectl exec. To perform any operation, use the example below.
- kubectl exec -it <POD_NAME> -n utility /bin/bash
2. Run utilscli with commands in the following format:
- utilscli <client-name> <server-hostname> <command> <options>
example:
- utilscli libvirt-client mtn16r001c002 virsh list
Accepted client-names are:
libvirt-client
ovs-client
ipmi-client
perccli-client
numa-client
sos-client
Commands for each client vary with the client.
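For illustration, a couple of hypothetical invocations against a node named cmp001
(the node name and sub-commands are placeholders; what each client accepts depends
on its wrapper):
- utilscli ovs-client cmp001 ovs-vsctl show
- utilscli ipmi-client cmp001 ipmitool sensor list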


@@ -1,16 +0,0 @@
#!/bin/bash
set -xe
SCRIPT=`realpath $0`
SCRIPT_DIR=`dirname ${SCRIPT}`
## Only build from main folder
cd ${SCRIPT_DIR}/..
IMAGE="compute-utility"
VERSION=${VERSION:-latest}
DISTRO=${DISTRO:-ubuntu_xenial}
REGISTRY_URI=${REGISTRY_URI:-"openstackhelm/"}
EXTRA_TAG_INFO=${EXTRA_TAG_INFO:-""}
docker build -f ${IMAGE}/Dockerfile.${DISTRO} \
--network=host -t ${REGISTRY_URI}${IMAGE}:${VERSION}-${DISTRO}${EXTRA_TAG_INFO} \
${extra_build_args} \
${IMAGE}


@@ -0,0 +1,47 @@
# Mysqlclient-utility Container
This container allows users to access MariaDB pods remotely to perform DB
functions. Authorized users in UCP Keystone RBAC will be able to run queries
through the 'utilscli' helper.
## Usage & Test
Get into the utility pod using kubectl exec, then perform the following:
## Case 1 - Execute into the pod
- $ kubectl exec -it <POD_NAME> -n utility /bin/bash
## Case 2 - Test connectivity to Mariadb (optional)
1. Find mariadb pod and its corresponding IP
---
- $ kubectl get pods --all-namespaces | grep -i mariadb-server | awk '{print $1,$2}' \
| while read a b ; do kubectl get pod $b -n $a -o wide
done
---
2. Now connect to the pod as described in Case 1, providing the CLI arguments
as shown below:
- $ kubectl exec <POD_NAME> -it -n utility -- mysql -h <IP> -u root -p<PASSWORD> \
-e 'show databases;'
You should see output similar to the following:
+--------------------+
| Database           |
+--------------------+
| cinder             |
| glance             |
| heat               |
| horizon            |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
+--------------------+
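The 'utilscli' helper mentioned above should allow the same query without exec-ing the
mysql client directly; a hypothetical form, assuming the wrapper forwards mysql
arguments unchanged:
- utilscli mysql -h <IP> -u root -p<PASSWORD> -e 'show databases;'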


@@ -0,0 +1,24 @@
# Openstack-utility Container
The utility container for OpenStack shall enable Operations to trigger the command set for
the Compute, Network, Identity, Image, Block Storage, and Queueing service APIs together from
within a single shell with a uniform command structure. Access to OpenStack shall
be controlled through the OpenStack RBAC role assigned to the user. The user has to set
the OpenStack environment (openrc) in the utility container to access the OpenStack CLIs.
A generic environment file is placed in the utility container with common settings, except for the
username, password, and project ID, which the user needs to pass as command options.
## Usage
1. Get into the utility pod using kubectl exec.
To perform any operation, use the example below.
Please have the password ready for the CLI commands below.
- kubectl exec -it <POD_NAME> -n utility /bin/bash
example:
utilscli openstack server list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
--os-project-name <PROJECT_NAME>
utilscli openstack user list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
--os-project-name <PROJECT_NAME>
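Since the pod name is generated by the chart, it can be looked up first; a minimal
sketch (the application=openstack-utility label is an assumption):
# hypothetical label selector; adjust to match the chart's actual labels
POD_NAME=$(kubectl get pods -n utility -l application=openstack-utility \
-o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "${POD_NAME}" -n utility /bin/bash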


@@ -1,14 +0,0 @@
#!/bin/bash
SCRIPT=`realpath $0`
SCRIPT_DIR=`dirname ${SCRIPT}`
## Only build from main folder
cd ${SCRIPT_DIR}/..
IMAGE="openstack-utility"
VERSION=${VERSION:-latest}
DISTRO=${DISTRO:-ubuntu_xenial}
REGISTRY_URI=${REGISTRY_URI:-"openstackhelm/"}
EXTRA_TAG_INFO=${EXTRA_TAG_INFO:-""}
docker build -f ${IMAGE}/Dockerfile.${DISTRO} --network=host -t ${REGISTRY_URI}${IMAGE}:${VERSION}-${DISTRO}${EXTRA_TAG_INFO} ${extra_build_args} ${IMAGE}
cd -


@@ -0,0 +1,52 @@
#!/bin/bash
set -xe
#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../../openstack-helm-infra"}
#: ${PORTHOLE_PATH}:=""
make -C ${OSH_INFRA_PATH} ceph-provisioners
#NOTE: Deploy command
: ${OSH_EXTRA_HELM_ARGS:=""}
tee /tmp/ceph-utility-config.yaml <<EOF
endpoints:
  identity:
    namespace: openstack
  object_store:
    namespace: ceph
  ceph_mon:
    namespace: ceph
network:
  public: 172.17.0.1/16
  cluster: 172.17.0.1/16
deployment:
  storage_secrets: false
  ceph: false
  rbd_provisioner: false
  cephfs_provisioner: false
  client_secrets: true
  rgw_keystone_user_and_endpoints: false
bootstrap:
  enabled: false
conf:
  rgw_ks:
    enabled: true
EOF
helm upgrade --install ceph-utility-config ${OSH_INFRA_PATH}/ceph-provisioners \
--namespace=utility \
--values=/tmp/ceph-utility-config.yaml \
${OSH_EXTRA_HELM_ARGS} \
${OSH_EXTRA_HELM_ARGS_CEPH_NS_ACTIVATE}
#NOTE: Wait for deploy
./${OSH_INFRA_PATH}/tools/deployment/common/wait-for-pods.sh utility
cd charts
make ceph-utility
helm upgrade --install ceph-utility ./ceph-utility \
--namespace=utility
#NOTE: Validate Deployment info
kubectl get -n utility jobs
kubectl get -n utility secrets
kubectl get -n utility configmaps


@@ -0,0 +1,17 @@
#!/bin/bash
set -xe
#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../../openstack-helm-infra"}
cd charts
make compute-utility
helm upgrade --install compute-utility ./compute-utility --namespace=utility
#NOTE: Validate Deployment info
kubectl get -n utility jobs
kubectl get -n utility secrets
kubectl get -n utility configmaps
kubectl get -n utility pods


@@ -0,0 +1,16 @@
#!/bin/bash
set -xe
#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../../openstack-helm-infra"}
cd charts
make etcdctl-utility
helm upgrade --install etcdctl-utility ./etcdctl-utility --namespace=utility
#NOTE: Validate Deployment info
kubectl get -n utility secrets
kubectl get -n utility configmaps
kubectl get pods -n utility


@@ -0,0 +1,15 @@
#!/bin/bash
set -xe
#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../../openstack-helm-infra"}
cd charts
make mysqlclient-utility
helm upgrade --install mysqlclient-utility ./mysqlclient-utility --namespace=utility
#NOTE: Validate Deployment info
kubectl get pods -n utility |grep mysqlclient-utility
helm status mysqlclient-utility


@@ -0,0 +1,18 @@
#!/bin/bash
set -xe
#NOTE: Lint and package chart
: ${OSH_INFRA_PATH:="../openstack-helm-infra"}
cd charts
make openstack-utility
helm upgrade --install openstack-utility ./openstack-utility --namespace=utility
#NOTE: Validate Deployment info
kubectl get pods --all-namespaces | grep openstack-utility
helm status openstack-utility
export OS_CLOUD=openstack_helm
sleep 30 #NOTE(portdirect): Wait for ingress controller to update rules and restart Nginx
openstack endpoint list