Editorial changes to documentation files

Edited and revised formatting to improve readability and
consistency with other docs in this repo.

Change-Id: I8693b85fdbd84e625e774ae0fe4d81dae7d74a57
This commit is contained in:
Lindsey Durway 2019-12-04 15:11:30 -06:00
parent cd3df1b85c
commit 7b8f9d6147
9 changed files with 281 additions and 239 deletions


# Ceph Maintenance

This document provides procedures for maintaining Ceph OSDs.

## Check OSD Status

To check the current status of OSDs, execute the following.

```
utilscli osd-maintenance check_osd_status
```
## OSD Removal

To purge OSDs that are in the down state, execute the following.

```
utilscli osd-maintenance osd_remove
```

## OSD Removal by OSD ID

To purge down OSDs by specifying OSD ID, execute the following.

```
utilscli osd-maintenance remove_osd_by_id --osd-id <OSDID>
```

## Reweight OSDs

To adjust an OSD's CRUSH weight in the CRUSH map of a running cluster, execute the following.

```
utilscli osd-maintenance reweight_osds
```
## Replace a Failed OSD

If a drive fails, follow these steps to replace a failed OSD.

1. Disable the OSD pod on the host to keep it from being rescheduled.
```
kubectl label nodes --all ceph_maintenance_window=inactive
```
2. Below, replace `<NODE>` with the name of the node where the failed OSD pods exist.
```
kubectl label nodes <NODE> --overwrite ceph_maintenance_window=active
```
3. Below, replace `<POD_NAME>` with the failed OSD pod name.
```
kubectl patch -n ceph ds <POD_NAME> -p='{"spec":{"template":{"spec":{"nodeSelector":{"ceph-osd":"enabled","ceph_maintenance_window":"inactive"}}}}}'
```

Complete the recovery by executing the following commands from the Ceph utility container.
1. Capture the failed OSD ID. Check for status `down`.
```
utilscli ceph osd tree
```
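For reference, the output resembles the following; the IDs, hosts, and weights shown here are illustrative, and a failed OSD appears as `down` in the STATUS column.

```
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.19519 root default
-3       0.09760     host node1
 0   hdd 0.04880         osd.0       up  1.00000 1.00000
 1   hdd 0.04880         osd.1     down        0 1.00000
```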
2. Remove the OSD from the cluster. Below, replace `<OSD_ID>` with the ID of the failed OSD.
```
utilscli osd-maintenance osd_remove_by_id --osd-id <OSD_ID>
```
3. Remove the failed drive and replace it with a new one without bringing down the node.
4. Once the new drive is in place, change the label and delete the OSD pod that is in the `error` or `CrashLoopBackOff` state. Below, replace `<POD_NAME>` with the failed OSD pod name.
```
kubectl label nodes <NODE> --overwrite ceph_maintenance_window=inactive
kubectl delete pod <POD_NAME> -n ceph
```

Once the pod is deleted, Kubernetes will re-spin a new pod for the OSD. Once the pod is up, the OSD is added to the Ceph cluster with a weight equal to `0`. Re-weight the OSD.
```
utilscli osd-maintenance reweight_osds
```
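Optionally, re-check the OSD tree afterward to confirm that the replacement OSD reports a non-zero weight and an `up` status.

```
utilscli ceph osd tree
```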


# RBD PVC/PV script

This document provides instructions for using the `rbd_pv` script to perform Ceph maintenance actions such as backing up and recovering PVCs within your Kubernetes environment.

## Usage

Execute `utilscli rbd_pv` without arguments to list usage options.

```
utilscli rbd_pv
...
Snapshot Usage: utilscli rbd_pv [-b <pvc name>] [-n <namespace>] [-p <ceph rbd pool>] ...
```
## Backing up a PVC/PV from RBD

To backup a PV, execute the following.

```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack
```

## Restoring a PVC/PV Backup

To restore a PV RBD backup image, execute the following.

```
utilscli rbd_pv -r /backup/kubernetes-dynamic-pvc-ab1f2e8f-21a4-11e9-ab61-ca77944df03c.img
```

**Note:** The original PVC/PV will be renamed, not overwritten.

**Important:** Before restoring, you _must_ ensure the PVC/PV is not mounted!
## Creating a Snapshot for a PVC/PV

```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s create
```

## Rolling Back to a Snapshot for a PVC/PV

```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s rollback
```

**Important:** Before rolling back a snapshot, you _must_ ensure the PVC/PV volume is not mounted!

## Removing a Snapshot for a PVC/PV

**Important:** This command removes all snapshots in Ceph associated with this PVC/PV!

```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s remove
```

## Show Snapshot and Image Details for a PVC/PV

```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s show
```


# Calicoctl-utility Container

This container shall allow access to the Calico pod running on every node. Operations personnel should be able to get the appropriate data from this utility container by specifying the node and respective service command within the local cluster.

## Generic Docker Makefile

This is a generic make and dockerfile for the calicoctl utility container, which can be used to create docker images using different calico releases.

### Make Syntax

```bash
make IMAGE_TAG=<calicoctl_version>
```

Example:

Create a docker image for calicoctl release v3.4.0.

```bash
make IMAGE_TAG=v3.4.0
```

## Using the Utility Container

The utility container for calicoctl shall enable Operations to access the command set for network APIs together from within a single shell with a uniform command structure. The access to network-Calico shall be controlled through an RBAC role assigned to the user.

### Usage

Get into the utility pod using `kubectl exec`. Execute an operation as in the following example.

```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```

Example:

```bash
utilscli calicoctl get nodes
NAME
bionic

utilscli calicoctl version
Client Version:    v3.4.4
Git commit:        e3ecd927
```
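Other read-only queries follow the same `utilscli calicoctl` pattern; for example (illustrative, assuming the corresponding resources exist in your cluster):

```bash
utilscli calicoctl get ippool -o wide
utilscli calicoctl get workloadendpoints
```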


# Ceph-utility Container

The Ceph utility container enables Operations to check the state/stats of Ceph resources in the Kubernetes cluster. This utility container enables Operations to perform restricted administrative activities without exposing the credentials or keyring.

## Generic Docker Makefile

This is a generic make and dockerfile for the Ceph utility container. This can be used to create docker images using different Ceph releases and Ubuntu releases.

## Usage

```bash
make CEPH_RELEASE=<release_name> UBUNTU_RELEASE=<release_name>
```

Example:

1. Create a docker image for the Ceph Luminous release on Ubuntu Xenial (16.04).
```bash
make CEPH_RELEASE=luminous UBUNTU_RELEASE=xenial
```
2. Create a docker image for the Ceph Mimic release on Ubuntu Xenial (16.04).
```bash
make CEPH_RELEASE=mimic UBUNTU_RELEASE=xenial
```
3. Create a docker image for the Ceph Luminous release on Ubuntu Bionic (18.04).
```bash
make CEPH_RELEASE=luminous UBUNTU_RELEASE=bionic
```
4. Create a docker image for the Ceph Mimic release on Ubuntu Bionic (18.04).
```bash
make CEPH_RELEASE=mimic UBUNTU_RELEASE=bionic
```
5. Get into the utility pod using `kubectl exec`. Perform an operation on the Ceph cluster as in the following example.

Example:
```
utilscli ceph osd tree
utilscli rbd ls
utilscli rados lspools
```
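Other read-only Ceph checks can be run the same way from the utility pod, for example (illustrative):

```
utilscli ceph status
utilscli ceph health detail
```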


# Compute-utility Container

This container enables Operations personnel to access services running on the compute nodes. Operations personnel can get the appropriate data from this utility container by specifying the node and respective service command within the local cluster.

## Usage

1. Get into the utility pod using `kubectl exec`. Perform an operation as in the following example.
```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
2. Use the following syntax to run commands.
```
utilscli <client-name> <server-hostname> <command> <options>
```
Example:
```
utilscli libvirt-client node42 virsh list
```

Accepted client names are:

* libvirt-client
* ovs-client
* ipmi-client
* perccli-client
* numa-client
* sos-client
Commands for each client vary with the client.
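For illustration only, invocations for some of the other clients follow the same pattern; the node name is a placeholder, and the exact commands available depend on the tool each client wraps.

```
utilscli ovs-client <NODE_NAME> ovs-vsctl show
utilscli ipmi-client <NODE_NAME> ipmitool sensor list
```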


# Etcdctl Utility Container

## Prerequisites: Deploy Airship in a Bottle (AIAB)

To get started, deploy Airship and OpenStack Helm (OSH). Execute the following in a fresh Ubuntu 16.04 VM having these minimum requirements:

* 4 vCPU
* 20 GB RAM
* 32 GB disk storage

1. Add the following entries to `/etc/sudoers`.

```
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
```

2. Install the latest versions of Git, CA Certs, and Make if necessary.

```bash
set -xe
sudo apt-get update
sudo apt-get install --no-install-recommends -y \
  ca-certificates \
  git \
  make \
  jq \
  nmap \
  curl \
  uuid-runtime
```
## Deploy Airship in a Bottle (AIAB)

Deploy Airship in a Bottle (AIAB), which deploys the etcdctl-utility pod.

```bash
sudo -i
mkdir -p /root/deploy && cd "$_"
git clone https://opendev.org/airship/treasuremap
cd /root/deploy/treasuremap/tools/deployment/aiab
./airship-in-a-bottle.sh
```
## Usage and Test

Get into the etcdctl-utility pod using `kubectl exec`. Perform an operation as in the following example.

```
kubectl exec -it <POD_NAME> -n utility -- /bin/bash
```

Example:

```
utilscli etcdctl member list
utilscli etcdctl endpoint health
utilscli etcdctl endpoint status

nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl member list
90d1b75fa1b31b89, started, ubuntu, https://10.0.2.15:2380, https://10.0.2.15:2379
ab1f60375c5ef1d3, started, auxiliary-1, https://10.0.2.15:22380, https://10.0.2.15:22379
d8ed590018245b3c, started, auxiliary-0, https://10.0.2.15:12380, https://10.0.2.15:12379
nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl endpoint health
https://kubernetes-etcd.kube-system.svc.cluster.local:2379 is healthy:
successfully committed proposal: took = 1.787714ms
nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl alarm list
nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$ utilscli etcdctl version
etcdctl version: 3.4.2
API version: 3.3
nobody@airship-etcdctl-utility-998b4f4d6-65x6d:/$
```


# Mysqlclient-utility Container

This utility container allows Operations personnel to access MariaDB pods remotely to perform database functions. Authorized users in UCP Keystone RBAC will be able to run queries through the `utilscli` helper.

## Usage

Get into the utility pod using `kubectl exec`.

```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
## Testing Connectivity to Mariadb (Optional)

1. Find the mariadb pod and its corresponding IP.
```
kubectl get pods --all-namespaces | grep -i mariadb-server | awk '{print $1,$2}' \
| while read a b ; do kubectl get pod $b -n $a -o wide
done
```
2. Connect to the indicated pod by providing the arguments specified for the CLI as shown below.
```
kubectl exec <POD_NAME> -it -n utility -- mysql -h <IP> -u root -p<PASSWORD> \
-e 'show databases;'
```
The output should resemble the following.

```
+--------------------+
| Database           |
+--------------------+
| cinder             |
| glance             |
| heat               |
| horizon            |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
+--------------------+
```


# OpenStack-utility Container

The utility container for OpenStack shall enable Operations to access the command set for Compute, Network, Identity, Image, Block Storage, and Queueing service APIs together from within a single shell with a uniform command structure. The access to OpenStack shall be controlled through an OpenStack RBAC role assigned to the user. The user will have to set the OpenStack environment (openrc) in the utility container to access the OpenStack CLIs. The generic environment file will be placed in the utility container with common settings except username, password, and project_ID. The user needs to specify these parameters using command options.

## Usage

Get into the utility pod using `kubectl exec`. Perform an operation as in the following example. Please be ready with your password for accessing the CLI commands.

```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
Example:

```bash
utilscli openstack server list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
  --os-project-name <PROJECT_NAME>
utilscli openstack user list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
  --os-project-name <PROJECT_NAME>
```
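Other OpenStack services are reached the same way; for example, a Network API query would look like the following (illustrative).

```bash
utilscli openstack network list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
  --os-project-name <PROJECT_NAME>
```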


# Jump Host Installation

This procedure installs the Kubernetes client and corresponding dependencies, enabling remote access to the Kubernetes cluster. The procedure also creates a generic `kubectl` configuration file having the appropriate attributes.

This revision covers the implementation as described in [k8s-keystone-auth](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-keystone-webhook-authenticator-and-authorizer.md#new-kubectl-clients-from-v1110-and-later).
## 1. Prerequisites

* Ubuntu OS version 14.x or higher
* Connectivity to the Internet
## 2. Installation

### 2.1 Clone Porthole Main Project

    $git clone https://review.opendev.org/airship/porthole

### 2.2 Run Setup
    $cd porthole
    $sudo -s
    $cd jmphost
    $./setup-access.sh "site" "userid" "namespace"
    ...
      args:
      - "--keystone-url=https://<FQDN TO UCP KEYSTONE>/v3"
## 3. Validation

To test, perform these steps.

1. Log out and log back in as the user.
2. Update the configuration file with the user's credentials (a sketch of the relevant section follows this list).
   * Replace *"OS_USERNAME"* and *"OS_PASSWORD"* with UCP Keystone credentials.
   * Set the *"OS_PROJECT_NAME"* value accordingly.
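The following is a sketch of what the relevant user entry in the generated `kubectl` configuration might look like, assuming the k8s-keystone-auth exec plugin described in the reference above; the exact layout produced by `setup-access.sh` may differ, and all values shown are placeholders.

```yaml
# Hypothetical excerpt of the generated kubectl config; layout and paths may differ.
users:
- name: "userid"
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: /usr/local/bin/client-keystone-auth   # assumed plugin location
      args:
      - "--keystone-url=https://<FQDN TO UCP KEYSTONE>/v3"
      env:
      - name: OS_USERNAME
        value: "<UCP Keystone username>"
      - name: OS_PASSWORD
        value: "<UCP Keystone password>"
      - name: OS_PROJECT_NAME
        value: "<project name>"
```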
### 3.1 List Pods

    $kubectl get pods -n utility

    ...
    clcp-ucp-ceph-utility-config-ceph-ns-key-generator-pvfcl   0/1   Completed   0   4h12m
    clcp-ucp-ceph-utility-config-test                           0/1   Completed   0   4h12m
### 3.2 Execute into the Pod

    $kubectl exec -it [pod-name] -n utility /bin/bash

    command terminated with exit code 126

The "permission denied" error is expected in this case because the user ID entered in the configuration file is not a member of the UCP Keystone RBAC role required to execute into the pod.