Editorial changes to documentation files

Edited and revised formatting to improve readability and
consistency with other docs in this repo.

Change-Id: I8693b85fdbd84e625e774ae0fe4d81dae7d74a57
Lindsey Durway 2019-12-04 15:11:30 -06:00
parent cd3df1b85c
commit 7b8f9d6147
9 changed files with 281 additions and 239 deletions

View File

@ -1,79 +1,93 @@
# Ceph Maintenance
This document provides procedures for maintaining Ceph OSDs.
## Check OSD Status
To check the current status of OSDs, execute the following.
```
utilscli osd-maintenance check_osd_status
```
## OSD Removal
To purge OSDs that are in the down state, execute the following.
```
utilscli osd-maintenance osd_remove
```
## OSD Removal by OSD ID
To purge down OSDs by specifying the OSD ID, execute the following.
```
utilscli osd-maintenance remove_osd_by_id --osd-id <OSDID>
```
## Reweight OSDs
To adjust an OSD's CRUSH weight in the CRUSH map of a running cluster,
execute the following.
```
utilscli osd-maintenance reweight_osds
```
## Replace a Failed OSD
If a drive fails, follow these steps to replace the failed OSD.
1. Disable the OSD pod on the host to keep it from being rescheduled.
```
kubectl label nodes --all ceph_maintenance_window=inactive
```
2. Below, replace `<NODE>` with the name of the node where the failed OSD pods exist.
```
kubectl label nodes <NODE> --overwrite ceph_maintenance_window=active
```
3. Below, replace `<POD_NAME>` with the failed OSD pod name.
```
kubectl patch -n ceph ds <POD_NAME> -p='{"spec":{"template":{"spec":{"nodeSelector":{"ceph-osd":"enabled","ceph_maintenance_window":"inactive"}}}}}'
```
Complete the recovery by executing the following commands from the Ceph
utility container.
1. Capture the failed OSD ID. Check for status `down`.
```
utilscli ceph osd tree
```
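The output resembles the following (illustrative only; IDs, hosts, and weights
will differ in your cluster). The failed OSD reports a `down` status:
```
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.09758 root default
-3       0.04879     host node1
 0   hdd 0.04879         osd.0      up  1.00000 1.00000
-5       0.04879     host node2
 1   hdd 0.04879         osd.1    down        0 1.00000
```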
2. Remove the OSD from the cluster. Below, replace `<OSD_ID>` with the ID of
the failed OSD.
```
utilscli osd-maintenance osd_remove_by_id --osd-id <OSD_ID>
```
3. Remove the failed drive and replace it with a new one without bringing down
the node.
4. Once the new drive is in place, change the label and delete the OSD pod that
is in the `error` or `CrashLoopBackOff` state. Below, replace `<POD_NAME>`
with the failed OSD pod name.
```
kubectl label nodes <NODE> --overwrite ceph_maintenance_window=inactive
kubectl delete pod <POD_NAME> -n ceph
```
5. Once the pod is deleted, Kubernetes re-spins a new pod for the OSD.
Once the pod is up, the OSD is added to the Ceph cluster with a weight of
`0`. Re-weight the OSD.
```
utilscli osd-maintenance reweight_osds
```
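After re-weighting, the overall cluster state can be checked from the same
utility container; a quick sanity check, assuming the same `utilscli ceph ...`
wrapper shown above, is:
```
utilscli ceph -s
utilscli ceph osd tree
```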

View File

@ -1,10 +1,12 @@
# RBD PVC/PV script
This document provides instructions for using the `rbd_pv` script to
perform Ceph maintenance actions such as
backing up and recovering PVCs within your Kubernetes environment.
## Usage
Execute `utilscli rbd_pv` without arguments to list usage options.
```
utilscli rbd_pv
@ -14,20 +16,24 @@ Snapshot Usage: utilscli rbd_pv [-b <pvc name>] [-n <namespace>] [-p <ceph rbd p
```
## Backing up a PVC/PV from RBD
To back up a PV, execute the following.
```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack
```
## Restoring a PVC/PV Backup
To restore a PV RBD backup image, execute the following.
```
utilscli rbd_pv -r /backup/kubernetes-dynamic-pvc-ab1f2e8f-21a4-11e9-ab61-ca77944df03c.img
```
**Note:** The original PVC/PV will be renamed, not overwritten.
**Important:** Before restoring, you _must_ ensure the PVC/PV is not mounted!
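One quick way to confirm that no pod is still using the PVC before a restore is
to check the claim's consumers. Field names vary slightly across kubectl
versions, so treat this as a sketch:
```
kubectl describe pvc mysql-data-mariadb-server-0 -n openstack | grep -iE 'mounted by|used by'
```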
## Creating a Snapshot for a PVC/PV
@ -35,23 +41,23 @@ NOTE: Before restoring, you _must_ ensure it is not mounted!
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s create
```
## Rolling Back to a Snapshot for a PVC/PV
```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s rollback
```
**Important:** Before rolling back a snapshot, you _must_ ensure the PVC/PV volume is not mounted!
## Removing a Snapshot for a PVC/PV
**Important:** This command removes all snapshots in Ceph associated with this PVC/PV!
```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s remove
```
## Show Snapshot and Image Details for a PVC/PV
```
utilscli rbd_pv -b mysql-data-mariadb-server-0 -n openstack -s show

View File

@ -1,42 +1,53 @@
# Calicoctl-utility Container
This container shall allow access to the Calico pod running on every node.
Operations personnel should be able to get the appropriate data from this
utility container by specifying the node and respective service command
within the local cluster.
## Generic Docker Makefile
This is a generic Makefile and Dockerfile for the calicoctl utility container,
which can be used to create Docker images using different Calico releases.
## Usage
### Make Syntax
```bash
make IMAGE_TAG=<calicoctl_version>
```
Example:
Create a Docker image for calicoctl release v3.4.0.
```bash
make IMAGE_TAG=v3.4.0
```
## Using the Utility Container
The utility container for calicoctl shall enable Operations to access the
command set for network APIs together from within a single shell with a
uniform command structure. The access to network-Calico shall be controlled
through an RBAC role assigned to the user.
### Usage
Get into the utility pod using `kubectl exec`.
Execute an operation as in the following example.
```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
Example:
```bash
utilscli calicoctl get nodes
NAME
bionic
utilscli calicoctl version
Client Version: v3.4.4
Git commit: e3ecd927
```
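Other read-only calicoctl queries follow the same `utilscli calicoctl ...`
pattern; for example (commands shown for illustration only, output omitted):
```bash
utilscli calicoctl get ippool -o wide
utilscli calicoctl get workloadendpoints
```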

View File

@ -1,42 +1,55 @@
# Ceph-utility Container
The Ceph utility container enables Operations to check the state/stats
of Ceph resources in the Kubernetes cluster. It also enables Operations
to perform restricted administrative activities without exposing
the credentials or keyring.
## Generic Docker Makefile
This is a generic Makefile and Dockerfile for the Ceph utility container.
It can be used to create Docker images using different Ceph releases and
Ubuntu releases.
## Usage
```bash
make CEPH_RELEASE=<release_name> UBUNTU_RELEASE=<release_name>
```
Example:
1. Create a Docker image for the Ceph Luminous release on Ubuntu Xenial (16.04).
```bash
make CEPH_RELEASE=luminous UBUNTU_RELEASE=xenial
```
2. Create a Docker image for the Ceph Mimic release on Ubuntu Xenial (16.04).
```bash
make CEPH_RELEASE=mimic UBUNTU_RELEASE=xenial
```
3. Create a Docker image for the Ceph Luminous release on Ubuntu Bionic (18.04).
```bash
make CEPH_RELEASE=luminous UBUNTU_RELEASE=bionic
```
4. Create a Docker image for the Ceph Mimic release on Ubuntu Bionic (18.04).
```bash
make CEPH_RELEASE=mimic UBUNTU_RELEASE=bionic
```
5. Get into the utility pod using `kubectl exec`.
Perform an operation on the Ceph cluster as in the following example.
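To enter the pod, the same pattern used by the other utility containers in
this repository applies (the pod name is a placeholder):
```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```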
Example:
```
utilscli ceph osd tree
utilscli rbd ls
utilscli rados lspools
```

View File

@ -1,30 +1,38 @@
# Compute-utility Container
This container enables Operations personnel to access services running on
the compute nodes. Operations personnel can get the appropriate data from this
utility container by specifying the node and respective service command within
the local cluster.
## Usage
1. Get into the utility pod using `kubectl exec`. Perform an operation as in
the following example.
```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
2. Use the following syntax to run commands.
```
utilscli <client-name> <server-hostname> <command> <options>
```
Example:
```
utilscli libvirt-client node42 virsh list
```
Accepted values for `<client-name>` are:
* libvirt-client
* ovs-client
* ipmi-client
* perccli-client
* numa-client
* sos-client
Commands for each client vary with the client.
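As an illustration of the pattern only (the node name and subcommands below
are placeholders; each client wraps the corresponding tool's own CLI):
```
utilscli ovs-client node42 ovs-vsctl show
utilscli ipmi-client node42 ipmitool sensor list
```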

View File

@ -1,20 +1,24 @@
# Etcdctl Utility Container
## Prerequisites: Deploy Airship in a Bottle (AIAB)
To get started, deploy Airship and OpenStack Helm (OSH).
Execute the following in a fresh Ubuntu 16.04 VM having these minimum requirements:
* 4 vCPU
* 20 GB RAM
* 32 GB disk storage
1. Add the following entries to `/etc/sudoers`.
```
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
```
2. Install the latest versions of Git, CA Certs, and Make if necessary.
```bash
set -xe
sudo apt-get update
sudo apt-get install --no-install-recommends -y \
@ -29,9 +33,9 @@ uuid-runtime
## Deploy Airship in a Bottle (AIAB)
Deploy Airship in a Bottle (AIAB), which deploys the etcdctl-utility pod.
```bash
sudo -i
mkdir -p root/deploy && cd "$_"
git clone https://opendev.org/airship/treasuremap
@ -41,14 +45,14 @@ cd /root/deploy/treasuremap/tools/deployment/aiab \
## Usage and Test
Get into the etcdctl-utility pod using `kubectl exec`.
Perform an operation as in the following example.
```
kubectl exec -it <POD_NAME> -n utility -- /bin/bash
```
Example:
```
utilscli etcdctl member list
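# Additional read-only checks follow the same pattern (illustrative only;
# etcdctl v3 syntax is assumed here):
utilscli etcdctl endpoint health
utilscli etcdctl endpoint status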

View File

@ -1,34 +1,38 @@
# Mysqlclient-utility Container
This utility container allows Operations personnel to access MariaDB pods
remotely to perform database functions. Authorized users in UCP Keystone
RBAC will be able to run queries through the `utilscli` helper.
## Usage
Get into the utility pod using `kubectl exec`.
```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
## Testing Connectivity to MariaDB (Optional)
1. Find the MariaDB pod and its corresponding IP.
```
kubectl get pods --all-namespaces | grep -i mariadb-server | awk '{print $1,$2}' \
| while read a b ; do kubectl get pod $b -n $a -o wide
done
```
2. Connect to the indicated pod by providing the arguments
specified for the CLI as shown below.
```
kubectl exec <POD_NAME> -it -n utility -- mysql -h <IP> -u root -p<PASSWORD> \
-e 'show databases;'
```
The output should resemble the following.
```
+--------------------+
| Database           |
+--------------------+
@ -45,3 +49,4 @@ done
| nova_cell0         |
| performance_schema |
+--------------------+
```
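Any ad-hoc read-only query can be passed the same way; for example (same
placeholders as above):
```
kubectl exec <POD_NAME> -it -n utility -- mysql -h <IP> -u root -p<PASSWORD> -e 'select version();'
```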

View File

@ -1,24 +1,30 @@
# OpenStack-utility Container
The utility container for OpenStack shall enable Operations to access the
command set for Compute, Network, Identity, Image, Block Storage, and
Queueing service APIs together from within a single shell with a uniform
command structure. The access to OpenStack shall be controlled through an
OpenStack RBAC role assigned to the user. The user will have to set
the OpenStack environment (openrc) in the utility container to access the
OpenStack CLIs. The generic environment file will be placed in the utility
container with common settings except username, password, and project_ID.
The user needs to specify these parameters using command options.
## Usage
Get into the utility pod using `kubectl exec`.
Perform an operation as in the following example.
Have your password ready for the CLI commands below.
```
kubectl exec -it <POD_NAME> -n utility /bin/bash
```
Example:
```bash
utilscli openstack server list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
--os-project-name <PROJECT_NAME>
utilscli openstack user list --os-username <USER_NAME> --os-domain-name <DOMAIN_NAME> \
--os-project-name <PROJECT_NAME>
```
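As a convenience, the same parameters can typically be supplied through the
standard OpenStack client environment variables instead of per-command
options; a minimal sketch, assuming the wrapper passes the environment through
to the openstack client (all values are placeholders):
```bash
export OS_USERNAME=<USER_NAME>
export OS_PASSWORD=<PASSWORD>
export OS_USER_DOMAIN_NAME=<DOMAIN_NAME>
export OS_PROJECT_DOMAIN_NAME=<DOMAIN_NAME>
export OS_PROJECT_NAME=<PROJECT_NAME>
utilscli openstack server list
```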

View File

@ -1,13 +1,13 @@
# Jump Host Installation
This procedure installs the Kubernetes client and corresponding dependencies,
enabling remote access to the Kubernetes cluster. The procedure also creates
a generic `kubectl` configuration file having the appropriate attributes.
This revision covers the implementation described in [k8s-keystone-auth](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-keystone-webhook-authenticator-and-authorizer.md#new-kubectl-clients-from-v1110-and-later).
## 1. Prerequisites
* Ubuntu OS version 14.x or higher
* Connectivity to the Internet
@ -16,43 +16,13 @@ https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-ke
## 2. Installation
### 2.1 Clone Porthole Main Project
$git clone https://review.opendev.org/airship/porthole
Cloning into 'porthole'...
remote: Counting objects: 362, done
remote: Finding sources: 100% (362/362)
remote: Total 362 (delta 185), reused 311 (delta 185)
Receiving objects: 100% (362/362), 98.30 KiB | 0 bytes/s, done.
Resolving deltas: 100% (185/185), done.
Checking connectivity... done.
### 2.2 Pull PatchSet (optional)
$cd porthole
$git pull https://review.opendev.org/airship/porthole refs/changes/92/674892/[latest change set]
remote: Counting objects: 10, done
remote: Finding sources: 100% (8/8)
remote: Total 8 (delta 2), reused 7 (delta 2)
Unpacking objects: 100% (8/8), done.
From https://review.opendev.org/airship/porthole
branch refs/changes/92/674892/9 -> FETCH_HEAD
Merge made by the 'recursive' strategy.
jmphost/README.md | 130 ++++++++++++++++++++++++++++++++++++++++
jmphost/funs_uc.sh | 57 ++++++++++++++++++++++++++++++++++++++++
jmphost/setup-access.sh | 132 ++++++++++++++++++++++++++++++++++++++++
zuul.d/jmphost-utility.yaml | 35 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 354 insertions(+)
create mode 100644 jmphost/README.md
create mode 100755 jmphost/funs_uc.sh
create mode 100755 jmphost/setup-access.sh
create mode 100644 zuul.d/jmphost-utility.yaml
### 2.3 Run Setup
$cd $porthole
$sudo -s
$cd jmphost
$./setup-access.sh "site" "userid" "namespace"
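For example, with purely illustrative values for the site, user ID, and
namespace (the `utility` namespace is used elsewhere in this document):
$./setup-access.sh site1 jsmith utility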
@ -131,16 +101,20 @@ https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-ke
args:
- "--keystone-url=https://<FQDN TO UCP KEYSTONE>/v3"
## 3. Validation
To test, perform these steps.
1. Log out and log back in as the user.
2. Update the configuration file with the user's credentials.
   * Replace *"OS_USERNAME"* and *"OS_PASSWORD"* with UCP Keystone
     credentials.
   * Set the *"OS_PROJECT_NAME"* value accordingly.
### 3.1 List Pods
$kubectl get pods -n utility
@ -152,7 +126,7 @@ For testing purposes:
clcp-ucp-ceph-utility-config-ceph-ns-key-generator-pvfcl 0/1 Completed 0 4h12m
clcp-ucp-ceph-utility-config-test 0/1 Completed 0 4h12m
### 3.2 Execute into the Pod
$kubectl exec -it [pod-name] -n utility /bin/bash
@ -160,5 +134,6 @@ For testing purposes:
command terminated with exit code 126
The "permission denied" error is expected in this case because the user ID
entered in the configuration file is not a member of the UCP Keystone RBAC
role that permits executing into the pod.