Editorial changes to README.md files

Fixing some formatting and correcting markdown errors for readability.

Change-Id: I6f21918338a539c1c8bf2048533b3b0a823fd2ee

This commit is contained in:
parent
f639e881df
commit
cd3df1b85c

README.md

@@ -1,33 +1,31 @@
# Utility Containers

Utility containers give Operations staff an interface to an Airship
environment that enables them to perform routine operations and
troubleshooting activities. They support Airship environments without
exposing secrets and credentials, while at the same time restricting
access to the actual containers.

## Prerequisites

Deploy OSH-AIO (OpenStack-Helm All-in-One).

## System Requirements

The recommended minimum system requirements for a full deployment are:

* 16 GB RAM
* 8 Cores
* 48 GB HDD

## Installation

1. Add the following entries to `/etc/sudoers`.

        root ALL=(ALL) NOPASSWD: ALL
        ubuntu ALL=(ALL) NOPASSWD: ALL

2. Install the latest versions of Git, CA Certs, and Make if necessary.

        sudo apt-get update
        sudo apt-get dist-upgrade -y

@@ -41,105 +39,120 @@ The recommended minimum system requirements for a full deployment are:
        uuid-runtime \
        bc

3. Clone the OpenStack-Helm repositories.

        git clone https://git.openstack.org/openstack/openstack-helm-infra.git
        git clone https://git.openstack.org/openstack/openstack-helm.git

4. Configure proxies.

    To deploy OpenStack-Helm behind corporate proxy servers, add the following
    entries to `openstack-helm-infra/tools/gate/devel/local-vars.yaml`.

        proxy:
          http: http://username:password@host:port
          https: https://username:password@host:port
          noproxy: 127.0.0.1,localhost,172.17.0.1,.svc.cluster.local

    Add the address of the Kubernetes API, `172.17.0.1`, and `.svc.cluster.local` to
    your `no_proxy` and `NO_PROXY` environment variables.

        export no_proxy=${no_proxy},172.17.0.1,.svc.cluster.local
        export NO_PROXY=${NO_PROXY},172.17.0.1,.svc.cluster.local

5. Deploy Kubernetes and Helm.

        cd openstack-helm
        ./tools/deployment/developer/common/010-deploy-k8s.sh

    Edit `/etc/resolv.conf` and remove the DNS nameserver entry (`nameserver 10.96.0.10`);
    the Python setup client fails if this entry is present.

6. Set up the clients on the host and assemble the charts.

        ./tools/deployment/developer/common/020-setup-client.sh

    Add the DNS nameservers back to `/etc/resolv.conf` so that the Keystone URLs resolve.

7. Deploy the ingress controller.

        ./tools/deployment/developer/common/030-ingress.sh

8. Deploy Ceph.

        ./tools/deployment/developer/ceph/040-ceph.sh

9. Activate the namespace to be able to use Ceph.

        ./tools/deployment/developer/ceph/045-ceph-ns-activate.sh

10. Deploy Keystone.

        ./tools/deployment/developer/ceph/080-keystone.sh

11. Deploy Heat.

        ./tools/deployment/developer/ceph/090-heat.sh

12. Deploy Horizon.

        ./tools/deployment/developer/ceph/100-horizon.sh

13. Deploy Glance.

        ./tools/deployment/developer/ceph/120-glance.sh

14. Deploy Cinder.

        ./tools/deployment/developer/ceph/130-cinder.sh

15. Deploy LibVirt.

        ./tools/deployment/developer/ceph/150-libvirt.sh

16. Deploy the compute kit (Nova and Neutron).

        ./tools/deployment/developer/ceph/160-compute-kit.sh

17. To run further commands from the CLI manually, execute the following
    to set up authentication credentials (see the verification sketch after this list).

        export OS_CLOUD=openstack_helm

18. Clone the Porthole repository into the openstack-helm project.

        git clone https://opendev.org/airship/porthole.git

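Before continuing, it can help to confirm that the deployment is functional.
The commands below are not part of the upstream scripts; they are a minimal
sketch that assumes the credentials exported in step 17:

    # The Kubernetes node should report Ready
    kubectl get nodes

    # Keystone, Glance, Cinder, Nova, Neutron, etc. should be registered
    openstack service list
    openstack endpoint list
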
## To deploy utility pods

1. Add the local chart repository and make the charts (see the verification sketch after this list):

        cd porthole
        helm repo add <chartname> http://localhost:8879/charts
        make all

2. Deploy `Ceph-utility`.

        ./tools/deployment/utilities/010-ceph-utility.sh

3. Deploy `Compute-utility`.

        ./tools/deployment/utilities/020-compute-utility.sh

4. Deploy `Etcdctl-utility`.

        ./tools/deployment/utilities/030-etcdctl-utility.sh

5. Deploy `Mysqlclient-utility`.

        ./tools/deployment/utilities/040-Mysqlclient-utility.sh

6. Deploy `Openstack-utility`.

        ./tools/deployment/utilities/050-openstack-utility.sh

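Once these scripts complete, a quick check along the following lines can
confirm the result. This is a sketch rather than part of the documented
procedure, and it assumes the utility charts deploy into the `utility`
namespace; adjust if your environment differs:

    # The chart repository alias added in step 1 should be listed
    helm repo list

    # Each *-utility pod should reach the Running state
    kubectl get pods -n utility
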
## NOTE

The PostgreSQL utility container is deployed as a part of Airship-in-a-Bottle (AIAB).
To deploy and test `postgresql-utility`, see the
[PostgreSQL README](https://opendev.org/airship/porthole/src/branch/master/images/postgresql-utility/README.md).

images/postgresql-utility/README.md

@@ -4,14 +4,14 @@

## Installation

1. Add the following entries to `/etc/sudoers` (see the validation sketch after this list).

   ```
   root ALL=(ALL) NOPASSWD: ALL
   ubuntu ALL=(ALL) NOPASSWD: ALL
   ```

2. Install the latest versions of Git, CA Certs, and Make if necessary.

   ```
   set -xe

@@ -27,20 +27,20 @@ curl \
   uuid-runtime
   ```

3. Deploy Porthole.

   ```
   git clone https://opendev.org/airship/porthole
   ```

4. Modify the test case `test-postgresqlutility-running.yaml`.

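As a quick aside for step 1: editing `/etc/sudoers` through `visudo` catches
syntax errors before the file is saved. This is an optional sketch, not part
of the documented procedure:

```
# Open /etc/sudoers with locking and syntax checking on save
sudo visudo

# Or validate an already-edited sudoers file
sudo visudo -c
```
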
## Testing

Get into the utility pod using `kubectl exec`, as sketched below.

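The pod name and namespace in this sketch are placeholders; substitute the
values from your own cluster (for example via `kubectl get pods --all-namespaces`):

```
# Pod name and namespace are illustrative; use the values from your deployment
kubectl exec -it postgresql-utility-0 -n utility -- /bin/bash
```
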
To perform any operation on the UCP PostgreSQL cluster, use the example below.

Example:

```
utilscli psql -h hostname -U username -d database

@@ -56,7 +56,7 @@ Type "help" for help.
postgresdb=# \d
        List of relations
 Schema |       Name       |   Type   |     Owner
--------+------------------+----------+---------------
 public | company          | table    | postgresadmin
 public | role             | table    | postgresadmin
 public | role_role_id_seq | sequence | postgresadmin