Utility roles and docs for TripleO
These Ansible roles are a set of useful tools to be used on top of TripleO deployments. They can also be used together with tripleo-quickstart (and tripleo-quickstart-extras).
The documentation of each role is located in the individual role folders. General usage information about tripleo-quickstart can be found in the project documentation.
Using the playbook on an existing TripleO environment
The playbooks can be launched directly from the undercloud machine of the TripleO deployment. The described steps are expected to be run inside the /home/stack directory.
First of all, clone the tripleo-ha-utils repository:
git clone https://github.com/openstack/tripleo-ha-utils
then three environment variables need to be exported, pointing to three files:
export ANSIBLE_CONFIG="/home/stack/ansible.cfg"
export ANSIBLE_INVENTORY="/home/stack/hosts"
export ANSIBLE_SSH_ARGS="-F /home/stack/ssh.config.ansible"
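These exports only apply to the current shell session; to avoid re-exporting them after every login they can optionally be appended to the stack user's ~/.bashrc (a minimal sketch, assuming bash is the login shell):
cat >> /home/stack/.bashrc <<'EOF'
export ANSIBLE_CONFIG="/home/stack/ansible.cfg"
export ANSIBLE_INVENTORY="/home/stack/hosts"
export ANSIBLE_SSH_ARGS="-F /home/stack/ssh.config.ansible"
EOF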
These files are:
ansible.cfg which must contain at least these lines:
[defaults]
roles_path = /home/stack/tripleo-ha-utils/roles
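To check that Ansible actually picks up this configuration (and therefore the roles_path), the effective settings can be inspected; ansible-config is available in Ansible 2.4 and later, while ansible --version works on any version and prints the config file in use:
ansible --version
ansible-config dump --only-changed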
hosts which must be configured depending on the deployed environment, reflecting these sections:
undercloud ansible_host=undercloud ansible_user=stack ansible_private_key_file=/home/stack/.ssh/id_rsa
overcloud-compute-1 ansible_host=overcloud-compute-1 ansible_user=heat-admin ansible_private_key_file=/home/stack/.ssh/id_rsa
overcloud-compute-0 ansible_host=overcloud-compute-0 ansible_user=heat-admin ansible_private_key_file=/home/stack/.ssh/id_rsa
overcloud-controller-2 ansible_host=overcloud-controller-2 ansible_user=heat-admin ansible_private_key_file=/home/stack/.ssh/id_rsa
overcloud-controller-1 ansible_host=overcloud-controller-1 ansible_user=heat-admin ansible_private_key_file=/home/stack/.ssh/id_rsa
overcloud-controller-0 ansible_host=overcloud-controller-0 ansible_user=heat-admin ansible_private_key_file=/home/stack/.ssh/id_rsa
[compute]
overcloud-compute-1
overcloud-compute-0
[undercloud]
undercloud
[overcloud]
overcloud-compute-1
overcloud-compute-0
overcloud-controller-2
overcloud-controller-1
overcloud-controller-0
[controller]
overcloud-controller-2
overcloud-controller-1
overcloud-controller-0
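Once the hosts file is in place, the way Ansible parses it can be checked quickly (ansible-inventory ships with Ansible 2.4 and later; on older versions ansible all --list-hosts gives similar information):
ansible-inventory --graph
ansible all --list-hosts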
ssh.config.ansible which can be generated with these commands:
cat /home/stack/.ssh/id_rsa.pub >> /home/stack/.ssh/authorized_keys
echo -e "Host undercloud\n Hostname 127.0.0.1\n IdentityFile /home/stack/.ssh/id_rsa\n User stack\n StrictHostKeyChecking no\n UserKnownHostsFile=/dev/null\n" > ssh.config.ansible
. /home/stack/stackrc
openstack server list -c Name -c Networks | awk '/ctlplane/ {print $2, $4}' | sed s/ctlplane=//g | while read node; do
    node_name=$(echo $node | cut -f 1 -d " ")
    node_ip=$(echo $node | cut -f 2 -d " ")
    echo -e "Host $node_name\n Hostname $node_ip\n IdentityFile /home/stack/.ssh/id_rsa\n User heat-admin\n StrictHostKeyChecking no\n UserKnownHostsFile=/dev/null\n"
done >> ssh.config.ansible
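Before involving Ansible, the generated configuration can be tested directly with ssh; the host names below come from the inventory above and are just examples:
ssh -F /home/stack/ssh.config.ansible undercloud hostname
ssh -F /home/stack/ssh.config.ansible overcloud-controller-0 hostname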
The ssh.config.ansible file can optionally contain specific per-host connection options, like these:
...
...
Host overcloud-controller-0
ProxyCommand ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o ConnectTimeout=60 -F /home/stack/ssh.config.ansible undercloud -W 192.168.24.16:22
IdentityFile /home/stack/.ssh/id_rsa
User heat-admin
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null
...
...
In this example, to connect to overcloud-controller-0, Ansible will use the undercloud as an SSH proxy, via the ProxyCommand option.
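At this point a quick sanity check of the whole setup (configuration, inventory and SSH settings) can be done with an Ansible ad-hoc ping against all the hosts:
ansible all -m ping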
With this setup in place, it is then possible to launch the playbook:
ansible-playbook /home/stack/tripleo-ha-utils/playbooks/overcloud-instance-ha.yml -e release=newton
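If the run fails, it can help to do a syntax check first or to re-run with increased verbosity; both are standard ansible-playbook options, not something specific to this project:
ansible-playbook --syntax-check /home/stack/tripleo-ha-utils/playbooks/overcloud-instance-ha.yml
ansible-playbook -vvv /home/stack/tripleo-ha-utils/playbooks/overcloud-instance-ha.yml -e release=newton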
Using the playbooks on a tripleo-quickstart provided environment
The tripleo-ha-utils project can be set as a tripleo-quickstart extra requirement, so all the code will be automatically downloaded and made available. Inside the quickstart-extras-requirements.txt file you will need a line pointing to this repository:
echo "https://github.com/openstack/tripleo-ha-utils/#egg=tripleo-ha-utils" >> tripleo-quickstart/quickstart-extras-requirements.txt
Assuming the environment was successfully provisioned by a previous quickstart execution, one of the utility playbooks can be run with a command line like this:
./quickstart.sh \
--retain-inventory \
--teardown none \
--playbook overcloud-instance-ha.yml \
--working-dir /path/to/workdir \
--config /path/to/config.yml \
--release <RELEASE> \
--tags all \
<VIRTHOST HOSTNAME or IP>
Basically this command:
- Keeps the existing environment (by retaining the inventory and all the virtual machines)
- Uses the overcloud-instance-ha.yml playbook
- Uses the same workdir where quickstart was first deployed
- Optionally selects a specific config file
- Specifies the release (mitaka, newton, or “master” for ocata)
- Performs all the tasks in the playbook overcloud-instance-ha.yml
Important note
You might need to export ANSIBLE_SSH_ARGS with the path of the ssh.config.ansible file to make the command work, like this:
export ANSIBLE_SSH_ARGS="-F /path/to/quickstart/workdir/ssh.config.ansible"
License
GPL
Author Information
Raoul Scarazzini rasca@redhat.com