First draft of the multi virtual undercloud docs

This is the first draft for the multi virtual undercloud project
documentation.
In addition to the README, it contains the two scripts and the network
topology picture.

Change-Id: Ibd5ea41593d79e09584109973b8c4ce051704ee0
Raoul Scarazzini 2017-03-08 19:09:16 +01:00
parent 3d4edcaab6
commit 87aac5c2e5
4 changed files with 335 additions and 0 deletions

@@ -0,0 +1,174 @@
Multi Virtual Undercloud
========================
This document describes a way to deploy multiple virtual underclouds on the same
host. This is mainly meant for environments in which you want to manage multiple
baremetal overclouds without dedicating one baremetal machine to each undercloud
you deploy.

Requirements
------------
**Physical switches**

The switch(es) must support VLAN tagging and all the ports must be configured in
trunk mode, so that the dedicated network interface on the physical host (in the
examples the secondary interface, eth1) is able to offer PXE and DHCP to all the
overcloud machines via the undercloud virtual machine's bridged interface.
**Host hardware**

The main requirement to make this kind of setup work is a host powerful enough
to run virtual machines with at least 16GB of RAM and 8 CPUs each. The more
power you have, the more undercloud machines you can spawn without impacting
performance.
**Host network topology**

The host is reachable via ssh from the machine launching quickstart and is
configured with two main network interfaces:

- **eth0**: bridged on **br0**, pointing to the LAN (each undercloud will own an
  IP address on it, to be reachable via ssh);
- **eth1**: connected to the dedicated switch that supports all the VLANs that
  will be used in the deployment.

On top of eth1, two VLAN interfaces are created for each undercloud virtual
machine, with associated bridges:
- **Control plane network bridge** (e.g. br2100) built over a VLAN interface
  (e.g. eth1.2100), which will be eth1 on the undercloud virtual machine, used
  by TripleO as br-ctlplane;
- **External network bridge** (e.g. br2105) built over a VLAN interface (e.g.
  eth1.2105), which will be eth2 on the undercloud virtual machine, used by
  TripleO as the external network device.

![network-topology](./multi-virtual-undercloud_network-topology.png "Multi Virtual Undercloud - Network Topology")
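As a sketch, the host-side interfaces for one undercloud could be created with
iproute2 like this (VLAN IDs 2100 and 2105 are just the example values used
above; on a real host these would normally be persistent ifcfg files rather
than one-shot commands):

```shell
# Create the ctlplane and external VLAN interfaces and their bridges
# for one undercloud on the host (requires root; IDs are illustrative).
for VLAN in 2100 2105; do
    ip link add link eth1 name eth1.$VLAN type vlan id $VLAN
    ip link add br$VLAN type bridge
    ip link set eth1.$VLAN master br$VLAN
    ip link set eth1.$VLAN up
    ip link set br$VLAN up
done
```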

Quickstart configuration
------------------------
The virtual undercloud machine is treated as a baremetal one: the Quickstart
command relies on the baremetal undercloud role and its playbook. This means
that any playbook similar to [baremetal-undercloud.yml](https://github.com/openstack/tripleo-quickstart-extras/blob/master/playbooks/baremetal-undercloud.yml "Baremetal undercloud playbook") should be okay.

The configuration file has two specific sections that need attention:

- Additional interface for external network to route overcloud traffic:
```yaml
undercloud_networks:
external:
address: 172.20.0.254
netmask: 255.255.255.0
device_type: ethernet
device_name: eth2
```
  **NOTE:** in this configuration eth2 also acts as the default router for
  the external network.
- Baremetal provisioning script, a helper that drives the
  [multi-virtual-undercloud.sh](./multi-virtual-undercloud.sh) script on the <VIRTHOST>:
```yaml
baremetal_provisioning_script: "/path/to/multi-virtual-undercloud-provisioner.sh <VIRTHOST> <DISTRO> <UNDERCLOUD-NAME> <UNDERCLOUD IP> <UNDERCLOUD NETMASK> <UNDERCLOUD GATEWAY> <CTLPLANE VLAN> <EXTERNAL NETWORK VLAN>"
```
The supported parameters, with the exception of VIRTHOST, are the same ones
that are passed to the script that lives (and runs) on the VIRTHOST,
*multi-virtual-undercloud.sh*.
This helper script launches the remote command on the VIRTHOST and waits until
the machine becomes reachable via ssh before proceeding.

The multi virtual undercloud script
-----------------------------------
The [multi-virtual-undercloud.sh](./multi-virtual-undercloud.sh) script is
placed on the VIRTHOST and needs these parameters:
1. **DISTRO**: this must be the name (without extension) of one of the images
present inside the */images* dir on the VIRTHOST;
2. **VMNAME**: the name of the undercloud virtual machine (the name that will be
used by libvirt);
3. **VMETH0IP**: IP of the virtual undercloud primary interface, to which
   quickstart (and users) will connect via ssh;
4. **VMETH0NM**: Netmask of the virtual undercloud primary interface;
5. **VMETH0GW**: Gateway of the virtual undercloud primary interface;
6. **VMSSHKEY**: Public key to be enabled on the virtual undercloud;
7. **UCVLAN**: VLAN of the overcloud's ctlplane network;
8. **UCEXTVLAN**: VLAN of the overcloud's external network;
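For example, an invocation on the VIRTHOST could look like this (the IP,
netmask, gateway, key path, VM name and VLAN IDs below are purely
illustrative values, not taken from a real environment):

```console
[root@VIRTHOST ~]# /root/multi-virtual-undercloud.sh \
    centos-7 \
    undercloud-2100 \
    192.168.1.50 255.255.255.0 192.168.1.254 \
    /root/undercloud-2100_key.pub \
    2100 2105
```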
The script's actions are basically:

1. Destroy and undefine any existing machine with the same name as the one we
   want to create;
2. Prepare the image on which the virtual undercloud will be created, by copying
   the available distro image and making it ready for the TripleO installation:
   the script fixes its size, network interfaces, packages and ssh keys;
3. Create and launch the virtual undercloud machine.

**Note**: on the VIRTHOST there must be an */images* directory containing
images suitable for the deploy, with a directory structure like this:
```console
[root@VIRTHOST ~]# ls -l /images/
total 1898320
lrwxrwxrwx. 1 root root 34 14 feb 09.20 centos-7.qcow2 -> CentOS-7-x86_64-GenericCloud.qcow2
-rw-r--r--. 1 root root 1361182720 15 feb 10.57 CentOS-7-x86_64-GenericCloud.qcow2
lrwxrwxrwx. 1 root root 36 14 feb 09.20 rhel-7.qcow2 -> rhel-guest-image-7.3-33.x86_64.qcow2
-rw-r--r--. 1 root root 582695936 19 ott 18.44 rhel-guest-image-7.3-33.x86_64.qcow2
```
This helps when updating the images: config files can keep pointing to
*centos-7* and, in case of an update, the symlink is simply made to point to
the newer image.
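The update flow can be sketched as follows (the scratch directory and image
file names are made up for the demo; on the VIRTHOST this would happen in
*/images*):

```bash
# Demo of the symlink rotation in a scratch dir (names are examples).
IMAGESDIR=$(mktemp -d)
touch "$IMAGESDIR/CentOS-7-x86_64-GenericCloud-old.qcow2" \
      "$IMAGESDIR/CentOS-7-x86_64-GenericCloud-new.qcow2"
# Config files point at the stable name centos-7.qcow2...
ln -s CentOS-7-x86_64-GenericCloud-old.qcow2 "$IMAGESDIR/centos-7.qcow2"
# ...so an image update is just repointing the symlink (-f replaces it).
ln -sfn CentOS-7-x86_64-GenericCloud-new.qcow2 "$IMAGESDIR/centos-7.qcow2"
readlink "$IMAGESDIR/centos-7.qcow2"   # prints CentOS-7-x86_64-GenericCloud-new.qcow2
```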

Quickstart command
------------------
A typical invocation of the TripleO Quickstart command looks like this:
```console
/path/to/tripleo-quickstart/quickstart.sh \
--bootstrap \
--ansible-debug \
--no-clone \
--playbook baremetal-undercloud.yml \
--working-dir /path/to/workdir \
--config /path/to/config.yml \
--release $RELEASE \
--tags "all" \
$VIRTHOST
```
So nothing differs from a normal quickstart deploy command line; the difference
here is made by the config.yml described above, with its provisioning script.

Conclusions
-----------
This approach is useful for testing multiple environments with TripleO, for
three reasons:

* It is *fast*: installing the undercloud takes the same time, but provisioning
  it takes less, since you don't have to wait for a physical undercloud to be
  provisioned;
* It is *isolated*: using VLANs to separate the traffic keeps each environment
  completely isolated from the others;
* It is *reliable*: you can keep the undercloud on shared storage and consider
  making the undercloud VM highly available, live migrating it with libvirt,
  pacemaker, and so on.

There are no macroscopic cons, except for the initial configuration of the
VIRTHOST, which is done only once, at the beginning.

License
-------

GPL

Author Information
------------------

Raoul Scarazzini <rasca@redhat.com>

@@ -0,0 +1,46 @@
#!/bin/bash
set -eux
VIRTHOST=$1
DISTRO=$2
VMNAME=$3
VMETH0IP=$4
VMETH0NM=$5
VMETH0GW=$6
VMSSHKEY=$7
UCVLAN=$8
UCEXTVLAN=$9
# Poll the ssh port until the machine reaches the wanted status ("up" or not).
# A connect timeout is passed to nc so a non-answering host cannot block forever.
function wait_machine_status {
  UNDERCLOUD=$1
  STATUS=$2
  while true
  do
    nc -w 5 $UNDERCLOUD 22 < /dev/null &> /dev/null
    NCSTATUS=$?
    if [ "$STATUS" == "up" ]
    then
      [ $NCSTATUS -eq 0 ] && break || (sleep 5; echo -n ".")
    else
      [ $NCSTATUS -ne 0 ] && break || (sleep 5; echo -n ".")
    fi
  done
}
# Copying public key on VIRTHOST
echo -n "$(date) - Copying $VMSSHKEY on $VIRTHOST: "
scp $VMSSHKEY root@$VIRTHOST:$VMNAME\_key.pub
echo "Done."
# Provisioning the machine
echo -n "$(date) - Starting provision of $VMNAME ($VMETH0IP) on $VIRTHOST: "
ssh root@$VIRTHOST /root/multi-virtual-undercloud.sh $DISTRO $VMNAME $VMETH0IP $VMETH0NM $VMETH0GW $VMNAME\_key.pub $UCVLAN $UCEXTVLAN
echo "Done."
set +e
# Wait for machine to come up
echo -n "$(date) - Waiting for $VMNAME to come up after provisioning: "
wait_machine_status $VMETH0IP "up"
echo "Done."

@@ -0,0 +1,115 @@
#!/bin/bash
set -eux
DISTRO=$1
CLONEFROM=/images/$DISTRO\.qcow2
VMNAME=$2
VMIMG=/vms/$VMNAME\.qcow2
VMIMGCOPY=/vms/ORIG-$VMNAME\.qcow2
VMETH0IP=$3
VMETH0NM=$4
VMETH0GW=$5
VMSSHKEY=$6
VMDISKADD=50G
UCVLAN=$7
UCEXTVLAN=$8
WORKDIR=/tmp/virt-undercloud-$(date +%s)
mkdir -p $WORKDIR
pushd $WORKDIR
# Destroy the machine if it is running
ISRUNNING=$(virsh list | grep $VMNAME || true)
[ "x$ISRUNNING" != "x" ] && virsh destroy $VMNAME
# Undefine the vm if it is defined
ISDEFINED=$(virsh list --all | grep $VMNAME || true)
[ "x$ISDEFINED" != "x" ] && virsh undefine $VMNAME
# Copy qcow2 base image
cp -v $CLONEFROM $VMIMG
echo "$(date) - Adding $VMDISKADD to $VMIMG: "
qemu-img resize $VMIMG +$VMDISKADD
echo "$(date) - Resizing filesystem of $VMIMG: "
cp -v $VMIMG $VMIMGCOPY
virt-resize --expand /dev/sda1 $VMIMGCOPY $VMIMG
rm -fv $VMIMGCOPY
echo "$(date) - Checking status of $VMIMG: "
qemu-img info $VMIMG
virt-filesystems --long -h --all -a $VMIMG
cat > ifcfg-eth0 <<EOF
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=$VMETH0IP
NETMASK=$VMETH0NM
GATEWAY=$VMETH0GW
PEERDNS=yes
DNS1=8.8.8.8
TYPE=Ethernet
EOF
cat > ifcfg-eth1 <<EOF
NAME=eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
EOF
cat $VMSSHKEY >> ./authorized_keys
case "$DISTRO" in
"centos-7") virt-customize -a $VMIMG \
--root-password password:redhat \
--install openssh-server \
--run-command "xfs_growfs /" \
--run-command "echo 'GRUB_CMDLINE_LINUX=\"console=tty0 crashkernel=auto no_timer_check net.ifnames=0 console=ttyS0,115200n8\"' >> /etc/default/grub" \
--run-command "grubby --update-kernel=ALL --args=net.ifnames=0" \
--run-command "systemctl enable sshd" \
--mkdir /root/.ssh \
--copy-in ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--copy-in ifcfg-eth1:/etc/sysconfig/network-scripts/ \
--copy-in ./authorized_keys:/root/.ssh/ \
--selinux-relabel
;;
"rhel-7") virt-customize -a $VMIMG \
--root-password password:redhat \
--run-command "curl -o rhos-release-latest.noarch.rpm http://rhos-release.virt.bos.redhat.com/repos/rhos-release/rhos-release-latest.noarch.rpm" \
--run-command "rpm -Uvh rhos-release-latest.noarch.rpm" \
--run-command "rhos-release rhel-7.3" \
--install openssh-server \
--run-command "systemctl enable sshd" \
--run-command "rpm -e rhos-release" \
--run-command "sed -i -e '/\[rhelosp-rhel-7.3-server-opt\]/,/^\[/s/enabled=0/enabled=1/' /etc/yum.repos.d/rhos-release-rhel-7.3.repo" \
--mkdir /root/.ssh \
--copy-in ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--copy-in ifcfg-eth1:/etc/sysconfig/network-scripts/ \
--copy-in ./authorized_keys:/root/.ssh/ \
--selinux-relabel
;;
esac
# Deploy the vm
virt-install \
--import \
--name $VMNAME \
--ram 16192 \
--disk path=$VMIMG \
--vcpus 8 \
--os-type linux \
--os-variant generic \
--network bridge=br0 \
--network bridge=br$UCVLAN \
--network bridge=br$UCEXTVLAN \
--graphics none \
--noautoconsole
# Leave the workdir before removing it
popd
rm -rf $WORKDIR

multi-virtual-undercloud_network-topology.png (binary image, 224 KiB) not shown