OpenStack-Ansible Multi-Node AIO
################################

:date: 2016-03-09
:tags: rackspace, openstack, ansible
:category: \*openstack, \*nix

About this repository
---------------------

Full OpenStack deployment using a single OnMetal host from the
Rackspace Public Cloud. This is a multi-node installation built from
PXE-booted VMs, which provides an environment that closely matches a
production deployment. The build script will build, kick, and deploy
OpenStack using KVM and OpenStack-Ansible across 12 nodes and one load
balancer, all in a hyper-converged environment.

Process
-------

Create at least one physical host that has public network access and is
running the Ubuntu 14.04 or 16.04 LTS operating system. The system assumes
that you have an unpartitioned device with at least 1TB of storage; however,
you can customize the size of each VM volume by setting the
``${VM_DISK_SIZE}`` option. If you're using Rackspace OnMetal servers, the
drive partitioning will be done for you by detecting the largest
unpartitioned device. If you're deploying on something other than a
Rackspace OnMetal server, you may need to set the ``${DATA_DISK_DEVICE}``
variable accordingly. The playbooks will look for a volume group named
"vg01"; if this volume group exists, no partitioning or setup on the data
disk will take place. To use this process effectively for testing, the host
machine should have at least 32GiB of RAM.

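For example, both options can be exported before the build is started (the
device name and size below are illustrative, not defaults):

.. code-block:: bash

    # Point the build at a specific data disk and shrink each VM volume.
    export DATA_DISK_DEVICE="sdb"   # example device name; match your host
    export VM_DISK_SIZE=92          # per-VM volume size in gigabytes
    bash ./build.sh
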
Physical Host Specs known to work well
--------------------------------------

=========== ======== ============
CPU CORES   MEMORY   DISK SPACE
=========== ======== ============
20          124GB    1.3TB
=========== ======== ============

These specs are covered by the Rackspace OnMetal-IO v1/2 Servers.

When you're ready, run the build script by executing ``bash ./build.sh``. The
build script currently executes a deployment of OpenStack-Ansible using the
master branch. If you want to deploy something other than master, set the
``${OSA_BRANCH}`` variable to any branch, tag, or SHA.

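For example, to deploy a stable branch instead of master (the branch name
below is illustrative):

.. code-block:: bash

    # Deploy a specific OpenStack-Ansible branch, tag, or SHA.
    export OSA_BRANCH="stable/newton"
    bash ./build.sh
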
Post Deployment
---------------

Once deployed, you can use virt-manager to manage the KVM instances on the
host, similar to a DRAC or iLO.

LINUX:
If you're running a Linux system as your workstation, simply install
virt-manager from your package manager and connect to the host via
QEMU/KVM:SSH.

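For example (assuming root SSH access to the host; replace the address below
with your host's IP):

.. code-block:: bash

    # Connect virt-manager on your workstation to the remote KVM host.
    virt-manager --connect qemu+ssh://root@<host-ip>/system
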
OSX:
If you're running a Mac, you can get virt-manager via X11 forwarding to the
host or install it via Brew:
http://stackoverflow.com/questions/3921814/is-there-a-virt-manager-alternative-for-mac-os-x

WINDOWS:
If you're running Windows, you can install virt-viewer from the virt-manager
download site: https://virt-manager.org/download/

Deployment screenshot
^^^^^^^^^^^^^^^^^^^^^

.. image:: screenshots/virt-manager-screenshot.jpeg
   :scale: 50 %
   :alt: Screen shot of virt-manager and deployment in action
   :align: center

Deployments can be accessed and monitored via virt-manager.

Console Access
^^^^^^^^^^^^^^

.. image:: screenshots/console-screenshot.jpeg
   :scale: 50 %
   :alt: Screen shot of virt-manager console
   :align: center

The root password for all VMs is "**secrete**". This password is set within
the pre-seed files under the "Users and Password" section. If you want to
change this password, edit the pre-seed files.

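From the host itself, you can also attach directly to a VM's console with
``virsh`` (assuming the guest exposes a serial console; press ``Ctrl+]`` to
disconnect):

.. code-block:: bash

    # Attach to a VM's serial console; log in as root with the
    # password set in the pre-seed files.
    virsh console "${VM_NAME}"
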
``build.sh`` Options
--------------------

Set to instruct the preseed what the default network is expected to be:
``DEFAULT_NETWORK="${DEFAULT_NETWORK:-eth0}"``

Set the VM disk size in gigabytes:
``VM_DISK_SIZE="${VM_DISK_SIZE:-252}"``

Instruct the system to do all of the required host setup:
``SETUP_HOST=${SETUP_HOST:-true}``

Instruct the system to do all of the required PXE setup:
``SETUP_PXEBOOT=${SETUP_PXEBOOT:-true}``

Instruct the system to do all of the required DHCPD setup:
``SETUP_DHCPD=${SETUP_DHCPD:-true}``

Instruct the system to kick all of the VMs:
``DEPLOY_VMS=${DEPLOY_VMS:-true}``

Instruct the VMs to use the selected image, e.g. ubuntu-16.04-amd64:
``DEFAULT_IMAGE=${DEFAULT_IMAGE:-ubuntu-16.04-amd64}``

Set the OSA branch for this script to deploy:
``OSA_BRANCH=${OSA_BRANCH:-master}``

Instruct the system to deploy OpenStack-Ansible:
``DEPLOY_OSA=${DEPLOY_OSA:-true}``

Instruct the system to pre-configure the environment for running the OSA
playbooks:
``PRE_CONFIG_OSA=${PRE_CONFIG_OSA:-true}``

Instruct the system to run the OSA playbooks; set this to false if you want
to deploy a different OSA-powered cloud yourself:
``RUN_OSA=${RUN_OSA:-true}``

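Because these options are plain environment variables, they can be combined
in a single run. For example, to build and kick the VMs while leaving the
OpenStack deployment for later (values shown are illustrative):

.. code-block:: bash

    # Kick the VMs but skip running the OSA playbooks.
    export OSA_BRANCH="stable/newton"   # example branch
    export RUN_OSA=false
    bash ./build.sh
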
Re-kicking VM(s)
----------------

Re-kicking a VM is as simple as stopping the VM, deleting its logical
volume, creating a new logical volume, and starting the VM. The VM will come
back online, PXE boot, and install the base OS.

.. code-block:: bash

    virsh destroy "${VM_NAME}"
    lvremove -f "vg01/${VM_NAME}"           # -f skips the confirmation prompt
    lvcreate -L 60G vg01 -n "${VM_NAME}"    # recreate an empty 60G volume
    virsh start "${VM_NAME}"                # PXE boots and reinstalls the OS

To re-kick all VMs, the following command can be used on the host machine to
cycle through all found VMs and re-provision them.

.. code-block:: bash

    # Cycle through every running or shut-off VM and re-kick it.
    for VM_NAME in $(virsh list --all | awk '/running/ || /shut/ {print $2}'); do
      virsh destroy "${VM_NAME}" || true    # ignore already-stopped VMs
      lvremove -f "vg01/${VM_NAME}"
      lvcreate -L 60G vg01 -n "${VM_NAME}"
      virsh start "${VM_NAME}"
    done

Rerunning the build script
--------------------------

The build script can be rerun at any time. If you have had a successful run
before and simply want to re-kick everything, removing the VMs and then
executing the build script is recommended, as shown below.

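A minimal sketch of that workflow (this assumes every VM defined on the host
belongs to the MNAIO and can be safely removed):

.. code-block:: bash

    # Stop and undefine every VM, then rebuild from scratch.
    for vm in $(virsh list --all --name); do
      virsh destroy "${vm}" || true   # ignore already-stopped VMs
      virsh undefine "${vm}"
    done
    bash ./build.sh
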
Deploying OpenStack into the environment
----------------------------------------

While the build script will deploy OpenStack, you can choose to run this step
manually. To run a basic deploy using a given branch, use the following
snippet. Set the Ansible variable ``osa_branch``, or export the
``OSA_BRANCH`` environment variable when using the build.sh script.

.. code-block:: bash

    ansible-playbook -i playbooks/inventory playbooks/deploy-osa.yml -vv -e 'osa_branch=master'

Snapshotting an environment before major testing
------------------------------------------------

Taking a snapshot of all of the VMs before doing major testing is wise, as it
gives you a restore point without having to re-kick the cloud. You can do
this using some basic ``virsh`` commands and a little bash.

.. code-block:: bash

    # Create a named snapshot of every VM.
    for instance in $(virsh list --all --name); do
      virsh snapshot-create-as --atomic --name "${instance}-kilo-snap" --description "saved kilo state before liberty upgrade" "${instance}"
    done

Once the previous command is complete, you'll have a snapshot of each of your
infrastructure hosts. These snapshots can be used to restore state to a
previous point if needed. To restore the infrastructure hosts to a previous
point using your snapshots, execute a simple ``virsh`` command, or use the
following bash loop to restore everything to a known point.

.. code-block:: bash

    # Revert every VM to its named snapshot and leave it running.
    for instance in $(virsh list --all --name); do
      virsh snapshot-revert --snapshotname "${instance}-kilo-snap" --running "${instance}"
    done

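When you're finished testing, the snapshots can be discarded with a similar
loop (``virsh snapshot-delete`` removes a named snapshot; the name below
matches the one created above):

.. code-block:: bash

    # Remove the named snapshot from every VM once it's no longer needed.
    for instance in $(virsh list --all --name); do
      virsh snapshot-delete --snapshotname "${instance}-kilo-snap" "${instance}"
    done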