Docs: Appendix section - cleanup

As per discussion in the OSA docs summit session, clean up
the installation guide. This fixes typos, makes minor RST markup
changes, and removes passive voice.

Change-Id: I6db03286dddb87218ceb8b6c0ee1ead9705151bf
Authored by Alexandra on 2016-04-28 16:14:47 +10:00; committed by Jesse Pretorius (odyssey4me)
parent 7a82904d61
commit afc9ec9815
5 changed files with 99 additions and 78 deletions

`Home <index.html>`_ OpenStack-Ansible Installation Guide

===============================
Appendix A: Configuration files
===============================

`openstack_user_config.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/openstack_user_config.yml.example>`_

`Home <index.html>`__ OpenStack-Ansible Installation Guide

==========================
Appendix C: Minor upgrades
==========================

Upgrades between minor versions of OpenStack-Ansible are handled by
updating the repository clone to the latest tag, then executing playbooks
against the target hosts.

A minor upgrade typically requires the execution of the following:

#. Change directory into the repository clone root directory:

   .. code-block:: shell-session

      # cd /opt/openstack-ansible

#. Update the git remotes:

   .. code-block:: shell-session

      # git fetch --all

#. Check out the latest tag (the tag below is an example):

   .. code-block:: shell-session

      # git checkout 13.0.1

#. Update all the dependent roles to the latest versions:

   .. code-block:: shell-session

      # ./scripts/bootstrap-ansible.sh

#. Change into the playbooks directory:

   .. code-block:: shell-session

      # cd playbooks

#. Update the hosts:

   .. code-block:: shell-session

      # openstack-ansible setup-hosts.yml

#. Update the infrastructure:

   .. code-block:: shell-session

      # openstack-ansible -e rabbitmq_upgrade=true \
        setup-infrastructure.yml

#. Update all OpenStack services:

   .. code-block:: shell-session

      # openstack-ansible setup-openstack.yml

.. note::

   Scope upgrades to specific OpenStack components by
   executing each of the component playbooks using groups.

For example:

#. Update only the Compute hosts:

   .. code-block:: shell-session

      # openstack-ansible os-nova-install.yml --limit nova_compute

#. Update only a single Compute host:

   .. note::

      Skipping the ``nova-key`` tag is necessary as the keys on
      all Compute hosts will not be gathered.

   .. code-block:: shell-session

      # openstack-ansible os-nova-install.yml --limit <node-name> \
        --skip-tags 'nova-key'

To see which hosts belong to which groups, the
``inventory-manage.py`` script shows all groups and their hosts.
For example:

#. Change directory into the repository clone root directory:

   .. code-block:: shell-session

      # cd /opt/openstack-ansible

#. Show all groups and which hosts belong to them:

   .. code-block:: shell-session

      # ./scripts/inventory-manage.py -G

#. Show all hosts and which groups they belong to:

   .. code-block:: shell-session

      # ./scripts/inventory-manage.py -g

To see which hosts a playbook will execute against, and which
tasks will execute on those hosts:

#. Change directory into the repository clone playbooks directory:

   .. code-block:: shell-session

      # cd /opt/openstack-ansible/playbooks

#. See the hosts in the ``nova_compute`` group which a playbook executes against:

   .. code-block:: shell-session

      # openstack-ansible os-nova-install.yml --limit nova_compute \
        --list-hosts

#. See the tasks which will be executed on hosts in the ``nova_compute`` group:

   .. code-block:: shell-session

      # openstack-ansible os-nova-install.yml --limit nova_compute \
        --list-tasks

`Home <index.html>`__ OpenStack-Ansible Installation Guide

=========================================
Appendix E: Using PLUMgrid Neutron plugin
=========================================

Installing source and host networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Clone the PLUMgrid ansible repository under the ``/opt/`` directory:
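
   For example, a minimal sketch of the clone command (the repository URL
   and the use of a ``TAG`` branch argument are assumptions, not taken
   from this guide):

   .. code-block:: shell-session

      # git clone -b TAG https://github.com/plumgrid/plumgrid-ansible.git \
        /opt/plumgrid-ansible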

   Replace ``TAG`` with the current stable release tag.

#. PLUMgrid will take over networking for the entire cluster. The
   bridges ``br-vxlan`` and ``br-vlan`` only need to be present to prevent
   the relevant containers on infra hosts from erroring out. They do not
   need to be attached to any host interface or a valid network.

#. PLUMgrid requires two networks: a `Management` and a `Fabric` network.
   Management is typically shared via the standard ``br-mgmt``, and Fabric
   must be specified in the PLUMgrid configuration file described below.
   The Fabric interface must be untagged and unbridged.

Neutron configurations
~~~~~~~~~~~~~~~~~~~~~~

To set up the neutron configuration to install PLUMgrid as the
core neutron plugin, create a user space variable file
``/etc/openstack_deploy/user_pg_neutron.yml`` and insert the following
parameters.

#. Set the ``neutron_plugin_type`` parameter to ``plumgrid``:

   .. code-block:: yaml

      # Neutron Plugins
      neutron_plugin_type: plumgrid

#. In the same file, disable the installation of unnecessary ``neutron-agents``
   in the ``neutron_services`` dictionary by setting their ``service_en``
   parameters to ``False``:

   .. code-block:: yaml

      neutron_vpnaas: False

PLUMgrid configurations
~~~~~~~~~~~~~~~~~~~~~~~

On the deployment host, create a PLUMgrid user variables file using the sample in
``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy it to
``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
following parameters.

#. Replace ``PG_REPO_HOST`` with a valid repo URL hosting PLUMgrid
   packages:

   .. code-block:: yaml

      plumgrid_repo: PG_REPO_HOST

#. Replace ``INFRA_IPs`` with comma-separated Infrastructure Node IPs, and
   ``PG_VIP`` with an unallocated IP on the management network. This will
   be used to access the PLUMgrid UI:

   .. code-block:: yaml

      pg_vip: PG_VIP

#. Replace ``FABRIC_IFC`` with the name of the interface that will be used
   for PLUMgrid Fabric.

   .. note::

      PLUMgrid Fabric must be an untagged, unbridged, raw interface such as ``eth0``.

   .. code-block:: yaml

      fabric_interface: FABRIC_IFC

#. Fill in the ``fabric_ifc_override`` and ``mgmt_override`` dicts with
   node ``hostname: interface_name`` pairs to override the default interface
   names, as shown in the sketch below.
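
   A minimal sketch, assuming hypothetical node hostnames (``node1``) and
   interface names:

   .. code-block:: yaml

      fabric_ifc_override:
        node1: eth1

      mgmt_override:
        node1: eth2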

#. Obtain a PLUMgrid license file, rename it to ``pg_license``, and place it under
   ``/var/lib/plumgrid/pg_license`` on the deployment host.

Gateway hosts
~~~~~~~~~~~~~

PLUMgrid-enabled OpenStack clusters contain one or more gateway nodes
that are used for providing connectivity with external resources, such as
external networks, bare-metal servers, or network service
appliances. In addition to the `Management` and `Fabric` networks required
by PLUMgrid nodes, gateways require dedicated external interfaces referred
to as ``gateway_devs`` in the configuration files.

#. Add a ``gateway_hosts`` section to
   ``/etc/openstack_deploy/openstack_user_config.yml``:

   .. code-block:: yaml
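
      # Sketch only: the gateway host names are hypothetical placeholders,
      # not taken from this guide.
      gateway_hosts:
        gateway1:
          ip: GW01_IP_ADDRESS
        gateway2:
          ip: GW02_IP_ADDRESS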

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container management
   bridge on each gateway host.

#. Add a ``gateway_hosts`` section to the end of the PLUMgrid ``user_pg_vars.yml``
   file:

   .. note::

      This must contain hostnames and ``gateway_dev`` names for each
      gateway in the cluster.

   .. code-block:: yaml
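
      # Sketch only: the hostname, the gateway device name, and the exact
      # layout of this section are assumptions, not taken from this guide.
      gateway_hosts:
        - hostname: gateway1
          gateway_devs:
            - eth3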

Installation
~~~~~~~~~~~~

#. Run the PLUMgrid playbooks (do this before the ``openstack-setup.yml``
   playbook is run):

   .. code-block:: shell-session

      # cd /opt/plumgrid-ansible/plumgrid_playbooks
      # openstack-ansible plumgrid_all.yml

.. note::

   Contact PLUMgrid (info@plumgrid.com) for an Installation Pack. This
   includes a full trial commercial license, packages, deployment documentation,
   and automation scripts for the entire workflow described above.
--------------

`Home <index.html>`_ OpenStack-Ansible Installation Guide

================================
Appendix B: Additional resources
================================

The following Ansible resources are useful to reference:

`Home <index.html>`__ OpenStack-Ansible Installation Guide

===========================
Appendix D: Tips and tricks
===========================

Ansible forks
~~~~~~~~~~~~~

The default ``MaxSessions`` setting for the OpenSSH daemon is 10. Each Ansible
fork makes use of a session. By default, Ansible sets the number of forks to 5.
However, you can increase the number of forks used in order to improve deployment
performance in large environments.

This may be done on a permanent basis by adding the `forks`_ configuration
entry in ``ansible.cfg``, or for a particular playbook execution by using the
``--forks`` CLI parameter.
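
For example, a minimal sketch of the permanent approach (the value ``20`` is
an arbitrary illustration, not a recommendation from this guide):

.. code-block:: ini

   # ansible.cfg used by the deployment
   [defaults]
   forks = 20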