[DOCS] Moving the draft install guide to the install-guide folder
This patch removes the old install guide. It is still accessible in the
Mitaka section.

Change-Id: I47ce62523edd14a1bb20deba3f40e1e0b2df223c
Implements: blueprint osa-install-guide-overhaul
@ -1,22 +0,0 @@
=================================
Appendix A: Configuration files
=================================

`openstack_user_config.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/openstack_user_config.yml.example>`_

`user_variables.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/user_variables.yml>`_

`user_secrets.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/user_secrets.yml>`_

`swift.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/conf.d/swift.yml.example>`_

`extra_container.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/env.d/extra_container.yml.example>`_

--------------

.. include:: navigation.txt
@ -1,192 +0,0 @@
==================================================
Appendix C: Customizing host and service layouts
==================================================

Understanding the default layout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The default layout of containers and services in OpenStack-Ansible is driven
by the ``/etc/openstack_deploy/openstack_user_config.yml`` file and the
contents of both the ``/etc/openstack_deploy/conf.d/`` and
``/etc/openstack_deploy/env.d/`` directories. Use these sources to define the
group mappings that the playbooks use to target hosts and containers for the
roles used in the deployment.

Conceptually, these can be thought of as mappings in two directions. You
define host groups, which gather the target hosts into inventory groups,
through the ``/etc/openstack_deploy/openstack_user_config.yml`` file and the
contents of the ``/etc/openstack_deploy/conf.d/`` directory. You define
container groups, which map the service components to be deployed onto
host groups, through files in the ``/etc/openstack_deploy/env.d/``
directory.

To customize the layout of components for your deployment, modify the
host groups and container groups to represent the layout you desire before
running the installation playbooks.

Understanding host groups
-------------------------

As part of the initial configuration, each target host appears either in the
``/etc/openstack_deploy/openstack_user_config.yml`` file or in files within
the ``/etc/openstack_deploy/conf.d/`` directory. The format used for files in
``conf.d/`` is identical to the syntax used in the
``openstack_user_config.yml`` file. These hosts are listed under one or more
headings, such as ``shared-infra_hosts`` or ``storage_hosts``, which serve as
Ansible group mappings. We treat these groupings as mappings to the physical
hosts.

The example file ``haproxy.yml.example`` in the ``conf.d/`` directory provides
a simple example of defining a host group (``haproxy_hosts``) with two hosts
(``infra1`` and ``infra2``).
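
As a sketch of that format (the IP addresses here are placeholders, not
values taken from the example file), such a host-group definition looks
like:

.. code-block:: yaml

   haproxy_hosts:
     infra1:
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102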

A more complex example file is ``swift.yml.example``. Here, we
specify host variables for a target host using the ``container_vars`` key.
OpenStack-Ansible applies all entries under this key as host-specific
variables to any component containers on the specific host.

.. note::

   We recommend that new inventory groups, particularly for new services,
   be defined in a new file in the ``conf.d/`` directory in order to
   manage file size.

Understanding container groups
------------------------------

Additional group mappings can be found within files in the
``/etc/openstack_deploy/env.d/`` directory. These groupings are treated as
virtual mappings from the host groups (described above) onto the container
groups which define where each service deploys. By reviewing files within the
``env.d/`` directory, you can begin to see the nesting of groups represented
in the default layout.

We begin our review with ``shared-infra.yml``. In this file we define a
new container group (``shared-infra_containers``) as a subset of the
``all_containers`` group. This new container group is mapped to a new
host group (``shared-infra_hosts``). This means that all service
components under the ``shared-infra_containers`` container group are
deployed to each target host in the ``shared-infra_hosts`` host group.

Within a ``physical_skel`` segment, the OpenStack-Ansible dynamic inventory
expects to find a pair of keys. The first key maps to items in the
``container_skel`` and the second key maps to the target host groups
(described above) which are responsible for hosting the service component.
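
A sketch of such a pair, using the shared-infra names from above (see the
actual ``env.d/shared-infra.yml`` for the authoritative version):

.. code-block:: yaml

   physical_skel:
     shared-infra_containers:
       belongs_to:
         - all_containers
     shared-infra_hosts:
       belongs_to:
         - hosts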

Next, we review ``memcache.yml``. Here, we define the new group
``memcache_container``. In this case we identify the new group as a
subset of the ``shared-infra_containers`` group, which is itself a subset of
the ``all_containers`` inventory group.

.. note::

   The ``all_containers`` group is automatically defined by
   OpenStack-Ansible. Any service component managed by OpenStack-Ansible
   maps to a subset of the ``all_containers`` inventory group, whether
   directly or indirectly through another intermediate container group.

The default layout does not rely exclusively on groups being subsets of other
groups. The ``memcache`` component group is part of the ``memcache_container``
group as well as the ``memcache_all`` group, and it also contains a
``memcached`` component group. If you review the
``playbooks/memcached-install.yml`` playbook, you see that the playbook
applies to hosts in the ``memcached`` group. Other services may have more
complex deployment needs and define and consume inventory container groups
differently. Mapping components to several groups in this way allows flexible
targeting of roles and tasks.

Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deploying directly on hosts
---------------------------

To deploy a component directly on the host instead of within a container, set
the ``is_metal`` property to ``true`` for the container group under
``container_skel`` in the appropriate file.

The use of ``container_vars`` and the mapping from container groups to host
groups is the same for a service deployed directly onto the host.

.. note::

   The ``cinder-volume`` component is deployed directly on the host by
   default. See the ``env.d/cinder.yml`` file for this example.

Omitting a service or component from the deployment
---------------------------------------------------

To omit a component from a deployment, several options exist:

- Remove the ``physical_skel`` link between the container group and
  the host group. The simplest way to do this is to delete the related
  file located in the ``env.d/`` directory.
- Do not run the playbook which installs the component.
  Unless you specify that the component run directly on a host using
  ``is_metal``, a container is created for this component.
- Adjust the `affinity`_ to 0 for the host group. Unless you
  specify that the component run directly on a host using ``is_metal``,
  a container is created for this component.

.. _affinity: app-advanced-config-affinity.rst

Deploying existing components on dedicated hosts
------------------------------------------------

To deploy a shared-infra component onto dedicated hosts, modify both the
files specifying the host groups and container groups for the component.

For example, to run Galera directly on dedicated hosts, the
``container_skel`` segment of the ``env.d/galera.yml`` file might look like:

.. code-block:: yaml

   container_skel:
     galera_container:
       belongs_to:
         - db_containers
       contains:
         - galera
       properties:
         log_directory: mysql_logs
         service_name: galera
         is_metal: true

.. note::

   If you want to deploy within containers on these dedicated hosts, omit the
   ``is_metal: true`` property. We include it here as a recipe for the more
   commonly requested layout.

Because we define a new container group (``db_containers`` above), we must
assign that container group to a host group. To assign the new container
group to a new host group, provide a ``physical_skel`` section for the new
host group (in a new or existing file, such as ``env.d/galera.yml``). For
example:

.. code-block:: yaml

   physical_skel:
     db_containers:
       belongs_to:
         - all_containers
     db_hosts:
       belongs_to:
         - hosts

Lastly, define the host group (``db_hosts`` above) in a ``conf.d/`` file
(such as ``galera.yml``):

.. code-block:: yaml

   db_hosts:
     db-host1:
       ip: 172.39.123.11
     db-host2:
       ip: 172.39.123.12
     db-host3:
       ip: 172.39.123.13

.. note::

   Each of the custom group names in this example (``db_containers``
   and ``db_hosts``) is arbitrary. You can choose your own group names,
   but be sure the references are consistent among all relevant files.
@ -1,31 +0,0 @@
=================================
Appendix B: Additional resources
=================================

The following Ansible resources are useful to reference:

- `Ansible Documentation
  <http://docs.ansible.com/ansible/>`_
- `Ansible Best Practices
  <http://docs.ansible.com/ansible/playbooks_best_practices.html>`_
- `Ansible Configuration
  <http://docs.ansible.com/ansible/intro_configuration.html>`_

The following OpenStack resources are useful to reference:

- `OpenStack Documentation <http://docs.openstack.org>`_
- `OpenStack SDK, CLI and API Documentation
  <http://developer.openstack.org/>`_
- `OpenStack API Guide
  <http://developer.openstack.org/api-guide/quick-start>`_
- `OpenStack Project Developer Documentation
  <http://docs.openstack.org/developer/>`_

--------------

.. include:: navigation.txt
@ -1,40 +0,0 @@
===============================
Configuring service credentials
===============================

Configure credentials for each service in the
``/etc/openstack_deploy/*_secrets.yml`` files. Consider using `Ansible
Vault <http://docs.ansible.com/playbooks_vault.html>`_ to increase
security by encrypting any files containing credentials.
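
For example, an existing secrets file can be encrypted in place with the
``ansible-vault`` command (a sketch; you are prompted for a vault password):

.. code-block:: shell-session

   # ansible-vault encrypt /etc/openstack_deploy/user_secrets.yml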

Adjust permissions on these files to restrict access by non-privileged
users.

.. note::

   The following option configures passwords for the web interfaces:

   * ``keystone_auth_admin_password`` configures the ``admin`` tenant
     password for both the OpenStack API and dashboard access.

.. note::

   We recommend using the ``pw-token-gen.py`` script to generate random
   values for the variables in each file that contains service credentials:

   .. code-block:: shell-session

      # cd /opt/openstack-ansible/scripts
      # python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

   To regenerate existing passwords, add the ``--regen`` flag.

.. warning::

   The playbooks do not currently manage changing passwords in an existing
   environment. Changing passwords and re-running the playbooks will fail
   and may break your OpenStack environment.

--------------

.. include:: navigation.txt
@ -1,42 +0,0 @@
=================================
Initial environment configuration
=================================

OpenStack-Ansible depends on various files that are used to build an inventory
for Ansible. Start by getting those files into the correct places:

#. Copy the contents of the
   ``/opt/openstack-ansible/etc/openstack_deploy`` directory to the
   ``/etc/openstack_deploy`` directory.

   .. note::

      As of Newton, the ``env.d`` directory has been moved from this source
      directory to ``playbooks/inventory/``.

#. Change to the ``/etc/openstack_deploy`` directory.

#. Copy the ``openstack_user_config.yml.example`` file to
   ``/etc/openstack_deploy/openstack_user_config.yml``.
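
Taken together, the steps above can be sketched as the following commands
(paths as given above):

.. code-block:: shell-session

   # cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy
   # cd /etc/openstack_deploy
   # cp openstack_user_config.yml.example openstack_user_config.yml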

You can review the ``openstack_user_config.yml`` file and make changes
to the deployment of your OpenStack environment.

.. note::

   The file is heavily commented with details about the various options.

The configuration in ``openstack_user_config.yml`` defines which hosts
run the containers and services deployed by OpenStack-Ansible. For
example, hosts listed in ``shared-infra_hosts`` run containers for many of
the shared services that your OpenStack environment requires. Some of these
services include databases, Memcached, and RabbitMQ. Several other host
types contain other types of containers, and all of these are listed
in ``openstack_user_config.yml``.

For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.

--------------

.. include:: navigation.txt
@ -1,29 +0,0 @@
========================
Deployment configuration
========================

.. toctree::
   :maxdepth: 2

   configure-initial.rst
   configure-user-config-examples.rst
   configure-creds.rst

.. figure:: figures/installation-workflow-configure-deployment.png
   :width: 100%

Ansible references a handful of files containing mandatory and optional
configuration directives. Modify these files to define the
target environment before running the Ansible playbooks. Configuration
tasks include:

* Target host networking to define bridge interfaces and
  networks.
* A list of target hosts on which to install the software.
* Virtual and physical network relationships for OpenStack
  Networking (neutron).
* Passwords for all services.

--------------

.. include:: navigation.txt
@ -1,86 +0,0 @@
===============
Deployment host
===============

.. figure:: figures/installation-workflow-deploymenthost.png
   :width: 100%

When installing OpenStack in a production environment, we recommend using a
separate deployment host that contains Ansible and orchestrates the
OpenStack-Ansible installation on the target hosts. In a test environment, we
recommend using one of the infrastructure target hosts as the deployment
host.

To use a target host as a deployment host, follow the steps in `Chapter 3,
Target hosts <targethosts.html>`_ on the deployment host.

Installing the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install the `Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit
<http://releases.ubuntu.com/14.04/>`_ operating system on the
deployment host. Configure at least one network interface to
access the internet or suitable local repositories.

Configuring the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install additional software packages and configure NTP.

#. Install additional software packages if they were not installed
   during operating system installation:

   .. code-block:: shell-session

      # apt-get install aptitude build-essential git ntp ntpdate \
        openssh-server python-dev sudo

#. Configure NTP to synchronize with a suitable time source.

Configuring the network
~~~~~~~~~~~~~~~~~~~~~~~

Ansible deployments fail if the deployment server is unable to SSH to the
containers. Configure the deployment host to be on the same network designated
for container management. This configuration reduces the rate of failure due
to connectivity issues.

The following network information is used as an example:

Container management: 172.29.236.0/22 (VLAN 10)

Select an IP from this range to assign to the deployment host.
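
As a sketch only (the interface name and address below are assumptions for
this example network, not prescribed values), the deployment host's
``/etc/network/interfaces`` entry for the container management network might
look like:

.. code-block:: text

   # Container management network (VLAN 10)
   auto eth1
   iface eth1 inet static
       address 172.29.236.10
       netmask 255.255.252.0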

Installing source and dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install the source and dependencies for the deployment host.

#. Clone the OSA repository into the ``/opt/openstack-ansible``
   directory:

   .. code-block:: shell-session

      # git clone -b TAG https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible

   Replace ``TAG`` with the current stable release tag: |my_conf_val|

#. Change to the ``/opt/openstack-ansible`` directory, and run the
   Ansible bootstrap script:

   .. code-block:: shell-session

      # scripts/bootstrap-ansible.sh

Configuring Secure Shell (SSH) keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ansible uses Secure Shell (SSH) with public key authentication for
connectivity between the deployment and target hosts. To reduce user
interaction during Ansible operations, do not include pass phrases with
key pairs. However, if a pass phrase is required, consider using the
``ssh-agent`` and ``ssh-add`` commands to temporarily store the
pass phrase before performing Ansible operations.
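
For example (the key path is an assumption; substitute your own private
key):

.. code-block:: shell-session

   # eval $(ssh-agent)
   # ssh-add ~/.ssh/id_rsa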

--------------

.. include:: navigation.txt
@ -1,23 +0,0 @@
============================================
OpenStack-Ansible Installation Guide - DRAFT
============================================

This is a draft revision of the install guide for Newton
and is currently under development.

.. toctree::
   :maxdepth: 2

   overview.rst
   deploymenthost.rst
   targethosts.rst
   configure.rst
   installation.rst
   app.rst

Third-party trademarks and tradenames appearing in this document are the
property of their respective owners. Such third-party trademarks have
been printed in caps or initial caps and are used for referential
purposes only. We do not intend our use or display of other companies'
tradenames, trademarks, or service marks to imply a relationship with,
or endorsement or sponsorship of us by, these other companies.
@ -1,4 +0,0 @@
* `Documentation Home <../index.html>`_
* `Installation Guide <index.html>`_
* `Upgrade Guide <../upgrade-guide/index.html>`_
* `Developer Documentation <../developer-docs/index.html>`_
@ -1,54 +0,0 @@
|
|||||||
=======================
|
|
||||||
About OpenStack-Ansible
|
|
||||||
=======================
|
|
||||||
|
|
||||||
OpenStack-Ansible (OSA) uses the `Ansible IT <https://www.ansible.com/how-ansible-works>`_
|
|
||||||
automation engine to deploy an OpenStack environment on Ubuntu Linux.
|
|
||||||
For isolation and ease of maintenance, you can install OpenStack components
|
|
||||||
into Linux containers (LXC).
|
|
||||||
|
|
||||||
This documentation is intended for deployers, and walks through an
|
|
||||||
OpenStack-Ansible installation for a test and production environments.
|
|
||||||
|
|
||||||
Ansible
|
|
||||||
~~~~~~~
|
|
||||||
|
|
||||||
Ansible provides an automation platform to simplify system and application
|
|
||||||
deployment. Ansible manages systems using Secure Shell (SSH)
|
|
||||||
instead of unique protocols that require remote daemons or agents.
|
|
||||||
|
|
||||||
Ansible uses playbooks written in the YAML language for orchestration.
|
|
||||||
For more information, see `Ansible - Intro to
|
|
||||||
Playbooks <http://docs.ansible.com/playbooks_intro.html>`_.
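
As a sketch of what a playbook looks like (this trivial example is ours and
is not part of OpenStack-Ansible):

.. code-block:: yaml

   # A minimal playbook: ensure NTP is installed on all targeted hosts.
   - hosts: all
     become: true
     tasks:
       - name: Install the ntp package
         apt:
           name: ntp
           state: present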

In this guide, we refer to two types of hosts:

* The host running Ansible playbooks is the `deployment host`.
* The hosts where Ansible installs OpenStack services and infrastructure
  components are the `target hosts`.

Linux containers (LXC)
~~~~~~~~~~~~~~~~~~~~~~

Containers provide operating-system-level virtualization by enhancing
the concept of ``chroot`` environments. Containers isolate resources and
file systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.

The LXC project implements operating-system-level
virtualization on Linux using kernel namespaces and includes the
following features:

* Resource isolation, including CPU, memory, block I/O, and network,
  using ``cgroups``.
* Selective connectivity to physical and virtual network devices on the
  underlying physical host.
* Support for a variety of backing stores, including LVM.
* Built on a foundation of stable Linux technologies with an active
  development and support community.

--------------

.. include:: navigation.txt
@ -1,128 +0,0 @@
=========================
Installation requirements
=========================

.. note::

   These are the minimum requirements for OpenStack-Ansible. Larger
   deployments require additional resources.

CPU requirements
~~~~~~~~~~~~~~~~

* Compute hosts with multi-core processors that have `hardware-assisted
  virtualization extensions`_ available. These extensions provide a
  significant performance boost and improve security in virtualized
  environments.

* Infrastructure hosts with multi-core processors for best
  performance. Some services, such as MySQL, greatly benefit from additional
  CPU cores and other technologies, such as `Hyper-threading`_.

.. _hardware-assisted virtualization extensions: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
.. _Hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
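
As a quick sanity check (output is host-dependent), you can count the
hardware virtualization flags on a compute host; a non-zero count indicates
VMX or SVM support:

.. code-block:: shell-session

   # grep -cE '(vmx|svm)' /proc/cpuinfo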

Disk requirements
~~~~~~~~~~~~~~~~~

Different hosts have different disk space requirements based on the
services running on each host:

Deployment hosts
  10 GB of disk space is sufficient for holding the OpenStack-Ansible
  repository content and additional required software.

Compute hosts
  Disk space requirements vary depending on the total number of instances
  running on each host and the amount of disk space allocated to each
  instance. Compute hosts need to have at least 100 GB of disk space
  available. Consider disks that provide higher throughput with lower
  latency, such as SSDs in a RAID array.

Storage hosts
  Hosts running the Block Storage (cinder) service often consume the most
  disk space in OpenStack environments. As with compute hosts, choose disks
  that provide the highest I/O throughput with the lowest latency for
  storage hosts. Storage hosts need to have a minimum of 1 TB of disk
  space.

Infrastructure hosts
  The OpenStack control plane contains storage-intensive services, such as
  the Image (glance) service, as well as MariaDB. These control plane hosts
  need to have a minimum of 100 GB of disk space available.

Logging hosts
  An OpenStack-Ansible deployment generates a significant amount of log
  data. Logs come from a variety of sources, including services running in
  containers, the containers themselves, and the physical hosts. Logging
  hosts need sufficient disk space to hold live and rotated (historical)
  log files. In addition, the storage performance must be sufficient to
  keep pace with the log traffic coming from the various hosts and
  containers within the OpenStack environment. Reserve a minimum of 50 GB
  of disk space for storing logs on the logging hosts.

Hosts that provide Block Storage volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volume`` volume
group that OpenStack-Ansible can configure for use with cinder.
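
As a sketch, assuming the volumes live on ``/dev/sdb`` (the device name is a
placeholder for your environment), such a volume group could be created with:

.. code-block:: shell-session

   # pvcreate /dev/sdb
   # vgcreate cinder-volume /dev/sdb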

Each control plane host runs services inside LXC containers. The container
filesystems are deployed by default onto the root filesystem of each
control plane host. You have the option to deploy those container
filesystems into logical volumes by creating a volume group called
``lxc``. OpenStack-Ansible creates a 5 GB logical volume for the filesystem
of each container running on the host.

Network requirements
~~~~~~~~~~~~~~~~~~~~

.. note::

   You can deploy an OpenStack environment with only one physical
   network interface. This works for small environments, but it can cause
   problems when your environment grows.

For the best performance, reliability, and scalability in a production
environment, deployers should consider a network configuration that contains
the following features:

* Bonded network interfaces: Increase performance and/or reliability
  (dependent on bonding architecture).

* VLAN offloading: Increases performance by adding and removing VLAN tags in
  hardware, rather than in the server's main CPU.

* Gigabit or 10 Gigabit Ethernet: Supports higher network speeds, which can
  also improve storage performance when using the Block Storage service.

* Jumbo frames: Increase network performance by allowing more data to be
  sent in each packet.

Software requirements
~~~~~~~~~~~~~~~~~~~~~

Ensure all hosts within an OpenStack-Ansible environment meet the following
minimum requirements:

* Ubuntu 14.04 LTS (Trusty Tahr)

  * OSA is tested regularly against the latest Ubuntu 14.04 LTS point
    releases.
  * Linux kernel version ``3.13.0-34-generic`` or later.
  * For swift storage hosts, you must enable the ``trusty-backports``
    repositories in ``/etc/apt/sources.list`` or
    ``/etc/apt/sources.list.d/``. See the `Ubuntu documentation
    <https://help.ubuntu.com/community/UbuntuBackports#Enabling_Backports_Manually>`_ for more detailed instructions.

* Secure Shell (SSH) client and server that support public key
  authentication

* Network Time Protocol (NTP) client for time synchronization (such as
  ``ntpd`` or ``chronyd``)

* Python 2.7 or later

* ``en_US.UTF-8`` as the locale
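
The ``trusty-backports`` repository required for swift storage hosts can be
enabled along these lines (the mirror URL is an example; use your preferred
mirror):

.. code-block:: shell-session

   # echo "deb http://archive.ubuntu.com/ubuntu trusty-backports main universe" \
     > /etc/apt/sources.list.d/trusty-backports.list
   # apt-get update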

--------------

.. include:: navigation.txt
@ -1,22 +0,0 @@
=====================
Installation workflow
=====================

This diagram shows the general workflow associated with an
OpenStack-Ansible installation.

.. figure:: figures/installation-workflow-overview.png
   :width: 100%

   **Installation workflow**

#. :doc:`Prepare deployment host <deploymenthost>`
#. :doc:`Prepare target hosts <targethosts>`
#. :doc:`Configure deployment <configure>`
#. :doc:`Run playbooks <installation>`
#. :doc:`Verify OpenStack operation <installation>`

--------------

.. include:: navigation.txt

@ -1,18 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

========
Overview
========

.. toctree::

   overview-osa.rst
   overview-host-layout.rst
   overview-network-arch.rst
   overview-storage-arch.rst
   overview-requirements.rst
   overview-workflow.rst

--------------

.. include:: navigation.txt

@ -1,138 +0,0 @@
=====================
Network configuration
=====================

Production environment
~~~~~~~~~~~~~~~~~~~~~~

This example allows you to use your own parameters for the deployment.

If you followed the previously proposed design, the following table shows
the bridges that are to be configured on hosts.

+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on    | With a static IP                    |
+=============+=======================+=====================================+
| br-mgmt     | On every node         | Always                              |
+-------------+-----------------------+-------------------------------------+
|             | On every storage node | When component is deployed on metal |
+ br-storage  +-----------------------+-------------------------------------+
|             | On every compute node | Always                              |
+-------------+-----------------------+-------------------------------------+
|             | On every network node | When component is deployed on metal |
+ br-vxlan    +-----------------------+-------------------------------------+
|             | On every compute node | Always                              |
+-------------+-----------------------+-------------------------------------+
|             | On every network node | Never                               |
+ br-vlan     +-----------------------+-------------------------------------+
|             | On every compute node | Never                               |
+-------------+-----------------------+-------------------------------------+

Example for 3 controller nodes and 2 compute nodes
--------------------------------------------------

* VLANs:

  * Host management: Untagged/Native
  * Container management: 10
  * Tunnels: 30
  * Storage: 20

* Networks:

  * Host management: 10.240.0.0/22
  * Container management: 172.29.236.0/22
  * Tunnel: 172.29.240.0/22
  * Storage: 172.29.244.0/22

* Addresses for the controller nodes:

  * Host management: 10.240.0.11 - 10.240.0.13
  * Host management gateway: 10.240.0.1
  * DNS servers: 69.20.0.164 69.20.0.196
  * Container management: 172.29.236.11 - 172.29.236.13
  * Tunnel: no IP (because the IP addresses exist in the containers when
    the components are not deployed directly on metal)
  * Storage: no IP (because the IP addresses exist in the containers when
    the components are not deployed directly on metal)

* Addresses for the compute nodes:

  * Host management: 10.240.0.21 - 10.240.0.22
  * Host management gateway: 10.240.0.1
  * DNS servers: 69.20.0.164 69.20.0.196
  * Container management: 172.29.236.21 - 172.29.236.22
  * Tunnel: 172.29.240.21 - 172.29.240.22
  * Storage: 172.29.244.21 - 172.29.244.22

.. TODO Update this section. Should this information be moved to the overview
   chapter / network architecture section?

Modifying the network interfaces file
-------------------------------------

After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file.
An example based on the production environment described in `host layout
for production environment`_ is provided in `Link to Production
Environment`_.

.. _host layout for production environment: overview-host-layout.html#production-environment
.. _Link to Production Environment: app-targethosts-networkexample.html#production-environment
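
Before following the linked example, it can help to see the shape of a single
bridge stanza. A minimal, hypothetical fragment of ``/etc/network/interfaces``
for ``br-mgmt`` (the address comes from the example above; the bridge port and
bridge options are assumptions to adapt to your environment):

.. code-block:: text

   auto br-mgmt
   iface br-mgmt inet static
       bridge_stp off
       bridge_fd 0
       bridge_ports bond0.10
       address 172.29.236.11
       netmask 255.255.252.0

The other bridges (``br-storage``, ``br-vxlan``, ``br-vlan``) follow the same
pattern with their respective VLAN tags and subnets.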

Test environment
~~~~~~~~~~~~~~~~

This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

* VLANs:

  * Host management: Untagged/Native
  * Container management: 10
  * Tunnels: 30
  * Storage: 20

* Networks:

  * Host management: 10.240.0.0/22
  * Container management: 172.29.236.0/22
  * Tunnel: 172.29.240.0/22
  * Storage: 172.29.244.0/22

* Addresses:

  * Host management: 10.240.0.11
  * Host management gateway: 10.240.0.1
  * DNS servers: 69.20.0.164 69.20.0.196
  * Container management: 172.29.236.11
  * Tunnel: 172.29.240.11
  * Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkconfig.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**

.. image:: figures/networkarch-container-external-example.png

Modifying the network interfaces file
-------------------------------------

After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file.
An example based on the test environment described in `host layout for
testing environment`_ is provided in `Link to Test Environment`_.

.. _Link to Test Environment: app-targethosts-networkexample.html#test-environment
.. _host layout for testing environment: overview-host-layout.html#test-environment

--------------

.. include:: navigation.txt

@ -1,115 +0,0 @@
==========================
Preparing the target hosts
==========================

The following section describes the installation and configuration of
operating systems for the target hosts.

Installing the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install the Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit operating
system on the target host. Configure at least one network interface
to access the internet or suitable local repositories.

We recommend adding the Secure Shell (SSH) server packages to the
installation on target hosts without local (console) access.

.. note::

   We also recommend setting your locale to ``en_US.UTF-8``. Other locales
   may work, but they are not tested or supported.

Configuring the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Upgrade system packages and kernel:

   .. code-block:: shell-session

      # apt-get dist-upgrade

#. Ensure the kernel version is ``3.13.0-34-generic`` or later.

#. Install additional software packages:

   .. code-block:: shell-session

      # apt-get install bridge-utils debootstrap ifenslave ifenslave-2.6 \
        lsof lvm2 ntp ntpdate openssh-server sudo tcpdump vlan

#. Add the appropriate kernel modules to the ``/etc/modules`` file to
   enable VLAN and bond interfaces:

   .. code-block:: shell-session

      # echo 'bonding' >> /etc/modules
      # echo '8021q' >> /etc/modules

#. Configure NTP to synchronize with a suitable time source.

#. Reboot the host to activate the changes and use the new kernel.
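
The kernel check in step 2 can be scripted. A minimal sketch using
``sort -V`` (a convenience check, not part of the official playbooks):

.. code-block:: shell

   required="3.13.0-34"
   current="$(uname -r | cut -d- -f1,2)"
   # sort -V orders version strings; if the required version sorts first,
   # the running kernel is new enough.
   if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
       echo "kernel ${current} is new enough"
   else
       echo "kernel ${current} is older than ${required}" >&2
   fi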

Deploying Secure Shell (SSH) keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ansible uses SSH for connectivity between the deployment and target hosts.

#. Copy the contents of the public key file on the deployment host to
   the ``/root/.ssh/authorized_keys`` file on each target host.

#. Test public key authentication from the deployment host to each
   target host. SSH provides a shell without asking for a
   password.

For more information about how to generate an SSH key pair, as well as best
practices, see `GitHub's documentation on generating SSH keys`_.

.. _GitHub's documentation on generating SSH keys: https://help.github.com/articles/generating-ssh-keys/
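
Key generation itself is a single command. The sketch below uses a scratch
directory so it can be tried safely; for a real deployment, generate the key
as ``/root/.ssh/id_rsa`` on the deployment host and push it with
``ssh-copy-id root@<target-host>``:

.. code-block:: shell

   # Illustrative only: create a throwaway RSA keypair in a temp directory.
   tmpdir="$(mktemp -d)"
   ssh-keygen -q -t rsa -N '' -f "${tmpdir}/id_rsa"
   # The public half is what lands in each target host's authorized_keys.
   head -c 7 "${tmpdir}/id_rsa.pub"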

.. warning::

   OpenStack-Ansible deployments expect the presence of a
   ``/root/.ssh/id_rsa.pub`` file on the deployment host.
   The contents of this file are inserted into an
   ``authorized_keys`` file for the containers, which is a
   necessary step for the Ansible playbooks. You can
   override this behavior by setting the
   ``lxc_container_ssh_key`` variable to the public key for
   the container.

Configuring storage
~~~~~~~~~~~~~~~~~~~

`Logical Volume Manager (LVM)`_ enables a single device to be split into
multiple logical volumes that appear as a physical storage device to the
operating system. The Block Storage (cinder) service, as well as the LXC
containers that run the OpenStack infrastructure, can optionally use LVM for
their data storage.

.. note::

   OpenStack-Ansible automatically configures LVM on the nodes, and
   overrides any existing LVM configuration. If you had a customized LVM
   configuration, edit the generated configuration file as needed.

#. To use the optional Block Storage (cinder) service, create an LVM
   volume group named ``cinder-volumes`` on the Block Storage host. Specify
   a metadata size of 2048 during physical volume creation. For example:

   .. code-block:: shell-session

      # pvcreate --metadatasize 2048 physical_volume_device_path
      # vgcreate cinder-volumes physical_volume_device_path

#. Optionally, create an LVM volume group named ``lxc`` for container file
   systems. If the ``lxc`` volume group does not exist, containers are
   automatically installed on the file system under ``/var/lib/lxc`` by
   default.

.. _Logical Volume Manager (LVM): https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)

--------------

.. include:: navigation.txt

@ -1,25 +0,0 @@
============
Target hosts
============

.. figure:: figures/installation-workflow-targethosts.png
   :width: 100%

.. toctree::
   :maxdepth: 2

   targethosts-prepare.rst
   targethosts-networkconfig.rst

On each target host, perform the following tasks:

* Name the target hosts
* Install the operating system
* Generate and set up security measures
* Update the operating system and install additional software packages
* Create LVM volume groups
* Configure networking devices

--------------

.. include:: navigation.txt

@ -92,6 +92,3 @@ fine-tune certain security configurations.
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
.. _Appendix H: ../install-guide/app-custom-layouts.html

--------------

.. include:: navigation.txt

@ -9,7 +9,3 @@ Appendix E: Advanced configuration
   app-advanced-config-security
   app-advanced-config-sslcertificates
   app-advanced-config-affinity

--------------

.. include:: navigation.txt

@ -201,6 +201,3 @@ Tempest:
pip:

* pip_global_conf_overrides

--------------

.. include:: navigation.txt

@ -41,7 +41,3 @@ fine-tune certain security configurations.
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
.. _Appendix H: ../install-guide/app-custom-layouts.html

--------------

.. include:: navigation.txt

@ -137,7 +137,3 @@ The process is identical to the other services. Replace
``rabbitmq`` in the configuration variables shown above with ``horizon``,
``haproxy``, or ``keystone`` to deploy user-provided certificates to those
services.

--------------

.. include:: navigation.txt

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

=================================
Appendix A: Configuration files
=================================
@ -19,6 +17,3 @@ Appendix A: Configuration files
`extra_container.yml
<https://raw.githubusercontent.com/openstack/openstack-ansible/master/etc/openstack_deploy/env.d/extra_container.yml.example>`_

--------------

.. include:: navigation.txt

@ -1,17 +1,16 @@
==================================================
Appendix C: Customizing host and service layouts
==================================================

Understanding the default layout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The default layout of containers and services in OpenStack-Ansible is driven
by the ``/etc/openstack_deploy/openstack_user_config.yml`` file and the
contents of both the ``/etc/openstack_deploy/conf.d/`` and
``/etc/openstack_deploy/env.d/`` directories. Use these sources to define the
group mappings used by the playbooks to target hosts and containers for roles
used in the deployment.

Conceptually, these can be thought of as mapping from two directions. You
define host groups, which gather the target hosts into inventory groups,
@ -27,12 +26,13 @@ desire before running the installation playbooks.

Understanding host groups
-------------------------

As part of the initial configuration, each target host appears either in the
``/etc/openstack_deploy/openstack_user_config.yml`` file or in files within
the ``/etc/openstack_deploy/conf.d/`` directory. The format for files in
``conf.d/`` is identical to the syntax used in the
``openstack_user_config.yml`` file. These hosts are listed under one or more
headings, such as ``shared-infra_hosts`` or ``storage_hosts``, which serve as
Ansible group mappings. We treat these groupings as mappings to the physical
hosts.

@ -40,21 +40,22 @@ The example file ``haproxy.yml.example`` in the ``conf.d/`` directory provides
a simple example of defining a host group (``haproxy_hosts``) with two hosts
(``infra1`` and ``infra2``).

A more complex example file is ``swift.yml.example``. Here, we
specify host variables for a target host using the ``container_vars`` key.
OpenStack-Ansible applies all entries under this key as host-specific
variables to any component containers on the specific host.

.. note::

   We recommend defining new inventory groups, particularly for new
   services, in a new file in the ``conf.d/`` directory in order to
   manage file size.

Understanding container groups
------------------------------

Additional group mappings can be found within files in the
``/etc/openstack_deploy/env.d/`` directory. These groupings are treated as
virtual mappings from the host groups (described above) onto the container
groups which define where each service deploys. By reviewing files within the
``env.d/`` directory, you can begin to see the nesting of groups represented
@ -88,7 +89,7 @@ The default layout does not rely exclusively on groups being subsets of other
groups. The ``memcache`` component group is part of the ``memcache_container``
group, as well as the ``memcache_all`` group, and also contains a ``memcached``
component group. If you review the ``playbooks/memcached-install.yml``
playbook, you see that the playbook applies to hosts in the ``memcached``
group. Other services may have more complex deployment needs. They define and
consume inventory container groups differently. Mapping components to several
groups in this way allows flexible targeting of roles and tasks.
@ -96,62 +97,37 @@ groups in this way allows flexible targeting of roles and tasks.

Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deploying directly on hosts
---------------------------

To deploy a component directly on the host instead of within a container, set
the ``is_metal`` property to ``true`` for the container group under
``container_skel`` in the appropriate file.

The use of ``container_vars`` and the mapping from container groups to host
groups is the same for a service deployed directly onto the host.

.. note::

   The ``cinder-volume`` component is deployed directly on the host by
   default. See the ``env.d/cinder.yml`` file for this example.
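
As a sketch, an ``env.d`` override that keeps a component on metal might look
like the following (the group and property names mirror the default
``cinder.yml`` layout; treat them as assumptions and verify them against your
checked-out tree):

.. code-block:: yaml

   # /etc/openstack_deploy/env.d/cinder.yml (override file; names assumed)
   container_skel:
     cinder_volumes_container:
       properties:
         is_metal: true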

Omit a service or component from the deployment
-----------------------------------------------

To omit a component from a deployment, several options exist:

- Remove the ``physical_skel`` link between the container group and
  the host group. The simplest way to do this is to delete the related
  file located in the ``env.d/`` directory.
- Do not run the playbook which installs the component.
  Unless you specify the component to run directly on a host using
  ``is_metal``, a container is created for this component.
- Adjust the `affinity`_ to 0 for the host group. Unless you
  specify the component to run directly on a host using ``is_metal``,
  a container is created for this component.

.. _affinity: app-advanced-config-affinity.rst
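
For the affinity option, a hypothetical ``conf.d`` entry that sets an affinity
of 0 (so no container of that type is created on the host) could look like:

.. code-block:: yaml

   # Affinity is set per host; 0 skips container creation for that type.
   shared-infra_hosts:
     infra1:
       affinity:
         galera_container: 0
       ip: 172.29.236.11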

Deploying existing components on dedicated hosts
------------------------------------------------

@ -181,10 +157,10 @@ segment of the ``env.d/galera.yml`` file might look like:
   ``is_metal: true`` property. We include it here as a recipe for the more
   commonly requested layout.

Since we define the new container group (``db_containers`` above), we must
assign that container group to a host group. To assign the new container
group to a new host group, provide a ``physical_skel`` for the new host group
(in a new or existing file, such as ``env.d/galera.yml``). For example:

.. code-block:: yaml

@ -197,7 +173,7 @@ group to a new host group, provide a ``physical_skel`` for the new host group
      - hosts

Lastly, define the host group (``db_hosts`` above) in a ``conf.d/`` file (such
as ``galera.yml``):

.. code-block:: yaml
@ -1,147 +0,0 @@
|
|||||||
`Home <index.html>`__ OpenStack-Ansible Installation Guide
|
|
||||||
|
|
||||||
Appendix D. Using Nuage Neutron Plugin
|
|
||||||
--------------------------------------
|
|
||||||
|
|
||||||
Introduction
|
|
||||||
============
|
|
||||||
|
|
||||||
This document describes the steps required to deploy Nuage Networks VCS
|
|
||||||
with OpenStack-Ansible (OSA). These steps include:
|
|
||||||
|
|
||||||
- Install prerequisites.
|
|
||||||
|
|
||||||
- Configure Neutron to use the Nuage Networks Neutron plugin.
|
|
||||||
|
|
||||||
- Configure the Nuage Networks Neutron plugin.
|
|
||||||
|
|
||||||
- Download Nuage Networks VCS components and playbooks.
|
|
||||||
|
|
||||||
- Execute the playbooks.
|
|
||||||
|
|
||||||
Prerequisites
|
|
||||||
=============
|
|
||||||
|
|
||||||
#. The deployment environment has been configured according to OSA
|
|
||||||
best-practices. This includes cloning OSA software and bootstrapping
|
|
||||||
Ansible. See `OpenStack-Ansible Install Guide <index.html>`_
|
|
||||||
#. VCS stand-alone components, VSD and VSC, have been configured and
|
|
||||||
deployed. (See Nuage Networks VSD and VSC Install Guides.)
|
|
||||||
#. Nuage VRS playbooks have been cloned to the deployment host from
|
|
||||||
`https://github.com/nuagenetworks/nuage-openstack-ansible <https://github.com/nuagenetworks/nuage-openstack-ansible>`_.
|
|
||||||
This guide assumes a deployment host path of
|
|
||||||
/opt/nuage-openstack-ansible
|
|
||||||
|
|
||||||
Configure Nuage Neutron Plugin
|
|
||||||
==============================
|
|
||||||
|
|
||||||
Configuring the Neutron plugin requires creating/editing of parameters
|
|
||||||
in following two files:
|
|
||||||
|
|
||||||
- ``/etc/openstack_deploy/user_nuage_vars.yml``
|
|
||||||
|
|
||||||
- ``/etc/openstack_deploy/user_variables.yml``
|
|
||||||
|
|
||||||
On the deployment host, copy the Nuage user variables file from
|
|
||||||
``/opt/nuage-openstack-ansible/etc/user_nuage_vars.yml`` to
|
|
||||||
``/etc/openstack_deploy/`` folder.
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
# cp /opt/nuage-openstack-ansible/etc/user_nuage_vars.yml /etc/openstack_deploy/
|
|
||||||
|
|
||||||
Also modify the following parameters in this file as per your Nuage VCS
|
|
||||||
environment:
|
|
||||||
|
|
||||||
#. Replace *VSD Enterprise Name* parameter with user desired name of VSD
|
|
||||||
Enterprise:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
nuage_net_partition_name: "<VSD Enterprise Name>"
|
|
||||||
|
|
||||||
#. Replace *VSD IP* and *VSD GUI Port* parameters as per your VSD
|
|
||||||
configuration:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
nuage_vsd_ip: "<VSD IP>:<VSD GUI Port>"
|
|
||||||
|
|
||||||
#. Replace *VSD Username, VSD Password* and *VSD Organization Name* with
|
|
||||||
login credentials for VSD GUI as per your environment:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
nuage_vsd_username: "<VSD Username>"
|
|
||||||
|
|
||||||
nuage_vsd_password: "<VSD Password>"
|
|
||||||
|
|
||||||
nuage_vsd_organization: "<VSD Organization Name>"
|
|
||||||
|
|
||||||
#. Replace *Nuage VSP Version* with the Nuage VSP release you plan to
   use for the integration. For example, for Nuage VSP release 3.2,
   this value would be *v3_2*:

   .. code-block:: yaml

      nuage_base_uri_version: "<Nuage VSP Version>"
#. Replace *Nuage VSD CMS Id* with the CMS-Id generated by VSD to manage
   your OpenStack cluster:

   .. code-block:: yaml

      nuage_cms_id: "<Nuage VSD CMS Id>"
#. Replace *Active VSC-IP* with the IP address of your active VSC node
   and *Standby VSC-IP* with the IP address of your standby VSC node:

   .. code-block:: yaml

      active_controller: "<Active VSC-IP>"
      standby_controller: "<Standby VSC-IP>"
#. Replace *Local Package Repository* with the URL of your local
   repository hosting the Nuage VRS packages, for example
   ``http://192.0.2.10/debs/3.2/vrs/``:

   .. code-block:: yaml

      nuage_vrs_debs_repo: "deb <Local Package Repository>"
#. On the deployment host, add the following lines to your
   ``/etc/openstack_deploy/user_variables.yml`` file, replacing
   *Local PyPi Mirror URL* with the URL of the PyPI server hosting your
   Nuage OpenStack Python packages in ".whl" format:

   .. code-block:: yaml

      neutron_plugin_type: "nuage"
      nova_network_type: "nuage"
      pip_links:
        - { name: "openstack_release", link: "{{ openstack_repo_url }}/os-releases/{{ openstack_release }}/" }
        - { name: "nuage_repo", link: "<Local PyPi Mirror URL>" }
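As a minimal illustrative sketch (not part of OpenStack-Ansible or the Nuage tooling), the placeholder values described in the steps above can be sanity-checked before deploying; variable names come from this guide, while the helper itself is hypothetical:

```python
# Hypothetical helper: flag Nuage variables that are missing or still
# contain an unreplaced "<placeholder>" from the sample file.
REQUIRED_NUAGE_VARS = [
    "nuage_net_partition_name",
    "nuage_vsd_ip",
    "nuage_vsd_username",
    "nuage_vsd_password",
    "nuage_vsd_organization",
    "nuage_base_uri_version",
    "nuage_cms_id",
]

def unreplaced_placeholders(config):
    """Return variables that are missing or still look like <placeholders>."""
    bad = []
    for name in REQUIRED_NUAGE_VARS:
        value = config.get(name, "")
        if not value or ("<" in value and ">" in value):
            bad.append(name)
    return bad

example = {
    "nuage_net_partition_name": "MyEnterprise",
    "nuage_vsd_ip": "<VSD IP>:<VSD GUI Port>",  # still a placeholder
}
print(unreplaced_placeholders(example))
```

A real deployment would load the dictionary from ``user_nuage_vars.yml`` instead of defining it inline.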
Installation
============

#. After the multi-node OpenStack cluster is set up as detailed above,
   start the OpenStack deployment as listed in the OpenStack-Ansible
   Install Guide by running all playbooks in sequence on the deployment
   host.

#. After the OpenStack deployment is complete, run the Nuage VRS playbooks
   in ``/opt/nuage-openstack-ansible/nuage_playbooks`` on
   your deployment host to deploy Nuage VRS on all compute target hosts in
   the OpenStack cluster:

   .. code-block:: shell-session

      # cd /opt/nuage-openstack-ansible/nuage_playbooks
      # openstack-ansible nuage_all.yml

.. note::

   For Nuage Networks VSP software packages, user documentation, and
   licenses, please send a query to info@nuagenetworks.net.

--------------

.. include:: navigation.txt
@ -1,157 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide

==========================================
Appendix D: Using PLUMgrid Neutron plugin
==========================================
Installing source and host networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Clone the PLUMgrid ansible repository under the ``/opt/`` directory:

   .. code-block:: shell-session

      # git clone -b TAG https://github.com/plumgrid/plumgrid-ansible.git /opt/plumgrid-ansible

   Replace ``TAG`` with the current stable release tag.
#. PLUMgrid takes over networking for the entire cluster. The
   ``br-vxlan`` and ``br-vlan`` bridges only need to be present to keep
   the relevant containers from erroring out on infra hosts. They do not
   need to be attached to any host interface or a valid network.

#. PLUMgrid requires two networks: a `Management` and a `Fabric` network.
   Management is typically shared via the standard ``br-mgmt``, and Fabric
   must be specified in the PLUMgrid configuration file described below.
   The Fabric interface must be untagged and unbridged.
Neutron configurations
~~~~~~~~~~~~~~~~~~~~~~

To set up the Neutron configuration to install PLUMgrid as the
core Neutron plugin, create a user space variable file
``/etc/openstack_deploy/user_pg_neutron.yml`` and insert the following
parameters:
#. Set the ``neutron_plugin_type`` parameter to ``plumgrid``:

   .. code-block:: yaml

      # Neutron Plugins
      neutron_plugin_type: plumgrid
#. In the same file, disable the installation of unnecessary Neutron
   agents in the ``neutron_services`` dictionary by setting their
   ``service_en`` parameters to ``False``:

   .. code-block:: yaml

      neutron_metering: False
      neutron_l3: False
      neutron_lbaas: False
      neutron_lbaasv2: False
      neutron_vpnaas: False
PLUMgrid configurations
~~~~~~~~~~~~~~~~~~~~~~~

On the deployment host, create a PLUMgrid user variables file using the
sample in ``/opt/plumgrid-ansible/etc/user_pg_vars.yml.example`` and copy
it to ``/etc/openstack_deploy/user_pg_vars.yml``. You must configure the
following parameters:
#. Replace ``PG_REPO_HOST`` with a valid repository URL hosting PLUMgrid
   packages:

   .. code-block:: yaml

      plumgrid_repo: PG_REPO_HOST
#. Replace ``INFRA_IPs`` with comma-separated infrastructure node IPs and
   ``PG_VIP`` with an unallocated IP on the management network. This IP
   is used to access the PLUMgrid UI:

   .. code-block:: yaml

      plumgrid_ip: INFRA_IPs
      pg_vip: PG_VIP
#. Replace ``FABRIC_IFC`` with the name of the interface that will be used
   for the PLUMgrid Fabric.

   .. note::

      The PLUMgrid Fabric must use an untagged, unbridged raw interface
      such as ``eth0``.

   .. code-block:: yaml

      fabric_interface: FABRIC_IFC
#. Fill in the ``fabric_ifc_override`` and ``mgmt_override`` dictionaries
   with ``hostname: interface_name`` pairs for each node to override the
   default interface names.

#. Obtain a PLUMgrid license file, rename it to ``pg_license``, and place
   it under ``/var/lib/plumgrid/pg_license`` on the deployment host.
Gateway Hosts
~~~~~~~~~~~~~

PLUMgrid-enabled OpenStack clusters contain one or more gateway nodes
that provide connectivity with external resources, such as
external networks, bare-metal servers, or network service
appliances. In addition to the `Management` and `Fabric` networks required
by PLUMgrid nodes, gateways require dedicated external interfaces, referred
to as ``gateway_devs`` in the configuration files.
#. Add a ``gateway_hosts`` section to
   ``/etc/openstack_deploy/openstack_user_config.yml``:

   .. code-block:: yaml

      gateway_hosts:
        gateway1:
          ip: GW01_IP_ADDRESS
        gateway2:
          ip: GW02_IP_ADDRESS

   Replace ``*_IP_ADDRESS`` with the IP address of the ``br-mgmt`` container
   management bridge on each gateway host.
#. Add a ``gateway_hosts`` section to the end of the PLUMgrid
   ``user_pg_vars.yml`` file:

   .. note::

      This section must contain hostnames and ``gateway_dev`` names for
      each gateway in the cluster.

   .. code-block:: yaml

      gateway_hosts:
        - hostname: gateway1
          gateway_devs:
            - eth3
            - eth4
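Because the same gateway hosts are described in two files, a small sketch (hypothetical, not a PLUMgrid or OpenStack-Ansible tool) can cross-check that every gateway in ``openstack_user_config.yml`` also has a ``gateway_devs`` entry in ``user_pg_vars.yml``:

```python
# Hypothetical consistency check between the two gateway_hosts sections.
osa_gateway_hosts = {            # shape of openstack_user_config.yml
    "gateway1": {"ip": "172.29.236.150"},   # example IPs, not from the guide
    "gateway2": {"ip": "172.29.236.151"},
}
pg_gateway_hosts = [             # shape of user_pg_vars.yml
    {"hostname": "gateway1", "gateway_devs": ["eth3", "eth4"]},
]

osa_names = set(osa_gateway_hosts)
pg_names = {entry["hostname"] for entry in pg_gateway_hosts}
# Gateways known to OpenStack-Ansible but missing gateway_devs definitions:
missing = sorted(osa_names - pg_names)
print(missing)
```

In this example ``gateway2`` is reported because it has no ``gateway_devs`` defined.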
Installation
~~~~~~~~~~~~

#. Run the PLUMgrid playbooks. Do this before the ``openstack-setup.yml``
   playbook is run:

   .. code-block:: shell-session

      # cd /opt/plumgrid-ansible/plumgrid_playbooks
      # openstack-ansible plumgrid_all.yml

.. note::

   Contact PLUMgrid for an installation pack at info@plumgrid.com.
   This includes a full trial commercial license, packages, deployment
   documentation, and automation scripts for the entire workflow
   described above.

--------------

.. include:: navigation.txt
@ -1,5 +1,3 @@
=================================
Appendix B: Additional resources
=================================
@ -27,7 +25,3 @@ The following OpenStack resources are useful to reference:

- `OpenStack Project Developer Documentation
  <http://docs.openstack.org/developer/>`_
@ -119,7 +119,3 @@ to the `API endpoint process isolation and policy`_ section from the

.. _API endpoint process isolation and policy: http://docs.openstack.org/security-guide/api-endpoints/api-endpoint-configuration-recommendations.html#network-policy
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide
@ -254,7 +254,3 @@ Contents of the ``/etc/network/interfaces`` file:
       address 172.29.244.11
       netmask 255.255.252.0
@ -11,7 +11,3 @@ Appendices
   app-security.rst
   app-advanced-config-options.rst
   app-targethosts-networkexample.rst
@ -1,30 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Checking the integrity of your configuration files
==================================================

Execute the following steps before running any playbook:

#. Ensure all files edited in ``/etc/`` are Ansible
   YAML compliant. Guidelines can be found here:
   `<http://docs.ansible.com/ansible/YAMLSyntax.html>`_

#. Check the integrity of your YAML files.

   .. note::

      Here is an online linter: `<http://www.yamllint.com/>`_

#. Run your command with ``--syntax-check``:

   .. code-block:: shell-session

      # openstack-ansible setup-infrastructure.yml --syntax-check

#. Recheck that all indentation is correct.

   .. note::

      The syntax of the configuration files can be correct
      while not being meaningful for OpenStack-Ansible.

--------------

.. include:: navigation.txt
@ -1,5 +1,4 @@
===============================
Configuring service credentials
===============================
@ -11,10 +10,11 @@ security by encrypting any files containing credentials.
Adjust permissions on these files to restrict access by non-privileged
users.

.. note::

   The following options configure passwords for the web interfaces.

* ``keystone_auth_admin_password`` configures the ``admin`` tenant
  password for both the OpenStack API and dashboard access.

.. note::
@ -34,7 +34,3 @@ To regenerate existing passwords, add the ``--regen`` flag.
The playbooks do not currently manage changing passwords in an existing
environment. Changing passwords and re-running the playbooks will fail
and may break your OpenStack environment.
@ -1,132 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring target hosts
========================

Modify the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure the target hosts.

Do not assign the same IP address to different target hostnames;
unexpected results may occur. Each IP address and hostname must be a
matching pair. To use the same host in multiple roles, for example
infrastructure and networking, specify the same hostname and IP in each
section.

Use short hostnames rather than fully qualified domain names (FQDN) to
prevent length limitation issues with LXC and SSH. For example, a
suitable short hostname for a compute host might be
``123456-Compute001``.

Unless otherwise stated, replace ``*_IP_ADDRESS`` with the IP address of
the ``br-mgmt`` container management bridge on each target host.
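The "matching pair" rule above can be checked mechanically. A minimal sketch (hypothetical helper, not part of OpenStack-Ansible) that scans a configuration in the shape of ``openstack_user_config.yml`` for the same IP appearing under different hostnames:

```python
# Hypothetical check: one IP address must never map to two hostnames.
from collections import defaultdict

config = {  # trimmed example in the shape of openstack_user_config.yml
    "shared-infra_hosts": {"infra01": {"ip": "172.29.236.101"}},
    "network_hosts": {"network01": {"ip": "172.29.236.101"}},  # conflict!
}

ip_to_hosts = defaultdict(set)
for section in config.values():
    for hostname, attrs in section.items():
        ip_to_hosts[attrs["ip"]].add(hostname)

# Any IP claimed by more than one hostname violates the matching-pair rule.
conflicts = {ip: names for ip, names in ip_to_hosts.items() if len(names) > 1}
print(conflicts)
```

Reusing the same hostname with the same IP across sections (the supported multi-role case) produces a single-element set and is not flagged.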
#. Configure a list containing at least three infrastructure target
   hosts in the ``shared-infra_hosts`` section:

   .. code-block:: yaml

      shared-infra_hosts:
        infra01:
          ip: INFRA01_IP_ADDRESS
        infra02:
          ip: INFRA02_IP_ADDRESS
        infra03:
          ip: INFRA03_IP_ADDRESS
        infra04: ...
#. Configure a list containing at least two infrastructure target
   hosts in the ``os-infra_hosts`` section (you can reuse
   previous hosts as long as their name and IP are consistent):

   .. code-block:: yaml

      os-infra_hosts:
        infra01:
          ip: INFRA01_IP_ADDRESS
        infra02:
          ip: INFRA02_IP_ADDRESS
        infra03:
          ip: INFRA03_IP_ADDRESS
        infra04: ...
#. Configure a list containing at least one Identity (keystone) target
   host in the ``identity_hosts`` section:

   .. code-block:: yaml

      identity_hosts:
        infra1:
          ip: IDENTITY01_IP_ADDRESS
        infra2: ...
#. Configure a list containing at least one network target host in the
   ``network_hosts`` section:

   .. code-block:: yaml

      network_hosts:
        network01:
          ip: NETWORK01_IP_ADDRESS
        network02: ...

   Providing more than one network host in the ``network_hosts`` block
   enables `L3HA support using VRRP`_ in the ``neutron-agent`` containers.

   .. _L3HA support using VRRP: http://docs.openstack.org/liberty/networking-guide/scenario_l3ha_lb.html
#. Configure a list containing at least one compute target host in the
   ``compute_hosts`` section:

   .. code-block:: yaml

      compute_hosts:
        compute001:
          ip: COMPUTE001_IP_ADDRESS
        compute002: ...
#. Configure a list containing at least one logging target host in the
   ``log_hosts`` section:

   .. code-block:: yaml

      log_hosts:
        logging01:
          ip: LOGGER1_IP_ADDRESS
        logging02: ...
#. Configure a list containing at least one repository target host in the
   ``repo-infra_hosts`` section:

   .. code-block:: yaml

      repo-infra_hosts:
        repo01:
          ip: REPO01_IP_ADDRESS
        repo02:
          ip: REPO02_IP_ADDRESS
        repo03:
          ip: REPO03_IP_ADDRESS
        repo04: ...

   The repository typically resides on one or more infrastructure hosts.
#. Configure a list containing at least one optional storage host in the
   ``storage_hosts`` section:

   .. code-block:: yaml

      storage_hosts:
        storage01:
          ip: STORAGE01_IP_ADDRESS
        storage02: ...

   Each storage host requires additional configuration to define the back
   end driver.

   The default configuration includes an optional storage host. To
   install without storage hosts, comment out the stanza beginning with
   the ``storage_hosts:`` line.

--------------

.. include:: navigation.txt
@ -1,5 +1,4 @@
=================================
Initial environment configuration
=================================
@ -13,8 +12,7 @@ for Ansible. Start by getting those files into the correct places:
.. note::

   As of Newton, the ``env.d`` directory has been moved from this source
   directory to ``playbooks/inventory/``.

#. Change to the ``/etc/openstack_deploy`` directory.
@ -28,110 +26,14 @@ to the deployment of your OpenStack environment.

The file is heavily commented with details about the various options.

Configuration in ``openstack_user_config.yml`` defines which hosts
run the containers and services deployed by OpenStack-Ansible. For
example, hosts listed in ``shared-infra_hosts`` run containers for many of
the shared services that your OpenStack environment requires. Some of these
services include databases, memcached, and RabbitMQ. There are several other
host types that contain other types of containers, and all of these are
listed in ``openstack_user_config.yml``.

For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.
Affinity
~~~~~~~~

OpenStack-Ansible's dynamic inventory generation has a concept called
`affinity`. This determines how many containers of a similar type are
deployed onto a single physical host.

Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml`` configuration:
.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       ip: 172.29.236.101
     infra2:
       ip: 172.29.236.102
     infra3:
       ip: 172.29.236.103

Three hosts are assigned to the `shared-infra_hosts` group.
OpenStack-Ansible ensures that each host runs a single database container,
a single memcached container, and a single RabbitMQ container. Each host
has an affinity of 1 by default, which means each host runs one of each
container type.
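The affinity rules above can be sketched as a small function (hypothetical, not the actual inventory generator; the ``galera_container`` and ``memcached_container`` names are assumptions, while ``rabbit_mq_container`` appears in this guide):

```python
# Hypothetical model of per-host affinity: a missing entry defaults to 1,
# and an explicit 0 skips that container type entirely.
DEFAULT_AFFINITY = 1
CONTAINER_TYPES = [
    "galera_container",     # assumed name for the database container
    "memcached_container",  # assumed name for the memcached container
    "rabbit_mq_container",  # name used in this guide
]

def containers_for_host(host_config):
    """Return how many containers of each type this host should run."""
    affinity = host_config.get("affinity", {})
    return {ctype: affinity.get(ctype, DEFAULT_AFFINITY)
            for ctype in CONTAINER_TYPES}

infra1 = {"affinity": {"rabbit_mq_container": 0}, "ip": "172.29.236.101"}
print(containers_for_host(infra1))
```

With the override above, ``infra1`` keeps one database and one memcached container but gets no RabbitMQ container, matching the standalone-swift example that follows.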
You can skip the deployment of RabbitMQ altogether. This is
helpful when deploying a standalone swift environment. For this
configuration, your ``openstack_user_config.yml`` would look like this:

.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       affinity:
         rabbit_mq_container: 0
       ip: 172.29.236.101
     infra2:
       affinity:
         rabbit_mq_container: 0
       ip: 172.29.236.102
     infra3:
       affinity:
         rabbit_mq_container: 0
       ip: 172.29.236.103

The configuration above deploys a memcached container and a database
container on each host, without the RabbitMQ containers.
.. _security_hardening:

Security hardening
~~~~~~~~~~~~~~~~~~

OpenStack-Ansible automatically applies host security hardening
configurations using the `openstack-ansible-security`_ role. The role uses
a version of the `Security Technical Implementation Guide (STIG)`_ that
has been adapted for Ubuntu 14.04 and OpenStack.

The role is applicable to physical hosts within an OpenStack-Ansible
deployment that are operating as any type of node, infrastructure or
compute. By default, the role is enabled. You can disable it by changing
a variable within ``user_variables.yml``:
.. code-block:: yaml

   apply_security_hardening: false

When the variable is set to ``true``, the ``setup-hosts.yml`` playbook
applies the role during deployments.
You can apply security configurations to an existing environment or audit
an environment using a playbook supplied with OpenStack-Ansible:

.. code-block:: bash

   # Perform a quick audit using Ansible's check mode
   openstack-ansible --check security-hardening.yml

   # Apply security hardening configurations
   openstack-ansible security-hardening.yml
For more details on the security configurations that will be applied,
refer to the `openstack-ansible-security`_ documentation. Review the
`Configuration`_ section of the openstack-ansible-security documentation
to find out how to fine-tune certain security configurations.

.. _openstack-ansible-security: http://docs.openstack.org/developer/openstack-ansible-security/
.. _Security Technical Implementation Guide (STIG): https://en.wikipedia.org/wiki/Security_Technical_Implementation_Guide
.. _Configuration: http://docs.openstack.org/developer/openstack-ansible-security/configuration.html
.. _Appendix H: ../install-guide/app-custom-layouts.html

--------------

.. include:: navigation.txt
@ -1,291 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

.. _network_configuration:

Configuring target host networking
==================================

Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure target host networking.
#. Configure the IP address ranges associated with each network in the
   ``cidr_networks`` section:

   .. code-block:: yaml

      cidr_networks:
        # Management (same range as br-mgmt on the target hosts)
        container: CONTAINER_MGMT_CIDR
        # Tunnel endpoints for VXLAN tenant networks
        # (same range as br-vxlan on the target hosts)
        tunnel: TUNNEL_CIDR
        # Storage (same range as br-storage on the target hosts)
        storage: STORAGE_CIDR

   Replace ``*_CIDR`` with the appropriate IP address range in CIDR
   notation, for example 203.0.113.0/24.

   Use the same IP address ranges as the underlying physical network
   interfaces or bridges in `the section called "Configuring
   the network" <targethosts-network.html>`_. For example, if the
   container network uses 203.0.113.0/24, the ``CONTAINER_MGMT_CIDR``
   also uses 203.0.113.0/24.

   The default configuration includes the optional storage and service
   networks. To remove one or both of them, comment out the appropriate
   network name.
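The "same range as the bridge" rule can be verified with the standard library. A minimal sketch (illustrative values; the tunnel and storage ranges are assumptions, only 203.0.113.0/24 comes from the guide's examples):

```python
# Check that addresses fall inside the CIDR ranges declared above,
# using only the stdlib ipaddress module.
import ipaddress

cidr_networks = {
    "container": "203.0.113.0/24",  # same range as br-mgmt
    "tunnel": "172.29.240.0/22",    # assumed example range for br-vxlan
    "storage": "172.29.244.0/22",   # assumed example range for br-storage
}

networks = {name: ipaddress.ip_network(cidr)
            for name, cidr in cidr_networks.items()}

# Membership tests mirror what the automatic IP assignment relies on:
print(ipaddress.ip_address("203.0.113.10") in networks["container"])
```

If a bridge is configured with a different range than its ``cidr_networks`` entry, such membership checks fail for every address the inventory generates.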
#. Configure the existing IP addresses in the ``used_ips`` section:

   .. code-block:: yaml

      used_ips:
        - EXISTING_IP_ADDRESSES

   Replace ``EXISTING_IP_ADDRESSES`` with a list of existing IP
   addresses in the ranges defined in the previous step. This list
   should include all IP addresses manually configured on target hosts,
   internal load balancers, the service network bridge, deployment
   hosts, and any other devices, to avoid conflicts during the automatic
   IP address generation process.

   Add individual IP addresses on separate lines. For example, to
   prevent use of 203.0.113.101 and 203.0.113.201:

   .. code-block:: yaml

      used_ips:
        - 203.0.113.101
        - 203.0.113.201

   Add a range of IP addresses using a comma. For example, to prevent
   use of 203.0.113.101-203.0.113.201:

   .. code-block:: yaml

      used_ips:
        - "203.0.113.101,203.0.113.201"
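A sketch of how both ``used_ips`` entry forms (single address and comma-separated "start,end" range) expand into the set of reserved addresses; this is an illustrative model, not the actual inventory generator:

```python
# Expand used_ips entries into the full set of reserved addresses.
import ipaddress

def expand_used_ips(entries):
    reserved = set()
    for entry in entries:
        if "," in entry:
            # "start,end" form: reserve every address in the inclusive range
            start, end = (ipaddress.ip_address(p) for p in entry.split(","))
            addr = start
            while addr <= end:
                reserved.add(addr)
                addr += 1
        else:
            # single-address form
            reserved.add(ipaddress.ip_address(entry))
    return reserved

used_ips = ["203.0.113.101", "203.0.113.201,203.0.113.203"]
print(len(expand_used_ips(used_ips)))
```

One single entry plus a three-address range yields four reserved addresses in this example.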
#. Configure load balancing in the ``global_overrides`` section:

   .. code-block:: yaml

      global_overrides:
        # Internal load balancer VIP address
        internal_lb_vip_address: INTERNAL_LB_VIP_ADDRESS
        # External (DMZ) load balancer VIP address
        external_lb_vip_address: EXTERNAL_LB_VIP_ADDRESS
        # Container network bridge device
        management_bridge: "MGMT_BRIDGE"
        # Tunnel network bridge device
        tunnel_bridge: "TUNNEL_BRIDGE"

   Replace ``INTERNAL_LB_VIP_ADDRESS`` with the internal IP address of
   the load balancer. Infrastructure and OpenStack services use this IP
   address for internal communication.

   Replace ``EXTERNAL_LB_VIP_ADDRESS`` with the external, public, or
   DMZ IP address of the load balancer. Users primarily use this IP
   address for external API and web interface access.

   Replace ``MGMT_BRIDGE`` with the container bridge device name,
   typically ``br-mgmt``.

   Replace ``TUNNEL_BRIDGE`` with the tunnel/overlay bridge device
   name, typically ``br-vxlan``.

   .. note::

      Only use ``global_overrides`` for networking overrides.
      Edit all other variable overrides in ``user_variables.yml``.
      ``global_overrides`` entries are not removed from
      ``inventory.json`` and can conflict.
#. Configure the management network in the ``provider_networks`` subsection:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - all_containers
              - hosts
            type: "raw"
            container_bridge: "br-mgmt"
            container_interface: "eth1"
            container_type: "veth"
            ip_from_q: "container"
            is_container_address: true
            is_ssh_address: true
#. Configure optional networks in the ``provider_networks`` subsection.
   For example, a storage network:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - glance_api
              - cinder_api
              - cinder_volume
              - nova_compute
            type: "raw"
            container_bridge: "br-storage"
            container_type: "veth"
            container_interface: "eth2"
            ip_from_q: "storage"

   The default configuration includes the optional storage and service
   networks. To remove one or both of them, comment out the entire
   associated stanza beginning with the ``- network:`` line.
#. Configure OpenStack Networking VXLAN tunnel/overlay networks in the
   ``provider_networks`` subsection:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - neutron_linuxbridge_agent
            container_bridge: "br-vxlan"
            container_type: "veth"
            container_interface: "eth10"
            ip_from_q: "tunnel"
            type: "vxlan"
            range: "TUNNEL_ID_RANGE"
            net_name: "vxlan"

   Replace ``TUNNEL_ID_RANGE`` with the tunnel ID range, for example
   1:1000.
#. Configure OpenStack Networking flat (untagged) and VLAN (tagged) networks
|
|
||||||
in the ``provider_networks`` subsection:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
provider_networks:
|
|
||||||
- network:
|
|
||||||
group_binds:
|
|
||||||
- neutron_linuxbridge_agent
|
|
||||||
container_bridge: "br-vlan"
|
|
||||||
container_type: "veth"
|
|
||||||
container_interface: "eth12"
|
|
||||||
host_bind_override: "PHYSICAL_NETWORK_INTERFACE"
|
|
||||||
type: "flat"
|
|
||||||
net_name: "flat"
|
|
||||||
- network:
|
|
||||||
group_binds:
|
|
||||||
- neutron_linuxbridge_agent
|
|
||||||
container_bridge: "br-vlan"
|
|
||||||
container_type: "veth"
|
|
||||||
container_interface: "eth11"
|
|
||||||
type: "vlan"
|
|
||||||
range: VLAN_ID_RANGE
|
|
||||||
net_name: "vlan"
|
|
||||||
|
|
||||||
   Replace ``VLAN_ID_RANGE`` with the VLAN ID range for each VLAN network,
   for example, ``1:1000``. You can specify more than one range of VLANs
   on a particular network, for example, ``1:1000,2001:3000``. Create a
   similar stanza for each additional network.

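   As a sketch of one such additional stanza, the following adds a second
   tagged network. The ``br-vlan2`` bridge, ``eth13`` interface, VLAN
   ranges, and ``net_name`` are hypothetical placeholders, not part of the
   reference configuration:

   .. code-block:: yaml

      provider_networks:
        - network:
            group_binds:
              - neutron_linuxbridge_agent
            container_bridge: "br-vlan2"
            container_type: "veth"
            container_interface: "eth13"
            type: "vlan"
            range: "1:1000,2001:3000"
            net_name: "vlan2"
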
   Replace ``PHYSICAL_NETWORK_INTERFACE`` with the network interface used
   for flat networking. Ensure this is a physical interface on the same L2
   network as the ``br-vlan`` devices. If no additional network interface
   is available, a veth pair plugged into the ``br-vlan`` bridge can
   provide the necessary interface.

   The following is an example of creating a veth pair within an existing
   bridge:

   .. code-block:: text

      # Create the veth pair; do not abort if it already exists
      pre-up ip link add br-vlan-veth type veth peer name PHYSICAL_NETWORK_INTERFACE || true
      # Set both ends UP
      pre-up ip link set br-vlan-veth up
      pre-up ip link set PHYSICAL_NETWORK_INTERFACE up
      # Delete the veth pair on DOWN
      post-down ip link del br-vlan-veth || true
      bridge_ports br-vlan-veth

Adding static routes to network interfaces
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, you can add one or more static routes to interfaces within
containers. Each route requires a destination network in CIDR notation
and a gateway. For example:

.. code-block:: yaml

   provider_networks:
     - network:
         group_binds:
           - glance_api
           - cinder_api
           - cinder_volume
           - nova_compute
         type: "raw"
         container_bridge: "br-storage"
         container_interface: "eth2"
         container_type: "veth"
         ip_from_q: "storage"
         static_routes:
           - cidr: 10.176.0.0/12
             gateway: 172.29.248.1

This example adds the following content to the
``/etc/network/interfaces.d/eth2.cfg`` file in the appropriate
containers:

.. code-block:: shell-session

   post-up ip route add 10.176.0.0/12 via 172.29.248.1 || true

Specify *both* the ``cidr`` and ``gateway`` values, or the inventory
script raises an error. The inventory script checks only that the keys
and values are present, not that the network information is accurate.

Setting an MTU on a network interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Larger MTUs can be useful on certain networks, especially storage
networks. Add a ``container_mtu`` attribute within the
``provider_networks`` dictionary to set a custom MTU on the container
network interfaces that attach to a particular network:

.. code-block:: yaml

   provider_networks:
     - network:
         group_binds:
           - glance_api
           - cinder_api
           - cinder_volume
           - nova_compute
         type: "raw"
         container_bridge: "br-storage"
         container_interface: "eth2"
         container_type: "veth"
         container_mtu: "9000"
         ip_from_q: "storage"
         static_routes:
           - cidr: 10.176.0.0/12
             gateway: 172.29.248.1

The example above enables `jumbo frames`_ by setting the MTU on the
storage network to 9000.

.. note:: If you edit the MTU for your networks, you may also want to
   adjust your neutron MTU settings, depending on your needs. Refer to
   the neutron documentation (`Neutron MTU considerations`_).

.. _jumbo frames: https://en.wikipedia.org/wiki/Jumbo_frame
.. _Neutron MTU considerations: http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html
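As a sketch of one way to align neutron with a larger MTU, the following
uses the configuration override facility in
``/etc/openstack_deploy/user_variables.yml``. The ``global_physnet_mtu``
option name is an assumption for Mitaka-era neutron; verify the option
that applies to your release in the neutron documentation:

.. code-block:: yaml

   neutron_neutron_conf_overrides:
     DEFAULT:
       global_physnet_mtu: 9000
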

--------------

.. include:: navigation.txt

@ -1,207 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Overriding OpenStack configuration defaults
===========================================

OpenStack has many configuration options available in configuration files
in the form of ``.conf`` files (in a standard ``INI`` file format),
policy files (in a standard ``JSON`` format), and ``YAML`` files.

.. note::

   ``YAML`` files are currently only used by the ceilometer project.

OpenStack-Ansible provides the facility to reference any options in the
`OpenStack Configuration Reference`_ through a simple set of
configuration entries in ``/etc/openstack_deploy/user_variables.yml``.

This section provides guidance on how to use this facility. Further
guidance is available in the developer documentation in the section
titled `Setting overrides in configuration files`_.

.. _OpenStack Configuration Reference: http://docs.openstack.org/draft/config-reference/
.. _Setting overrides in configuration files: ../developer-docs/extending.html#setting-overrides-in-configuration-files

Overriding .conf files
~~~~~~~~~~~~~~~~~~~~~~

The most common use case for overrides is the ``<service>.conf`` files
(for example, ``nova.conf``). These files use a standard ``INI`` file
format.

For example, to add the following parameters to ``nova.conf``:

.. code-block:: ini

   [DEFAULT]
   remove_unused_original_minimum_age_seconds = 43200

   [libvirt]
   cpu_mode = host-model
   disk_cachemodes = file=directsync,block=none

   [database]
   idle_timeout = 300
   max_pool_size = 10

add the following configuration entry in
``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   nova_nova_conf_overrides:
     DEFAULT:
       remove_unused_original_minimum_age_seconds: 43200
     libvirt:
       cpu_mode: host-model
       disk_cachemodes: file=directsync,block=none
     database:
       idle_timeout: 300
       max_pool_size: 10

Overrides may also be applied on a per-host basis with the following
configuration in ``/etc/openstack_deploy/openstack_user_config.yml``:

.. code-block:: yaml

   compute_hosts:
     900089-compute001:
       ip: 192.0.2.10
       host_vars:
         nova_nova_conf_overrides:
           DEFAULT:
             remove_unused_original_minimum_age_seconds: 43200
           libvirt:
             cpu_mode: host-model
             disk_cachemodes: file=directsync,block=none
           database:
             idle_timeout: 300
             max_pool_size: 10

Use this method for any file in ``INI`` format for all OpenStack
projects deployed in OpenStack-Ansible.

To help you find the appropriate variable name to use for overrides, the
general format for the variable name is
``<service>_<filename>_<file extension>_overrides``.

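For example, applying this pattern yields the variables below. The source
file names are inferred from the variable names; confirm each variable in
the relevant role's defaults before relying on it:

.. code-block:: yaml

   # nova.conf          -> nova_nova_conf_overrides
   # ml2_conf.ini       -> neutron_ml2_conf_ini_overrides
   # glance-cache.conf  -> glance_glance_cache_conf_overrides
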
Overriding .json files
~~~~~~~~~~~~~~~~~~~~~~

You can adjust the default policies applied by services in order to
implement access controls that differ from a standard OpenStack
environment. Policy files use a ``JSON`` format.

For example, you can add the following policy to keystone's
``policy.json``:

.. code-block:: json

   {
     "identity:foo": "rule:admin_required",
     "identity:bar": "rule:admin_required"
   }

Accomplish this through the following configuration entry in
``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   keystone_policy_overrides:
     identity:foo: "rule:admin_required"
     identity:bar: "rule:admin_required"

Use this method for any file in ``JSON`` format for all OpenStack
projects deployed in OpenStack-Ansible.

To help you find the appropriate variable name to use for overrides, the
general format for the variable name is ``<service>_policy_overrides``.

Currently available overrides
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following overrides are available:

Galera:
  * galera_client_my_cnf_overrides
  * galera_my_cnf_overrides
  * galera_cluster_cnf_overrides
  * galera_debian_cnf_overrides

Ceilometer:
  * ceilometer_policy_overrides
  * ceilometer_ceilometer_conf_overrides
  * ceilometer_api_paste_ini_overrides
  * ceilometer_event_definitions_yaml_overrides
  * ceilometer_event_pipeline_yaml_overrides
  * ceilometer_pipeline_yaml_overrides

Cinder:
  * cinder_policy_overrides
  * cinder_rootwrap_conf_overrides
  * cinder_api_paste_ini_overrides
  * cinder_cinder_conf_overrides

Glance:
  * glance_glance_api_paste_ini_overrides
  * glance_glance_api_conf_overrides
  * glance_glance_cache_conf_overrides
  * glance_glance_manage_conf_overrides
  * glance_glance_registry_paste_ini_overrides
  * glance_glance_registry_conf_overrides
  * glance_glance_scrubber_conf_overrides
  * glance_glance_scheme_json_overrides
  * glance_policy_overrides

Heat:
  * heat_heat_conf_overrides
  * heat_api_paste_ini_overrides
  * heat_default_yaml_overrides
  * heat_aws_cloudwatch_alarm_yaml_overrides
  * heat_aws_rds_dbinstance_yaml_overrides
  * heat_policy_overrides

Keystone:
  * keystone_keystone_conf_overrides
  * keystone_keystone_default_conf_overrides
  * keystone_keystone_paste_ini_overrides
  * keystone_policy_overrides

Neutron:
  * neutron_neutron_conf_overrides
  * neutron_ml2_conf_ini_overrides
  * neutron_dhcp_agent_ini_overrides
  * neutron_api_paste_ini_overrides
  * neutron_rootwrap_conf_overrides
  * neutron_policy_overrides
  * neutron_dnsmasq_neutron_conf_overrides
  * neutron_l3_agent_ini_overrides
  * neutron_metadata_agent_ini_overrides
  * neutron_metering_agent_ini_overrides

Nova:
  * nova_nova_conf_overrides
  * nova_rootwrap_conf_overrides
  * nova_api_paste_ini_overrides
  * nova_policy_overrides

Swift:
  * swift_swift_conf_overrides
  * swift_swift_dispersion_conf_overrides
  * swift_proxy_server_conf_overrides
  * swift_account_server_conf_overrides
  * swift_account_server_replicator_conf_overrides
  * swift_container_server_conf_overrides
  * swift_container_server_replicator_conf_overrides
  * swift_object_server_conf_overrides
  * swift_object_server_replicator_conf_overrides

Tempest:
  * tempest_tempest_conf_overrides

pip:
  * pip_global_conf_overrides

--------------

.. include:: navigation.txt

@ -1,144 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Securing services with SSL certificates
=======================================

The `OpenStack Security Guide`_ recommends providing secure communication
between various services in an OpenStack deployment.

.. _OpenStack Security Guide: http://docs.openstack.org/security-guide/secure-communication.html

The OpenStack-Ansible project currently offers the ability to configure
SSL certificates for secure communication with the following services:

* HAProxy
* Horizon
* Keystone
* RabbitMQ

For each service, you can either use self-signed certificates generated
during the deployment process or provide SSL certificates, keys, and CA
certificates from your own trusted certificate authority. Highly secured
environments use trusted, user-provided certificates for as many services
as possible.

.. note::

   Conduct all SSL certificate configuration in
   ``/etc/openstack_deploy/user_variables.yml`` and not in the playbook
   roles themselves.

Self-signed certificates
~~~~~~~~~~~~~~~~~~~~~~~~

Self-signed certificates enable you to start quickly and encrypt data in
transit. However, they do not provide a high level of trust for highly
secure environments. The use of self-signed certificates is currently the
default in OpenStack-Ansible. When using self-signed certificates, you
must disable certificate verification with the following user variables,
depending on your configuration. Add these variables to
``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   keystone_service_adminuri_insecure: true
   keystone_service_internaluri_insecure: true

Setting self-signed certificate subject data
--------------------------------------------

Change the subject data of any self-signed certificate with
configuration variables. The configuration variable for each service is
``<servicename>_ssl_self_signed_subject``. For example, to change the
SSL certificate subject data for HAProxy, adjust
``/etc/openstack_deploy/user_variables.yml`` as follows:

.. code-block:: yaml

   haproxy_ssl_self_signed_subject: "/C=US/ST=Texas/L=San Antonio/O=IT/CN=haproxy.example.com"

For more information about the available fields in the certificate
subject, refer to OpenSSL's documentation on the `req subcommand`_.

.. _req subcommand: https://www.openssl.org/docs/manmaster/apps/req.html

Generating and regenerating self-signed certificates
----------------------------------------------------

Self-signed certificates are generated for each service during the first
run of the playbook.

.. note::

   Subsequent runs of the playbook do not generate new SSL certificates
   unless you set ``<servicename>_ssl_self_signed_regen`` to ``true``.

To force a self-signed certificate to regenerate, you can pass the
variable to ``openstack-ansible`` on the command line:

.. code-block:: shell-session

   # openstack-ansible -e "horizon_ssl_self_signed_regen=true" os-horizon-install.yml

To force a self-signed certificate to regenerate with every playbook run,
set the appropriate regeneration option to ``true``. For example, if you
have already run the ``os-horizon`` playbook but want to regenerate the
self-signed certificate, set the ``horizon_ssl_self_signed_regen``
variable to ``true`` in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   horizon_ssl_self_signed_regen: true

.. note::

   Regenerating self-signed certificates replaces the existing
   certificates, whether they are self-signed or user-provided.

User-provided certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~

You can provide your own SSL certificates, keys, and CA certificates for
added trust in highly secure environments. Acquiring certificates from a
trusted certificate authority is outside the scope of this document, but
the `Certificate Management`_ section of the Linux Documentation Project
explains how to create your own certificate authority and sign
certificates.

.. _Certificate Management: http://www.tldp.org/HOWTO/SSL-Certificates-HOWTO/c118.html

Deploying user-provided SSL certificates is a three-step process:

#. Copy your SSL certificate, key, and CA certificate to the deployment
   host.

#. Specify the path to your SSL certificate, key, and CA certificate in
   ``/etc/openstack_deploy/user_variables.yml``.

#. Run the playbook for that service.

For example, to deploy user-provided certificates for RabbitMQ, copy the
certificates to the deployment host, edit
``/etc/openstack_deploy/user_variables.yml``, and set the following three
variables:

.. code-block:: yaml

   rabbitmq_user_ssl_cert: /tmp/example.com.crt
   rabbitmq_user_ssl_key: /tmp/example.com.key
   rabbitmq_user_ssl_ca_cert: /tmp/ExampleCA.crt

Run the playbook to apply the certificates:

.. code-block:: shell-session

   # openstack-ansible rabbitmq-install.yml

The playbook deploys your user-provided SSL certificate, key, and CA
certificate to each RabbitMQ container.

The process is identical for the other services. Replace ``rabbitmq`` in
the configuration variables shown above with ``horizon``, ``haproxy``, or
``keystone`` to deploy user-provided certificates to those services.

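For example, the Horizon equivalents follow the same naming pattern (the
paths are placeholders for your own certificate files):

.. code-block:: yaml

   horizon_user_ssl_cert: /tmp/example.com.crt
   horizon_user_ssl_key: /tmp/example.com.key
   horizon_user_ssl_ca_cert: /tmp/ExampleCA.crt
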
--------------

.. include:: navigation.txt

@ -16,6 +16,3 @@ Production environment

 .. TODO include openstack_user_config.yml examples when done.
-
---------------
-
-.. include:: navigation.txt
@ -1,47 +1,26 @@

-`Home <index.html>`_ OpenStack-Ansible Installation Guide
-
-Chapter 4. Deployment configuration
------------------------------------
+========================
+Deployment configuration
+========================
 
 .. toctree::
+   :maxdepth: 2
 
    configure-initial.rst
-   configure-networking.rst
-   configure-hostlist.rst
+   configure-user-config-examples.rst
    configure-creds.rst
-   configure-openstack.rst
-   configure-sslcertificates.rst
-   configure-configurationintegrity.rst
 
-**Figure 4.1. Installation work flow**
-
-.. image:: figures/workflow-configdeployment.png
+.. figure:: figures/installation-workflow-configure-deployment.png
+   :width: 100%
 
 Ansible references a handful of files containing mandatory and optional
-configuration directives. These files must be modified to define the
-target environment before running the Ansible playbooks. Perform the
-following tasks:
+configuration directives. Modify these files to define the
+target environment before running the Ansible playbooks. Configuration
+tasks include:
 
-- Configure Target host networking to define bridge interfaces and
-  networks
-
-- Configure a list of target hosts on which to install the software
-
-- Configure virtual and physical network relationships for OpenStack
-  Networking (neutron)
-
-- (Optional) Configure the hypervisor
-
-- (Optional) Configure Block Storage (cinder) to use the NetApp back
-  end
-
-- (Optional) Configure Block Storage (cinder) backups.
-
-- (Optional) Configure Block Storage availability zones
-
-- Configure passwords for all services
-
---------------
-
-.. include:: navigation.txt
+* Target host networking to define bridge interfaces and
+  networks.
+* A list of target hosts on which to install the software.
+* Virtual and physical network relationships for OpenStack
+  Networking (neutron).
+* Passwords for all services.
@ -1,19 +1,17 @@

-`Home <index.html>`_ OpenStack-Ansible Installation Guide
-
-Chapter 2. Deployment host
-==========================
-
-**Figure 2.1. Installation work flow**
-
-.. image:: figures/workflow-deploymenthost.png
-
-The OSA installation process recommends one deployment host. The
-deployment host contains Ansible and orchestrates the OpenStack-Ansible
-installation on the target hosts. We recommend using separate deployment and
-target hosts. You could alternatively use one of the target hosts, preferably
-one of the infrastructure variants, as the deployment host. To use a
-deployment host as a target host, follow the steps in `Chapter 3, Target
-hosts <targethosts.html>`_ on the deployment host.
+===============
+Deployment host
+===============
+
+.. figure:: figures/installation-workflow-deploymenthost.png
+   :width: 100%
+
+When installing OpenStack in a production environment, we recommend using a
+separate deployment host which contains Ansible and orchestrates the
+OpenStack-Ansible installation on the target hosts. In a test environment, we
+prescribe using one of the infrastructure target hosts as the deployment host.
+
+To use a target host as a deployment host, follow the steps in `Chapter 3,
+Target hosts <targethosts.html>`_ on the deployment host.
 
 Installing the operating system
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -21,7 +19,7 @@ Installing the operating system

 Install the `Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit
 <http://releases.ubuntu.com/14.04/>`_ operating system on the
 deployment host. Configure at least one network interface to
-access the Internet or suitable local repositories.
+access the internet or suitable local repositories.
 
 Configuring the operating system
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -64,7 +62,7 @@ Install the source and dependencies for the deployment host.

    # git clone -b TAG https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
 
-Replace ``TAG`` with the current stable release tag.
+Replace ``TAG`` with the current stable release tag: |my_conf_val|
 
 #. Change to the ``/opt/openstack-ansible`` directory, and run the
    Ansible bootstrap script:
@ -83,6 +81,3 @@ key pairs. However, if a pass phrase is required, consider using the

 ``ssh-agent`` and ``ssh-add`` commands to temporarily store the
 pass phrase before performing Ansible operations.
-
---------------
-
-.. include:: navigation.txt
@ -1,56 +1,23 @@

-OpenStack-Ansible Installation Guide
-====================================
-
-`Home <index.html>`_ OpenStack-Ansible Installation Guide
-
-Overview
-^^^^^^^^
+==================
+Installation Guide
+==================
 
 .. toctree::
+   :maxdepth: 2
 
    overview.rst
-
-Deployment host
-^^^^^^^^^^^^^^^
-
-.. toctree::
-
    deploymenthost.rst
-
-Target hosts
-^^^^^^^^^^^^
-
-.. toctree::
-
    targethosts.rst
-
-Configuration
-^^^^^^^^^^^^^
-
-.. toctree::
-
    configure.rst
+   installation.rst
+   app.rst
+
+Disclaimer
+~~~~~~~~~~
 
-Installation
-^^^^^^^^^^^^
-
-.. toctree::
-
-   install-foundation.rst
-   install-infrastructure.rst
-   install-openstack.rst
-
-Appendices
-^^^^^^^^^^
-
-.. toctree::
-
-   app-configfiles.rst
-   app-resources.rst
-   app-plumgrid.rst
-   app-nuage.rst
-   app-custom-layouts.rst
+Third-party trademarks and tradenames appearing in this document are the
+property of their respective owners. Such third-party trademarks have
+been printed in caps or initial caps and are used for referential
+purposes only. We do not intend our use or display of other companies'
+tradenames, trademarks, or service marks to imply a relationship with,
+or endorsement or sponsorship of us by these other companies.
@ -1,53 +0,0 @@

`Home <index.html>`_ OpenStack-Ansible Installation Guide

===============================
Chapter 5. Foundation playbooks
===============================

**Figure 5.1. Installation work flow**

.. image:: figures/workflow-foundationplaybooks.png

The main Ansible foundation playbook prepares the target hosts for
infrastructure and OpenStack services and performs the following
operations:

- Performs deployment host initial setup
- Builds containers on target hosts
- Restarts containers on target hosts
- Installs common components into containers on target hosts

Running the foundation playbook
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

   Before continuing, validate the configuration files using the
   guidance in `Checking the integrity of your configuration files`_.

.. _Checking the integrity of your configuration files: ../install-guide/configure-configurationintegrity.html

#. Change to the ``/opt/openstack-ansible/playbooks`` directory.

#. Run the host setup playbook:

   .. code-block:: shell-session

      # openstack-ansible setup-hosts.yml

   Confirm satisfactory completion with zero items unreachable or
   failed:

   .. code-block:: shell-session

      PLAY RECAP ********************************************************************
      ...
      deployment_host    :  ok=18   changed=11   unreachable=0    failed=0

--------------

.. include:: navigation.txt
@ -1,96 +0,0 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
|
||||||
|
|
||||||
===================================
|
|
||||||
Chapter 6. Infrastructure playbooks
|
|
||||||
===================================
|
|
||||||
|
|
||||||
**Figure 6.1. Installation workflow**
|
|
||||||
|
|
||||||
.. image:: figures/workflow-infraplaybooks.png
|
|
||||||
|
|
||||||
The main Ansible infrastructure playbook installs infrastructure
|
|
||||||
services and performs the following operations:
|
|
||||||
|
|
||||||
- Installs Memcached
|
|
||||||
|
|
||||||
- Installs the repository server
|
|
||||||
|
|
||||||
- Installs Galera
|
|
||||||
|
|
||||||
- Installs RabbitMQ
|
|
||||||
|
|
||||||
- Installs Rsyslog
|
|
||||||
|
|
||||||
- Configures Rsyslog
|
|
||||||
|
|
||||||
Running the infrastructure playbook
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
Before continuing, validate the configuration files using the
|
|
||||||
guidance in `Checking the integrity of your configuration files`_.
|
|
||||||
|
|
||||||
.. _Checking the integrity of your configuration files: ../install-guide/configure-configurationintegrity.html
|
|
||||||
|
|
||||||
#. Change to the ``/opt/openstack-ansible/playbooks`` directory.
|
|
||||||
|
|
||||||
#. Run the infrastructure setup playbook:
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
# openstack-ansible setup-infrastructure.yml
|
|
||||||
|
|
||||||
Confirm satisfactory completion with zero items unreachable or
|
|
||||||
failed:
|
|
||||||
|
|
||||||
.. code-block:: shell-session
|
|
||||||
|
|
||||||
PLAY RECAP ********************************************************************
|
|
||||||
...
|
|
||||||
deployment_host : ok=27 changed=0 unreachable=0 failed=0
|
|
||||||
|
|
||||||
Verify the database cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Change to the ``/opt/openstack-ansible/playbooks`` directory.

#. Execute the following command to show the current cluster state:

   .. code-block:: shell-session

      # ansible galera_container -m shell -a "mysql \
      -h localhost -e 'show status like \"%wsrep_cluster_%\";'"

   Example output:

   .. code-block:: shell-session

      node3_galera_container-3ea2cbd3 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     17
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node2_galera_container-49a47d25 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     17
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

      node4_galera_container-76275635 | success | rc=0 >>
      Variable_name             Value
      wsrep_cluster_conf_id     17
      wsrep_cluster_size        3
      wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
      wsrep_cluster_status      Primary

   The ``wsrep_cluster_size`` field indicates the number of nodes
   in the cluster and the ``wsrep_cluster_status`` field indicates
   ``Primary``.
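A healthy cluster reports the same ``wsrep_cluster_size`` on every node. A small sketch (not part of the original guide) that extracts and checks the value; the status text is hard-coded here from the sample output, where in practice you would capture the real query output:

```shell
# Extract wsrep_cluster_size from 'show status' output and verify it
# matches the expected node count (3 in this guide's example).
status='wsrep_cluster_size 3
wsrep_cluster_status Primary'
size=$(printf '%s\n' "$status" | awk '$1 == "wsrep_cluster_size" {print $2}')
if [ "$size" = "3" ]; then
    echo "cluster size OK: $size"
else
    echo "unexpected cluster size: $size"
fi
```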
--------------

.. include:: navigation.txt

@ -1,132 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

==============================
Chapter 7. OpenStack playbooks
==============================

**Figure 7.1. Installation workflow**

.. image:: figures/workflow-openstackplaybooks.png

The ``setup-openstack.yml`` playbook installs OpenStack services and
performs the following operations:

- Installs Identity (keystone)

- Installs the Image service (glance)

- Installs Block Storage (cinder)

- Installs Compute (nova)

- Installs Networking (neutron)

- Installs Orchestration (heat)

- Installs Dashboard (horizon)

- Installs Telemetry (ceilometer and aodh)

- Installs Object Storage (swift)

- Installs Bare Metal (ironic)

Running the OpenStack playbook
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Change to the ``/opt/openstack-ansible/playbooks`` directory.

#. Run the OpenStack setup playbook:

   .. code-block:: shell-session

      # openstack-ansible setup-openstack.yml

   Confirm satisfactory completion with zero items unreachable or
   failed.
Utility container
~~~~~~~~~~~~~~~~~

The utility container provides a space where miscellaneous tools and
software are installed. Tools and objects are placed in a
utility container if they do not require a dedicated container or if it
is impractical to create a new container for a single tool or object.
Utility containers are also used when tools cannot be installed
directly onto a host.

For example, the tempest playbooks are installed on the utility
container because tempest testing does not need a container of its own.

Verifying OpenStack operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Verify basic operation of the OpenStack API and dashboard.

**Procedure 8.1. Verifying the API**

The utility container provides a CLI environment for additional
configuration and testing.

#. Determine the utility container name:

   .. code-block:: shell-session

      # lxc-ls | grep utility
      infra1_utility_container-161a4084

#. Access the utility container:

   .. code-block:: shell-session

      # lxc-attach -n infra1_utility_container-161a4084

#. Source the ``admin`` tenant credentials:

   .. code-block:: shell-session

      # source /root/openrc

#. Run an OpenStack command that uses one or more APIs. For example:

   .. code-block:: shell-session

      # openstack user list
      +----------------------------------+--------------------+
      | ID                               | Name               |
      +----------------------------------+--------------------+
      | 08fe5eeeae314d578bba0e47e7884f3a | alt_demo           |
      | 0aa10040555e47c09a30d2240e474467 | dispersion         |
      | 10d028f9e47b4d1c868410c977abc3df | glance             |
      | 249f9ad93c024f739a17ca30a96ff8ee | demo               |
      | 39c07b47ee8a47bc9f9214dca4435461 | swift              |
      | 3e88edbf46534173bc4fd8895fa4c364 | cinder             |
      | 41bef7daf95a4e72af0986ec0583c5f4 | neutron            |
      | 4f89276ee4304a3d825d07b5de0f4306 | admin              |
      | 943a97a249894e72887aae9976ca8a5e | nova               |
      | ab4f0be01dd04170965677e53833e3c3 | stack_domain_admin |
      | ac74be67a0564722b847f54357c10b29 | heat               |
      | b6b1d5e76bc543cda645fa8e778dff01 | ceilometer         |
      | dc001a09283a404191ff48eb41f0ffc4 | aodh               |
      | e59e4379730b41209f036bbeac51b181 | keystone           |
      +----------------------------------+--------------------+
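One useful check on ``openstack user list`` output is that every core service account exists. This sketch is not part of the original guide; the ``users`` variable holds the sample names from the table, where in practice you would capture the real command's output:

```shell
# Verify that expected service accounts appear in 'openstack user list'
# output (names taken from the sample table in this guide).
users='alt_demo dispersion glance demo swift cinder neutron admin nova
stack_domain_admin heat ceilometer aodh keystone'
missing=0
for svc in keystone glance nova neutron cinder admin; do
    printf '%s\n' $users | grep -qx "$svc" || { echo "missing: $svc"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected accounts present"
```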
**Procedure 8.2. Verifying the dashboard**

#. With a web browser, access the dashboard using the external load
   balancer IP address defined by the ``external_lb_vip_address`` option
   in the ``/etc/openstack_deploy/openstack_user_config.yml`` file. The
   dashboard uses HTTPS on port 443.

#. Authenticate using the username ``admin`` and the password defined by the
   ``keystone_auth_admin_password`` option in the
   ``/etc/openstack_deploy/user_variables.yml`` file.

   .. note::

      Only users with administrator privileges can upload public images
      using the dashboard or CLI.

--------------

.. include:: navigation.txt

@ -212,6 +212,3 @@ Verifying the Dashboard (horizon)

.. TODO Add troubleshooting information to resolve common installation issues

---------------
-
-.. include:: navigation.txt

@ -1,90 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

===========
Host layout
===========

We recommend a layout that contains a minimum of five hosts (or servers):

- Three control plane infrastructure hosts

- One logging infrastructure host

- One compute host

If using the optional Block Storage (cinder) service, we recommend
the use of a sixth host. Block Storage hosts require an LVM volume group named
``cinder-volumes``. See `the section called "Installation
requirements" <overview-requirements.html>`_ and `the section
called "Configuring LVM" <targethosts-prepare.html#configuring-lvm>`_
for more information.

If using the optional Object Storage (swift) service, we recommend the use of
three additional hosts (or some other odd number). See the section
:ref:`configure-swift` for more information.

The hosts are called target hosts because Ansible deploys the OSA
environment within these hosts. We recommend a
deployment host from which Ansible orchestrates the deployment
process. One of the target hosts can function as the deployment host.

Use at least one load balancer to manage the traffic among
the target hosts. You can use any type of load balancer, such as a hardware
appliance or HAProxy. We recommend using physical load balancers for
production environments.

Infrastructure (control plane) target hosts contain the following
services:

- Infrastructure:

  - Galera

  - RabbitMQ

  - Memcached

  - Logging

  - Repository

- OpenStack:

  - Identity (keystone)

  - Image service (glance)

  - Compute management (nova)

  - Networking (neutron)

  - Orchestration (heat)

  - Dashboard (horizon)

  - Object Storage (swift)

Infrastructure logging target hosts contain the following services:

- Rsyslog

Compute target hosts contain the following services:

- Compute virtualization

- Logging

(Optional) Storage target hosts contain the following services:

- Block Storage scheduler

- Block Storage volumes

**Figure 1.1. Host Layout Overview**

.. image:: figures/environment-overview.png

--------------

.. include:: navigation.txt

@ -239,6 +239,3 @@ IP assignments

| log1             | 172.29.236.171 |                   |                |
+------------------+----------------+-------------------+----------------+

---------------
-
-.. include:: navigation.txt

@ -1,30 +1,19 @@

-`Home <index.html>`_ OpenStack-Ansible Installation Guide

=======================
About OpenStack-Ansible
=======================

-OpenStack-Ansible (OSA) uses the Ansible IT automation framework to
-deploy an OpenStack environment on Ubuntu Linux. OpenStack components are
-installed into Linux Containers (LXC) for isolation and ease of
-maintenance.
+OpenStack-Ansible (OSA) uses the `Ansible <https://www.ansible.com/how-ansible-works>`_
+IT automation engine to deploy an OpenStack environment on Ubuntu Linux.
+For isolation and ease of maintenance, you can install OpenStack components
+into Linux containers (LXC).

-This documentation is intended for deployers of the OpenStack-Ansible
-deployment system who are interested in installing an OpenStack environment.
+This documentation is intended for deployers, and walks through an
+OpenStack-Ansible installation for test and production environments.

-Third-party trademarks and tradenames appearing in this document are the
-property of their respective owners. Such third-party trademarks have
-been printed in caps or initial caps and are used for referential
-purposes only. We do not intend our use or display of other companies'
-tradenames, trademarks, or service marks to imply a relationship with,
-or endorsement or sponsorship of us by, these other companies.

Ansible
~~~~~~~

-OpenStack-Ansible Deployment uses a combination of Ansible and
-Linux Containers (LXC) to install and manage OpenStack. Ansible
-provides an automation platform to simplify system and application
+Ansible provides an automation platform to simplify system and application
deployment. Ansible manages systems using Secure Shell (SSH)
instead of unique protocols that require remote daemons or agents.

@ -32,83 +21,30 @@ Ansible uses playbooks written in the YAML language for orchestration.

For more information, see `Ansible - Intro to
Playbooks <http://docs.ansible.com/playbooks_intro.html>`_.

-In this guide, we refer to the host running Ansible playbooks as
-the deployment host and the hosts on which Ansible installs OSA as the
-target hosts.
+In this guide, we refer to two types of hosts:

-A recommended minimal layout for deployments involves five target
-hosts in total: three infrastructure hosts, one compute host, and one
-logging host. All hosts will need at least one networking interface, but
-we recommend multiple bonded interfaces. More information on setting up
-target hosts can be found in `the section called "Host layout"`_.
+* The host running Ansible playbooks is the `deployment host`.
+* The hosts where Ansible installs OpenStack services and infrastructure
+  components are the `target hosts`.

-For more information on physical, logical, and virtual network
-interfaces within hosts see `the section called "Host
-networking"`_.

-.. _the section called "Host layout": overview-hostlayout.html
-.. _the section called "Host networking": overview-hostnetworking.html

-Linux Containers (LXC)
+Linux containers (LXC)
~~~~~~~~~~~~~~~~~~~~~~

Containers provide operating-system level virtualization by enhancing
-the concept of ``chroot`` environments, which isolate resources and file
+the concept of ``chroot`` environments. These isolate resources and file
systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.

-The Linux Containers (LXC) project implements operating system level
+The LXC project implements operating system level
virtualization on Linux using kernel namespaces and includes the
following features:

-- Resource isolation including CPU, memory, block I/O, and network
-  using ``cgroups``.
-
-- Selective connectivity to physical and virtual network devices on the
-  underlying physical host.
-
-- Support for a variety of backing stores including LVM.
-
-- Built on a foundation of stable Linux technologies with an active
-  development and support community.
+* Resource isolation including CPU, memory, block I/O, and network
+  using ``cgroups``.
+* Selective connectivity to physical and virtual network devices on the
+  underlying physical host.
+* Support for a variety of backing stores including LVM.
+* Built on a foundation of stable Linux technologies with an active
+  development and support community.

-Useful commands:
-
-- List containers and summary information such as operational state and
-  network configuration:
-
-  .. code-block:: shell-session
-
-     # lxc-ls --fancy
-
-- Show container details including operational state, resource
-  utilization, and ``veth`` pairs:
-
-  .. code-block:: shell-session
-
-     # lxc-info --name container_name
-
-- Start a container:
-
-  .. code-block:: shell-session
-
-     # lxc-start --name container_name
-
-- Attach to a container:
-
-  .. code-block:: shell-session
-
-     # lxc-attach --name container_name
-
-- Stop a container:
-
-  .. code-block:: shell-session
-
-     # lxc-stop --name container_name
-
---------------
-
-.. include:: navigation.txt
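The removed per-container commands all follow one pattern, ``<tool> --name <container>``. A hypothetical wrapper, shown only to illustrate that pattern, prints the lifecycle commands for a given container instead of executing them (``container_name`` is the same placeholder used in the examples):

```shell
# Print (rather than run) the LXC lifecycle commands for one container.
lxc_lifecycle() {
    for tool in lxc-start lxc-info lxc-attach lxc-stop; do
        echo "$tool --name $1"
    done
}
lxc_lifecycle container_name
```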
@ -1,5 +1,3 @@

-`Home <index.html>`_ OpenStack-Ansible Installation Guide

=========================
Installation requirements
=========================
@ -12,13 +10,14 @@ Installation requirements

CPU requirements
~~~~~~~~~~~~~~~~

-Compute hosts have multi-core processors that have `hardware-assisted
-virtualization extensions`_ available. These extensions provide a significant
-performance boost and improve security in virtualized environments.
+* Compute hosts with multi-core processors that have `hardware-assisted
+  virtualization extensions`_ available. These extensions provide a
+  significant performance boost and improve security in virtualized
+  environments.

-Infrastructure hosts have multi-core processors for best
-performance. Some services, such as MySQL, greatly benefit from additional CPU
-cores and other technologies, such as `Hyper-threading`_.
+* Infrastructure hosts with multi-core processors for best
+  performance. Some services, such as MySQL, greatly benefit from additional
+  CPU cores and other technologies, such as `Hyper-threading`_.

.. _hardware-assisted virtualization extensions: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
.. _Hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
@ -36,35 +35,34 @@ Deployment hosts

Compute hosts
  Disk space requirements vary depending on the total number of instances
  running on each host and the amount of disk space allocated to each instance.
-  Compute hosts have at least 100GB of disk space available at an
-  absolute minimum. Consider disks that provide higher
-  throughput with lower latency, such as SSD drives in a RAID array.
+  Compute hosts need to have at least 100GB of disk space available. Consider
+  disks that provide higher throughput with lower latency, such as SSD drives
+  in a RAID array.

Storage hosts
  Hosts running the Block Storage (cinder) service often consume the most disk
  space in OpenStack environments. As with compute hosts,
  choose disks that provide the highest I/O throughput with the lowest latency
-  for storage hosts. Storage hosts contain 1TB of disk space at a
+  for storage hosts. Storage hosts need to have 1TB of disk space at a
  minimum.

Infrastructure hosts
  The OpenStack control plane contains storage-intensive services, such as
  the Image (glance) service as well as MariaDB. These control plane hosts
-  have 100GB of disk space available at a minimum.
+  need to have 100GB of disk space available at a minimum.

Logging hosts
  An OpenStack-Ansible deployment generates a significant amount of logging.
  Logs come from a variety of sources, including services running in
  containers, the containers themselves, and the physical hosts. Logging hosts
-  need additional disk space to hold live and rotated (historical) log files.
+  need sufficient disk space to hold live and rotated (historical) log files.
  In addition, the storage performance must be enough to keep pace with the
  log traffic coming from various hosts and containers within the OpenStack
  environment. Reserve a minimum of 50GB of disk space for storing
  logs on the logging hosts.

-Hosts that provide Block Storage (cinder) volumes must have logical volume
-manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
+Hosts that provide Block Storage volumes must have logical volume
+manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.

Each control plane host runs services inside LXC containers. The container
@ -83,8 +81,9 @@ Network requirements

network interface. This works for small environments, but it can cause
problems when your environment grows.

-For the best performance, reliability and scalability, deployers should
-consider a network configuration that contains the following features:
+For the best performance, reliability, and scalability in a production
+environment, deployers should consider a network configuration that contains
+the following features:

* Bonded network interfaces: Increases performance and/or reliability
  (dependent on bonding architecture).
@ -93,8 +92,7 @@ consider a network configuration that contains the following features:

hardware, rather than in the server's main CPU.

* Gigabit or 10 Gigabit Ethernet: Supports higher network speeds, which can
-  also improve storage performance when using the Block Storage (cinder)
-  service.
+  also improve storage performance when using the Block Storage service.

* Jumbo frames: Increases network performance by allowing more data to be sent
  in each packet.
@ -108,8 +106,8 @@ minimum requirements:

* Ubuntu 14.04 LTS (Trusty Tahr)

* OSA is tested regularly against the latest Ubuntu 14.04 LTS point
-  releases
+  releases.
-* Linux kernel version ``3.13.0-34-generic`` or later
+* Linux kernel version ``3.13.0-34-generic`` or later.
* For swift storage hosts, you must enable the ``trusty-backports``
  repositories in ``/etc/apt/sources.list`` or ``/etc/apt/sources.list.d/``.
  See the `Ubuntu documentation
@ -124,7 +122,3 @@ minimum requirements:

* Python 2.7 or later

* en_US.UTF-8 as locale

---------------
-
-.. include:: navigation.txt

@ -1,126 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide

========
Security
========

The OpenStack-Ansible project provides several security features for
OpenStack deployments. This section covers those
features and how they can benefit deployers of various sizes.

Security requirements always differ between deployers. If you require
additional security measures, refer to the official
`OpenStack Security Guide`_ for additional resources.

AppArmor
~~~~~~~~

The Linux kernel offers multiple `security modules`_ (LSMs) that set
`mandatory access controls`_ (MAC) on Linux systems. The OpenStack-Ansible
project configures `AppArmor`_. AppArmor is a Linux security module that
provides additional security on LXC container hosts. AppArmor allows
administrators to set specific limits and policies around what resources a
particular application can access. Any activity outside the allowed policies
is denied at the kernel level.

The AppArmor profiles that are applied in OpenStack-Ansible limit the actions
that each LXC container may take on a system. This is done within the
`lxc_hosts role`_.

.. _security modules: https://en.wikipedia.org/wiki/Linux_Security_Modules
.. _mandatory access controls: https://en.wikipedia.org/wiki/Mandatory_access_control
.. _AppArmor: https://en.wikipedia.org/wiki/AppArmor
.. _lxc_hosts role: https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/lxc_hosts/templates/lxc-openstack.apparmor.j2

Encrypted communication
~~~~~~~~~~~~~~~~~~~~~~~

While in transit, data is encrypted between some OpenStack services in
OpenStack-Ansible deployments. Not all communication between all services is
encrypted. For more details on what traffic is encrypted, and how
to configure SSL certificates, refer to the documentation section titled
`Securing services with SSL certificates`_.

.. _Securing services with SSL certificates: configure-sslcertificates.html

Host security hardening
~~~~~~~~~~~~~~~~~~~~~~~

Deployers can apply security hardening to OpenStack infrastructure and compute
hosts using the ``openstack-ansible-security`` role. The purpose of the role
is to apply as many security configurations as possible without disrupting the
operation of an OpenStack deployment.

Refer to the documentation on :ref:`security_hardening` for more information
on the role and how to enable it in OpenStack-Ansible.

Least privilege
~~~~~~~~~~~~~~~

The `principle of least privilege`_ is used throughout OpenStack-Ansible to
limit the damage that could be caused if an attacker gains access to a set of
credentials.

OpenStack-Ansible configures unique username and password combinations for
each service that talks to RabbitMQ and Galera/MariaDB. Each service that
connects to RabbitMQ uses a separate virtual host for publishing and consuming
messages. The MariaDB users for each service are only granted access to the
database(s) that they need to query.

.. _principle of least privilege: https://en.wikipedia.org/wiki/Principle_of_least_privilege

.. _least-access-openstack-services:

Securing network access to OpenStack services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack environments expose many service ports and API endpoints to the
network.

.. note::

   Deployers must limit access to these resources and expose them only
   to trusted users and networks.

The resources within an OpenStack environment can be divided into two groups:

1. Services that users must access directly to consume the OpenStack cloud:

   * Aodh
   * Cinder
   * Ceilometer
   * Glance
   * Heat
   * Horizon
   * Keystone *(excluding the admin API endpoint)*
   * Neutron
   * Nova
   * Swift

2. Services that are only utilized internally by the OpenStack cloud:

   * Keystone (admin API endpoint)
   * MariaDB
   * RabbitMQ

To manage instances, you can access certain public API endpoints, such
as the Nova or Neutron API. Configure firewalls to limit network access to
these services.

Other services, such as MariaDB and RabbitMQ, must be segmented away from
direct user access. You must configure a firewall to only allow
connectivity to these services within the OpenStack environment itself. This
reduces an attacker's ability to query or manipulate data in OpenStack's
critical database and queuing services, especially if one of these services
has a known vulnerability.
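As an illustration of that segmentation, a hypothetical ``iptables-restore`` fragment (not part of OpenStack-Ansible) could restrict the MariaDB and RabbitMQ ports to the management network used in this guide's examples. The CIDR and ports are assumptions that must match your deployment, and OpenStack-Ansible itself does not manage these rules:

```text
*filter
# Assumed: allow MariaDB (3306) and RabbitMQ (5672) only from the
# management network (172.29.236.0/22 in this guide's examples).
-A INPUT -p tcp --dport 3306 -s 172.29.236.0/22 -j ACCEPT
-A INPUT -p tcp --dport 3306 -j DROP
-A INPUT -p tcp --dport 5672 -s 172.29.236.0/22 -j ACCEPT
-A INPUT -p tcp --dport 5672 -j DROP
COMMIT
```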

For more details on recommended network policies for OpenStack clouds, refer to
the `API endpoint process isolation and policy`_ section of the `OpenStack
Security Guide`_.

.. _API endpoint process isolation and policy: http://docs.openstack.org/security-guide/api-endpoints/api-endpoint-configuration-recommendations.html#network-policy
.. _OpenStack Security Guide: http://docs.openstack.org/security-guide

--------------

.. include:: navigation.txt

@ -128,8 +128,3 @@ Configure the ``glance-volume`` container to use the ``br-storage`` and

* ``br-storage`` bridge carries image traffic to compute host.
* ``br-mgmt`` bridge carries Image Service API request traffic.

---------------
-
-.. include:: navigation.txt

@ -1,100 +1,18 @@

=====================
Installation workflow
=====================

This diagram shows the general workflow associated with an
OpenStack-Ansible installation.

.. figure:: figures/installation-workflow-overview.png
   :width: 100%

   **Installation workflow**

#. :doc:`Prepare deployment host <deploymenthost>`
#. :doc:`Prepare target hosts <targethosts>`
#. :doc:`Configure deployment <configure>`
#. :doc:`Run playbooks <installation#run-playbooks>`
#. :doc:`Verify OpenStack operation <installation>`
Network ranges
~~~~~~~~~~~~~~

For consistency, the following IP addresses and hostnames are
referred to in this installation workflow.

+------------------------+-----------------+
| Network                | IP Range        |
+========================+=================+
| Management Network     | 172.29.236.0/22 |
+------------------------+-----------------+
| Tunnel (VXLAN) Network | 172.29.240.0/22 |
+------------------------+-----------------+
| Storage Network        | 172.29.244.0/22 |
+------------------------+-----------------+
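The three /22 ranges in the table can be inspected with Python's standard ``ipaddress`` module; a quick sketch (the network names mirror the table above):

```python
import ipaddress

# Networks from the table above
networks = {
    "Management": ipaddress.ip_network("172.29.236.0/22"),
    "Tunnel (VXLAN)": ipaddress.ip_network("172.29.240.0/22"),
    "Storage": ipaddress.ip_network("172.29.244.0/22"),
}

for name, net in networks.items():
    # Each /22 holds 1024 addresses (1022 usable hosts)
    print(name, net.netmask, net.num_addresses)

# Adjacent ranges must not overlap
nets = list(networks.values())
assert not any(a.overlaps(b) for a in nets for b in nets if a is not b)
```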
IP assignments
~~~~~~~~~~~~~~

+------------------+----------------+-------------------+----------------+
| Host name        | Management IP  | Tunnel (VxLAN) IP | Storage IP     |
+==================+================+===================+================+
| infra1           | 172.29.236.101 | 172.29.240.101    | 172.29.244.101 |
+------------------+----------------+-------------------+----------------+
| infra2           | 172.29.236.102 | 172.29.240.102    | 172.29.244.102 |
+------------------+----------------+-------------------+----------------+
| infra3           | 172.29.236.103 | 172.29.240.103    | 172.29.244.103 |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| net1             | 172.29.236.111 | 172.29.240.111    |                |
+------------------+----------------+-------------------+----------------+
| net2             | 172.29.236.112 | 172.29.240.112    |                |
+------------------+----------------+-------------------+----------------+
| net3             | 172.29.236.113 | 172.29.240.113    |                |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| compute1         | 172.29.236.121 | 172.29.240.121    | 172.29.244.121 |
+------------------+----------------+-------------------+----------------+
| compute2         | 172.29.236.122 | 172.29.240.122    | 172.29.244.122 |
+------------------+----------------+-------------------+----------------+
| compute3         | 172.29.236.123 | 172.29.240.123    | 172.29.244.123 |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| lvm-storage1     | 172.29.236.131 |                   | 172.29.244.131 |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| nfs-storage1     | 172.29.236.141 |                   | 172.29.244.141 |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| ceph-mon1        | 172.29.236.151 |                   | 172.29.244.151 |
+------------------+----------------+-------------------+----------------+
| ceph-mon2        | 172.29.236.152 |                   | 172.29.244.152 |
+------------------+----------------+-------------------+----------------+
| ceph-mon3        | 172.29.236.153 |                   | 172.29.244.153 |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| swift1           | 172.29.236.161 |                   | 172.29.244.161 |
+------------------+----------------+-------------------+----------------+
| swift2           | 172.29.236.162 |                   | 172.29.244.162 |
+------------------+----------------+-------------------+----------------+
| swift3           | 172.29.236.163 |                   | 172.29.244.163 |
+------------------+----------------+-------------------+----------------+
|                  |                |                   |                |
+------------------+----------------+-------------------+----------------+
| log1             | 172.29.236.171 |                   |                |
+------------------+----------------+-------------------+----------------+
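The addressing convention in the table (each host keeps the same final octet across networks, e.g. infra1 is .101 everywhere) can be sketched in Python; ``host_ips`` is a hypothetical helper for illustration, not an OpenStack-Ansible tool:

```python
import ipaddress

def host_ips(last_octet,
             networks=("172.29.236.0/22", "172.29.240.0/22", "172.29.244.0/22")):
    """Derive a host's per-network addresses from its shared final octet."""
    return [str(ipaddress.ip_network(n).network_address + last_octet)
            for n in networks]

# infra1 from the table above
print(host_ips(101))
```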
@ -1,17 +1,12 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
========
|
||||||
|
Overview
|
||||||
=====================
|
========
|
||||||
Chapter 1. Overview
|
|
||||||
=====================
|
|
||||||
|
|
||||||
.. toctree::
|
.. toctree::
|
||||||
|
|
||||||
overview-osa.rst
|
overview-osa.rst
|
||||||
overview-hostlayout.rst
|
overview-host-layout
|
||||||
|
overview-network-arch.rst
|
||||||
|
overview-storage-arch.rst
|
||||||
overview-requirements.rst
|
overview-requirements.rst
|
||||||
overview-workflow.rst
|
overview-workflow.rst
|
||||||
overview-security.rst
|
|
||||||
|
|
||||||
--------------
|
|
||||||
|
|
||||||
.. include:: navigation.txt
|
|
||||||
|
@ -1,173 +0,0 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
|
||||||
|
|
||||||
=====================
|
|
||||||
Designing the network
|
|
||||||
=====================
|
|
||||||
|
|
||||||
This section describes the recommended network architecture.
|
|
||||||
Some components are mandatory, such as the bridges described below. We
|
|
||||||
recommend other components such as a bonded network interface but this
|
|
||||||
is not a requirement.
|
|
||||||
|
|
||||||
.. important::
|
|
||||||
|
|
||||||
Follow the reference design as closely as possible for production deployments.
|
|
||||||
|
|
||||||
Although Ansible automates most deployment operations, networking on
|
|
||||||
target hosts requires manual configuration as it varies
|
|
||||||
dramatically per environment. For demonstration purposes, these
|
|
||||||
instructions use a reference architecture with example network interface
|
|
||||||
names, networks, and IP addresses. Modify these values as needed for your
|
|
||||||
particular environment.
|
|
||||||
|
|
||||||
Bonded network interfaces
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
The reference architecture includes bonded network interfaces, which
|
|
||||||
use multiple physical network interfaces for better redundancy and throughput.
|
|
||||||
Avoid using two ports on the same multi-port network card for the same bonded
|
|
||||||
interface since a network card failure affects both physical network
|
|
||||||
interfaces used by the bond.
|
|
||||||
|
|
||||||
The ``bond0`` interface carries traffic from the containers
|
|
||||||
running your OpenStack infrastructure. Configure a static IP address on the
|
|
||||||
``bond0`` interface from your management network.
|
|
||||||
|
|
||||||
The ``bond1`` interface carries traffic from your virtual machines.
|
|
||||||
Do not configure a static IP on this interface, since neutron uses this
|
|
||||||
bond to handle VLAN and VXLAN networks for virtual machines.
|
|
||||||
|
|
||||||
Additional bridge networks are required for OpenStack-Ansible. These bridges
|
|
||||||
connect the two bonded network interfaces.
|
|
||||||
|
|
||||||
Adding bridges
|
|
||||||
~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
The combination of containers and flexible deployment options require
|
|
||||||
implementation of advanced Linux networking features, such as bridges and
|
|
||||||
namespaces.
|
|
||||||
|
|
||||||
Bridges provide layer 2 connectivity (similar to switches) among
|
|
||||||
physical, logical, and virtual network interfaces within a host. After
|
|
||||||
creating a bridge, the network interfaces are virtually plugged in to
|
|
||||||
it.
|
|
||||||
|
|
||||||
OpenStack-Ansible uses bridges to connect physical and logical network
|
|
||||||
interfaces on the host to virtual network interfaces within containers.
|
|
||||||
|
|
||||||
Namespaces provide logically separate layer 3 environments (similar to
|
|
||||||
routers) within a host. Namespaces use virtual interfaces to connect
|
|
||||||
with other namespaces, including the host namespace. These interfaces,
|
|
||||||
often called ``veth`` pairs, are virtually plugged in between
|
|
||||||
namespaces similar to patch cables connecting physical devices such as
|
|
||||||
switches and routers.
|
|
||||||
|
|
||||||
Each container has a namespace that connects to the host namespace with
|
|
||||||
one or more ``veth`` pairs. Unless specified, the system generates
|
|
||||||
random names for ``veth`` pairs.
|
|
||||||
|
|
||||||
The following image demonstrates how the container network interfaces are
connected to the host's bridges and to the host's physical network
interfaces:

.. image:: figures/networkcomponents.png

Target hosts can contain the following network bridges:

- LXC internal ``lxcbr0``:

  - This bridge is **required**, but LXC configures it automatically.

  - Provides external (typically internet) connectivity to containers.

  - This bridge does not directly attach to any physical or logical
    interfaces on the host because iptables handles connectivity. It
    attaches to ``eth0`` in each container, but the container network
    interface is configurable in ``openstack_user_config.yml`` in the
    ``provider_networks`` dictionary.

- Container management ``br-mgmt``:

  - This bridge is **required**.

  - Provides management of and communication among infrastructure and
    OpenStack services.

  - Manually created and attached to a physical or logical interface,
    typically a ``bond0`` VLAN subinterface. Also attaches to ``eth1``
    in each container. The container network interface is configurable
    in ``openstack_user_config.yml``.

- Storage ``br-storage``:

  - This bridge is *optional*, but recommended.

  - Provides segregated access to Block Storage devices between
    Compute and Block Storage hosts.

  - Manually created and attached to a physical or logical interface,
    typically a ``bond0`` VLAN subinterface. Also attaches to ``eth2``
    in each associated container. The container network interface is
    configurable in ``openstack_user_config.yml``.

- OpenStack Networking tunnel ``br-vxlan``:

  - This bridge is **required**.

  - Provides infrastructure for VXLAN tunnel networks.

  - Manually created and attached to a physical or logical interface,
    typically a ``bond1`` VLAN subinterface. Also attaches to
    ``eth10`` in each associated container. The container network
    interface is configurable in ``openstack_user_config.yml``.

- OpenStack Networking provider ``br-vlan``:

  - This bridge is **required**.

  - Provides infrastructure for VLAN networks.

  - Manually created and attached to a physical or logical interface,
    typically ``bond1``. Attaches to ``eth11`` for VLAN type networks
    in each associated container. It does not contain an IP address
    because it only handles layer 2 connectivity. The container network
    interface is configurable in ``openstack_user_config.yml``.

  - This interface supports flat networks with additional
    bridge configuration. More details are available at
    :ref:`network_configuration`.
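The per-container interfaces listed above are wired to their bridges through the ``provider_networks`` dictionary. As a hedged sketch of a ``br-mgmt`` entry (the keys follow the shape of the shipped ``openstack_user_config.yml.example``; the values here are illustrative, not mandated):

```yaml
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
```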
Network diagrams
~~~~~~~~~~~~~~~~

The following image shows how all of the interfaces and bridges
interconnect to provide network connectivity to the OpenStack deployment:

.. image:: figures/networkarch-container-external.png

OpenStack-Ansible deploys the Compute service on the physical host rather
than in a container. The following image shows how the bridges are used
for network connectivity:

.. image:: figures/networkarch-bare-external.png

The following image shows how the neutron agents work with the
``br-vlan`` and ``br-vxlan`` bridges. OpenStack Networking (neutron) is
configured to use a DHCP agent, an L3 agent, and a Linux Bridge agent
within a ``networking-agents`` container. The image shows how DHCP agents
provide information (IP addresses and DNS servers) to the instances, and
how routing works on the image:

.. image:: figures/networking-neutronagents.png

The following image shows how virtual machines connect to the ``br-vlan``
and ``br-vxlan`` bridges and send traffic to the network outside the host:

.. image:: figures/networking-compute.png
@ -1,21 +1,133 @@

=====================
Network configuration
=====================

Production environment
~~~~~~~~~~~~~~~~~~~~~~

This example allows you to use your own parameters for the deployment.

If you followed the previously proposed design, the following table shows
the bridges to be configured on hosts.

+-------------+-----------------------+-------------------------------------+
| Bridge name | Best configured on    | With a static IP                    |
+=============+=======================+=====================================+
| br-mgmt     | On every node         | Always                              |
+-------------+-----------------------+-------------------------------------+
|             | On every storage node | When component is deployed on metal |
+ br-storage  +-----------------------+-------------------------------------+
|             | On every compute node | Always                              |
+-------------+-----------------------+-------------------------------------+
|             | On every network node | When component is deployed on metal |
+ br-vxlan    +-----------------------+-------------------------------------+
|             | On every compute node | Always                              |
+-------------+-----------------------+-------------------------------------+
|             | On every network node | Never                               |
+ br-vlan     +-----------------------+-------------------------------------+
|             | On every compute node | Never                               |
+-------------+-----------------------+-------------------------------------+

Example for 3 controller nodes and 2 compute nodes
--------------------------------------------------

* VLANs:

  * Host management: Untagged/Native
  * Container management: 10
  * Tunnels: 30
  * Storage: 20

* Networks:

  * Host management: 10.240.0.0/22
  * Container management: 172.29.236.0/22
  * Tunnel: 172.29.240.0/22
  * Storage: 172.29.244.0/22

* Addresses for the controller nodes:

  * Host management: 10.240.0.11 - 10.240.0.13
  * Host management gateway: 10.240.0.1
  * DNS servers: 69.20.0.164 69.20.0.196
  * Container management: 172.29.236.11 - 172.29.236.13
  * Tunnel: no IP (because the IPs exist in the containers, when the
    components are not deployed directly on metal)
  * Storage: no IP (because the IPs exist in the containers, when the
    components are not deployed directly on metal)

* Addresses for the compute nodes:

  * Host management: 10.240.0.21 - 10.240.0.22
  * Host management gateway: 10.240.0.1
  * DNS servers: 69.20.0.164 69.20.0.196
  * Container management: 172.29.236.21 - 172.29.236.22
  * Tunnel: 172.29.240.21 - 172.29.240.22
  * Storage: 172.29.244.21 - 172.29.244.22

.. TODO Update this section. Should this information be moved to the overview
   chapter / network architecture section?

Modifying the network interfaces file
-------------------------------------

After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file.
An example is provided at the `Link to Production Environment`_, based
on the production environment described in the `host layout for production
environment`_.

.. _host layout for production environment: overview-host-layout.html#production-environment
.. _Link to Production Environment: app-targethosts-networkexample.html#production-environment

Test environment
~~~~~~~~~~~~~~~~

This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

* VLANs:

  * Host management: Untagged/Native
  * Container management: 10
  * Tunnels: 30
  * Storage: 20

* Networks:

  * Host management: 10.240.0.0/22
  * Container management: 172.29.236.0/22
  * Tunnel: 172.29.240.0/22
  * Storage: 172.29.244.0/22

* Addresses:

  * Host management: 10.240.0.11
  * Host management gateway: 10.240.0.1
  * DNS servers: 69.20.0.164 69.20.0.196
  * Container management: 172.29.236.11
  * Tunnel: 172.29.240.11
  * Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkconfig.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**

.. image:: figures/networkarch-container-external-example.png

Modifying the network interfaces file
-------------------------------------

After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file.
An example is provided at the `Link to Test Environment`_, based
on the test environment described in the `host layout for testing
environment`_.

.. _Link to Test Environment: app-targethosts-networkexample.html#test-environment
.. _host layout for testing environment: overview-host-layout.html#test-environment
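The example plan above keeps host management (10.240.0.0/22) and container management (172.29.236.0/22) disjoint; a quick consistency check in Python's standard ``ipaddress`` module:

```python
import ipaddress

host_mgmt = ipaddress.ip_network("10.240.0.0/22")
container_mgmt = ipaddress.ip_network("172.29.236.0/22")

# Controller container-management addresses from the example
controllers = ["172.29.236.11", "172.29.236.12", "172.29.236.13"]
assert all(ipaddress.ip_address(a) in container_mgmt for a in controllers)

# The two management ranges must not overlap
assert not host_mgmt.overlaps(container_mgmt)
print("address plan consistent")
```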
@ -1,183 +0,0 @@

=========================================
Simple architecture: A single target host
=========================================

Overview
~~~~~~~~

This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.

- VLANs:

  - Host management: Untagged/Native
  - Container management: 10
  - Tunnels: 30
  - Storage: 20

- Networks:

  - Host management: 10.240.0.0/22
  - Container management: 172.29.236.0/22
  - Tunnel: 172.29.240.0/22
  - Storage: 172.29.244.0/22

- Addresses:

  - Host management: 10.240.0.11
  - Host management gateway: 10.240.0.1
  - DNS servers: 69.20.0.164 69.20.0.196
  - Container management: 172.29.236.11
  - Tunnel: 172.29.240.11
  - Storage: 172.29.244.11

.. _Figure 3.2: targethosts-networkexample.html#fig_hosts-target-network-containerexample

**Figure 3.2. Target host for infrastructure, networking, compute, and
storage services**

.. image:: figures/networkarch-container-external-example.png

Modifying the network interfaces file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After establishing the initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file as
described in this procedure.

Contents of the ``/etc/network/interfaces`` file:

#. Physical interfaces:

   .. code-block:: yaml

      # Physical interface 1
      auto eth0
      iface eth0 inet manual
          bond-master bond0
          bond-primary eth0

      # Physical interface 2
      auto eth1
      iface eth1 inet manual
          bond-master bond1
          bond-primary eth1

      # Physical interface 3
      auto eth2
      iface eth2 inet manual
          bond-master bond0

      # Physical interface 4
      auto eth3
      iface eth3 inet manual
          bond-master bond1

#. Bonding interfaces:

   .. code-block:: yaml

      # Bond interface 0 (physical interfaces 1 and 3)
      auto bond0
      iface bond0 inet static
          bond-slaves eth0 eth2
          bond-mode active-backup
          bond-miimon 100
          bond-downdelay 200
          bond-updelay 200
          address 10.240.0.11
          netmask 255.255.252.0
          gateway 10.240.0.1
          dns-nameservers 69.20.0.164 69.20.0.196

      # Bond interface 1 (physical interfaces 2 and 4)
      auto bond1
      iface bond1 inet manual
          bond-slaves eth1 eth3
          bond-mode active-backup
          bond-miimon 100
          bond-downdelay 250
          bond-updelay 250

#. Logical (VLAN) interfaces:

   .. code-block:: yaml

      # Container management VLAN interface
      iface bond0.10 inet manual
          vlan-raw-device bond0

      # OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
      iface bond1.30 inet manual
          vlan-raw-device bond1

      # Storage network VLAN interface (optional)
      iface bond0.20 inet manual
          vlan-raw-device bond0

#. Bridge devices:

   .. code-block:: yaml

      # Container management bridge
      auto br-mgmt
      iface br-mgmt inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.10
          address 172.29.236.11
          netmask 255.255.252.0
          dns-nameservers 69.20.0.164 69.20.0.196

      # OpenStack Networking VXLAN (tunnel/overlay) bridge
      auto br-vxlan
      iface br-vxlan inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond1.30
          address 172.29.240.11
          netmask 255.255.252.0

      # OpenStack Networking VLAN bridge
      auto br-vlan
      iface br-vlan inet manual
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references untagged interface
          bridge_ports bond1

      # Storage bridge
      auto br-storage
      iface br-storage inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port reference tagged interface
          bridge_ports bond0.20
          address 172.29.244.11
          netmask 255.255.252.0
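The ``netmask 255.255.252.0`` lines in the stanzas above are the dotted form of the /22 prefixes used throughout this guide; Python's standard ``ipaddress`` module confirms the equivalence:

```python
import ipaddress

# br-mgmt address and netmask from the example above
iface = ipaddress.ip_interface("172.29.236.11/255.255.252.0")

print(iface.network)          # the enclosing network in prefix notation
print(iface.with_prefixlen)   # the same interface written as address/22
```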
@ -1,207 +0,0 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
|
||||||
|
|
||||||
======================
|
|
||||||
Reference architecture
|
|
||||||
======================
|
|
||||||
|
|
||||||
Overview
|
|
||||||
~~~~~~~~
|
|
||||||
|
|
||||||
This example allows you to use your own parameters for the deployment.
|
|
||||||
|
|
||||||
The following is a table of the bridges that are be configured on hosts, if
|
|
||||||
you followed the previously proposed design.
|
|
||||||
|
|
||||||
+-------------+-----------------------+-------------------------------------+
|
|
||||||
| Bridge name | Best configured on | With a static IP |
|
|
||||||
+=============+=======================+=====================================+
|
|
||||||
| br-mgmt | On every node | Always |
|
|
||||||
+-------------+-----------------------+-------------------------------------+
|
|
||||||
| | On every storage node | When component is deployed on metal |
|
|
||||||
+ br-storage +-----------------------+-------------------------------------+
|
|
||||||
| | On every compute node | Always |
|
|
||||||
+-------------+-----------------------+-------------------------------------+
|
|
||||||
| | On every network node | When component is deployed on metal |
|
|
||||||
+ br-vxlan +-----------------------+-------------------------------------+
|
|
||||||
| | On every compute node | Always |
|
|
||||||
+-------------+-----------------------+-------------------------------------+
|
|
||||||
| | On every network node | Never |
|
|
||||||
+ br-vlan +-----------------------+-------------------------------------+
|
|
||||||
| | On every compute node | Never |
|
|
||||||
+-------------+-----------------------+-------------------------------------+
|
|
||||||
|
|
||||||
Modifying the network interfaces file
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
After establishing initial host management network connectivity using
|
|
||||||
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file as
|
|
||||||
described in the following procedure.
|
|
||||||
|
|
||||||
**Procedure 4.1. Modifying the network interfaces file**
|
|
||||||
|
|
||||||
#. Physical interfaces:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
# Physical interface 1
|
|
||||||
auto eth0
|
|
||||||
iface eth0 inet manual
|
|
||||||
bond-master bond0
|
|
||||||
bond-primary eth0
|
|
||||||
|
|
||||||
# Physical interface 2
|
|
||||||
auto eth1
|
|
||||||
iface eth1 inet manual
|
|
||||||
bond-master bond1
|
|
||||||
bond-primary eth1
|
|
||||||
|
|
||||||
# Physical interface 3
|
|
||||||
auto eth2
|
|
||||||
iface eth2 inet manual
|
|
||||||
bond-master bond0
|
|
||||||
|
|
||||||
# Physical interface 4
|
|
||||||
auto eth3
|
|
||||||
iface eth3 inet manual
|
|
||||||
bond-master bond1
|
|
||||||
|
|
||||||
#. Bonding interfaces:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
# Bond interface 0 (physical interfaces 1 and 3)
|
|
||||||
auto bond0
|
|
||||||
iface bond0 inet static
|
|
||||||
bond-slaves eth0 eth2
|
|
||||||
bond-mode active-backup
|
|
||||||
bond-miimon 100
|
|
||||||
bond-downdelay 200
|
|
||||||
bond-updelay 200
|
|
||||||
address HOST_IP_ADDRESS
|
|
||||||
netmask HOST_NETMASK
|
|
||||||
gateway HOST_GATEWAY
|
|
||||||
dns-nameservers HOST_DNS_SERVERS
|
|
||||||
|
|
||||||
# Bond interface 1 (physical interfaces 2 and 4)
|
|
||||||
auto bond1
|
|
||||||
iface bond1 inet manual
|
|
||||||
bond-slaves eth1 eth3
|
|
||||||
bond-mode active-backup
|
|
||||||
bond-miimon 100
|
|
||||||
bond-downdelay 250
|
|
||||||
bond-updelay 250
|
|
||||||
|
|
||||||
If not already complete, replace ``HOST_IP_ADDRESS``,
|
|
||||||
``HOST_NETMASK``, ``HOST_GATEWAY``, and ``HOST_DNS_SERVERS``
|
|
||||||
with the appropriate configuration for the host management network.
|
|
||||||
|
|
||||||
#. Logical (VLAN) interfaces:
|
|
||||||
|
|
||||||
.. code-block:: yaml
|
|
||||||
|
|
||||||
# Container management VLAN interface
|
|
||||||
iface bond0.CONTAINER_MGMT_VLAN_ID inet manual
|
|
||||||
vlan-raw-device bond0
|
|
||||||
|
|
||||||
# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
|
|
||||||
iface bond1.TUNNEL_VLAN_ID inet manual
|
|
||||||
vlan-raw-device bond1
|
|
||||||
|
|
||||||
# Storage network VLAN interface (optional)
|
|
||||||
iface bond0.STORAGE_VLAN_ID inet manual
|
|
||||||
vlan-raw-device bond0
|
|
||||||
|
|
||||||
Replace ``*_VLAN_ID`` with the appropriate configuration for the
|
|
||||||
environment.
|
|
||||||
|
|
||||||
#. Bridge devices:

   .. code-block:: shell

      # Container management bridge
      auto br-mgmt
      iface br-mgmt inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.CONTAINER_MGMT_VLAN_ID
          address CONTAINER_MGMT_BRIDGE_IP_ADDRESS
          netmask CONTAINER_MGMT_BRIDGE_NETMASK
          dns-nameservers CONTAINER_MGMT_BRIDGE_DNS_SERVERS

      # OpenStack Networking VXLAN (tunnel/overlay) bridge
      auto br-vxlan
      iface br-vxlan inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond1.TUNNEL_VLAN_ID
          address TUNNEL_BRIDGE_IP_ADDRESS
          netmask TUNNEL_BRIDGE_NETMASK

      # OpenStack Networking VLAN bridge
      auto br-vlan
      iface br-vlan inet manual
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references untagged interface
          bridge_ports bond1

      # Storage bridge (optional)
      auto br-storage
      iface br-storage inet static
          bridge_stp off
          bridge_waitport 0
          bridge_fd 0
          # Bridge port references tagged interface
          bridge_ports bond0.STORAGE_VLAN_ID
          address STORAGE_BRIDGE_IP_ADDRESS
          netmask STORAGE_BRIDGE_NETMASK

   Replace ``*_VLAN_ID``, ``*_BRIDGE_IP_ADDRESS``, ``*_BRIDGE_NETMASK``,
   and ``*_BRIDGE_DNS_SERVERS`` with the appropriate configuration for
   the environment.
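The layout above pairs each bridge with exactly one port, and only ``br-vlan`` rides an untagged interface. That invariant is easy to self-check before deploying; a small sketch in Python where the dictionary merely restates the stanzas above:

```python
# Bridge-to-port mapping restated from the stanzas above.
bridges = {
    "br-mgmt":    "bond0.CONTAINER_MGMT_VLAN_ID",
    "br-vxlan":   "bond1.TUNNEL_VLAN_ID",
    "br-vlan":    "bond1",  # untagged: carries VLAN networks for instances
    "br-storage": "bond0.STORAGE_VLAN_ID",
}

# Only br-vlan should attach to an untagged (no ".VLAN_ID" suffix) port.
untagged = [name for name, port in bridges.items() if "." not in port]
assert untagged == ["br-vlan"]
print(untagged)
```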
Example for 3 controller nodes and 2 compute nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- VLANs:

  - Host management: Untagged/Native
  - Container management: 10
  - Tunnels: 30
  - Storage: 20

- Networks:

  - Host management: 10.240.0.0/22
  - Container management: 172.29.236.0/22
  - Tunnel: 172.29.240.0/22
  - Storage: 172.29.244.0/22

- Addresses for the controller nodes:

  - Host management: 10.240.0.11 - 10.240.0.13
  - Host management gateway: 10.240.0.1
  - DNS servers: 69.20.0.164 69.20.0.196
  - Container management: 172.29.236.11 - 172.29.236.13
  - Tunnel: no IP address (the IP addresses exist in the containers when
    the components are not deployed directly on metal)
  - Storage: no IP address (the IP addresses exist in the containers when
    the components are not deployed directly on metal)

- Addresses for the compute nodes:

  - Host management: 10.240.0.21 - 10.240.0.22
  - Host management gateway: 10.240.0.1
  - DNS servers: 69.20.0.164 69.20.0.196
  - Container management: 172.29.236.21 - 172.29.236.22
  - Tunnel: 172.29.240.21 - 172.29.240.22
  - Storage: 172.29.244.21 - 172.29.244.22
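As a cross-check of the sample plan, the host addresses can be verified against their subnets with Python's standard ``ipaddress`` module; the values below are copied from the lists above:

```python
import ipaddress

# Networks and representative host addresses from the example above.
checks = [
    ("172.29.236.0/22", ["172.29.236.11", "172.29.236.13", "172.29.236.22"]),
    ("172.29.240.0/22", ["172.29.240.21", "172.29.240.22"]),
    ("172.29.244.0/22", ["172.29.244.21", "172.29.244.22"]),
]

for net, hosts in checks:
    network = ipaddress.ip_network(net)
    # Every host address must fall inside its stated /22 network.
    assert all(ipaddress.ip_address(h) in network for h in hosts)
print("all example addresses fall inside their networks")
```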
--------------

.. include:: navigation.txt
@ -1,5 +1,3 @@
-`Home <index.html>`_ OpenStack-Ansible Installation Guide
-
 ==========================
 Preparing the target hosts
 ==========================
@ -17,8 +15,10 @@ to access the internet or suitable local repositories.
 We recommend adding the Secure Shell (SSH) server packages to the
 installation on target hosts without local (console) access.
 
-We also recommend setting your locale to en_US.UTF-8. Other locales may
-work, but they are not tested or supported.
+.. note::
+
+   We also recommend setting your locale to `en_US.UTF-8`. Other locales may
+   work, but they are not tested or supported.
 
 Configuring the operating system
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -50,8 +50,8 @@ Configuring the operating system
 
 #. Reboot the host to activate the changes and use new kernel.
 
-Deploying SSH keys
-~~~~~~~~~~~~~~~~~~
+Deploying Secure Shell (SSH) keys
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Ansible uses SSH for connectivity between the deployment and target hosts.
 
@ -67,17 +67,19 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.
 
 .. _GitHub's documentation on generating SSH keys: https://help.github.com/articles/generating-ssh-keys/
 
-.. warning:: OpenStack-Ansible deployments expect the presence of a
-             ``/root/.ssh/id_rsa.pub`` file on the deployment host.
-             The contents of this file is inserted into an
-             ``authorized_keys`` file for the containers, which is a
-             necessary step for the Ansible playbooks. You can
-             override this behavior by setting the
-             ``lxc_container_ssh_key`` variable to the public key for
-             the container.
+.. warning::
+
+   OpenStack-Ansible deployments expect the presence of a
+   ``/root/.ssh/id_rsa.pub`` file on the deployment host.
+   The contents of this file is inserted into an
+   ``authorized_keys`` file for the containers, which is a
+   necessary step for the Ansible playbooks. You can
+   override this behavior by setting the
+   ``lxc_container_ssh_key`` variable to the public key for
+   the container.
 
-Configuring LVM
-~~~~~~~~~~~~~~~
+Configuring storage
+~~~~~~~~~~~~~~~~~~~
 
 `Logical Volume Manager (LVM)`_ allows a single device to be split into
 multiple logical volumes which appear as a physical storage device to the
@ -92,7 +94,7 @@ their data storage.
    configuration, edit the generated configuration file as needed.
 
 #. To use the optional Block Storage (cinder) service, create an LVM
-   volume group named ``cinder-volumes`` on the Block Storage host. A
+   volume group named ``cinder-volume`` on the Block Storage host. A
    metadata size of 2048 must be specified during physical volume
    creation. For example:
 
@ -107,7 +109,3 @@ their data storage.
    default.
 
 .. _Logical Volume Manager (LVM): https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
-
---------------
-
-.. include:: navigation.txt
@ -1,35 +1,21 @@
-`Home <index.html>`_ OpenStack-Ansible Installation Guide
-
-=======================
-Chapter 3. Target hosts
-=======================
+============
+Target hosts
+============
+
+.. figure:: figures/installation-workflow-targethosts.png
+   :width: 100%
 
 .. toctree::
+   :maxdepth: 2
 
    targethosts-prepare.rst
-   targethosts-network.rst
    targethosts-networkconfig.rst
 
-**Figure 3.1. Installation workflow**
-
-.. image:: figures/workflow-targethosts.png
-
-We recommend at least five target hosts to contain the
-OpenStack environment and supporting infrastructure for the OSA
-installation process. On each target host, perform the following tasks:
-
-- Naming target hosts
-
-- Install the operating system
-
-- Generate and set up security measures
-
-- Update the operating system and install additional software packages
-
-- Create LVM volume groups
-
-- Configure networking devices
-
---------------
-
-.. include:: navigation.txt
+On each target host, perform the following tasks:
+
+* Name the target hosts
+* Install the operating system
+* Generate and set up security measures
+* Update the operating system and install additional software packages
+* Create LVM volume groups
+* Configure networking devices