[DOCS] Further edits and corrections to the draft install guide

Change-Id: I25df03899c3052e86cce8cb3797f605993c25a37
Implements: blueprint osa-install-guide-overhaul
Alexandra 2016-08-25 14:33:55 +01:00 committed by Travis Truman
parent 8252d313be
commit 192efa5d63
26 changed files with 230 additions and 336 deletions

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===================================
Appendix E: Advanced configuration
===================================

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===========================================
Overriding OpenStack configuration defaults
===========================================

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _security_hardening:
==================
@ -34,8 +32,8 @@ an environment using a playbook supplied with OpenStack-Ansible:
# Apply security hardening configurations
openstack-ansible security-hardening.yml
For more details on the security configurations that will be applied, refer to
the `openstack-ansible-security`_ documentation. Review the `Configuration`_
Refer to the `openstack-ansible-security`_ documentation for more details on
the security configurations. Review the `Configuration`_
section of the openstack-ansible-security documentation to find out how to
fine-tune certain security configurations.
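For example, individual hardening settings can be overridden from
``/etc/openstack_deploy/user_variables.yml`` before running the playbook. A
minimal sketch follows; the variable name is illustrative only, so confirm
the exact names against the role's Configuration reference:

.. code-block:: yaml

   ## /etc/openstack_deploy/user_variables.yml
   # Illustrative override; verify the variable name in the
   # openstack-ansible-security documentation before using it.
   security_sshd_permit_root_login: no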

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================================
Securing services with SSL certificates
=======================================

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=================================
Appendix A: Configuration files
=================================

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==================================================
Appendix C: Customizing host and service layouts
==================================================
@ -49,7 +47,7 @@ variables to any component containers on the specific host.
.. note::
Our current recommendation is for new inventory groups, particularly for new
We recommend new inventory groups, particularly for new
services, to be defined using a new file in the ``conf.d/`` directory in
order to manage file size.
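A minimal sketch of such a file follows; the file name, group name, and
addresses are illustrative only:

.. code-block:: yaml

   ## /etc/openstack_deploy/conf.d/example-service.yml (illustrative name)
   example-service_hosts:
     infra01:
       ip: 172.29.236.11
     infra02:
       ip: 172.29.236.12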
@ -99,9 +97,6 @@ groups in this way allows flexible targeting of roles and tasks.
Customizing existing components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Numerous customization scenarios are possible, but three popular ones are
presented here as starting points and also as common recipes.
Deploying directly on hosts
---------------------------
@ -114,7 +109,7 @@ is the same for a service deployed directly onto the host.
.. note::
The ``cinder_volume`` component is also deployed directly on the host by
The ``cinder-volume`` component is also deployed directly on the host by
default. See the ``env.d/cinder.yml`` file for this example.
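As a sketch of the pattern used there (paraphrased, so verify against the
shipped ``env.d/cinder.yml``), deployment on the host is controlled by the
``is_metal`` property of the component's container skeleton entry:

.. code-block:: yaml

   ## env.d/cinder.yml (excerpt, paraphrased)
   container_skel:
     cinder_volumes_container:
       properties:
         # Deploy this component directly on the host instead of in a container.
         is_metal: true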
Omit a service or component from the deployment

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=================================
Appendix B: Additional resources
=================================

View File

@ -1,5 +1,3 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
=====================
Appendix D: Security
=====================

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===========================================
Appendix F: Sample interface configurations
===========================================

View File

@ -2,8 +2,6 @@
Appendices
==========
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. toctree::
:maxdepth: 2
@ -12,4 +10,8 @@ Appendices
app-custom-layouts.rst
app-security.rst
app-advanced-config-options.rst
targethosts-networkexample.rst
app-targethosts-networkexample.rst
--------------
.. include:: navigation.txt

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===============================
Configuring service credentials
===============================
@ -16,7 +14,7 @@ users.
The following options configure passwords for the web interfaces.
- ``keystone_auth_admin_password`` configures the ``admin`` tenant
* ``keystone_auth_admin_password`` configures the ``admin`` tenant
password for both the OpenStack API and dashboard access.
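A minimal sketch, assuming the password is defined in the standard
``/etc/openstack_deploy/user_secrets.yml`` file:

.. code-block:: yaml

   ## /etc/openstack_deploy/user_secrets.yml
   # Set a value explicitly, or leave it empty and generate all unset
   # passwords with the pw-token-gen.py script shipped with OpenStack-Ansible.
   keystone_auth_admin_password: "secrete"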
.. note::

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=================================
Initial environment configuration
=================================
@ -28,13 +26,13 @@ to the deployment of your OpenStack environment.
The file is heavily commented with details about the various options.
There are various types of physical hardware that are able to use containers
deployed by OpenStack-Ansible. For example, hosts listed in the
``shared-infra_hosts`` run containers for many of the shared services that
your OpenStack environment requires. Some of these services include databases,
memcached, and RabbitMQ. There are several other host types that contain
other types of containers and all of these are listed in
``openstack_user_config.yml``.
Configuration in ``openstack_user_config.yml`` defines which hosts
will run the containers and services deployed by OpenStack-Ansible. For
example, hosts listed in the ``shared-infra_hosts`` run containers for many of
the shared services that your OpenStack environment requires. Some of these
services include databases, memcached, and RabbitMQ. There are several other
host types that contain other types of containers and all of these are listed
in ``openstack_user_config.yml``.
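As a sketch, a ``shared-infra_hosts`` entry takes the following form; the
host names and addresses are illustrative only:

.. code-block:: yaml

   ## /etc/openstack_deploy/openstack_user_config.yml (excerpt)
   shared-infra_hosts:
     infra01:
       ip: 172.29.236.11
     infra02:
       ip: 172.29.236.12
     infra03:
       ip: 172.29.236.13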
For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.

View File

@ -1,8 +1,6 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
======================================
``openstack_user_config.yml`` examples
======================================
==================================
openstack_user_config.yml examples
==================================
The ``/etc/openstack_deploy/openstack_user_config.yml`` configuration file
contains parameters to configure target hosts and target host networking.
@ -11,26 +9,12 @@ Examples are provided below for a test environment and production environment.
Test environment
~~~~~~~~~~~~~~~~
.. TODO Parse openstack_user_config.yml examples when done.
.. TODO include openstack_user_config.yml examples when done.
Production environment
~~~~~~~~~~~~~~~~~~~~~~
.. TODO Parse openstack_user_config.yml examples when done.
Setting an MTU on a default lxc bridge (lxcbr0)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To modify a container MTU, set ``lxc_net_mtu`` to a value other than 1500
in ``user_variables.yml``.
.. note::
It is necessary to modify the ``provider_networks`` subsection to
reflect the change.
This defines the MTU on the lxcbr0 interface. If the interface is already
up, an ifdown/ifup is required for the changes to take effect.
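A minimal sketch of the override; the value of 9000 is only an example:

.. code-block:: yaml

   ## /etc/openstack_deploy/user_variables.yml
   # Sets the MTU on the lxcbr0 interface. Update the provider_networks
   # subsection to match this value.
   lxc_net_mtu: 9000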
.. TODO include openstack_user_config.yml examples when done.
--------------

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
========================
Deployment configuration
========================
@ -14,22 +12,17 @@ Deployment configuration
.. figure:: figures/installation-workflow-configure-deployment.png
:width: 100%
Installation workflow
Ansible references a handful of files containing mandatory and optional
configuration directives. These files must be modified to define the
configuration directives. Modify these files to define the
target environment before running the Ansible playbooks. Configuration
tasks include:
- Target host networking to define bridge interfaces and
networks.
- A list of target hosts on which to install the software.
- Virtual and physical network relationships for OpenStack
Networking (neutron).
- Passwords for all services.
* Target host networking to define bridge interfaces and
networks.
* A list of target hosts on which to install the software.
* Virtual and physical network relationships for OpenStack
Networking (neutron).
* Passwords for all services.
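As a sketch of the usual starting point (assuming the standard
``/opt/openstack-ansible`` checkout), copy the example configuration files
into place before editing them:

.. code-block:: console

   # cp -r /opt/openstack-ansible/etc/openstack_deploy /etc/openstack_deploy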
--------------

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
===============
Deployment host
===============
@ -7,8 +5,6 @@ Deployment host
.. figure:: figures/installation-workflow-deploymenthost.png
:width: 100%
**Installation workflow**
When installing OpenStack in a production environment, we recommend using a
separate deployment host which contains Ansible and orchestrates the
OpenStack-Ansible installation on the target hosts. In a test environment, we
@ -23,7 +19,7 @@ Installing the operating system
Install the `Ubuntu Server 14.04 (Trusty Tahr) LTS 64-bit
<http://releases.ubuntu.com/14.04/>`_ operating system on the
deployment host. Configure at least one network interface to
access the Internet or suitable local repositories.
access the internet or suitable local repositories.
Configuring the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -5,11 +5,6 @@ OpenStack-Ansible Installation Guide - DRAFT
This is a draft revision of the install guide for Newton
and is currently under development.
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Table of contents
~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
@ -19,3 +14,10 @@ Table of contents
configure.rst
installation.rst
app.rst
Third-party trademarks and tradenames appearing in this document are the
property of their respective owners. Such third-party trademarks have
been printed in caps or initial caps and are used for referential
purposes only. We do not intend our use or display of other companies'
tradenames, trademarks, or service marks to imply a relationship with,
or endorsement or sponsorship of us by these other companies.

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
============
Installation
============
@ -13,19 +11,19 @@ The installation process requires running three main playbooks:
- The ``setup-infrastructure.yml`` Ansible infrastructure playbook installs
infrastructure services: memcached, the repository server, Galera, RabbitMQ,
Rsyslog, and configures Rsyslog.
and Rsyslog.
- The ``setup-openstack.yml`` OpenStack playbook installs OpenStack services,
including the Identity service (keystone), Image service (glance),
Block Storage (cinder), Compute service (nova), OpenStack Networking
(neutron), Orchestration (heat), Dashboard (horizon), Telemetry service
(ceilometer and aodh), Object Storage service (swift), and OpenStack
bare metal provisioning (ironic).
Bare Metal provisioning (ironic).
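As a sketch of the overall flow (assuming the standard
``/opt/openstack-ansible`` checkout), run the playbooks from the
``playbooks`` directory in the order listed above:

.. code-block:: console

   # cd /opt/openstack-ansible/playbooks
   # openstack-ansible setup-hosts.yml
   # openstack-ansible setup-infrastructure.yml
   # openstack-ansible setup-openstack.yml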
Checking the integrity of your configuration files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before running any playbook, check the integrity of your configuration files:
Before running any playbook, check the integrity of your configuration files.
#. Ensure all files edited in ``/etc/`` are Ansible
YAML compliant. Guidelines can be found here:
@ -154,10 +152,6 @@ Verifying OpenStack operation
.. TODO Add procedures to test different layers of the OpenStack environment
Verify basic operation of the OpenStack API and dashboard.
**Verifying the API**
The utility container provides a CLI environment for additional
configuration and testing.
@ -204,7 +198,8 @@ configuration and testing.
| e59e4379730b41209f036bbeac51b181 | keystone |
+----------------------------------+--------------------+
**Verifying the Dashboard**
Verifying the Dashboard (horizon)
---------------------------------
#. With a web browser, access the Dashboard using the external load
balancer IP address defined by the ``external_lb_vip_address`` option

View File

@ -1,104 +1,98 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _host-layout:
===========
Host layout
===========
The hosts are called target hosts because Ansible deploys the
OpenStack-Ansible environment within these hosts. We recommend a
deployment host from which Ansible orchestrates the deployment
process. One of the target hosts can function as the deployment host.
* Infrastructure:
If the optional Block Storage (cinder) service is used, we recommend
using an additional host. Block Storage hosts require an LVM volume group named
``cinder-volumes``. See `the section called "Installation
requirements" <overview-requirements.html>`_ and `the section
called "Configuring LVM" <targethosts-prepare.html#configuring-lvm>`_
for more information.
* Galera
* RabbitMQ
* Memcached
* Repository
Use at least one load balancer to manage the traffic among
the target hosts. You can use any type of load balancer such as a hardware
appliance or software like `HAProxy <http://www.haproxy.org/>`_. We recommend
using physical load balancers for a production environment.
* (Optional) Load Balancer hosts:
A control plane and infrastructure target host contains the following
services:
* HAProxy
- Infrastructure:
.. note::
- Galera
Use at least one load balancer to manage the traffic among
the target hosts. You can use any type of load balancer such as a hardware
appliance or software like `HAProxy <http://www.haproxy.org/>`_. We recommend
using physical load balancers for a production environment.
- RabbitMQ
* Log aggregation host:
- Memcached
* Rsyslog
- Logging
* OpenStack API services:
- Repository
* Identity (keystone)
* Image service (glance)
* Compute management (nova)
* Networking (neutron)
* Orchestration (heat)
* Dashboard (horizon)
- OpenStack:
* Compute hosts:
- Identity (keystone)
* Compute virtualization (``nova-compute``)
* Networking agent (``neutron-agent``)
- Image service (glance)
* (Optional) Storage hosts:
- Compute management (nova)
* Block Storage scheduler (``cinder-scheduler``)
* Block Storage volumes (``cinder-volume``)
- Networking (neutron)
- Orchestration (heat)
- Dashboard (horizon)
Log aggregation hosts contain the following services:
- Rsyslog
Compute target hosts contain the following services:
- Compute virtualization
- Logging
(Optional) Storage target hosts contain the following services:
- Block Storage scheduler
- Block Storage volumes
.. note::
If the optional Block Storage (cinder) service is used, we recommend
using an additional host. Block Storage hosts require an LVM volume group named
``cinder-volumes``. See `the section called "Installation
requirements" <overview-requirements.html>`_ and `the section
called "Configuring LVM" <targethosts-prepare.html#configuring-lvm>`_
for more information.
Test environment
~~~~~~~~~~~~~~~~
The test environment is a minimal set of components to deploy a working
OpenStack-ansible environment. It consists of three hosts in total: one
control plane and infrastructure host, one compute host and one storage host.
It also has the following features:
OpenStack-Ansible environment. It consists of three hosts in total:
- One Network Interface Card (NIC) for each target host
- No log aggregation target host
- File-backed storage for glance and nova
- LVM-backed cinder
* One control plane and infrastructure host
* One compute host
* One storage host
It contains the following features:
* One Network Interface Card (NIC) for each target host
* No log aggregation host
* File-backed storage for glance and nova
* LVM-backed cinder
.. image:: figures/arch-layout-test.png
:width: 100%
:alt: Test environment host layout
Production environment
~~~~~~~~~~~~~~~~~~~~~~
The layout for a production environment involves seven target
hosts in total: three control plane and infrastructure hosts, two compute
hosts, one storage host and one log aggregation host. It also has the
following features:
The production environment is a larger set of components that deploys
a working OpenStack-Ansible environment. The layout for a production
environment involves seven target hosts in total:
- Bonded NICs.
- NFS/Ceph-backed storage for nova, glance, and cinder.
* Three control plane and infrastructure hosts
* Two compute hosts
* One storage host
* One log aggregation host
All hosts will need at least one networking
It contains the following features:
* Bonded NICs
* NFS/Ceph-backed storage for nova, glance, and cinder
All hosts need at least one networking
interface, but we recommend multiple bonded interfaces.
For more information on physical, logical, and virtual network

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _network-architecture:
====================
@ -93,10 +91,9 @@ Target hosts contain the following network bridges:
- Provides management of and communication among infrastructure and
OpenStack services.
- Manually creates and attaches to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. Also attaches to ``eth1``
in each container. The container network interface
is configurable in ``openstack_user_config.yml``.
- Attaches to a physical or logical interface, typically a ``bond0`` VLAN
subinterface. Also attaches to ``eth1`` in each container. The container
network interface is configurable in ``openstack_user_config.yml``.
- Storage ``br-storage``:
@ -105,10 +102,10 @@ Target hosts contain the following network bridges:
- Provides segregated access to Block Storage devices between
Compute and Block Storage hosts.
- Manually creates and attaches to a physical or logical interface,
typically a ``bond0`` VLAN subinterface. Also attaches to ``eth2``
in each associated container. The container network
interface is configurable in ``openstack_user_config.yml``.
- Attaches to a physical or logical interface, typically a ``bond0`` VLAN
subinterface. Also attaches to ``eth2`` in each associated container.
The container network interface is configurable in
``openstack_user_config.yml``.
- OpenStack Networking tunnel ``br-vxlan``:
@ -116,10 +113,9 @@ Target hosts contain the following network bridges:
- Provides infrastructure for VXLAN tunnel networks.
- Manually creates and attaches to a physical or logical interface,
typically a ``bond1`` VLAN subinterface. Also attaches to
``eth10`` in each associated container. The
container network interface is configurable in
- Attaches to a physical or logical interface, typically a ``bond1`` VLAN
subinterface. Also attaches to ``eth10`` in each associated container.
The container network interface is configurable in
``openstack_user_config.yml``.
- OpenStack Networking provider ``br-vlan``:
@ -128,11 +124,10 @@ Target hosts contain the following network bridges:
- Provides infrastructure for VLAN networks.
- Manually creates and attaches to a physical or logical interface,
typically ``bond1``. Attaches to ``eth11`` for vlan type networks
in each associated container. It does not contain an IP address because
it only handles layer 2 connectivity. The
container network interface is configurable in
- Attaches to a physical or logical interface, typically ``bond1``.
Attaches to ``eth11`` for vlan type networks in each associated
container. It is not assigned an IP address because it only handles
layer 2 connectivity. The container network interface is configurable in
``openstack_user_config.yml``.
- This interface supports flat networks with additional

View File

@ -1,24 +1,14 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================
About OpenStack-Ansible
=======================
OpenStack-Ansible (OSA) uses the Ansible IT automation engine to
deploy an OpenStack environment on Ubuntu Linux. OpenStack components may
be installed into Linux Containers (LXC) for isolation and ease of
maintenance.
OpenStack-Ansible (OSA) uses the `Ansible IT <https://www.ansible.com/how-ansible-works>`_
automation engine to deploy an OpenStack environment on Ubuntu Linux.
For isolation and ease of maintenance, you can install OpenStack components
into Linux containers (LXC).
This documentation is intended for deployers, and walks through an
OpenStack-Ansible installation for a test environment, and a production
environment.
Third-party trademarks and tradenames appearing in this document are the
property of their respective owners. Such third-party trademarks have
been printed in caps or initial caps and are used for referential
purposes only. We do not intend our use or display of other companies'
tradenames, trademarks, or service marks to imply a relationship with,
or endorsement or sponsorship of us by, these other companies.
OpenStack-Ansible installation for test and production environments.
Ansible
~~~~~~~
@ -31,34 +21,33 @@ Ansible uses playbooks written in the YAML language for orchestration.
For more information, see `Ansible - Intro to
Playbooks <http://docs.ansible.com/playbooks_intro.html>`_.
In this guide, we refer to the host running Ansible playbooks as
the deployment host and the hosts on which Ansible installs OpenStack services
and infrastructure components as the target hosts.
In this guide, we refer to two types of hosts:
Linux Containers (LXC)
* The host running Ansible playbooks is the `deployment host`.
* The hosts where Ansible installs OpenStack services and infrastructure
components are the `target hosts`.
Linux containers (LXC)
~~~~~~~~~~~~~~~~~~~~~~
Containers provide operating-system level virtualization by enhancing
the concept of ``chroot`` environments, which isolate resources and file
the concept of ``chroot`` environments. These isolate resources and file
systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.
The Linux Containers (LXC) project implements operating system level
The LXC project implements operating system level
virtualization on Linux using kernel namespaces and includes the
following features:
- Resource isolation including CPU, memory, block I/O, and network
using ``cgroups``.
- Selective connectivity to physical and virtual network devices on the
underlying physical host.
- Support for a variety of backing stores including LVM.
- Built on a foundation of stable Linux technologies with an active
development and support community.
* Resource isolation including CPU, memory, block I/O, and network
using ``cgroups``.
* Selective connectivity to physical and virtual network devices on the
underlying physical host.
* Support for a variety of backing stores including LVM.
* Built on a foundation of stable Linux technologies with an active
development and support community.
--------------

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=========================
Installation requirements
=========================
@ -57,14 +55,14 @@ Logging hosts
An OpenStack-Ansible deployment generates a significant amount of logging.
Logs come from a variety of sources, including services running in
containers, the containers themselves, and the physical hosts. Logging hosts
need additional disk space to hold live and rotated (historical) log files.
need sufficient disk space to hold live and rotated (historical) log files.
In addition, the storage performance must be enough to keep pace with the
log traffic coming from various hosts and containers within the OpenStack
environment. Reserve a minimum of 50GB of disk space for storing
logs on the logging hosts.
Hosts that provide Block Storage volumes must have logical volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
manager (LVM) support. Ensure those hosts have a ``cinder-volumes`` volume
group that OpenStack-Ansible can configure for use with cinder.
Each control plane host runs services inside LXC containers. The container
@ -108,8 +106,8 @@ minimum requirements:
* Ubuntu 14.04 LTS (Trusty Tahr)
* OSA is tested regularly against the latest Ubuntu 14.04 LTS point
releases
* Linux kernel version ``3.13.0-34-generic`` or later
releases.
* Linux kernel version ``3.13.0-34-generic`` or later.
* For swift storage hosts, you must enable the ``trusty-backports``
repositories in ``/etc/apt/sources.list`` or ``/etc/apt/sources.list.d/``
See the `Ubuntu documentation

View File

@ -1,11 +1,9 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
====================
Storage architecture
====================
OpenStack-Ansible supports Block Storage (cinder), Ephemeral storage
(nova), Image service (glance) and Object Storage (swift).
(nova), Image service (glance), and Object Storage (swift).
Block Storage (cinder)
~~~~~~~~~~~~~~~~~~~~~~
@ -13,21 +11,21 @@ Block Storage (cinder)
.. important::
The Block Storage used by each service is typically on a storage system, not
a server. An exception to this is the LVM-backed storage store which is a
reference implementation and is used primarily for test environments but not
for production environments. For non-server storage systems, the cinder-volume
a server. An exception to this is LVM-backed storage; however, this is a
reference implementation and is used primarily for test environments and not
for production environments. For non-server storage systems, the ``cinder-volume``
service interacts with the Block Storage system through an API which
is implemented in the appropriate driver.
When using the cinder LVM driver, you have separate physical hosts with the
volume groups that cinder volumes will use.
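A minimal sketch of an LVM-backed ``storage_hosts`` entry in
``openstack_user_config.yml``; the host name, address, and backend name are
illustrative only:

.. code-block:: yaml

   ## /etc/openstack_deploy/openstack_user_config.yml (excerpt)
   storage_hosts:
     storage01:
       ip: 172.29.236.16
       container_vars:
         cinder_backends:
           limit_container_types: cinder_volume
           lvm:
             volume_backend_name: LVM_iSCSI
             volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
             volume_group: cinder-volumes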
Most of the other external cinder storage (For example: Ceph, EMC, NAS, and
NFS) set up a container inside one of the infra hosts.
NFS) sets up a container inside one of the infra hosts.
.. note::
The ``cinder_volumes`` service cannot run in a highly available configuration.
This is not to be set up on multiple hosts. If you have multiple storage
The ``cinder-volume`` service cannot run in a highly available configuration.
Do not set it up on multiple hosts. If you have multiple storage
backends, set up one per volumes container.
For more information: `<https://specs.openstack.org/openstack/cinder-specs/specs/mitaka/cinder-volume-active-active-support.html>`_.
@ -43,35 +41,36 @@ Configure ``cinder-volumes`` hosts with ``br-storage`` and ``br-mgmt``.
.. note::
It is recommended for production environment that the traffic (storage and
API request) from the hosts be segregated onto its own network.
For production environments, segregate the traffic (storage and
API requests) from the hosts onto dedicated networks.
Object Storage (swift)
~~~~~~~~~~~~~~~~~~~~~~
The swift proxy service container resides on one of the infra hosts whereas the
actual swift objects are stored on separate physical hosts.
The Object Storage proxy service container resides on one of the infra hosts
whereas the actual swift objects are stored on separate physical hosts.
.. important::
The swift proxy service is responsible for storage, retrieval, encoding and
The swift proxy service is responsible for storage, retrieval, encoding, and
decoding of objects from an object server.
Configuring the Object Storage
------------------------------
Configuring the Object Storage service
--------------------------------------
Ensure the swift proxy hosts are configured with ``br-mgmt`` and
``br-storage``. Ensure storage hosts are on ``br-storage``. When using
dedicated replication, also ensure storage hosts are on ``br-repl``.
Ensure the swift proxy hosts are configured with ``br-mgmt`` and
``br-storage``. Ensure storage hosts are configured with ``br-storage``.
When using dedicated replication, also ensure storage hosts are configured
with ``br-repl``.
``br-storage`` handles the retrieval and upload of objects to the storage
nodes. ``br-mgmt`` handles the API requests.
* ``br-storage`` handles the transfer of objects from the storage hosts to
the proxy and vice-versa.
* ``br-repl`` handles the replication of objects between storage hosts,
and is not needed by the proxy containers.
* ``br-storage`` carries traffic for the transfer of objects from the storage
hosts to the proxy and vice-versa.
* ``br-repl`` carries traffic for the replication of objects between storage
hosts, and is not needed by the proxy containers.
.. note::
@ -85,18 +84,18 @@ Ephemeral storage (nova)
The ``nova-scheduler`` container resides on the infra host. The
``nova-scheduler`` service determines on which host (node on
which ``nova-compute`` service is running) a particular VM
should launch.
launches.
The ``nova-api-os-compute`` container resides on the infra host. The
``nova-compute`` service resides on the compute host. The
``nova-api-os-compute`` container handles the client API requests and
passes messages to the ``nova-scheduler``. The API requests may
involve operations that requires scheduling (For example: instance
involve operations that require scheduling (for example, instance
creation or deletion). These messages are then sent to
``nova-conductor`` which in turn pushes messages to ``nova-compute``
on the compute host.
Configuring the ephemeral storage
Configuring the Ephemeral storage
---------------------------------
All nova containers on the infra hosts communicate using the AMQP service over
@ -113,9 +112,8 @@ carry traffic to the storage host. Configure the
.. note::
It is recommended for production environment that the traffic (storage
and API request) from the hosts be segregated onto its own network.
For production environments, segregate the traffic (storage and
API requests) from the hosts onto dedicated networks.
Image service (glance)
~~~~~~~~~~~~~~~~~~~~~~
@ -125,7 +123,7 @@ infra hosts.
Configuring the Image service
-----------------------------
Configure glance-volume container to use the ``br-storage`` and
Configure the ``glance-volume`` container to use the ``br-storage`` and
``br-mgmt`` interfaces.
* ``br-storage`` bridge carries image traffic to compute host.

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=====================
Installation workflow
=====================
@ -16,10 +14,9 @@ OpenStack-Ansible installation.
#. :doc:`Prepare deployment host <deploymenthost>`
#. :doc:`Prepare target hosts <targethosts>`
#. :doc:`Configure deployment <configure>`
#. :doc:`Run playbooks <installation>`
#. :doc:`Run playbooks <installation#run-playbooks>`
#. :doc:`Verify OpenStack operation <installation>`
=======
-----------
.. include:: navigation.txt

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=====================
Network configuration
=====================
@ -35,39 +33,39 @@ bridges that are to be configured on hosts.
Example for 3 controller nodes and 2 compute nodes
--------------------------------------------------
- VLANs:
* VLANs:
- Host management: Untagged/Native
- Container management: 10
- Tunnels: 30
- Storage: 20
* Host management: Untagged/Native
* Container management: 10
* Tunnels: 30
* Storage: 20
- Networks:
* Networks:
- Host management: 10.240.0.0/22
- Container management: 172.29.236.0/22
- Tunnel: 172.29.240.0/22
- Storage: 172.29.244.0/22
* Host management: 10.240.0.0/22
* Container management: 172.29.236.0/22
* Tunnel: 172.29.240.0/22
* Storage: 172.29.244.0/22
- Addresses for the controller nodes:
* Addresses for the controller nodes:
- Host management: 10.240.0.11 - 10.240.0.13
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11 - 172.29.236.13
- Tunnel: no IP (because IP exist in the containers, when the components
aren't deployed directly on metal)
- Storage: no IP (because IP exist in the containers, when the components
aren't deployed directly on metal)
* Host management: 10.240.0.11 - 10.240.0.13
* Host management gateway: 10.240.0.1
* DNS servers: 69.20.0.164 69.20.0.196
* Container management: 172.29.236.11 - 172.29.236.13
* Tunnel: no IP address (because the IP addresses exist in the containers
  when the components are not deployed directly on metal)
* Storage: no IP address (because the IP addresses exist in the containers
  when the components are not deployed directly on metal)
- Addresses for the compute nodes:
* Addresses for the compute nodes:
- Host management: 10.240.0.21 - 10.240.0.22
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.21 - 172.29.236.22
- Tunnel: 172.29.240.21 - 172.29.240.22
- Storage: 172.29.244.21 - 172.29.244.22
* Host management: 10.240.0.21 - 10.240.0.22
* Host management gateway: 10.240.0.1
* DNS servers: 69.20.0.164 69.20.0.196
* Container management: 172.29.236.21 - 172.29.236.22
* Tunnel: 172.29.240.21 - 172.29.240.22
* Storage: 172.29.244.21 - 172.29.244.22
.. TODO Update this section. Should this information be moved to the overview
@ -83,7 +81,7 @@ on the production environment described in `host layout for production
environment`_.
.. _host layout for production environment: overview-host-layout.html#production-environment
.. _Link to Production Environment: targethosts-networkexample.html#production-environment
.. _Link to Production Environment: app-targethosts-networkexample.html#production-environment
Test environment
~~~~~~~~~~~~~~~~
@ -92,39 +90,28 @@ This example uses the following parameters to configure networking on a
single target host. See `Figure 3.2`_ for a visual representation of these
parameters in the architecture.
- VLANs:
* VLANs:
- Host management: Untagged/Native
* Host management: Untagged/Native
* Container management: 10
* Tunnels: 30
* Storage: 20
- Container management: 10
* Networks:
- Tunnels: 30
* Host management: 10.240.0.0/22
* Container management: 172.29.236.0/22
* Tunnel: 172.29.240.0/22
* Storage: 172.29.244.0/22
- Storage: 20
* Addresses:
Networks:
- Host management: 10.240.0.0/22
- Container management: 172.29.236.0/22
- Tunnel: 172.29.240.0/22
- Storage: 172.29.244.0/22
Addresses:
- Host management: 10.240.0.11
- Host management gateway: 10.240.0.1
- DNS servers: 69.20.0.164 69.20.0.196
- Container management: 172.29.236.11
- Tunnel: 172.29.240.11
- Storage: 172.29.244.11
* Host management: 10.240.0.11
* Host management gateway: 10.240.0.1
* DNS servers: 69.20.0.164 69.20.0.196
* Container management: 172.29.236.11
* Tunnel: 172.29.240.11
* Storage: 172.29.244.11
.. _Figure 3.2: targethosts-networkconfig.html#fig_hosts-target-network-containerexample
@ -138,11 +125,11 @@ Modifying the network interfaces file
After establishing initial host management network connectivity using
the ``bond0`` interface, modify the ``/etc/network/interfaces`` file.
An example is provided below on this `Link to Test Environment`_ based
An example is provided below on this `link to Test Environment`_ based
on the test environment described in `host layout for testing
environment`_.
.. _Link to Test Environment: targethosts-networkexample.html#test-environment
.. _Link to Test Environment: app-targethosts-networkexample.html#test-environment
.. _host layout for testing environment: overview-host-layout.html#test-environment
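As a minimal sketch only (the linked example is authoritative), the
container management bridge portion of ``/etc/network/interfaces`` might
look like the following, using the test environment VLAN and address listed
above:

.. code-block:: text

   auto br-mgmt
   iface br-mgmt inet static
       bridge_stp off
       bridge_waitport 0
       bridge_fd 0
       bridge_ports bond0.10
       address 172.29.236.11
       netmask 255.255.252.0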

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
==========================
Preparing the target hosts
==========================
@ -17,8 +15,10 @@ to access the internet or suitable local repositories.
We recommend adding the Secure Shell (SSH) server packages to the
installation on target hosts without local (console) access.
We also recommend setting your locale to en_US.UTF-8. Other locales may
work, but they are not tested or supported.
.. note::
We also recommend setting your locale to `en_US.UTF-8`. Other locales may
work, but they are not tested or supported.
Configuring the operating system
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -50,8 +50,8 @@ Configuring the operating system
#. Reboot the host to activate the changes and use the new kernel.
Deploying SSH keys
~~~~~~~~~~~~~~~~~~
Deploying Secure Shell (SSH) keys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ansible uses SSH for connectivity between the deployment and target hosts.
@ -78,10 +78,8 @@ practices, refer to `GitHub's documentation on generating SSH keys`_.
``lxc_container_ssh_key`` variable to the public key for
the container.
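A minimal sketch of such an override, assuming it is placed in
``/etc/openstack_deploy/user_variables.yml``; the key material shown is a
placeholder:

.. code-block:: yaml

   ## /etc/openstack_deploy/user_variables.yml
   # Public key injected into containers (placeholder value).
   lxc_container_ssh_key: "ssh-rsa AAAAB3Nz...EXAMPLE... deployer@deployment-host"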
Configuring LVM
~~~~~~~~~~~~~~~
.. TODO Change title to Configuring Storage and add content
Configuring storage
~~~~~~~~~~~~~~~~~~~
`Logical Volume Manager (LVM)`_ allows a single device to be split into
multiple logical volumes which appear as a physical storage device to the
@ -96,7 +94,7 @@ their data storage.
configuration, edit the generated configuration file as needed.
#. To use the optional Block Storage (cinder) service, create an LVM
volume group named ``cinder-volumes`` on the Block Storage host. A
volume group named ``cinder-volumes`` on the Block Storage host. A
metadata size of 2048 must be specified during physical volume
creation. For example:
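A minimal sketch, assuming ``/dev/sdb`` is the device dedicated to the
volume group:

.. code-block:: console

   # pvcreate --metadatasize 2048 /dev/sdb
   # vgcreate cinder-volumes /dev/sdb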

View File

@ -1,5 +1,3 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
============
Target hosts
============
@ -7,27 +5,20 @@ Target hosts
.. figure:: figures/installation-workflow-targethosts.png
:width: 100%
**Installation workflow**
.. toctree::
:maxdepth: 2
targethosts-prepare.rst
targethosts-networkconfig.rst
On each target host, perform the following tasks:
- Name the target hosts
- Install the operating system
- Generate and set up security measures
- Update the operating system and install additional software packages
- Create LVM volume groups
- Configure networking devices
* Name the target hosts
* Install the operating system
* Generate and set up security measures
* Update the operating system and install additional software packages
* Create LVM volume groups
* Configure networking devices
--------------