[docs] Applying edits to the OSA install guide: overview

Bug: #1628958
Change-Id: Id41224acc3d54a89b28c9610ec205be51ab5fb51
Ianeta Hutchinson 2016-10-03 15:25:12 -05:00 committed by ianeta hutchinson
parent 5bac2e595d
commit 4f4dbc80eb
6 changed files with 228 additions and 244 deletions


@ -2,6 +2,10 @@
Installation Guide
==================
This guide provides instructions for performing an OpenStack-Ansible
installation in a test environment and a production environment, and is
intended for deployers.
.. toctree::
:maxdepth: 2


@ -4,84 +4,82 @@
Network architecture
====================
Although Ansible automates most deployment operations, networking on target
hosts requires manual configuration because it varies from one use case to
another. This section describes the network configuration that must be
implemented on all target hosts.
For more information about how networking works, see :ref:`network-appendix`.
Host network bridges
~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible uses bridges to connect physical and logical network
interfaces on the host to virtual network interfaces within containers.
Target hosts are configured with the following network bridges:
* LXC internal: ``lxcbr0``

The ``lxcbr0`` bridge is **required**, but OpenStack-Ansible configures it
automatically. It provides external (typically Internet) connectivity to
containers.

This bridge does not directly attach to any physical or logical
interfaces on the host because iptables handles connectivity. It
attaches to ``eth0`` in each container.

The container network interface that the bridge attaches to is configurable
in the ``openstack_user_config.yml`` file, in the ``provider_networks``
dictionary (see the example after this list).
* Container management: ``br-mgmt``

The ``br-mgmt`` bridge is **required**. It provides management of and
communication between the infrastructure and OpenStack services.

The bridge attaches to a physical or logical interface, typically a
``bond0`` VLAN subinterface. It also attaches to ``eth1`` in each container.

The container network interface that the bridge attaches to is configurable
in the ``openstack_user_config.yml`` file.
* Storage: ``br-storage``

The ``br-storage`` bridge is **optional**, but recommended for production
environments. It provides segregated access between OpenStack services and
Block Storage devices.

The bridge attaches to a physical or logical interface, typically a
``bond0`` VLAN subinterface. It also attaches to ``eth2`` in each
associated container.

The container network interface that the bridge attaches to is configurable
in the ``openstack_user_config.yml`` file.
* OpenStack Networking tunnel: ``br-vxlan``

The ``br-vxlan`` bridge is **required** if the environment is configured to
allow projects to create virtual networks. It provides the interface for
virtual (VXLAN) tunnel networks.

The bridge attaches to a physical or logical interface, typically a
``bond1`` VLAN subinterface. It also attaches to ``eth10`` in each
associated container.

The container network interface that the bridge attaches to is configurable
in the ``openstack_user_config.yml`` file.
* OpenStack Networking provider: ``br-vlan``

The ``br-vlan`` bridge is **required**. It provides infrastructure for VLAN
tagged or flat (no VLAN tag) networks.

The bridge attaches to a physical or logical interface, typically ``bond1``.
It attaches to ``eth11`` for VLAN type networks in each associated
container. It is not assigned an IP address because it handles only
layer 2 connectivity.

The container network interface that the bridge attaches to is configurable
in the ``openstack_user_config.yml`` file.
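The mapping between a host bridge and a container interface is defined in the
``provider_networks`` dictionary. As a minimal sketch, an entry for ``br-mgmt``
might look like the following; the keys follow the layout of the
``openstack_user_config.yml.example`` file shipped with OpenStack-Ansible, so
verify them against the example file for your release:

.. code-block:: yaml

   # openstack_user_config.yml (illustrative excerpt)
   global_overrides:
     provider_networks:
       - network:
           group_binds:
             - all_containers
             - hosts
           type: "raw"
           container_bridge: "br-mgmt"
           container_type: "veth"
           container_interface: "eth1"
           ip_from_q: "container"
           is_container_address: true
           is_ssh_address: true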


@ -7,56 +7,51 @@ automation engine to deploy an OpenStack environment on Ubuntu Linux.
For isolation and ease of maintenance, you can install OpenStack components
into Linux containers (LXC).
This documentation is intended for deployers and walks through an
OpenStack-Ansible installation for a test environment and a production
environment.
Ansible
~~~~~~~
Ansible provides an automation platform to simplify system and application
deployment. Ansible manages systems by using Secure Shell (SSH)
instead of unique protocols that require remote daemons or agents.
Ansible uses playbooks written in the YAML language for orchestration.
For more information, see `Ansible - Intro to
Playbooks <http://docs.ansible.com/playbooks_intro.html>`_.
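The following minimal playbook is a hedged illustration of that YAML format
only; it is not part of the OpenStack-Ansible playbooks, and the package name
is an example:

.. code-block:: yaml

   ---
   - name: Example play that keeps time synchronization installed
     hosts: all
     become: true
     tasks:
       - name: Install the chrony package
         apt:
           name: chrony
           state: present

       - name: Ensure the chrony service is running and enabled
         service:
           name: chrony
           state: started
           enabled: true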
This guide refers to the following types of hosts:

* `Deployment host`, which runs the Ansible playbooks
* `Target hosts`, where Ansible installs OpenStack services and infrastructure
components
Linux containers (LXC)
~~~~~~~~~~~~~~~~~~~~~~
Containers provide operating-system level virtualization by enhancing
the concept of ``chroot`` environments. Containers isolate resources and file
systems for a particular group of processes without the overhead and
complexity of virtual machines. They access the same kernel, devices,
and file systems on the underlying host and provide a thin operational
layer built around a set of rules.
The LXC project implements operating-system-level
virtualization on Linux by using kernel namespaces, and it includes the
following features:
* Resource isolation including CPU, memory, block I/O, and network, by
using ``cgroups``
* Selective connectivity to physical and virtual network devices on the
underlying physical host
* Support for a variety of backing stores, including Logical Volume Manager
(LVM)
* Built on a foundation of stable Linux technologies with an active
development and support community
Installation workflow
~~~~~~~~~~~~~~~~~~~~~
The following diagram shows the general workflow of an OpenStack-Ansible
installation.
.. figure:: figures/installation-workflow-overview.png
:width: 100%


@ -8,41 +8,41 @@ network recommendations for running OpenStack in a production environment.
Software requirements
~~~~~~~~~~~~~~~~~~~~~
Ensure that all hosts within an OpenStack-Ansible (OSA) environment meet the
following minimum requirements:
* Ubuntu 16.04 LTS (Xenial Xerus) or Ubuntu 14.04 LTS (Trusty Tahr)
* OpenStack-Ansible is tested regularly against the latest point releases of
Ubuntu 16.04 LTS and Ubuntu 14.04 LTS.
* Linux kernel version ``3.13.0-34-generic`` or later is required.
* For Trusty hosts, you must enable the ``trusty-backports`` repositories in
``/etc/apt/sources.list`` or ``/etc/apt/sources.list.d/``. For detailed
instructions, see the
`Ubuntu documentation <https://help.ubuntu.com/community/
UbuntuBackports#Enabling_Backports_Manually>`_. (A sketch of enabling the
repository with Ansible follows this list.)
* Secure Shell (SSH) client and server that support public key
authentication
* Network Time Protocol (NTP) client for time synchronization (such as
``ntpd`` or ``chronyd``)
* Python 2.7.*x*
* en_US.UTF-8 as the locale
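As a hedged sketch only (not part of the OpenStack-Ansible playbooks), the
``trusty-backports`` pocket mentioned above could be enabled with an Ansible
task similar to the following; the mirror URL and component list are
assumptions to adjust for your environment:

.. code-block:: yaml

   - name: Enable trusty-backports on Trusty target hosts (illustrative)
     hosts: all
     become: true
     tasks:
       - name: Add the trusty-backports pocket to the APT sources
         apt_repository:
           # Adjust the mirror URL and components to match your environment.
           repo: "deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse"
           state: present
           update_cache: true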
CPU recommendations
~~~~~~~~~~~~~~~~~~~
* Compute hosts should have multicore processors with `hardware-assisted
virtualization extensions`_. These extensions provide a significant
performance boost and improve security in virtualized environments.
* Infrastructure (control plane) hosts should have multicore processors for
best performance. Some services, such as MySQL, benefit from additional
CPU cores and other technologies, such as `Hyper-threading`_.
.. _hardware-assisted virtualization extensions: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
.. _Hyper-threading: https://en.wikipedia.org/wiki/Hyper-threading
@ -54,47 +54,46 @@ Different hosts have different disk space requirements based on the
services running on each host:
Deployment hosts
Ten GB of disk space is sufficient for holding the OpenStack-Ansible
repository content and additional required software.
Compute hosts
Disk space requirements depend on the total number of instances running on
each host and the amount of disk space allocated to each instance. Compute
hosts must have a minimum of 1 TB of disk space available. Consider disks
that provide higher I/O throughput with lower latency, such as SSD drives
in a RAID array.
Storage hosts
Hosts running the Block Storage (cinder) service often consume the most disk
space in OpenStack environments. Storage hosts must have a minimum of 1 TB
of disk space. As with Compute hosts, choose disks that provide the highest
I/O throughput with the lowest latency.
Infrastructure (control plane) hosts
The OpenStack control plane contains storage-intensive services, such as the
Image service (glance), and MariaDB. These hosts must have a minimum of
100 GB of disk space.
Logging hosts
An OpenStack-Ansible deployment generates a significant amount of log
information. Logs come from a variety of sources, including services running
in containers, the containers themselves, and the physical hosts. Logging
hosts need sufficient disk space to hold live and rotated (historical) log
files. In addition, the storage performance must be able to keep pace with
the log traffic coming from various hosts and containers within the OpenStack
environment. Reserve a minimum of 50 GB of disk space for storing logs on
the logging hosts.
Hosts that provide Block Storage volumes must have Logical Volume
Manager (LVM) support. Ensure that those hosts have a ``cinder-volume``
volume group that OpenStack-Ansible can configure for use with Block Storage.
Each infrastructure (control plane) host runs services inside LXC containers.
The container file systems are deployed by default on the root file system of
each control plane host. You have the option to deploy those container file
systems into logical volumes by creating a volume group called ``lxc``.
OpenStack-Ansible creates a 5 GB logical volume for the file system of each
container running on the host.
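The following sketch shows one way to create both volume groups with Ansible's
``lvg`` module. The host pattern and device names (``/dev/sdb``, ``/dev/sdc``)
are placeholders, and the tasks are illustrative rather than part of the
OpenStack-Ansible playbooks:

.. code-block:: yaml

   - name: Prepare volume groups for Block Storage and LXC containers (illustrative)
     hosts: all   # limit this to the hosts that actually provide storage
     become: true
     tasks:
       - name: Create the cinder-volume volume group
         lvg:
           vg: cinder-volume
           pvs: /dev/sdb   # placeholder data disk

       - name: Create the optional lxc volume group for container file systems
         lvg:
           vg: lxc
           pvs: /dev/sdc   # placeholder data disk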
Network recommendations
@ -107,18 +106,17 @@ Network recommendations
problems when your environment grows.
For the best performance, reliability, and scalability in a production
environment, consider a network configuration that contains the following
features:
* Bonded network interfaces, which increase performance, reliability, or both
(depending on the bonding architecture)
* VLAN offloading, which increases performance by adding and removing VLAN
tags in hardware, rather than in the server's main CPU
* Gigabit or 10 Gigabit Ethernet, which supports higher network speeds and can
also improve storage performance when using the Block Storage service
* Jumbo frames, which increase network performance by allowing more data to
be sent in each packet


@ -6,84 +6,83 @@ Service architecture
Introduction
~~~~~~~~~~~~
OpenStack-Ansible has a flexible deployment configuration model that can
deploy all services in separate LXC containers or on designated hosts without
using LXC containers, and all network traffic either on a single network
interface or on many network interfaces.

This flexibility enables deployers to choose how to deploy OpenStack in a way
that is appropriate for the specific use case.

The following sections describe the services that OpenStack-Ansible deploys.
Infrastructure services
~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible deploys the following infrastructure components:
* MariaDB with Galera

All OpenStack services require an underlying database. MariaDB with Galera
implements a multimaster database configuration, which simplifies its use
as a highly available database with a simple failover model.
* RabbitMQ
OpenStack services use RabbitMQ for Remote Procedure Calls (RPC).
OSA deploys RabbitMQ in a clustered configuration with all
queues mirrored between the cluster nodes. Because Telemetry (ceilometer)
message queue traffic is quite heavy, for large environments we recommend
separating Telemetry notifications into a separate RabbitMQ cluster.
* Memcached

OpenStack services use Memcached for in-memory caching, which accelerates
transactions. For example, the OpenStack Identity service (keystone) uses
Memcached for caching authentication tokens, which ensures that token
validation does not have to complete a disk or database transaction every
time the service is asked to validate a token.
* Repository
The repository holds the reference set of artifacts that are used for
the installation of the environment. The artifacts include:

* A Git repository that contains a copy of the source code that is used
to prepare the packages for all OpenStack services
* Python wheels for all services that are deployed in the environment
* An apt/yum proxy cache that is used to cache distribution packages
installed in the environment
* Load balancer

At least one load balancer is required for a deployment. OSA
provides a deployment of `HAProxy`_, but we recommend using a physical
load balancing appliance for production environments. (A sketch of the
keepalived virtual IP settings appears after this list.)
* Utility container

If a tool or object does not require a dedicated container, or if it is
impractical to create a new container for a single tool or object, it is
installed in the utility container. The utility container is also used when
tools cannot be installed directly on a host. The utility container is
prepared with the appropriate credentials and clients to administer the
OpenStack environment. It is set to automatically use the internal service
endpoints.
* Log aggregation host

A rsyslog service is optionally set up to receive rsyslog traffic from all
hosts and containers. You can replace rsyslog with any alternative log
receiver.
* Unbound DNS container

Containers running an `Unbound DNS`_ caching service can optionally be
deployed to cache DNS lookups and to handle internal DNS name resolution.
We recommend using this service for large-scale production environments
because the deployment will be significantly faster. If this service is not
used, OSA modifies ``/etc/hosts`` entries for all hosts in the environment.
.. _HAProxy: http://www.haproxy.org/
.. _Unbound DNS: https://www.unbound.net/
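When the built-in HAProxy and keepalived deployment is used, the virtual IP
addresses are typically set in ``/etc/openstack_deploy/user_variables.yml``.
The addresses below are placeholders, and the variable names should be checked
against the haproxy/keepalived documentation for your OpenStack-Ansible
release:

.. code-block:: yaml

   # /etc/openstack_deploy/user_variables.yml (illustrative values)
   haproxy_keepalived_external_vip_cidr: "203.0.113.10/32"   # placeholder public VIP
   haproxy_keepalived_internal_vip_cidr: "172.29.236.10/32"  # placeholder internal VIP
   haproxy_keepalived_external_interface: br-vlan
   haproxy_keepalived_internal_interface: br-mgmt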
@ -91,7 +90,7 @@ The following infrastructure components are deployed by OpenStack-Ansible:
OpenStack services
~~~~~~~~~~~~~~~~~~
OSA can deploy the following OpenStack services:
* Bare Metal (`ironic`_)
* Block Storage (`cinder`_)


@ -2,9 +2,6 @@
Storage architecture
====================
OpenStack has multiple storage realms to consider:
* Block Storage (cinder)
@ -16,23 +13,23 @@ Block Storage (cinder)
~~~~~~~~~~~~~~~~~~~~~~
The Block Storage (cinder) service manages volumes on storage devices in an
environment. In a production environment, the device presents storage via a
storage protocol (for example, NFS, iSCSI, or Ceph RBD) to a storage network
(``br-storage``) and a storage management API to the management network
(``br-mgmt``). Instances are connected to the volumes via the storage network
by the hypervisor on the Compute host.

The following diagram illustrates how Block Storage is connected to instances.
.. figure:: figures/production-storage-cinder.png
:width: 600px
The diagram shows the following steps.
+----+---------------------------------------------------------------------+
| 1. | A volume is created by the assigned ``cinder-volume`` service |
| | using the appropriate `cinder driver`_. The volume is created by |
| | using an API that is presented to the management network. |
+----+---------------------------------------------------------------------+
| 2. | After the volume is created, the ``nova-compute`` service connects |
| | the Compute host hypervisor to the volume via the storage network. |
@ -43,29 +40,28 @@ diagram illustrates how Block Storage is connected to instances.
.. important::
The `LVMVolumeDriver`_ is designed as a reference driver implementation,
which we do not recommend for production usage. The LVM storage back-end
is a single-server solution that provides no high-availability options.
If the server becomes unavailable, then all volumes managed by the
``cinder-volume`` service running on that server become unavailable.
Upgrading the operating system packages (for example, kernel or iSCSI)
on the server causes storage connectivity outages because the iSCSI service
(or the host) restarts.
Because of a `limitation with container iSCSI connectivity`_, you must deploy
the ``cinder-volume`` service directly on a physical host (not into a
container) when using storage back ends that connect via iSCSI. This includes
the `LVMVolumeDriver`_ and many of the drivers for commercial storage devices.
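One way to express this in OpenStack-Ansible is an ``env.d`` override that
marks the ``cinder-volume`` component as deployed on metal. The sketch below
assumes the ``cinder_volumes_container`` skeleton name and the ``is_metal``
property used by releases of this era; confirm the exact file and keys against
``env.d/cinder.yml`` in your deployment:

.. code-block:: yaml

   # /etc/openstack_deploy/env.d/cinder.yml (illustrative override)
   container_skel:
     cinder_volumes_container:
       properties:
         is_metal: true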
.. note::
The ``cinder-volume`` service does not run in a highly available
configuration. When the ``cinder-volume`` service is configured to manage
volumes on the same back end from multiple hosts or containers, one service
is scheduled to manage the life cycle of the volume until an alternative
service is assigned to do so. This assignment can be made through the
`cinder-manage CLI tool`_. This configuration might change if
`cinder volume active-active support spec`_ is implemented.
.. _cinder driver: http://docs.openstack.org/developer/cinder/drivers.html
@ -78,7 +74,7 @@ Object Storage (swift)
~~~~~~~~~~~~~~~~~~~~~~
The Object Storage (swift) service implements a highly available, distributed,
eventually consistent object/blob store that is accessible via HTTP/HTTPS.
The following diagram illustrates how data is accessed and replicated.
@ -86,53 +82,50 @@ The following diagram illustrates how data is accessed and replicated.
:width: 600px
The ``swift-proxy`` service is accessed by clients via the load balancer
on the management network (``br-mgmt``). The ``swift-proxy`` service
communicates with the Account, Container, and Object services on the
Object Storage hosts via the storage network (``br-storage``). Replication
between the Object Storage hosts is done via the replication network
(``br-repl``).
Image storage (glance)
~~~~~~~~~~~~~~~~~~~~~~
The Image service (glance) can be configured to store images on a variety of
storage back ends supported by the `glance_store drivers`_.
.. important::
When the File System store is used, the Image service has no mechanism of
its own to replicate the image between Image service hosts. We recommend
using a shared storage back end (via a file system mount) to ensure that
all ``glance-api`` services have access to all images. Doing so prevents
losing access to images when an infrastructure (control plane) host is lost.
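As one hedged example of such a shared back end, the ``os_glance`` role can
mount an NFS export for the File System store through ``user_variables.yml``
overrides similar to the following; the server address and paths are
placeholders, and the variable names (``glance_default_store``,
``glance_nfs_client``) should be verified against the role defaults for your
release:

.. code-block:: yaml

   # /etc/openstack_deploy/user_variables.yml (illustrative values)
   glance_default_store: file
   glance_nfs_client:
     - server: "172.29.244.100"           # placeholder NFS server address
       remote_path: "/srv/glance-images"  # placeholder export path
       local_path: "/var/lib/glance/images"
       type: "nfs"
       options: "_netdev,auto"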
The following diagram illustrates the interactions between the Image service,
the storage device, and the ``nova-compute`` service when an instance is
created.
.. figure:: figures/production-storage-glance.png
:width: 600px
The diagram shows the following steps.
+----+---------------------------------------------------------------------+
| 1. | When a client requests an image, the ``glance-api`` service |
| | accesses the appropriate store on the storage device over the |
| | storage network (``br-storage``) and pulls it into its cache. When |
| | the same image is requested again, it is given to the client |
| | directly from the cache. |
+----+---------------------------------------------------------------------+
| 2. | When an instance is scheduled for creation on a Compute host, the |
| | ``nova-compute`` service requests the image from the ``glance-api`` |
| | service over the management network (``br-mgmt``). |
+----+---------------------------------------------------------------------+
| 3. | After the image is retrieved, the ``nova-compute`` service stores |
| | the image in its own image cache. When another instance is created |
| | with the same image, the image is retrieved from the local base |
| | image cache. |
+----+---------------------------------------------------------------------+
.. _glance_store drivers: http://docs.openstack.org/developer/glance_store/drivers/
@ -145,33 +138,30 @@ with root or ephemeral disks, the ``nova-compute`` service manages these
allocations using its ephemeral disk storage location.
In many environments, the ephemeral disks are stored on the Compute host's
local disks, but for production environments we recommend that the Compute
hosts be configured to use a shared storage subsystem instead. A shared
storage subsystem allows quick, live instance migration between Compute
hosts, which is useful when the administrator needs to perform maintenance
on the Compute host and wants to evacuate it. Using a shared storage
subsystem also allows the recovery of instances when a Compute host goes
offline. The administrator is able to evacuate the instance to another
Compute host and boot it up again.

The following diagram illustrates the interactions between the storage
device, the Compute host, the hypervisor, and the instance.
.. figure:: figures/production-storage-nova.png
:width: 600px
The diagram shows the following steps.
+----+---------------------------------------------------------------------+
| 1. | The Compute host is configured with access to the storage device. |
| | The Compute host accesses the storage space via the storage network |
| | (``br-storage``) by using a storage protocol (for example, NFS, |
| | iSCSI, or Ceph RBD). |
+----+---------------------------------------------------------------------+
| 2. | The ``nova-compute`` service configures the hypervisor to present |
| | the allocated instance disk as a device to the instance. |
+----+---------------------------------------------------------------------+
| 3. | The hypervisor presents the disk as a device to the instance. |
+----+---------------------------------------------------------------------+