Merge "DOCS: Configuration section - cleanup"

This commit is contained in:
Jenkins 2016-05-11 20:04:15 +00:00 committed by Gerrit Code Review
commit 8c927c9fe2
13 changed files with 301 additions and 316 deletions

View File

@ -1,21 +1,19 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
=======================================
Configuring the Aodh service (optional)
=======================================
The Telemetry (ceilometer) alarming services perform the following functions:
- Creates an API endpoint for controlling alarms.
- Allows you to set alarms based on threshold evaluation for a collection of samples.
Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to running
the Aodh playbooks. To specify the connection data, edit the
``user_variables.yml`` file (see section `Configuring the user data`_
below).
Setting up a MongoDB database for Aodh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -27,14 +25,14 @@ Setting up a MongoDB database for Aodh
2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the
management interface of the node running Aodh:
.. code-block:: ini
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:
.. code-block:: ini
@ -48,7 +46,7 @@ Setting up a MongoDB database for Aodh
# service mongodb restart
5. Create the Aodh database:
.. code-block:: console
@ -72,11 +70,13 @@ Setting up a MongoDB database for Aodh
}
.. note::
Ensure ``AODH_DBPASS`` matches the
``aodh_container_db_password`` in the
``/etc/openstack_deploy/user_secrets.yml`` file. This
allows Ansible to configure the connection string within
the Aodh configuration files.
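As an illustrative sketch, the corresponding entry in
``/etc/openstack_deploy/user_secrets.yml`` is a single variable; the value
shown here is only a placeholder for the secret your deployment generates:

.. code-block:: yaml

   aodh_container_db_password: <randomly generated password>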
Configuring the hosts
@ -89,7 +89,7 @@ the example included in the
.. code-block:: yaml
# The infra nodes that the Aodh services run on.
metering-alarm_hosts:
infra1:
ip: 172.20.236.111
@ -100,47 +100,42 @@ the example included in the
The ``metering-alarm_hosts`` provides several services:
- An API server (``aodh-api``): Runs on one or more central management
servers to provide access to the alarm information in the
data store.
- An alarm evaluator (``aodh-evaluator``): Runs on one or more central
management servers to determine when alarms fire due to the
associated statistic trend crossing a threshold over a sliding
time window.
- A notification listener (``aodh-listener``): Runs on a central
management server and fires alarms based on defined rules against
events captured by ceilometer's notification agents.
- An alarm notifier (``aodh-notifier``): Runs on one or more central
management servers to allow the setting of alarms based on the
threshold evaluation for a collection of samples.
These services communicate by using the OpenStack messaging bus. Only
the API server has access to the data store.
Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the following configurations in the
``/etc/openstack_deploy/user_variables.yml`` file:
- The type of database backend Aodh uses. Currently, only MongoDB
is supported: ``aodh_db_type: mongodb``
- The IP address of the MongoDB host: ``aodh_db_ip: localhost``
- The port of the MongoDB service: ``aodh_db_port: 27017``
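For example, a minimal set of these options in ``user_variables.yml`` might
look like the following sketch (the IP address is illustrative; use the
address of the host running MongoDB):

.. code-block:: yaml

   aodh_db_type: mongodb
   aodh_db_ip: 172.20.236.111
   aodh_db_port: 27017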
Run the ``os-aodh-install.yml`` playbook. If deploying a new OpenStack
(instead of only Aodh), run ``setup-openstack.yml``.
The Aodh playbooks run as part of this playbook.
--------------

View File

@ -1,10 +1,9 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Telemetry (ceilometer) service (optional)
=========================================================
The Telemetry module (ceilometer) performs the following functions:
- Efficiently polls metering data related to OpenStack services.
@ -14,17 +13,17 @@ The Telemetry module (Ceilometer) performs the following functions:
.. note::
As of Liberty, the alarming functionality is in a separate component.
The metering-alarm containers handle the functionality through aodh
services. For configuring these services, see the aodh docs:
http://docs.openstack.org/developer/aodh/
Configure a MongoDB backend prior to running the ceilometer playbooks.
Specify the connection data in the ``user_variables.yml`` file
(see section `Configuring the user data`_ below).
Setting up a MongoDB database for ceilometer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install the MongoDB package:
@ -33,31 +32,32 @@ Setting up a MongoDB database for Ceilometer
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change the ``bind_ip`` to the management
interface of the node running the service:
.. code-block:: ini
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:
.. code-block:: ini
smallfiles = true
4. Restart the MongoDB service:
.. code-block:: console
# service mongodb restart
5. Create the ceilometer database:
.. code-block:: console
# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'
This returns:
.. code-block:: console
@ -73,16 +73,18 @@ Setting up a MongoDB database for Ceilometer
"_id" : ObjectId("5489c22270d7fad1ba631dc3")
}
.. note::
Ensure ``CEILOMETER_DBPASS`` matches the
``ceilometer_container_db_password`` in the
``/etc/openstack_deploy/user_secrets.yml`` file. This is
how Ansible knows how to configure the connection string
within the ceilometer configuration files.
Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~
Configure ceilometer by specifying the ``metering-compute_hosts`` and
``metering-infra_hosts`` directives in the
``/etc/openstack_deploy/conf.d/ceilometer.yml`` file. Below is the
example included in the
@ -91,16 +93,16 @@ example included in the
.. code-block:: bash
# The compute host that the ceilometer compute agent runs on
metering-compute_hosts:
compute1:
ip: 172.20.236.110
# The infra node that the central agents runs on
metering-infra_hosts:
infra1:
ip: 172.20.236.111
# Adding more than one host requires further configuration for ceilometer
# to work properly.
infra2:
ip: 172.20.236.112
infra3:
@ -124,7 +126,7 @@ services:
(See HA section below).
- A collector (ceilometer-collector): Runs on central management
server(s) and dispatches data to a data store
or external consumer without modification.
- An API server (ceilometer-api): Runs on one or more central
@ -135,26 +137,28 @@ Configuring the hosts for an HA deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ceilometer supports running the polling and notification agents in an
HA deployment.
The Tooz library provides the coordination within the groups of service
instances. Tooz can be used with several backends. At the time of this
writing, the following backends are supported:
- Zookeeper: Recommended solution by the Tooz project.
- Redis: Recommended solution by the Tooz project.
- Memcached: Recommended for testing.
.. important::
The OpenStack-Ansible project does not deploy these backends.
The backends must exist before deploying the ceilometer service.
Achieve HA by configuring the proper directives in ``ceilometer.conf`` using
``ceilometer_ceilometer_conf_overrides`` in the ``user_variables.yml`` file.
The ceilometer admin guide [1] details the options used in
``ceilometer.conf`` for an HA deployment. The following is an example of
``ceilometer_ceilometer_conf_overrides``:
.. code-block:: yaml
@ -168,12 +172,10 @@ options used in ceilometer.conf for an HA deployment. An example
Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the following configurations in the
``/etc/openstack_deploy/user_variables.yml`` file:
- The type of database backend ceilometer uses. Currently, only
MongoDB is supported: ``ceilometer_db_type: mongodb``
- The IP address of the MongoDB host: ``ceilometer_db_ip:
@ -202,11 +204,9 @@ These configurations are listed below, along with a description:
- This configures keystone to send notifications to the message bus:
``keystone_ceilometer_enabled: False``
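As a sketch, these options in ``user_variables.yml`` might look like the
following (the IP address is illustrative; point it at your MongoDB host):

.. code-block:: yaml

   ceilometer_db_type: mongodb
   ceilometer_db_ip: 172.20.236.111
   keystone_ceilometer_enabled: False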
Run the ``os-ceilometer-install.yml`` playbook. If deploying a new OpenStack
(instead of only ceilometer), run ``setup-openstack.yml``. The
ceilometer playbooks run as part of this playbook.
References

View File

@ -1,7 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring HAProxy (optional)
==============================
HAProxy provides load balancing services and SSL termination when hardware
load balancers are not available for high availability architectures deployed
@ -9,21 +9,21 @@ by OpenStack-Ansible. The default HAProxy configuration provides highly-
available load balancing services via keepalived if there is more than one
host in the ``haproxy_hosts`` group.
.. important::
Ensure you review the services exposed by HAProxy and limit access
to these services to trusted users and networks only. For more details,
refer to the :ref:`least-access-openstack-services` section.
.. note::
For a successful installation, you require a load balancer. You may
prefer to make use of hardware load balancers instead of HAProxy. If hardware
load balancers are in use, then implement the load balancing configuration for
services prior to executing the deployment.
To deploy HAProxy within your OpenStack-Ansible environment, define target
hosts to run HAProxy:
.. code-block:: yaml
@ -41,21 +41,21 @@ There is an example configuration file already provided in
in an OpenStack-Ansible deployment.
Making HAProxy highly-available
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If multiple hosts are found in the inventory, HAProxy is deployed in a
highly-available manner by installing keepalived.
Edit the ``/etc/openstack_deploy/user_variables.yml`` file to skip the deployment
of keepalived alongside HAProxy when installing HAProxy on multiple hosts.
To do this, set the following:
.. code-block:: yaml
haproxy_use_keepalived: False
To make keepalived work, edit at least the following variables
in ``user_variables.yml``:
.. code-block:: yaml
@ -66,51 +66,52 @@ Otherwise, edit at least the following variables in
- ``haproxy_keepalived_internal_interface`` and
``haproxy_keepalived_external_interface`` represent the interfaces on the
deployed node where the keepalived nodes bind the internal and external
vip. By default, the ``br-mgmt`` interface is used.
- On the interface listed above, ``haproxy_keepalived_internal_vip_cidr`` and
``haproxy_keepalived_external_vip_cidr`` represent the internal and
external (respectively) vips (with their prefix length).
- Set additional variables to adapt keepalived in your deployment.
Refer to the ``user_variables.yml`` file for more descriptions. A combined
example of these variables follows this list.
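As a minimal sketch (interface names and addresses are illustrative and
depend on your environment), these variables might look like:

.. code-block:: yaml

   haproxy_keepalived_external_vip_cidr: "192.168.0.4/25"
   haproxy_keepalived_internal_vip_cidr: "172.29.236.54/16"
   haproxy_keepalived_external_interface: br-flat
   haproxy_keepalived_internal_interface: br-mgmt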
To always deploy (or upgrade to) the latest stable version of keepalived,
edit the ``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
keepalived_use_latest_stable: True
The HAProxy playbook reads the ``vars/configs/keepalived_haproxy.yml``
variable file and provides its content to the keepalived role for
keepalived master and backup nodes.
Keepalived pings a public IP address to check its status. The default
address is ``193.0.14.129``. To change this default,
set the ``keepalived_ping_address`` variable in the
``user_variables.yml`` file.
.. note::
The keepalived test works with IPv4 addresses only.
You can define additional variables to adapt keepalived to your
deployment. Refer to the ``user_variables.yml`` file for
more information. Optionally, you can use your own variable file.
For example:
.. code-block:: yaml
haproxy_keepalived_vars_file: /path/to/myvariablefile.yml
Configuring keepalived ping checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible configures keepalived with a check script that pings an
external resource and uses that ping to determine if a node has lost network
connectivity. If the pings fail, keepalived fails over to another node and
HAProxy serves requests there.
The destination address, ping count and ping interval are configurable via
Ansible variables in ``/etc/openstack_deploy/user_variables.yml``:
@ -121,17 +122,17 @@ Ansible variables in ``/etc/openstack_deploy/user_variables.yml``:
keepalived_ping_count: # ICMP packets to send (per interval)
keepalived_ping_interval: # How often ICMP packets are sent
By default, OpenStack-Ansible configures keepalived to ping one of the root
DNS servers operated by RIPE. You can change this IP address to a different
external address or another address on your internal network.
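For example, to ping your own gateway instead of the default address, you
might set the following in ``user_variables.yml`` (values are illustrative):

.. code-block:: yaml

   keepalived_ping_address: "192.168.0.1"
   keepalived_ping_count: 2
   keepalived_ping_interval: 10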
Securing HAProxy communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure HAProxy
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are used with HAProxy. However, you can
provide your own certificates by using the following Ansible variables:
.. code-block:: yaml
@ -140,7 +141,7 @@ provide their own certificates by using the following Ansible variables:
haproxy_user_ssl_ca_cert: # Path to CA certificate
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with HAProxy.
.. _Securing services with SSL certificates: configure-sslcertificates.html

View File

@ -1,15 +1,14 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Dashboard (horizon) (optional)
==============================================
Customize your horizon deployment in ``/etc/openstack_deploy/user_variables.yml``.
Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure Dashboard (horizon)
communications with self-signed or user-provided SSL certificates.
Refer to `Securing services with SSL certificates`_ for available configuration
@ -17,10 +16,10 @@ options.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Configuring a horizon customization module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible supports deployment of a horizon `customization module`_.
After building your customization module, configure the ``horizon_customization_module`` variable
with a path to your module.
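As a sketch, the variable in ``user_variables.yml`` might look like the
following (the path is an example; point it at your own module):

.. code-block:: yaml

   horizon_customization_module: /etc/openstack_deploy/horizon_customization.py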

View File

@ -1,18 +1,18 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Identity service (keystone) (optional)
======================================================
Customize your keystone deployment in ``/etc/openstack_deploy/user_variables.yml``.
Securing keystone communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure keystone
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are in use. However, you can
provide your own certificates by using the following Ansible variables in
``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
@ -21,41 +21,43 @@ provide their own certificates by using the following Ansible variables in
keystone_user_ssl_key: # Path to private key
keystone_user_ssl_ca_cert: # Path to CA certificate
.. note::
If you are providing certificates, keys, and a CA file for a
CA without a chain of trust (or an invalid/self-generated CA), set the
variables ``keystone_service_internaluri_insecure`` and
``keystone_service_adminuri_insecure`` to ``True``.
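A minimal sketch of those settings in ``user_variables.yml``:

.. code-block:: yaml

   keystone_service_internaluri_insecure: True
   keystone_service_adminuri_insecure: True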
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with keystone.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Implementing LDAP (or Active Directory) backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the built-in keystone support for these identity services if you
already have LDAP or Active Directory (AD) infrastructure on your deployment.
Keystone uses the existing users, groups, and user-group relationships to
handle authentication and access control in an OpenStack deployment.
.. note::
We do not recommend configuring the default domain in keystone to use
LDAP or AD identity backends. Create an additional domain
in keystone and configure an LDAP or Active Directory backend for
that domain.
This is critical in situations where the identity backend cannot
be reached due to network issues or other problems. In those situations,
the administrative users in the default domain would still be able to
authenticate to keystone using the default domain which is not backed by
LDAP or AD.
You can add domains with LDAP backends by adding variables in
``/etc/openstack_deploy/user_variables.yml``. For example, this dictionary
adds a new keystone domain called ``Users`` that is backed by an LDAP server:
.. code-block:: yaml
@ -65,11 +67,11 @@ add a new Keystone domain called ``Users`` that is backed by an LDAP server:
user: "root"
password: "secrete"
Adding the YAML block above causes the keystone playbook to create a
``/etc/keystone/domains/keystone.Users.conf`` file within each keystone service
container that configures the LDAP-backed domain called ``Users``.
You can create more complex configurations that use LDAP filtering and
consume LDAP as a read-only resource. The following example shows how to apply
these configurations:
@ -91,8 +93,8 @@ these configurations:
user_name_attribute: "uid"
user_filter: "(groupMembership=cn=openstack-users,ou=Users,o=MyCorporation)"
In the *MyCorporation* example above, keystone uses the LDAP server as a
read-only resource. The configuration also ensures that keystone filters the
list of possible users to the ones that exist in the
``cn=openstack-users,ou=Users,o=MyCorporation`` group.
@ -103,11 +105,11 @@ variable during deployment:
horizon_keystone_multidomain_support: True
Enabling multi-domain support in horizon adds the ``Domain`` input field on
the horizon login page and it adds other domain-specific features in the
*Identity* section.
More details regarding valid configuration for the LDAP Identity backend can
be found in the `Keystone Developer Documentation`_ and the
`OpenStack Admin Guide`_.

View File

@ -1,42 +1,41 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring RabbitMQ (optional)
===============================
RabbitMQ provides the messaging broker for various OpenStack services. The
OpenStack-Ansible project configures a plaintext listener on port 5672 and
an SSL/TLS encrypted listener on port 5671.
Customize your RabbitMQ deployment in ``/etc/openstack_deploy/user_variables.yml``.
Add a TLS encrypted listener to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure RabbitMQ
communications with self-signed or user-provided SSL certificates. Refer to
`Securing services with SSL certificates`_ for available configuration
options.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Enable encrypted connections to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Ansible variable ``rabbitmq_use_ssl`` controls SSL communication between
the various OpenStack services and RabbitMQ:
.. code-block:: yaml
rabbitmq_use_ssl: true
Setting this variable to ``true`` adjusts the RabbitMQ port to 5671 (the
default SSL/TLS listener port) and enables SSL connectivity between each
OpenStack service and RabbitMQ.
Setting this variable to ``false`` disables SSL encryption between
OpenStack services and RabbitMQ. The plaintext port for RabbitMQ, 5672, is
used for all services.
--------------

View File

@ -1,32 +1,32 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Add to existing deployment
==========================
Complete the following procedure to deploy swift on an
existing deployment.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all keystone users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
keystone users) can create containers and upload objects
to swift.
If this value is ``False``, by default only users with the
``admin`` or ``swiftoperator`` role can create containers or
manage tenants.
When the backend type for glance is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
#. Run the swift play:
.. code-block:: shell-session

View File

@ -1,7 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the service
=======================
**Procedure 5.2. Updating the Object Storage configuration ``swift.yml``
file**
@ -53,14 +53,14 @@ file**
``part_power``
Set the partition power value based on the total amount of
storage the entire ring uses.
Multiply the maximum number of drives ever used with the swift
installation by 100 and round that value up to the
closest power of two value. For example, a maximum of six drives,
times 100, equals 600. The nearest power of two above 600 is two
to the power of ten, so the partition power is ten. The
partition power cannot be changed after the swift rings
are built.
``weight``
@ -71,9 +71,9 @@ file**
``min_part_hours``
The default value is 1. Set the minimum partition hours to the
amount of time to lock a partition's replicas after moving a partition.
Moving multiple replicas at the same time might
make data inaccessible. This value can be set separately in the
swift, container, account, and policy sections with the value in
lower sections superseding the value in the swift section.
@ -85,13 +85,13 @@ file**
section.
``storage_network``
By default, the swift services listen on the default
management IP. Optionally, specify the interface of the storage
network.
If the ``storage_network`` is not set, but the ``storage_ips``
per host are set (or the ``storage_ip`` is not on the
``storage_network`` interface) the proxy server is unable
to connect to the storage services.
``replication_network``
@ -99,9 +99,8 @@ file**
dedicated replication can be set up. If this value is not
specified, no dedicated ``replication_network`` is set.
Replication does not work properly if the ``repl_ip`` is not set on
the ``replication_network`` interface.
``drives``
Set the default drives per host. This is useful when all hosts
@ -123,39 +122,39 @@ file**
created before storage policies were instituted.
``default``
Set the default value to ``yes`` for at least one policy. This is
the default storage policy for any non-legacy containers that are
created.
``deprecated``
Set the deprecated value to ``yes`` to turn off storage policies.
For account and container rings, ``min_part_hours`` and
``repl_number`` are the only values that can be set. Setting them
in this section overrides the defaults for the specific ring.
``statsd_host``
Swift supports sending extra metrics to a ``statsd`` host. This option
sets the ``statsd`` host to receive ``statsd`` metrics. Specifying
this here applies to all hosts in the cluster.
If ``statsd_host`` is left blank or omitted, then ``statsd`` is
disabled.
All ``statsd`` settings can be overridden or specified deeper in the
structure if you want to only catch ``statsd`` metrics on certain hosts.
``statsd_port``
Optionally, use this to specify the ``statsd`` server's port you are
sending metrics to. Defaults to 8125 if omitted.
``statsd_default_sample_rate`` and ``statsd_sample_rate_factor``
These ``statsd``-related options are more complex and are
used to tune how many samples are sent to ``statsd``. Omit them unless
you need to tweak these settings. If you do, first read:
http://docs.openstack.org/developer/swift/admin_guide.html
#. Update the swift proxy hosts values:
.. code-block:: yaml
@ -171,17 +170,16 @@ file**
# statsd_metric_prefix: proxy03
``swift-proxy_hosts``
Set the ``IP`` address of the hosts that Ansible connects to
when deploying the ``swift-proxy`` containers. The ``swift-proxy_hosts``
value matches the infra nodes.
``statsd_metric_prefix``
This metric is optional, and only evaluated if you have defined
``statsd_host`` somewhere. It allows you to define a prefix to add to
all ``statsd`` metrics sent from this host. If omitted, the node name is used.
#. Update the swift hosts values:
.. code-block:: yaml
@ -237,20 +235,20 @@ file**
``swift_hosts``
Specify the hosts to be used as the storage nodes. The ``ip`` is
the address of the host to which Ansible connects. Set the name
and IP address of each swift host. The ``swift_hosts``
section is not required.
``swift_vars``
Contains the swift host-specific values.
``storage_ip`` and ``repl_ip``
Base these values on the IP addresses of the host's
``storage_network`` or ``replication_network``. For example, if
the ``storage_network`` is ``br-storage`` and host1 has an IP
address of 1.1.1.1 on ``br-storage``, then this is the IP address
in use for ``storage_ip``. If only the ``storage_ip``
is specified, then the ``repl_ip`` defaults to the ``storage_ip``.
If neither is specified, both default to the host IP
address.
Overriding these values on a host or drive basis can cause
@ -259,11 +257,11 @@ file**
the ring is set to a different IP address.
``zone``
The default is 0. Optionally, set the swift zone for the
ring.
``region``
Optionally, set the swift region for the ring.
``weight``
The default weight is 100. If the drives are different sizes, set
@ -273,21 +271,20 @@ file**
``groups``
Set the groups to list the rings to which a host's drive belongs.
This can be set on a per-drive basis, which overrides the host
setting.
``drives``
Set the names of the drives on the swift host. Specify at least
one name.
``statsd_metric_prefix``
This metric is optional, and only evaluated if ``statsd_host`` is defined
somewhere. This allows you to define a prefix to add to
all ``statsd`` metrics sent from the host. If omitted, the node name is used.
In the following example, ``swift-node5`` shows values in the
``swift_hosts`` section that override the global values. Groups
are set, which overrides the global settings for drive ``sdb``. The
weight is overridden for the host and specifically adjusted on drive
``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently

View File

@ -1,30 +1,28 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Storage devices
===============
This section offers a set of prerequisite instructions for setting up
Object Storage (swift) storage devices. The storage devices must be set up
before installing swift.
**Procedure 5.1. Configuring and mounting storage devices**
Object Storage recommends a minimum of three swift hosts
with five storage disks. The example commands in this procedure
use the storage devices ``sdc`` through ``sdg``.
#. Determine the storage devices on the node to be used for swift.
#. Format each device on the node used for storage with XFS. While
formatting the devices, add a unique label for each device.
Without labels, a failed drive causes mount points to shift and
data to become inaccessible.
For example, create the file systems on the devices using the
``mkfs`` command:
.. code-block:: shell-session
@ -37,7 +35,7 @@ through ``sdg``.
#. Add the mount locations to the ``fstab`` file so that the storage
devices are remounted on boot. The following example mount options
are recommended when using XFS:
.. code-block:: shell-session
@ -47,7 +45,7 @@ through ``sdg``.
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
#. Create the mount points for the devices using the ``mkdir`` command:
.. code-block:: shell-session
@ -58,7 +56,7 @@ through ``sdg``.
# mkdir -p /srv/node/sdg
The mount point is referenced as the ``mount_point`` parameter in
the ``swift.yml`` file (``/etc/openstack_deploy/conf.d/swift.yml``):
.. code-block:: shell-session
@ -89,7 +87,7 @@ For the following mounted devices:
Table: Table 5.1. Mounted devices
The entry in ``swift.yml``:
.. code-block:: yaml

View File

@ -1,17 +1,16 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Integrate with the Image Service (glance)
=========================================
As an option, you can create images in the Image Service (glance) and
store them using Object Storage (swift).
If there is an existing glance backend (for example,
cloud files) but you want to add swift as the glance backend,
re-add any images from glance after moving to swift. After you change
the glance variables (as described below) and begin using swift, any
images in the old backend are no longer available.
**Procedure 5.3. Integrating Object Storage with Image Service**
@ -19,7 +18,7 @@ This procedure requires the following:
- OSA Kilo (v11)
- Object Storage v2.2.0
#. Update the glance options in the
``/etc/openstack_deploy/user_variables.yml`` file:
@ -47,7 +46,7 @@ This procedure requires the following:
- ``glance_swift_store_endpoint_type``: Set the endpoint type to
``internalURL``.
- ``glance_swift_store_key``: Set the glance password using
the ``{{ glance_service_password }}`` variable.
- ``glance_swift_store_region``: Set the region. The default value
@ -56,9 +55,9 @@ This procedure requires the following:
- ``glance_swift_store_user``: Set the tenant and user name to
``'service:glance'``.
#. Rerun the glance configuration plays.
#. Run the glance playbook:
.. code-block:: shell-session

View File

@ -1,23 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Overview
--------
Object Storage is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.
The group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file are used by the
Ansible playbooks when installing Object Storage. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.
To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.
--------------
.. include:: navigation.txt

View File

@ -1,15 +1,15 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Storage policies
================
Storage policies allow segmenting the cluster for various purposes
through the creation of multiple object rings. Using policies, different
devices can belong to different rings with varying levels of
replication. By supporting multiple object rings, swift can
segregate the objects within a single cluster.
Use storage policies for the following situations:
- Differing levels of replication: A provider may want to offer 2x
replication and 3x replication, but does not want to maintain two
@ -27,7 +27,7 @@ Storage policies can be used for the following situations:
- Differing storage implementations: A policy can be used to direct
traffic to collected nodes that use a different disk file (for
example: Kinetic, GlusterFS).
Most storage clusters do not require more than one storage policy. The
following problems can occur if using multiple storage policies per
@ -37,11 +37,11 @@ cluster:
drives are part of only the account, container, and default storage
policy groups) creates an empty ring for that storage policy.
- A non-default storage policy is used only if specified when creating
a container, using the ``X-Storage-Policy: <policy-name>`` header.
After creating the container, it uses the specified storage policy.
Other containers continue using the default or another specified
storage policy.
For more information about storage policies, see: `Storage
Policies <http://docs.openstack.org/developer/swift/overview_policies.html>`_

View File

@ -1,23 +1,22 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Object Storage (swift) service (optional)
=========================================================
.. toctree::
configure-swift-devices.rst
configure-swift-config.rst
configure-swift-glance.rst
configure-swift-add.rst
configure-swift-policies.rst
Object Storage (swift) is a multi-tenant object storage system. It is
highly scalable, can manage large amounts of unstructured data, and
provides a RESTful HTTP API.
The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable swift
usage.
#. `The section called "Configure and mount storage
@ -26,20 +25,39 @@ usage.
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all Identity (keystone) users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
keystone users) can create containers and upload objects
to Object Storage.
If this value is ``False``, then by default, only users with the
``admin`` or ``swiftoperator`` role are allowed to create containers or
manage tenants.
When the backend type for the Image Service (glance) is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
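For example, to allow all keystone users to use swift, you might set the
following in ``user_variables.yml``:

.. code-block:: yaml

   swift_allow_all_users: True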
Overview
~~~~~~~~
Object Storage (swift) is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.
When installing swift, the Ansible playbooks use the group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.
To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.
--------------
.. include:: navigation.txt