Merge "DOCS: Configuration section - cleanup" (commit 8c927c9fe2)

`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Aodh service (optional)
=======================================

The Telemetry (ceilometer) alarming services perform the following functions:

- Creates an API endpoint for controlling alarms.

- Allows you to set alarms based on threshold evaluation for a collection of
  samples.

Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to running
the Aodh playbooks. To specify the connection data, edit the
``user_variables.yml`` file (see section `Configuring the user data`_
below).


Setting up a MongoDB database for Aodh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the
   management interface of the node running Aodh:

   .. code-block:: ini

      bind_ip = 10.0.0.11

3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:

   .. code-block:: ini

      smallfiles = true

4. Restart the MongoDB service:

   .. code-block:: console

      # service mongodb restart

5. Create the Aodh database:

   .. code-block:: console

      }

   .. note::

      Ensure ``AODH_DBPASS`` matches the
      ``aodh_container_db_password`` in the
      ``/etc/openstack_deploy/user_secrets.yml`` file. This
      allows Ansible to configure the connection string within
      the Aodh configuration files.

Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~
.. code-block:: yaml

   # The infra nodes that the Aodh services run on.
   metering-alarm_hosts:
     infra1:
       ip: 172.20.236.111

The ``metering-alarm_hosts`` provides several services:

- An API server (``aodh-api``): Runs on one or more central management
  servers to provide access to the alarm information in the
  data store.

- An alarm evaluator (``aodh-evaluator``): Runs on one or more central
  management servers to determine when alarms fire due to the
  associated statistic trend crossing a threshold over a sliding
  time window.

- A notification listener (``aodh-listener``): Runs on a central
  management server and fires alarms based on defined rules against
  events captured by ceilometer's notification agents.

- An alarm notifier (``aodh-notifier``): Runs on one or more central
  management servers to allow alarms to be set based on the
  threshold evaluation for a collection of samples.

These services communicate by using the OpenStack messaging bus. Only
the API server has access to the data store.


Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~

Specify the following configurations in the
``/etc/openstack_deploy/user_variables.yml`` file:

- The type of database backend Aodh uses. Currently, only MongoDB
  is supported: ``aodh_db_type: mongodb``

- The IP address of the MongoDB host: ``aodh_db_ip: localhost``

- The port of the MongoDB service: ``aodh_db_port: 27017``

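Taken together, these settings can be sketched in
``/etc/openstack_deploy/user_variables.yml`` as follows. The IP address is an
illustrative placeholder, not a default; point it at the host actually running
MongoDB:

.. code-block:: yaml

   # Illustrative values only; adjust for your environment.
   aodh_db_type: mongodb
   aodh_db_ip: 172.20.236.111
   aodh_db_port: 27017
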
Run the ``os-aodh-install.yml`` playbook. If deploying a new OpenStack
(instead of only Aodh), run ``setup-openstack.yml``. The Aodh playbooks
run as part of this playbook.

--------------

`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Telemetry (ceilometer) service (optional)
=========================================================

The Telemetry module (ceilometer) performs the following functions:

- Efficiently polls metering data related to OpenStack services.

.. note::

   As of Liberty, the alarming functionality is in a separate component.
   The metering-alarm containers handle the functionality through aodh
   services. For configuring these services, see the aodh docs:
   http://docs.openstack.org/developer/aodh/

Configure a MongoDB backend prior to running the ceilometer playbooks.
Specify the connection data in the ``user_variables.yml`` file
(see section `Configuring the user data`_ below).


Setting up a MongoDB database for ceilometer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1. Install the MongoDB package:

   .. code-block:: console

      # apt-get install mongodb-server mongodb-clients python-pymongo

2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the management
   interface of the node the service runs on:

   .. code-block:: ini

      bind_ip = 10.0.0.11

3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:

   .. code-block:: ini

      smallfiles = true

4. Restart the MongoDB service:

   .. code-block:: console

      # service mongodb restart

5. Create the ceilometer database:

   .. code-block:: console

      # mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'

   This returns:

   .. code-block:: console

       "_id" : ObjectId("5489c22270d7fad1ba631dc3")
      }

   .. note::

      Ensure ``CEILOMETER_DBPASS`` matches the
      ``ceilometer_container_db_password`` in the
      ``/etc/openstack_deploy/user_secrets.yml`` file. This is
      how Ansible knows how to configure the connection string
      within the ceilometer configuration files.

Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~

Configure ceilometer by specifying the ``metering-compute_hosts`` and
``metering-infra_hosts`` directives in the
``/etc/openstack_deploy/conf.d/ceilometer.yml`` file. Below is the
example included in the
.. code-block:: yaml

   # The compute host that the ceilometer compute agent runs on
   metering-compute_hosts:
     compute1:
       ip: 172.20.236.110

   # The infra node that the central agents run on
   metering-infra_hosts:
     infra1:
       ip: 172.20.236.111
     # Adding more than one host requires further configuration for ceilometer
     # to work properly.
     infra2:
       ip: 172.20.236.112
     infra3:
  (See HA section below).

- A collector (ceilometer-collector): Runs on central management
  server(s) and dispatches data to a data store
  or external consumer without modification.

- An API server (ceilometer-api): Runs on one or more central

Configuring the hosts for an HA deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ceilometer supports running the polling and notification agents in an
HA deployment.

The Tooz library provides the coordination within the groups of service
instances. Tooz can be used with several backends. At the time of this
writing, the following backends are supported:

- Zookeeper: Recommended solution by the Tooz project.

- Redis: Recommended solution by the Tooz project.

- Memcached: Recommended for testing.

.. important::

   The OpenStack-Ansible project does not deploy these backends.
   The backends are assumed to exist before deploying the ceilometer
   service.

Achieve HA by configuring the proper directives in ``ceilometer.conf`` using
``ceilometer_ceilometer_conf_overrides`` in the ``user_variables.yml`` file.
The ceilometer admin guide [1] details the options used in
``ceilometer.conf`` for an HA deployment. The following is an example of
``ceilometer_ceilometer_conf_overrides``:

.. code-block:: yaml

Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~

Specify the following configurations in the
``/etc/openstack_deploy/user_variables.yml`` file:

- The type of database backend ceilometer uses. Currently, only
  MongoDB is supported: ``ceilometer_db_type: mongodb``

- The IP address of the MongoDB host: ``ceilometer_db_ip: localhost``
- This configures keystone to send notifications to the message bus:
  ``keystone_ceilometer_enabled: False``

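As a sketch, the options above could be combined in
``/etc/openstack_deploy/user_variables.yml`` like this. The IP address is an
illustrative placeholder for your MongoDB host, not a default:

.. code-block:: yaml

   # Illustrative values only; adjust for your environment.
   ceilometer_db_type: mongodb
   ceilometer_db_ip: 172.20.236.111
   ceilometer_db_port: 27017
   keystone_ceilometer_enabled: False
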
Run the ``os-ceilometer-install.yml`` playbook. If deploying a new OpenStack
(instead of only ceilometer), run ``setup-openstack.yml``. The
ceilometer playbooks run as part of this playbook.


References
~~~~~~~~~~
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring HAProxy (optional)
==============================

HAProxy provides load balancing services and SSL termination when hardware
load balancers are not available for high availability architectures deployed
by OpenStack-Ansible. The default HAProxy configuration provides highly-
available load balancing services via keepalived if there is more than one
host in the ``haproxy_hosts`` group.

.. important::

   Ensure you review the services exposed by HAProxy and limit access
   to these services to trusted users and networks only. For more details,
   refer to the :ref:`least-access-openstack-services` section.

.. note::

   For a successful installation, you require a load balancer. You may
   prefer to make use of hardware load balancers instead of HAProxy. If
   hardware load balancers are in use, then implement the load balancing
   configuration for services prior to executing the deployment.

To deploy HAProxy within your OpenStack-Ansible environment, define target
hosts to run HAProxy:

.. code-block:: yaml

in an OpenStack-Ansible deployment.

Making HAProxy highly-available
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If multiple hosts are found in the inventory, HAProxy is deployed in a
highly-available manner by installing keepalived.

Edit the ``/etc/openstack_deploy/user_variables.yml`` to skip the deployment
of keepalived alongside HAProxy when installing HAProxy on multiple hosts.
To do this, set the following:

.. code-block:: yaml

   haproxy_use_keepalived: False

To make keepalived work, edit at least the following variables
in ``user_variables.yml``:

.. code-block:: yaml

- ``haproxy_keepalived_internal_interface`` and
  ``haproxy_keepalived_external_interface`` represent the interfaces on the
  deployed node where the keepalived nodes bind the internal and external
  vip. By default, the ``br-mgmt`` interface is used.

- On the interface listed above, ``haproxy_keepalived_internal_vip_cidr`` and
  ``haproxy_keepalived_external_vip_cidr`` represent the internal and
  external (respectively) vips (with their prefix length).

- Set additional variables to adapt keepalived in your deployment.
  Refer to the ``user_variables.yml`` for more descriptions.

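A minimal sketch of these variables in ``user_variables.yml`` might look like
the following. The CIDRs are illustrative placeholders, not defaults; match
them to your own network layout:

.. code-block:: yaml

   # Illustrative values only; use the vips and bridges of your environment.
   haproxy_keepalived_external_vip_cidr: 192.168.100.10/24
   haproxy_keepalived_internal_vip_cidr: 172.29.236.10/22
   haproxy_keepalived_external_interface: br-mgmt
   haproxy_keepalived_internal_interface: br-mgmt
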
To always deploy (or upgrade to) the latest stable version of keepalived,
edit the ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   keepalived_use_latest_stable: True

The HAProxy playbook reads the ``vars/configs/keepalived_haproxy.yml``
variable file and provides its content to the keepalived role for
keepalived master and backup nodes.

Keepalived pings a public IP address to check its status. The default
address is ``193.0.14.129``. To change this default,
set the ``keepalived_ping_address`` variable in the
``user_variables.yml`` file.

.. note::

   The keepalived test works with IPv4 addresses only.

You can define additional variables to adapt keepalived to your
deployment. Refer to the ``user_variables.yml`` file for
more information. Optionally, you can use your own variable file.
For example:

.. code-block:: yaml

   haproxy_keepalived_vars_file: /path/to/myvariablefile.yml

Configuring keepalived ping checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible configures keepalived with a check script that pings an
external resource and uses that ping to determine if a node has lost network
connectivity. If the pings fail, keepalived fails over to another node and
HAProxy serves requests there.

The destination address, ping count and ping interval are configurable via
Ansible variables in ``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   keepalived_ping_count: # ICMP packets to send (per interval)
   keepalived_ping_interval: # How often ICMP packets are sent

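As an illustration, a deployment could tune the ping check like this. The
address shown is the documented default; the count and interval values are
hypothetical examples, not recommendations:

.. code-block:: yaml

   # Illustrative values only.
   keepalived_ping_address: "193.0.14.129"
   keepalived_ping_count: 2
   keepalived_ping_interval: 10
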
By default, OpenStack-Ansible configures keepalived to ping one of the root
DNS servers operated by RIPE. You can change this IP address to a different
external address or another address on your internal network.

Securing HAProxy communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OpenStack-Ansible project provides the ability to secure HAProxy
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are used with HAProxy. However, you can
provide your own certificates by using the following Ansible variables:

.. code-block:: yaml

   haproxy_user_ssl_ca_cert: # Path to CA certificate

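For illustration, user-provided certificate paths might be set as follows. The
companion ``haproxy_user_ssl_cert`` and ``haproxy_user_ssl_key`` variable names
and all file paths here are assumptions, not values taken from this guide:

.. code-block:: yaml

   # Hypothetical paths; substitute the locations of your own files.
   haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/haproxy.pem
   haproxy_user_ssl_key: /etc/openstack_deploy/ssl/haproxy.key
   haproxy_user_ssl_ca_cert: /etc/openstack_deploy/ssl/ca.pem
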
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with HAProxy.

.. _Securing services with SSL certificates: configure-sslcertificates.html

`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Dashboard (horizon) (optional)
==============================================

Customize your horizon deployment in
``/etc/openstack_deploy/user_variables.yml``.

Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OpenStack-Ansible project provides the ability to secure Dashboard (horizon)
communications with self-signed or user-provided SSL certificates.

Refer to `Securing services with SSL certificates`_ for available configuration
options.

.. _Securing services with SSL certificates: configure-sslcertificates.html

Configuring a horizon customization module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Openstack-Ansible supports deployment of a Horizon `customization module`_.
|
Openstack-Ansible supports deployment of a horizon `customization module`_.
|
||||||
After building your customization module, configure the ``horizon_customization_module`` variable
|
After building your customization module, configure the ``horizon_customization_module`` variable
|
||||||
with a path to your module.
|
with a path to your module.
|
||||||
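For example, in ``/etc/openstack_deploy/user_variables.yml`` (the path
below is a hypothetical example, not a default):

.. code-block:: yaml

   # Hypothetical path to a customization module you built yourself.
   horizon_customization_module: /etc/openstack_deploy/horizon_customization.py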
|
|
||||||
|
@ -1,18 +1,18 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
||||||
|
|
||||||
Configuring Keystone (optional)
|
Configuring the Identity service (keystone) (optional)
|
||||||
-------------------------------
|
======================================================
|
||||||
|
|
||||||
Customizing the Keystone deployment is done within
|
Customize your keystone deployment in ``/etc/openstack_deploy/user_variables.yml``.
|
||||||
``/etc/openstack_deploy/user_variables.yml``.
|
|
||||||
|
|
||||||
Securing Keystone communication with SSL certificates
|
|
||||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
|
||||||
|
|
||||||
The OpenStack-Ansible project provides the ability to secure Keystone
|
Securing keystone communication with SSL certificates
|
||||||
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
|
The OpenStack-Ansible project provides the ability to secure keystone
|
||||||
communications with self-signed or user-provided SSL certificates. By default,
|
communications with self-signed or user-provided SSL certificates. By default,
|
||||||
self-signed certificates are used with Keystone. However, deployers can
|
self-signed certificates are in use. However, you can
|
||||||
provide their own certificates by using the following Ansible variables in
|
provide your own certificates by using the following Ansible variables in
|
||||||
``/etc/openstack_deploy/user_variables.yml``:
|
``/etc/openstack_deploy/user_variables.yml``:
|
||||||
|
|
||||||
.. code-block:: yaml
|
.. code-block:: yaml
|
||||||
@ -21,41 +21,43 @@ provide their own certificates by using the following Ansible variables in
|
|||||||
keystone_user_ssl_key: # Path to private key
|
keystone_user_ssl_key: # Path to private key
|
||||||
keystone_user_ssl_ca_cert: # Path to CA certificate
|
keystone_user_ssl_ca_cert: # Path to CA certificate
|
||||||
|
|
||||||
.. note:: If the deployer is providing certificate, key, and ca file for a
|
.. note::
|
||||||
|
|
||||||
|
If you are providing certificates, keys, and CA file for a
|
||||||
CA without chain of trust (or an invalid/self-generated CA), the variables
|
CA without chain of trust (or an invalid/self-generated CA), the variables
|
||||||
`keystone_service_internaluri_insecure` and
|
``keystone_service_internaluri_insecure`` and
|
||||||
`keystone_service_adminuri_insecure` should be set to True.
|
``keystone_service_adminuri_insecure`` should be set to ``True``.
|
||||||
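For example, when using a self-generated CA, set both variables in
``/etc/openstack_deploy/user_variables.yml``:

.. code-block:: yaml

   keystone_service_internaluri_insecure: True
   keystone_service_adminuri_insecure: True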
|
|
||||||
Refer to `Securing services with SSL certificates`_ for more information on
|
Refer to `Securing services with SSL certificates`_ for more information on
|
||||||
these configuration options and how deployers can provide their own
|
these configuration options and how you can provide your own
|
||||||
certificates and keys to use with Keystone.
|
certificates and keys to use with keystone.
|
||||||
|
|
||||||
.. _Securing services with SSL certificates: configure-sslcertificates.html
|
.. _Securing services with SSL certificates: configure-sslcertificates.html
|
||||||
|
|
||||||
Implementing LDAP (or Active Directory) Back ends
|
Implementing LDAP (or Active Directory) backends
|
||||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
Deployers that already have LDAP or Active Directory (AD) infrastructure
|
You can use the built-in keystone support for those services if you already have
|
||||||
deployed can use the built-in Keystone support for those identity services.
|
LDAP or Active Directory (AD) infrastructure in your deployment.
|
||||||
Keystone can use the existing users, groups and user-group relationships to
|
Keystone uses the existing users, groups, and user-group relationships to
|
||||||
handle authentication and access control in an OpenStack deployment.
|
handle authentication and access control in an OpenStack deployment.
|
||||||
|
|
||||||
.. note::
|
.. note::
|
||||||
|
|
||||||
Although deployers can configure the default domain in Keystone to use LDAP
|
We do not recommend configuring the default domain in keystone to use
|
||||||
or AD identity back ends, **this is not recommended**. Deployers should
|
LDAP or AD identity backends. Create additional domains
|
||||||
create an additional domain in Keystone and configure an LDAP/AD back end
|
in keystone and configure either LDAP or Active Directory backends for
|
||||||
for that domain.
|
that domain.
|
||||||
|
|
||||||
This is critical in situations where the identity back end cannot
|
This is critical in situations where the identity backend cannot
|
||||||
be reached due to network issues or other problems. In those situations,
|
be reached due to network issues or other problems. In those situations,
|
||||||
the administrative users in the default domain would still be able to
|
the administrative users in the default domain would still be able to
|
||||||
authenticate to keystone using the default domain which is not backed by
|
authenticate to keystone using the default domain which is not backed by
|
||||||
LDAP or AD.
|
LDAP or AD.
|
||||||
|
|
||||||
Deployers can add domains with LDAP back ends by adding variables in
|
You can add domains with LDAP backends by adding variables in
|
||||||
``/etc/openstack_deploy/user_variables.yml``. For example, this dictionary will
|
``/etc/openstack_deploy/user_variables.yml``. For example, this dictionary
|
||||||
add a new Keystone domain called ``Users`` that is backed by an LDAP server:
|
adds a new keystone domain called ``Users`` that is backed by an LDAP server:
|
||||||
|
|
||||||
.. code-block:: yaml
|
.. code-block:: yaml
|
||||||
|
|
||||||
@ -65,11 +67,11 @@ add a new Keystone domain called ``Users`` that is backed by an LDAP server:
|
|||||||
user: "root"
|
user: "root"
|
||||||
password: "secrete"
|
password: "secrete"
|
||||||
|
|
||||||
Adding the YAML block above will cause the Keystone playbook to create a
|
Adding the YAML block above causes the keystone playbook to create a
|
||||||
``/etc/keystone/domains/keystone.Users.conf`` file within each Keystone service
|
``/etc/keystone/domains/keystone.Users.conf`` file within each keystone service
|
||||||
container that configures the LDAP-backed domain called ``Users``.
|
container that configures the LDAP-backed domain called ``Users``.
|
||||||
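The generated file holds a keystone domain-specific LDAP configuration.
A rough sketch of what it may contain, assuming the example values above
(the URL is a placeholder, and the exact option names and driver value
vary by release, so treat this as illustrative only):

.. code-block:: ini

   [identity]
   driver = ldap

   [ldap]
   url = ldap://10.10.10.200:389
   user = root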
|
|
||||||
Deployers can create more complex configurations that use LDAP filtering and
|
You can create more complex configurations that use LDAP filtering and
|
||||||
consume LDAP as a read-only resource. The following example shows how to apply
|
consume LDAP as a read-only resource. The following example shows how to apply
|
||||||
these configurations:
|
these configurations:
|
||||||
|
|
||||||
@ -91,8 +93,8 @@ these configurations:
|
|||||||
user_name_attribute: "uid"
|
user_name_attribute: "uid"
|
||||||
user_filter: "(groupMembership=cn=openstack-users,ou=Users,o=MyCorporation)"
|
user_filter: "(groupMembership=cn=openstack-users,ou=Users,o=MyCorporation)"
|
||||||
|
|
||||||
In the *MyCorporation* example above, Keystone will use the LDAP server as a
|
In the *MyCorporation* example above, keystone uses the LDAP server as a
|
||||||
read-only resource. The configuration also ensures that Keystone filters the
|
read-only resource. The configuration also ensures that keystone filters the
|
||||||
list of possible users to the ones that exist in the
|
list of possible users to the ones that exist in the
|
||||||
``cn=openstack-users,ou=Users,o=MyCorporation`` group.
|
``cn=openstack-users,ou=Users,o=MyCorporation`` group.
|
||||||
|
|
||||||
@ -103,11 +105,11 @@ variable during deployment:
|
|||||||
|
|
||||||
horizon_keystone_multidomain_support: True
|
horizon_keystone_multidomain_support: True
|
||||||
|
|
||||||
Enabling multi-domain support in Horizon will add the ``Domain`` input field on
|
Enabling multi-domain support in horizon adds the ``Domain`` input field on
|
||||||
the Horizon login page and it will add other domain-specific features in the
|
the horizon login page and it adds other domain-specific features in the
|
||||||
*Identity* section.
|
keystone section.
|
||||||
|
|
||||||
More details regarding valid configuration for the LDAP Identity Back-End can
|
More details regarding valid configuration for the LDAP Identity backend can
|
||||||
be found in the `Keystone Developer Documentation`_ and the
|
be found in the `Keystone Developer Documentation`_ and the
|
||||||
`OpenStack Admin Guide`_.
|
`OpenStack Admin Guide`_.
|
||||||
|
|
||||||
|
@ -1,42 +1,41 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
||||||
|
|
||||||
Configuring RabbitMQ (optional)
|
Configuring RabbitMQ (optional)
|
||||||
-------------------------------
|
===============================
|
||||||
|
|
||||||
RabbitMQ provides the messaging broker for various OpenStack services. The
|
RabbitMQ provides the messaging broker for various OpenStack services. The
|
||||||
OpenStack-Ansible project configures a plaintext listener on port 5672 and
|
OpenStack-Ansible project configures a plaintext listener on port 5672 and
|
||||||
a SSL/TLS encrypted listener on port 5671.
|
a SSL/TLS encrypted listener on port 5671.
|
||||||
|
|
||||||
Customizing the RabbitMQ deployment is done within
|
Customize your RabbitMQ deployment in ``/etc/openstack_deploy/user_variables.yml``.
|
||||||
``/etc/openstack_deploy/user_variables.yml``.
|
|
||||||
|
|
||||||
Add a TLS encrypted listener to RabbitMQ
|
Add a TLS encrypted listener to RabbitMQ
|
||||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
The OpenStack-Ansible project provides the ability to secure RabbitMQ
|
The OpenStack-Ansible project provides the ability to secure RabbitMQ
|
||||||
communications with self-signed or user-provided SSL certificates. Refer to
|
communications with self-signed or user-provided SSL certificates. Refer to
|
||||||
`Securing services with SSL certificates`_ for available configuration
|
`Securing services with SSL certificates`_ for available configuration
|
||||||
options.
|
options.
|
||||||
|
|
||||||
.. _Securing services with SSL certificates: configure-sslcertificates.html
|
.. _Securing services with SSL certificates: configure-sslcertificates.html
|
||||||
|
|
||||||
Enable encrypted connections to RabbitMQ
|
Enable encrypted connections to RabbitMQ
|
||||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||||
|
|
||||||
SSL communication between various OpenStack services and RabbitMQ is
|
Control SSL communication between various OpenStack services and
|
||||||
controlled via the Ansible variable ``rabbitmq_use_ssl``:
|
RabbitMQ with the Ansible variable ``rabbitmq_use_ssl``:
|
||||||
|
|
||||||
.. code-block:: yaml
|
.. code-block:: yaml
|
||||||
|
|
||||||
rabbitmq_use_ssl: true
|
rabbitmq_use_ssl: true
|
||||||
|
|
||||||
Setting this variable to ``true`` will adjust the RabbitMQ port to 5671 (the
|
Setting this variable to ``true`` adjusts the RabbitMQ port to 5671 (the
|
||||||
default SSL/TLS listener port) and enable SSL connectivity between each
|
default SSL/TLS listener port) and enables SSL connectivity between each
|
||||||
OpenStack service and RabbitMQ.
|
OpenStack service and RabbitMQ.
|
||||||
|
|
||||||
Setting this variable to ``false`` will disable SSL encryption between
|
Setting this variable to ``false`` disables SSL encryption between
|
||||||
OpenStack services and RabbitMQ. The plaintext port for RabbitMQ, 5672, will
|
OpenStack services and RabbitMQ. Use the plaintext port for RabbitMQ, 5672,
|
||||||
be used for all services.
|
for all services.
|
||||||
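To confirm which listener you are reaching, you can probe the SSL/TLS
port from a service container. This is a hedged illustration only;
replace the host name with the address of one of your RabbitMQ
containers:

.. code-block:: shell-session

   # openssl s_client -connect rabbitmq-container:5671 < /dev/null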
|
|
||||||
--------------
|
--------------
|
||||||
|
|
||||||
|
@ -1,32 +1,32 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
||||||
|
|
||||||
Add to existing deployment
|
Add to existing deployment
|
||||||
--------------------------
|
==========================
|
||||||
|
|
||||||
Complete the following procedure to deploy Object Storage on an
|
Complete the following procedure to deploy swift on an
|
||||||
existing deployment.
|
existing deployment.
|
||||||
|
|
||||||
#. `the section called "Configure and mount storage
|
#. `The section called "Configure and mount storage
|
||||||
devices" <configure-swift-devices.html>`_
|
devices" <configure-swift-devices.html>`_
|
||||||
|
|
||||||
#. `the section called "Configure an Object Storage
|
#. `The section called "Configure an Object Storage
|
||||||
deployment" <configure-swift-config.html>`_
|
deployment" <configure-swift-config.html>`_
|
||||||
|
|
||||||
#. Optionally, allow all Identity users to use Object Storage by setting
|
#. Optionally, allow all keystone users to use swift by setting
|
||||||
``swift_allow_all_users`` in the ``user_variables.yml`` file to
|
``swift_allow_all_users`` in the ``user_variables.yml`` file to
|
||||||
``True``. Any users with the ``_member_`` role (all authorized
|
``True``. Any users with the ``_member_`` role (all authorized
|
||||||
Identity (keystone) users) can create containers and upload objects
|
keystone users) can create containers and upload objects
|
||||||
to Object Storage.
|
to swift.
|
||||||
|
|
||||||
If this value is ``False``, then by default, only users with the
|
If this value is ``False``, by default only users with the
|
||||||
admin or swiftoperator role are allowed to create containers or
|
``admin`` or ``swiftoperator`` role can create containers or
|
||||||
manage tenants.
|
manage tenants.
|
||||||
|
|
||||||
When the backend type for the Image Service (glance) is set to
|
When the backend type for glance is set to
|
||||||
``swift``, the Image Service can access the Object Storage cluster
|
``swift``, glance can access the swift cluster
|
||||||
regardless of whether this value is ``True`` or ``False``.
|
regardless of whether this value is ``True`` or ``False``.
|
||||||
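   For example, in the ``user_variables.yml`` file:

   .. code-block:: yaml

      swift_allow_all_users: True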
|
|
||||||
#. Run the Object Storage play:
|
#. Run the swift play:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
|
@ -1,7 +1,7 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
||||||
|
|
||||||
Configuring the service
|
Configuring the service
|
||||||
-----------------------
|
=======================
|
||||||
|
|
||||||
**Procedure 5.2. Updating the Object Storage configuration ``swift.yml``
|
**Procedure 5.2. Updating the Object Storage configuration ``swift.yml``
|
||||||
file**
|
file**
|
||||||
@ -53,14 +53,14 @@ file**
|
|||||||
|
|
||||||
``part_power``
|
``part_power``
|
||||||
Set the partition power value based on the total amount of
|
Set the partition power value based on the total amount of
|
||||||
storage the entire ring will use.
|
storage the entire ring uses.
|
||||||
|
|
||||||
Multiply the maximum number of drives ever used with this Object
|
Multiply the maximum number of drives ever used with the swift
|
||||||
Storage installation by 100 and round that value up to the
|
installation by 100 and round that value up to the
|
||||||
closest power of two value. For example, a maximum of six drives,
|
closest power of two value. For example, a maximum of six drives,
|
||||||
times 100, equals 600. The nearest power of two above 600 is two
|
times 100, equals 600. The nearest power of two above 600 is two
|
||||||
to the power of ten, so the partition power is ten. The
|
to the power of ten, so the partition power is ten. The
|
||||||
partition power cannot be changed after the Object Storage rings
|
partition power cannot be changed after the swift rings
|
||||||
are built.
|
are built.
|
||||||
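The rounding step can be checked with a quick calculation. This is an
illustrative sketch only: the partition power is the exponent of the
smallest power of two at or above the maximum drive count times 100.

.. code-block:: shell-session

   # python3 -c 'import math; print(int(math.ceil(math.log(6 * 100, 2))))'
   10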
|
|
||||||
``weight``
|
``weight``
|
||||||
@ -71,9 +71,9 @@ file**
|
|||||||
|
|
||||||
``min_part_hours``
|
``min_part_hours``
|
||||||
The default value is 1. Set the minimum partition hours to the
|
The default value is 1. Set the minimum partition hours to the
|
||||||
amount of time to lock a partition's replicas after a partition
|
amount of time to lock a partition's replicas after moving a partition.
|
||||||
has been moved. Moving multiple replicas at the same time might
|
Moving multiple replicas at the same time
|
||||||
make data inaccessible. This value can be set separately in the
|
makes data inaccessible. This value can be set separately in the
|
||||||
swift, container, account, and policy sections with the value in
|
swift, container, account, and policy sections with the value in
|
||||||
lower sections superseding the value in the swift section.
|
lower sections superseding the value in the swift section.
|
||||||
|
|
||||||
@ -85,13 +85,13 @@ file**
|
|||||||
section.
|
section.
|
||||||
|
|
||||||
``storage_network``
|
``storage_network``
|
||||||
By default, the swift services will listen on the default
|
By default, the swift services listen on the default
|
||||||
management IP. Optionally, specify the interface of the storage
|
management IP. Optionally, specify the interface of the storage
|
||||||
network.
|
network.
|
||||||
|
|
||||||
If the ``storage_network`` is not set, but the ``storage_ips``
|
If the ``storage_network`` is not set, but the ``storage_ips``
|
||||||
per host are set (or the ``storage_ip`` is not on the
|
per host are set (or the ``storage_ip`` is not on the
|
||||||
``storage_network`` interface) the proxy server will not be able
|
``storage_network`` interface) the proxy server is unable
|
||||||
to connect to the storage services.
|
to connect to the storage services.
|
||||||
|
|
||||||
``replication_network``
|
``replication_network``
|
||||||
@ -99,9 +99,8 @@ file**
|
|||||||
dedicated replication can be set up. If this value is not
|
dedicated replication can be set up. If this value is not
|
||||||
specified, no dedicated ``replication_network`` is set.
|
specified, no dedicated ``replication_network`` is set.
|
||||||
|
|
||||||
As with the ``storage_network``, if the ``repl_ip`` is not set on
|
Replication does not work properly if the ``repl_ip`` is not set on
|
||||||
the ``replication_network`` interface, replication will not work
|
the ``replication_network`` interface.
|
||||||
properly.
|
|
||||||
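For example, in ``swift.yml`` (the bridge names are examples and must
match the interfaces configured on your hosts):

.. code-block:: yaml

   storage_network: 'br-storage'
   replication_network: 'br-repl'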
|
|
||||||
``drives``
|
``drives``
|
||||||
Set the default drives per host. This is useful when all hosts
|
Set the default drives per host. This is useful when all hosts
|
||||||
@ -123,39 +122,39 @@ file**
|
|||||||
created before storage policies were instituted.
|
created before storage policies were instituted.
|
||||||
|
|
||||||
``default``
|
``default``
|
||||||
Set the default value to *yes* for at least one policy. This is
|
Set the default value to ``yes`` for at least one policy. This is
|
||||||
the default storage policy for any non-legacy containers that are
|
the default storage policy for any non-legacy containers that are
|
||||||
created.
|
created.
|
||||||
|
|
||||||
``deprecated``
|
``deprecated``
|
||||||
Set the deprecated value to *yes* to turn off storage policies.
|
Set the deprecated value to ``yes`` to turn off storage policies.
|
||||||
|
|
||||||
For account and container rings, ``min_part_hours`` and
|
For account and container rings, ``min_part_hours`` and
|
||||||
``repl_number`` are the only values that can be set. Setting them
|
``repl_number`` are the only values that can be set. Setting them
|
||||||
in this section overrides the defaults for the specific ring.
|
in this section overrides the defaults for the specific ring.
|
||||||
|
|
||||||
``statsd_host``
|
``statsd_host``
|
||||||
Swift supports sending extra metrics to a statsd host. This option
|
Swift supports sending extra metrics to a ``statsd`` host. This option
|
||||||
sets the statsd host that will receive statsd metrics. Specifying
|
sets the ``statsd`` host to receive ``statsd`` metrics. Specifying
|
||||||
this here will apply to all hosts in the cluster.
|
this here applies to all hosts in the cluster.
|
||||||
|
|
||||||
If statsd_host is left blank or omitted then statsd will be
|
If ``statsd_host`` is left blank or omitted, then ``statsd`` metrics are
|
||||||
disabled.
|
disabled.
|
||||||
|
|
||||||
All statsd settings can be overridden or specified deeper in the
|
All ``statsd`` settings can be overridden or specified deeper in the
|
||||||
structure if you want to only catch statsd metrics on certain hosts.
|
structure if you want to catch ``statsd`` metrics only on certain hosts.
|
||||||
|
|
||||||
``statsd_port``
|
``statsd_port``
|
||||||
Optionally, use this to specify the statsd server's port your sending
|
Optionally, use this to specify the ``statsd`` server's port you are
|
||||||
metrics to. Defaults to 8125 if omitted.
|
sending metrics to. Defaults to 8125 if omitted.
|
||||||
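A sketch of the two options together (the address is a placeholder for
your ``statsd`` server):

.. code-block:: yaml

   statsd_host: 10.240.0.60
   statsd_port: 8125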
|
|
||||||
``statsd_default_sample_rate`` and ``statsd_sample_rate_factor``
|
``statsd_default_sample_rate`` and ``statsd_sample_rate_factor``
|
||||||
These statsd related options are a little more complicated and are
|
These ``statsd`` related options are more complex and are
|
||||||
used to tune how many samples are sent to statsd. Omit them unless
|
used to tune how many samples are sent to ``statsd``. Omit them unless
|
||||||
you need to tweak these settings, if so first read:
|
you need to tweak these settings, if so first read:
|
||||||
http://docs.openstack.org/developer/swift/admin_guide.html
|
http://docs.openstack.org/developer/swift/admin_guide.html
|
||||||
|
|
||||||
#. Update the Object Storage proxy hosts values:
|
#. Update the swift proxy hosts values:
|
||||||
|
|
||||||
.. code-block:: yaml
|
.. code-block:: yaml
|
||||||
|
|
||||||
@ -171,17 +170,16 @@ file**
|
|||||||
# statsd_metric_prefix: proxy03
|
# statsd_metric_prefix: proxy03
|
||||||
|
|
||||||
``swift-proxy_hosts``
|
``swift-proxy_hosts``
|
||||||
Set the ``IP`` address of the hosts that Ansible will connect to
|
Set the ``IP`` address of the hosts to which Ansible connects
|
||||||
to deploy the swift-proxy containers. The ``swift-proxy_hosts``
|
to deploy the ``swift-proxy`` containers. The ``swift-proxy_hosts``
|
||||||
value should match the infra nodes.
|
value matches the infra nodes.
|
||||||
|
|
||||||
``statsd_metric_prefix``
|
``statsd_metric_prefix``
|
||||||
This metric is optional, and only evaluated if you have defined
|
This metric is optional, and only evaluated if you have defined
|
||||||
``statsd_host`` somewhere. It allows you to define a prefix to add to
|
``statsd_host`` somewhere. It allows you to define a prefix to add to
|
||||||
all statsd metrics sent from this host. If omitted the node name will
|
all ``statsd`` metrics sent from this host. If omitted, the node name is used.
|
||||||
be used.
|
|
||||||
|
|
||||||
#. Update the Object Storage hosts values:
|
#. Update the swift hosts values:
|
||||||
|
|
||||||
.. code-block:: yaml
|
.. code-block:: yaml
|
||||||
|
|
||||||
@ -237,20 +235,20 @@ file**
|
|||||||
``swift_hosts``
|
``swift_hosts``
|
||||||
Specify the hosts to be used as the storage nodes. The ``ip`` is
|
Specify the hosts to be used as the storage nodes. The ``ip`` is
|
||||||
the address of the host to which Ansible connects. Set the name
|
the address of the host to which Ansible connects. Set the name
|
||||||
and IP address of each Object Storage host. The ``swift_hosts``
|
and IP address of each swift host. The ``swift_hosts``
|
||||||
section is not required.
|
section is not required.
|
||||||
|
|
||||||
``swift_vars``
|
``swift_vars``
|
||||||
Contains the Object Storage host specific values.
|
Contains the swift host specific values.
|
||||||
|
|
||||||
``storage_ip`` and ``repl_ip``
|
``storage_ip`` and ``repl_ip``
|
||||||
These values are based on the IP addresses of the host's
|
Base these values on the IP addresses of the host's
|
||||||
``storage_network`` or ``replication_network``. For example, if
|
``storage_network`` or ``replication_network``. For example, if
|
||||||
the ``storage_network`` is ``br-storage`` and host1 has an IP
|
the ``storage_network`` is ``br-storage`` and host1 has an IP
|
||||||
address of 1.1.1.1 on ``br-storage``, then that is the IP address
|
address of 1.1.1.1 on ``br-storage``, then this is the IP address
|
||||||
that will be used for ``storage_ip``. If only the ``storage_ip``
|
in use for ``storage_ip``. If only the ``storage_ip``
|
||||||
is specified then the ``repl_ip`` defaults to the ``storage_ip``.
|
is specified, then the ``repl_ip`` defaults to the ``storage_ip``.
|
||||||
If neither are specified, both will default to the host IP
|
If neither are specified, both default to the host IP
|
||||||
address.
|
address.
|
||||||
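Following the example above, a hedged sketch of the relevant
``swift_hosts`` entry (the management ``ip`` value is a placeholder):

.. code-block:: yaml

   swift_hosts:
     host1:
       ip: 10.240.0.10
       container_vars:
         swift_vars:
           # Address on the br-storage interface; repl_ip defaults
           # to this value when it is not set.
           storage_ip: 1.1.1.1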
|
|
||||||
Overriding these values on a host or drive basis can cause
|
Overriding these values on a host or drive basis can cause
|
||||||
@ -259,11 +257,11 @@ file**
|
|||||||
the ring is set to a different IP address.
|
the ring is set to a different IP address.
|
||||||
|
|
||||||
``zone``
|
``zone``
|
||||||
The default is 0. Optionally, set the Object Storage zone for the
|
The default is 0. Optionally, set the swift zone for the
|
||||||
ring.
|
ring.
|
||||||
|
|
||||||
``region``
|
``region``
|
||||||
Optionally, set the Object Storage region for the ring.
|
Optionally, set the swift region for the ring.
|
||||||
|
|
||||||
``weight``
|
``weight``
|
||||||
The default weight is 100. If the drives are different sizes, set
|
The default weight is 100. If the drives are different sizes, set
|
||||||
@ -273,21 +271,20 @@ file**
|
|||||||
|
|
||||||
``groups``
|
``groups``
|
||||||
Set the groups to list the rings to which a host's drive belongs.
|
Set the groups to list the rings to which a host's drive belongs.
|
||||||
This can be set on a per drive basis which will override the host
|
This can be set on a per drive basis which overrides the host
|
||||||
setting.
|
setting.
|
||||||
|
|
||||||
``drives``
|
``drives``
|
||||||
Set the names of the drives on this Object Storage host. At least
|
Set the names of the drives on the swift host. Specify at least
|
||||||
one name must be specified.
|
one name.
|
||||||
|
|
||||||
``statsd_metric_prefix``
|
``statsd_metric_prefix``
|
||||||
This metric is optional, and only evaluated if you have defined
|
This metric is optional, and only evaluated if ``statsd_host`` is defined
|
||||||
``statsd_host`` somewhere. It allows you to define a prefix to add to
|
somewhere. This allows you to define a prefix to add to
|
||||||
all statsd metrics sent from this host. If omitted the node name will
|
all ``statsd`` metrics sent from this host. If omitted, the node name is used.
|
||||||
be used.
|
|
||||||
|
|
||||||
In the following example, ``swift-node5`` shows values in the
|
In the following example, ``swift-node5`` shows values in the
|
||||||
``swift_hosts`` section that will override the global values. Groups
|
``swift_hosts`` section that override the global values. Groups
|
||||||
are set, which overrides the global settings for drive ``sdb``. The
|
are set, which overrides the global settings for drive ``sdb``. The
|
||||||
weight is overridden for the host and specifically adjusted on drive
|
weight is overridden for the host and specifically adjusted on drive
|
||||||
``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently
|
``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently
|
||||||
|
@ -1,30 +1,28 @@
|
|||||||
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
`Home <index.html>`_ OpenStack-Ansible Installation Guide
|
||||||
|
|
||||||
Storage devices
|
Storage devices
|
||||||
---------------
|
===============
|
||||||
|
|
||||||
This section offers a set of prerequisite instructions for setting up
|
This section offers a set of prerequisite instructions for setting up
|
||||||
Object Storage storage devices. The storage devices must be set up
|
Object Storage (swift) storage devices. The storage devices must be set up
|
||||||
before installing Object Storage.
|
before installing swift.
|
||||||
|
|
||||||
**Procedure 5.1. Configuring and mounting storage devices**
|
**Procedure 5.1. Configuring and mounting storage devices**
|
||||||
|
|
||||||
Object Storage recommends a minimum of three Object Storage hosts
|
We recommend a minimum of three swift hosts
|
||||||
with five storage disks. The example commands in this procedure
|
with five storage disks. The example commands in this procedure
|
||||||
assume the storage devices for Object Storage are devices ``sdc``
|
use the storage devices ``sdc`` through ``sdg``.
|
||||||
through ``sdg``.
|
|
||||||
|
|
||||||
#. Determine the storage devices on the node to be used for Object
|
#. Determine the storage devices on the node to be used for swift.
|
||||||
Storage.
|
|
||||||
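   For example, list the block devices with ``lsblk`` (an illustrative
   sketch; the output varies per node):

   .. code-block:: shell-session

      # lsblk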
|
|
||||||
#. Format each device on the node used for storage with XFS. While
|
#. Format each device on the node used for storage with XFS. While
|
||||||
formatting the devices, add a unique label for each device.
|
formatting the devices, add a unique label for each device.
|
||||||
|
|
||||||
Without labels, a failed drive can cause mount points to shift and
|
Without labels, a failed drive causes mount points to shift and
|
||||||
data to become inaccessible.
|
data to become inaccessible.
|
||||||
|
|
||||||
For example, create the file systems on the devices using the
|
For example, create the file systems on the devices using the
|
||||||
**mkfs** command
|
``mkfs`` command:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -37,7 +35,7 @@ through ``sdg``.
|
|||||||
|
|
||||||
#. Add the mount locations to the ``fstab`` file so that the storage
|
#. Add the mount locations to the ``fstab`` file so that the storage
|
||||||
devices are remounted on boot. The following example mount options
|
devices are remounted on boot. The following example mount options
|
||||||
are recommended when using XFS.
|
are recommended when using XFS:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -47,7 +45,7 @@ through ``sdg``.
|
|||||||
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
|
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
|
||||||
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
|
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
|
||||||
|
|
||||||
#. Create the mount points for the devices using the **mkdir** command.
|
#. Create the mount points for the devices using the ``mkdir`` command:
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -58,7 +56,7 @@ through ``sdg``.
|
|||||||
# mkdir -p /srv/node/sdg
|
# mkdir -p /srv/node/sdg
|
||||||
|
|
||||||
The mount point is referenced as the ``mount_point`` parameter in
|
The mount point is referenced as the ``mount_point`` parameter in
|
||||||
the ``swift.yml`` file (``/etc/rpc_deploy/conf.d/swift.yml``).
|
the ``swift.yml`` file (``/etc/rpc_deploy/conf.d/swift.yml``):
|
||||||
|
|
||||||
.. code-block:: shell-session
|
.. code-block:: shell-session
|
||||||
|
|
||||||
@ -89,7 +87,7 @@ For the following mounted devices:
|
|||||||
|
|
||||||
Table: Table 5.1. Mounted devices
|
Table: Table 5.1. Mounted devices
|
||||||
|
|
||||||
The entry in the ``swift.yml`` would be:
|
The entry in the ``swift.yml`` file:
|
||||||
|
|
||||||
.. code-block:: yaml
|
.. code-block:: yaml
|
||||||
|
|
||||||
|
@ -1,17 +1,16 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Integrate with the Image Service
Integrate with the Image Service (glance)
--------------------------------
=========================================

Optionally, the images created by the Image Service (glance) can be
As an option, you can create images in Image Service (glance) and
stored using Object Storage.
store them using Object Storage (swift).

If there is an existing Image Service (glance) backend (for example,
If there is an existing glance backend (for example,
cloud files) but want to add Object Storage (swift) to use as the Image
cloud files) but you want to add swift to use as the glance backend,
Service back end, re-add any images from the Image Service after moving
you can re-add any images from glance after moving
to Object Storage. If the Image Service variables are changed (as
to swift. Images are no longer available if there is a change in the
described below) and begin using Object storage, any images in the Image
glance variables when you begin using swift.
Service will no longer be available.

**Procedure 5.3. Integrating Object Storage with Image Service**

@ -19,7 +18,7 @@ This procedure requires the following:

- OSA Kilo (v11)

- Object Storage v 2.2.0
- Object Storage v2.2.0

#. Update the glance options in the
``/etc/openstack_deploy/user_variables.yml`` file:
@ -47,7 +46,7 @@ This procedure requires the following:
- ``glance_swift_store_endpoint_type``: Set the endpoint type to
``internalURL``.

- ``glance_swift_store_key``: Set the Image Service password using
- ``glance_swift_store_key``: Set the glance password using
the ``{{ glance_service_password }}`` variable.

- ``glance_swift_store_region``: Set the region. The default value
@ -56,9 +55,9 @@ This procedure requires the following:
- ``glance_swift_store_user``: Set the tenant and user name to
``'service:glance'``.

#. Rerun the Image Service (glance) configuration plays.
#. Rerun the glance configuration plays.

#. Run the Image Service (glance) playbook:
#. Run the glance playbook:

.. code-block:: shell-session
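Taken together, the glance options in this procedure might look like the following in ``user_variables.yml``. This is a sketch: only the variables named above are shown, and the region value is illustrative:

.. code-block:: yaml

    glance_swift_store_endpoint_type: internalURL
    glance_swift_store_key: '{{ glance_service_password }}'
    glance_swift_store_region: RegionOne
    glance_swift_store_user: 'service:glance'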
@ -1,23 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Overview
--------

Object Storage is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.

The group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file are used by the
Ansible playbooks when installing Object Storage. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.

To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.

--------------

.. include:: navigation.txt
@ -1,15 +1,15 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Storage Policies
Storage policies
----------------
================

Storage Policies allow segmenting the cluster for various purposes
Storage policies allow segmenting the cluster for various purposes
through the creation of multiple object rings. Using policies, different
devices can belong to different rings with varying levels of
replication. By supporting multiple object rings, Object Storage can
replication. By supporting multiple object rings, swift can
segregate the objects within a single cluster.

Storage policies can be used for the following situations:
Use storage policies for the following situations:

- Differing levels of replication: A provider may want to offer 2x
replication and 3x replication, but does not want to maintain two
@ -27,7 +27,7 @@ Storage policies can be used for the following situations:

- Differing storage implementations: A policy can be used to direct
traffic to collected nodes that use a different disk file (for
example, Kinetic, GlusterFS).
example: Kinetic, GlusterFS).

Most storage clusters do not require more than one storage policy. The
following problems can occur if using multiple storage policies per
@ -37,11 +37,11 @@ cluster:
drives are part of only the account, container, and default storage
policy groups) creates an empty ring for that storage policy.

- A non-default storage policy is used only if specified when creating
- Only use a non-default storage policy if specified when creating
a container, using the ``X-Storage-Policy: <policy-name>`` header.
After the container is created, it uses the created storage policy.
After creating the container, it uses the storage policy.
Other containers continue using the default or another storage policy
Other containers continue using the default or another specified
specified when created.
storage policy.

For more information about storage policies, see: `Storage
Policies <http://docs.openstack.org/developer/swift/overview_policies.html>`_
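Selecting a non-default policy with the ``X-Storage-Policy`` header can be sketched as follows; the token, storage URL, and ``gold`` policy name are placeholders:

.. code-block:: shell-session

    # Create a container that uses the "gold" storage policy:
    $ curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
        -H "X-Storage-Policy: gold" \
        $STORAGE_URL/mycontainer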
@ -1,23 +1,22 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Object Storage service (optional)
Configuring the Object Storage (swift) service (optional)
-------------------------------------------------
=========================================================

.. toctree::

configure-swift-overview.rst
configure-swift-devices.rst
configure-swift-config.rst
configure-swift-glance.rst
configure-swift-add.rst
configure-swift-policies.rst

Object Storage (swift) is a multi-tenant object storage system. It is
Object Storage (swift) is a multi-tenant Object Storage system. It is
highly scalable, can manage large amounts of unstructured data, and
provides a RESTful HTTP API.

The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable Object Storage
modify the Object Storage configuration files to enable swift
usage.

#. `The section called "Configure and mount storage
@ -26,20 +25,39 @@ usage.
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_

#. Optionally, allow all Identity users to use Object Storage by setting
#. Optionally, allow all Identity (keystone) users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
Identity (keystone) users) can create containers and upload objects
keystone users) can create containers and upload objects
to Object Storage.

If this value is ``False``, then by default, only users with the
admin or swiftoperator role are allowed to create containers or
admin or ``swiftoperator`` role are allowed to create containers or
manage tenants.

When the backend type for the Image Service (glance) is set to
``swift``, the Image Service can access the Object Storage cluster
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
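Enabling all keystone users is a single override in ``user_variables.yml``; a minimal sketch:

.. code-block:: yaml

    # Allow any user with the _member_ role to create containers
    # and upload objects:
    swift_allow_all_users: True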
Overview
~~~~~~~~

Object Storage (swift) is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.

When installing swift, use the group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file for the Ansible
playbooks. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.

To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.

--------------

.. include:: navigation.txt