[docs] Remove duplicated content

1. Remove optional configuration content in the install guide, which is
now located in the developer docs
2. Fix appendix titles

Change-Id: Ie3d7223d38dfda822b18bde123b68baa415418bc
Implements: blueprint osa-install-guide-overhaul
This commit is contained in:
daz 2016-07-29 15:40:52 +10:00 committed by Darren Chan
parent 0392a70533
commit b52c21e7bb
32 changed files with 26 additions and 3495 deletions

View File

@ -10,74 +10,6 @@ The Telemetry (ceilometer) alarming services perform the following functions:
- Allows you to set alarms based on threshold evaluation for a collection of
samples.
Aodh on OpenStack-Ansible requires a configured MongoDB backend prior to
running the Aodh playbooks. To specify the connection data, edit the
``user_variables.yml`` file (see section `Configuring the user data`_
below).
Setting up a MongoDB database for Aodh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install the MongoDB package:
.. code-block:: console
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the
management interface of the node running Aodh:
.. code-block:: ini
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:
.. code-block:: ini
smallfiles = true
4. Restart the MongoDB service:
.. code-block:: console
# service mongodb restart
5. Create the Aodh database:
.. code-block:: console
# mongo --host controller --eval 'db = db.getSiblingDB("aodh"); db.addUser({user: "aodh", pwd: "AODH_DBPASS", roles: [ "readWrite", "dbAdmin" ]});'
This returns:
.. code-block:: console
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
"user" : "aodh",
"pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
"roles" : [
"readWrite",
"dbAdmin"
],
"_id" : ObjectId("5489c22270d7fad1ba631dc3")
}
.. note::
Ensure ``AODH_DBPASS`` matches the
``aodh_container_db_password`` in the
``/etc/openstack_deploy/user_secrets.yml`` file. This
allows Ansible to configure the connection string within
the Aodh configuration files.
Configuring the hosts
@ -121,19 +53,6 @@ The ``metering-alarm_hosts`` provides several services:
These services communicate by using the OpenStack messaging bus. Only
the API server has access to the data store.
Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the following settings in
``/etc/openstack_deploy/user_variables.yml`` (see the example after the list):
- The type of database backend Aodh uses. Currently, only MongoDB
is supported: ``aodh_db_type: mongodb``
- The IP address of the MongoDB host: ``aodh_db_ip: localhost``
- The port of the MongoDB service: ``aodh_db_port: 27017``
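For example, a minimal sketch of these settings in ``user_variables.yml``
(the IP address shown is illustrative):

.. code-block:: yaml

   aodh_db_type: mongodb
   aodh_db_ip: 172.20.236.111
   aodh_db_port: 27017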
Run the ``os-aodh-install.yml`` playbook. If deploying a new OpenStack
(instead of only Aodh), run ``setup-openstack.yml``.
The Aodh playbooks run as part of this playbook.

View File

@ -146,6 +146,27 @@ certificates and keys to use with HAProxy.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Configuring additional services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional haproxy service entries can be configured by setting
``haproxy_extra_services`` in ``/etc/openstack_deploy/user_variables.yml``.
For more information on the service dict syntax, please reference
``playbooks/vars/configs/haproxy_config.yml``.
An example HTTP service could look like:
.. code-block:: yaml
haproxy_extra_services:
- service:
haproxy_service_name: extra-web-service
haproxy_backend_nodes: "{{ groups['service_group'] | default([]) }}"
haproxy_ssl: "{{ haproxy_ssl }}"
haproxy_port: 10000
haproxy_balance_type: http
--------------
.. include:: navigation.txt

View File

@ -1,5 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _configure-swift:
Configuring the Object Storage (swift) service (optional)
=========================================================

View File

@ -1,7 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
================================================
Appendix H: Customizing host and service layouts
Appendix E: Customizing host and service layouts
================================================
Understanding the default layout

View File

@ -1,6 +1,6 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Appendix F. Using Nuage Neutron Plugin
Appendix D. Using Nuage Neutron Plugin
--------------------------------------
Introduction

View File

@ -1,7 +1,7 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
==========================================
Appendix E: Using PLUMgrid Neutron plugin
Appendix D: Using PLUMgrid Neutron plugin
==========================================
Installing source and host networking

View File

@ -1,62 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Aodh service (optional)
=======================================
The Telemetry (ceilometer) alarming services perform the following functions:
- Creates an API endpoint for controlling alarms.
- Allows you to set alarms based on threshold evaluation for a collection of
samples.
Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~
Configure Aodh by specifying the ``metering-alarm_hosts`` directive in
the ``/etc/openstack_deploy/conf.d/aodh.yml`` file. The following shows
the example included in the
``etc/openstack_deploy/conf.d/aodh.yml.example`` file:
.. code-block:: yaml
# The infra nodes that the Aodh services run on.
metering-alarm_hosts:
infra1:
ip: 172.20.236.111
infra2:
ip: 172.20.236.112
infra3:
ip: 172.20.236.113
The ``metering-alarm_hosts`` provides several services:
- An API server (``aodh-api``): Runs on one or more central management
servers to provide access to the alarm information in the
data store.
- An alarm evaluator (``aodh-evaluator``): Runs on one or more central
management servers to determine when alarms fire due to the
associated statistic trend crossing a threshold over a sliding
time window.
- A notification listener (``aodh-listener``): Runs on a central
management server and fires alarms based on defined rules against
events captured by ceilometer's notification agents.
- An alarm notifier (``aodh-notifier``): Runs on one or more central
management servers to allow alarms to be set based on the
threshold evaluation for a collection of samples.
These services communicate by using the OpenStack messaging bus. Only
the API server has access to the data store.
Run the ``os-aodh-install.yml`` playbook. If deploying a new OpenStack
(instead of only Aodh), run ``setup-openstack.yml``.
The Aodh playbooks run as part of this playbook.
--------------
.. include:: navigation.txt

View File

@ -1,224 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Telemetry (ceilometer) service (optional)
=========================================================
The Telemetry module (ceilometer) performs the following functions:
- Efficiently polls metering data related to OpenStack services.
- Collects event and metering data by monitoring notifications sent from
services.
- Publishes collected data to various targets including data stores and
message queues.
.. note::
As of Liberty, the alarming functionality is in a separate component.
The metering-alarm containers handle the functionality through aodh
services. For configuring these services, see the aodh docs:
http://docs.openstack.org/developer/aodh/
Configure a MongoDB backend prior to running the ceilometer playbooks.
The connection data is in the ``user_variables.yml`` file
(see section `Configuring the user data`_ below).
Setting up a MongoDB database for ceilometer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install the MongoDB package:
.. code-block:: console
# apt-get install mongodb-server mongodb-clients python-pymongo
2. Edit the ``/etc/mongodb.conf`` file and change ``bind_ip`` to the
management interface of the node running ceilometer:
.. code-block:: ini
bind_ip = 10.0.0.11
3. Edit the ``/etc/mongodb.conf`` file and enable ``smallfiles``:
.. code-block:: ini
smallfiles = true
4. Restart the MongoDB service:
.. code-block:: console
# service mongodb restart
5. Create the ceilometer database:
.. code-block:: console
# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'
This returns:
.. code-block:: console
MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
"user" : "ceilometer",
"pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
"roles" : [
"readWrite",
"dbAdmin"
],
"_id" : ObjectId("5489c22270d7fad1ba631dc3")
}
.. note::
Ensure ``CEILOMETER_DBPASS`` matches the
``ceilometer_container_db_password`` in the
``/etc/openstack_deploy/user_secrets.yml`` file. This
allows Ansible to configure the connection string within
the ceilometer configuration files.
Configuring the hosts
~~~~~~~~~~~~~~~~~~~~~
Configure ceilometer by specifying the ``metering-compute_hosts`` and
``metering-infra_hosts`` directives in the
``/etc/openstack_deploy/conf.d/ceilometer.yml`` file. Below is the
example included in the
``etc/openstack_deploy/conf.d/ceilometer.yml.example`` file:
.. code-block:: yaml
# The compute host that the ceilometer compute agent runs on
metering-compute_hosts:
compute1:
ip: 172.20.236.110
# The infra node that the central agents run on
metering-infra_hosts:
infra1:
ip: 172.20.236.111
# Adding more than one host requires further configuration for ceilometer
# to work properly.
infra2:
ip: 172.20.236.112
infra3:
ip: 172.20.236.113
The ``metering-compute_hosts`` houses the ``ceilometer-agent-compute``
service. It runs on each compute node and polls for resource
utilization statistics. The ``metering-infra_hosts`` houses several
services:
- A central agent (ceilometer-agent-central): Runs on a central
management server to poll for resource utilization statistics for
resources not tied to instances or compute nodes. Multiple agents
can be started to enable workload partitioning (See HA section
below).
- A notification agent (ceilometer-agent-notification): Runs on one
or more central management servers and consumes messages from the
message queues to build event and metering data. Multiple
notification agents can be started to enable workload partitioning
(See HA section below).
- A collector (ceilometer-collector): Runs on one or more central
management servers and dispatches data to a data store
or external consumer without modification.
- An API server (ceilometer-api): Runs on one or more central
management servers to provide data access from the data store.
Configuring the hosts for an HA deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ceilometer supports running the polling and notification agents in an
HA deployment.
The Tooz library provides the coordination within the groups of service
instances. Tooz can be used with several backends. At the time of this
writing, the following backends are supported:
- Zookeeper: Recommended solution by the Tooz project.
- Redis: Recommended solution by the Tooz project.
- Memcached: Recommended for testing.
.. important::
The OpenStack-Ansible project does not deploy these backends.
The backends must exist before you deploy the ceilometer service.
Achieve HA by configuring the proper directives in ``ceilometer.conf`` using
``ceilometer_ceilometer_conf_overrides`` in the ``user_variables.yml`` file.
The ceilometer admin guide [1] details the
options used in ``ceilometer.conf`` for an HA deployment. The following is an
example of ``ceilometer_ceilometer_conf_overrides``:
.. code-block:: yaml
ceilometer_ceilometer_conf_overrides:
coordination:
backend_url: "zookeeper://172.20.1.110:2181"
notification:
workload_partitioning: True
Configuring the user data
~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the following settings in the
``/etc/openstack_deploy/user_variables.yml`` file (see the example after the list):
- The type of database backend ceilometer uses. Currently only
MongoDB is supported: ``ceilometer_db_type: mongodb``
- The IP address of the MongoDB host: ``ceilometer_db_ip:
localhost``
- The port of the MongoDB service: ``ceilometer_db_port: 27017``
- Whether swift sends notifications to the message bus:
``swift_ceilometer_enabled: False``
- Whether heat sends notifications to the message bus:
``heat_ceilometer_enabled: False``
- Whether cinder sends notifications to the message bus:
``cinder_ceilometer_enabled: False``
- Whether glance sends notifications to the message bus:
``glance_ceilometer_enabled: False``
- Whether nova sends notifications to the message bus:
``nova_ceilometer_enabled: False``
- Whether neutron sends notifications to the message bus:
``neutron_ceilometer_enabled: False``
- Whether keystone sends notifications to the message bus:
``keystone_ceilometer_enabled: False``
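A minimal sketch of these settings in ``user_variables.yml``, assuming an
illustrative MongoDB address and notifications enabled only for nova and
glance:

.. code-block:: yaml

   ceilometer_db_type: mongodb
   ceilometer_db_ip: 172.20.236.111
   ceilometer_db_port: 27017
   nova_ceilometer_enabled: True
   glance_ceilometer_enabled: True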
Run the ``os-ceilometer-install.yml`` playbook. If deploying a new OpenStack
(instead of only ceilometer), run ``setup-openstack.yml``. The
ceilometer playbooks run as part of this playbook.
References
~~~~~~~~~~
[1] `Ceilometer Admin Guide`_
.. _Ceilometer Admin Guide: http://docs.openstack.org/admin-guide/telemetry-data-collection.html
--------------
.. include:: navigation.txt

View File

@ -1,97 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Ceph client (optional)
======================================
Ceph is a massively scalable, open source, distributed storage system.
These links provide details on how to use Ceph with OpenStack:
* `Ceph Block Devices and OpenStack`_
* `Ceph - The De Facto Storage Backend for OpenStack`_ *(Hong Kong Summit
talk)*
* `OpenStack Config Reference - Ceph RADOS Block Device (RBD)`_
* `OpenStack-Ansible and Ceph Working Example`_
.. _Ceph Block Devices and OpenStack: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
.. _Ceph - The De Facto Storage Backend for OpenStack: https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/ceph-the-de-facto-storage-backend-for-openstack
.. _OpenStack Config Reference - Ceph RADOS Block Device (RBD): http://docs.openstack.org/liberty/config-reference/content/ceph-rados.html
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/
.. note::
Configuring Ceph storage servers is outside the scope of this documentation.
Authentication
~~~~~~~~~~~~~~
We recommend the ``cephx`` authentication method in the `Ceph
config reference`_. OpenStack-Ansible enables ``cephx`` by default for
the Ceph client. You can choose to override this setting by using the
``cephx`` Ansible variable:
.. code-block:: yaml
cephx: False
Deploy Ceph on a trusted network if disabling ``cephx``.
.. _Ceph config reference: http://docs.ceph.com/docs/master/rados/configuration/auth-config-ref/
Configuration file overrides
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible provides the ``ceph_conf_file`` variable. This allows
you to specify configuration file options to override the default
Ceph configuration:
.. code-block:: console
ceph_conf_file: |
[global]
fsid = 4037aa5f-abde-4378-9470-f73dbd6ceaba
mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
mon_host = 172.29.244.151,172.29.244.152,172.29.244.153
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
The use of the ``ceph_conf_file`` variable is optional. By default,
OpenStack-Ansible obtains a copy of ``ceph.conf`` from one of your Ceph
monitors. This transfer of ``ceph.conf`` requires the OpenStack-Ansible
deployment host public key to be deployed to all of the Ceph monitors. More
details are available here: `Deploying SSH Keys`_.
The following minimal example configuration sets nova and glance
to use ceph pools: ``ephemeral-vms`` and ``images`` respectively.
The example uses ``cephx`` authentication, and requires existing ``glance`` and
``cinder`` accounts for ``images`` and ``ephemeral-vms`` pools.
.. code-block:: yaml
glance_default_store: rbd
nova_libvirt_images_rbd_pool: ephemeral-vms
.. _Deploying SSH Keys: targethosts-prepare.html#deploying-ssh-keys
Monitors
~~~~~~~~
The `Ceph Monitor`_ maintains a master copy of the cluster map.
OpenStack-Ansible provides the ``ceph_mons`` variable and expects a list of
IP addresses for the Ceph Monitor servers in the deployment:
.. code-block:: yaml
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
.. _Ceph Monitor: http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/
--------------
.. include:: navigation.txt

View File

@ -1,459 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Block (cinder) storage service (optional)
=========================================================
By default, the Block (cinder) storage service installs on the host itself
using the LVM backend.
.. note::
While this is the default for cinder, using the LVM backend results in a
single point of failure. Because the volume service is deployed
directly on the host, ``is_metal`` is ``true`` when using LVM.
NFS backend
~~~~~~~~~~~~
Edit ``/etc/openstack_deploy/openstack_user_config.yml`` and configure
the NFS client on each storage node if the NetApp backend is configured to use
an NFS storage protocol. A combined example follows these steps.
#. Add the ``cinder_backends`` stanza (which includes
``cinder_nfs_client``) under the ``container_vars`` stanza for
each storage node:
.. code-block:: yaml
container_vars:
cinder_backends:
cinder_nfs_client:
#. Configure the location of the file that lists shares available to the
block storage service. This configuration file must include
``nfs_shares_config``:
.. code-block:: yaml
nfs_shares_config: SHARE_CONFIG
Replace ``SHARE_CONFIG`` with the location of the share
configuration file. For example, ``/etc/cinder/nfs_shares``.
#. Configure one or more NFS shares:
.. code-block:: yaml
shares:
- { ip: "NFS_HOST", share: "NFS_SHARE" }
Replace ``NFS_HOST`` with the IP address or hostname of the NFS
server, and the ``NFS_SHARE`` with the absolute path to an existing
and accessible NFS share.
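Putting the preceding steps together, a sketch of one storage node entry in
``openstack_user_config.yml``, limited to the options named in the steps above
(the host name, IP addresses, and share path are illustrative):

.. code-block:: yaml

   storage_hosts:
     storage1:
       ip: 172.29.236.121
       container_vars:
         cinder_backends:
           cinder_nfs_client:
             nfs_shares_config: /etc/cinder/nfs_shares
             shares:
               - { ip: "172.29.244.200", share: "/vol/cinder" }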
Backup
~~~~~~
You can configure cinder to backup volumes to Object
Storage (swift). Enable the default
configuration to back up volumes to a swift installation
accessible within your environment. Alternatively, you can set
``cinder_service_backup_swift_url`` and other variables to
back up to an external swift installation.
#. Add or edit the following line in the
``/etc/openstack_deploy/user_variables.yml`` file and set the value
to ``True``:
.. code-block:: yaml
cinder_service_backup_program_enabled: True
#. By default, cinder uses the access credentials of the user
initiating the backup. Default values are set in the
``/opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml``
file. You can override those defaults by setting variables in
``/etc/openstack_deploy/user_variables.yml`` to change how cinder
performs backups. Add and edit any of the
following variables to the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
...
cinder_service_backup_swift_auth: per_user
# Options include 'per_user' or 'single_user'. We default to
# 'per_user' so that backups are saved to a user's swift
# account.
cinder_service_backup_swift_url:
# This is your swift storage url when using 'per_user', or keystone
# endpoint when using 'single_user'. When using 'per_user', you
# can leave this as empty or as None to allow cinder-backup to
# obtain a storage url from environment.
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2
During installation of cinder, the backup service is configured.
Using Ceph for cinder backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can deploy Ceph to hold cinder volume backups.
To get started, set the ``cinder_service_backup_driver`` Ansible
variable:
.. code-block:: yaml
cinder_service_backup_driver: cinder.backup.drivers.ceph
Configure the Ceph user and the pool to use for backups. The defaults
are shown here:
.. code-block:: yaml
cinder_service_backup_ceph_user: cinder-backup
cinder_service_backup_ceph_pool: backups
Availability zones
~~~~~~~~~~~~~~~~~~
Create multiple availability zones to manage cinder storage hosts. Edit the
``/etc/openstack_deploy/openstack_user_config.yml`` and
``/etc/openstack_deploy/user_variables.yml`` files to set up
availability zones. A combined example follows these steps.
#. For each cinder storage host, configure the availability zone under
the ``container_vars`` stanza:
.. code-block:: yaml
cinder_storage_availability_zone: CINDERAZ
Replace ``CINDERAZ`` with a suitable name. For example
``cinderAZ_2``.
#. If more than one availability zone is created, configure the default
availability zone for all the hosts by setting
``cinder_default_availability_zone`` in your
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
cinder_default_availability_zone: CINDERAZ_DEFAULT
Replace ``CINDERAZ_DEFAULT`` with a suitable name. For example,
``cinderAZ_1``. The default availability zone should be the same
for all cinder hosts.
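A combined sketch, showing only the availability zone settings (host names,
addresses, and zone names are illustrative). In ``openstack_user_config.yml``:

.. code-block:: yaml

   storage_hosts:
     storage1:
       ip: 172.29.236.121
       container_vars:
         cinder_storage_availability_zone: cinderAZ_1
     storage2:
       ip: 172.29.236.122
       container_vars:
         cinder_storage_availability_zone: cinderAZ_2

In ``user_variables.yml``:

.. code-block:: yaml

   cinder_default_availability_zone: cinderAZ_1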
OpenStack Dashboard (horizon) configuration for cinder
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can configure variables to set the behavior for cinder
volume management in OpenStack Dashboard (horizon).
By default, no horizon configuration is set; an example follows this list.
#. The default destination availability zone is ``nova`` if you use
multiple availability zones and ``cinder_default_availability_zone``
has no definition. Volume creation with
horizon might fail if there is no availability zone named ``nova``.
Set ``cinder_default_availability_zone`` to an appropriate
availability zone name so that :guilabel:`Any availability zone`
works in horizon.
#. horizon does not populate the volume type by default. On the new
volume page, a request for the creation of a volume with the
default parameters fails. Set ``cinder_default_volume_type`` so
that a volume creation request without an explicit volume type
succeeds.
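A minimal sketch of the two settings in
``/etc/openstack_deploy/user_variables.yml`` (the zone and volume type names
are illustrative and must already exist in your deployment):

.. code-block:: yaml

   cinder_default_availability_zone: cinderAZ_1
   cinder_default_volume_type: lvm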
Configuring cinder to use LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. List the ``container_vars`` that contain the storage options for the target
host.
.. note::
The vars related to the cinder availability zone and the
``limit_container_types`` are optional.
To configure an LVM backend, utilize the following example:
.. code-block:: yaml
storage_hosts:
Infra01:
ip: 172.29.236.16
container_vars:
cinder_storage_availability_zone: cinderAZ_1
cinder_default_availability_zone: cinderAZ_1
cinder_backends:
lvm:
volume_backend_name: LVM_iSCSI
volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group: cinder-volumes
iscsi_ip_address: "{{ storage_address }}"
limit_container_types: cinder_volume
To use another backend in a
container instead of bare metal, edit
the ``/etc/openstack_deploy/env.d/cinder.yml`` file and remove the
``is_metal: true`` stanza under the ``cinder_volumes_container`` properties,
as sketched below.
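For illustration, the relevant stanza in ``env.d/cinder.yml`` is assumed to
look similar to the following sketch; removing (or commenting out) the
``is_metal`` property deploys the volume service in a container:

.. code-block:: yaml

   container_skel:
     cinder_volumes_container:
       properties:
         # is_metal: true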
Configuring cinder to use Ceph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For cinder to use Ceph, it is necessary to configure both
the API and the backend. When using any form of network storage
(iSCSI, NFS, Ceph) for cinder, the API containers can be considered
backend servers, and a separate storage host is not required.
Remove ``is_metal: true`` in ``env.d/cinder.yml``.
#. List the target hosts on which to deploy the cinder API. We recommend
using a minimum of three target hosts for this service.
.. code-block:: yaml
storage-infra_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
To configure an RBD backend, utilize the following example:
.. code-block:: yaml
container_vars:
cinder_storage_availability_zone: cinderAZ_3
cinder_default_availability_zone: cinderAZ_1
cinder_backends:
limit_container_types: cinder_volume
volumes_hdd:
volume_driver: cinder.volume.drivers.rbd.RBDDriver
rbd_pool: volumes_hdd
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot: 'false'
rbd_max_clone_depth: 5
rbd_store_chunk_size: 4
rados_connect_timeout: -1
volume_backend_name: volumes_hdd
rbd_user: "{{ cinder_ceph_client }}"
rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
The following example sets cinder to use the ``cinder_volumes`` pool.
The example uses cephx authentication and requires an existing ``cinder``
account for the ``cinder_volumes`` pool.
In ``user_variables.yml``:
.. code-block:: yaml
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
In ``openstack_user_config.yml``:
.. code-block:: yaml
storage_hosts:
infra1:
ip: 172.29.236.101
container_vars:
cinder_backends:
limit_container_types: cinder_volume
rbd:
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.rbd.RBDDriver
volume_backend_name: rbd
rbd_pool: cinder-volumes
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_user: cinder
infra2:
ip: 172.29.236.102
container_vars:
cinder_backends:
limit_container_types: cinder_volume
rbd:
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.rbd.RBDDriver
volume_backend_name: rbd
rbd_pool: cinder-volumes
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_user: cinder
infra3:
ip: 172.29.236.103
container_vars:
cinder_backends:
limit_container_types: cinder_volume
rbd:
volume_group: cinder-volumes
volume_driver: cinder.volume.drivers.rbd.RBDDriver
volume_backend_name: rbd
rbd_pool: cinder-volumes
rbd_ceph_conf: /etc/ceph/ceph.conf
rbd_user: cinder
This link provides a complete working example of Ceph setup and
integration with cinder (nova and glance included):
* `OpenStack-Ansible and Ceph Working Example`_
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/
Configuring cinder to use a NetApp appliance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use a NetApp storage appliance back end, edit the
``/etc/openstack_deploy/openstack_user_config.yml`` file and configure
each storage node that will use it.
.. note::
Ensure that the NAS Team enables ``httpd.admin.access``.
#. Add the ``netapp`` stanza under the ``cinder_backends`` stanza for
each storage node:
.. code-block:: yaml
cinder_backends:
netapp:
The options in subsequent steps fit under the ``netapp`` stanza.
The backend name is arbitrary and becomes a volume type within cinder.
#. Configure the storage family:
.. code-block:: yaml
netapp_storage_family: STORAGE_FAMILY
Replace ``STORAGE_FAMILY`` with ``ontap_7mode`` for Data ONTAP
operating in 7-mode or ``ontap_cluster`` for Data ONTAP operating as
a cluster.
#. Configure the storage protocol:
.. code-block:: yaml
netapp_storage_protocol: STORAGE_PROTOCOL
Replace ``STORAGE_PROTOCOL`` with ``iscsi`` for iSCSI or ``nfs``
for NFS.
For the NFS protocol, specify the location of the
configuration file that lists the shares available to cinder:
.. code-block:: yaml
nfs_shares_config: SHARE_CONFIG
Replace ``SHARE_CONFIG`` with the location of the share
configuration file. For example, ``/etc/cinder/nfs_shares``.
#. Configure the server:
.. code-block:: yaml
netapp_server_hostname: SERVER_HOSTNAME
Replace ``SERVER_HOSTNAME`` with the hostnames of both NetApp
controllers.
#. Configure the server API port:
.. code-block:: yaml
netapp_server_port: PORT_NUMBER
Replace ``PORT_NUMBER`` with 80 for HTTP or 443 for HTTPS.
#. Configure the server credentials:
.. code-block:: yaml
netapp_login: USER_NAME
netapp_password: PASSWORD
Replace ``USER_NAME`` and ``PASSWORD`` with the appropriate
values.
#. Select the NetApp driver:
.. code-block:: yaml
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
#. Configure the volume back end name:
.. code-block:: yaml
volume_backend_name: BACKEND_NAME
Replace ``BACKEND_NAME`` with a value that provides a hint
for the cinder scheduler. For example, ``NETAPP_iSCSI``.
#. Ensure the ``openstack_user_config.yml`` configuration is
accurate:
.. code-block:: yaml
storage_hosts:
Infra01:
ip: 172.29.236.16
container_vars:
cinder_backends:
limit_container_types: cinder_volume
netapp:
netapp_storage_family: ontap_7mode
netapp_storage_protocol: nfs
netapp_server_hostname: 111.222.333.444
netapp_server_port: 80
netapp_login: openstack_cinder
netapp_password: password
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name: NETAPP_NFS
For ``netapp_server_hostname``, specify the IP address of the Data
ONTAP server. Set ``netapp_storage_protocol`` to iSCSI or NFS
depending on the configuration. Set ``netapp_server_port`` to 80 if
using HTTP or 443 if using HTTPS.
The ``cinder-volume.yml`` playbook will automatically install the
``nfs-common`` package across the hosts, transitioning from an LVM to a
NetApp back end.
--------------
.. include:: navigation.txt

View File

@ -1,44 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configuring ADFS 3.0 as an identity provider
============================================
To install Active Directory Federation Services (ADFS):
* `Prerequisites for ADFS from Microsoft Technet <https://technet.microsoft.com/library/bf7f9cf4-6170-40e8-83dd-e636cb4f9ecb>`_
* `ADFS installation procedure from Microsoft Technet <https://technet.microsoft.com/en-us/library/dn303423>`_
Configuring ADFS
~~~~~~~~~~~~~~~~
#. Ensure the ADFS server trusts the service provider's (SP) keystone
certificate. We recommend having the ADFS CA (or a
public CA) sign a certificate request for the keystone service.
#. In the ADFS Management Console, choose ``Add Relying Party Trust``.
#. Select ``Import data about the relying party published online or on a
local network`` and enter the URL for the SP metadata
(for example, ``https://<SP_IP_ADDRESS or DNS_NAME>:5000/Shibboleth.sso/Metadata``).
.. note::
ADFS may give a warning message stating that ADFS skipped
some of the content gathered from the metadata because it is not supported by ADFS.
#. Continuing the wizard, select ``Permit all users to access this
relying party``.
#. In the ``Add Transform Claim Rule Wizard``, select ``Pass Through or
Filter an Incoming Claim``.
#. Name the rule (for example, ``Pass Through UPN``) and select the ``UPN``
Incoming claim type.
#. Click :guilabel:`OK` to apply the rule and finalize the setup.
References
~~~~~~~~~~
* http://blogs.technet.com/b/rmilne/archive/2014/04/28/how-to-install-adfs-2012-r2-for-office-365.aspx
* http://blog.kloud.com.au/2013/08/14/powershell-deployment-of-web-application-proxy-and-adfs-in-under-10-minutes/
* https://ethernuno.wordpress.com/2014/04/20/install-adds-on-windows-server-2012-r2-with-powershell/
--------------
.. include:: navigation.txt

View File

@ -1,81 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configure Identity service (keystone) as a federated identity provider
======================================================================
The Identity Provider (IdP) configuration for keystone provides a
dictionary attribute with the key ``keystone_idp``. The following is a
complete example:
.. code::
keystone_idp:
certfile: "/etc/keystone/ssl/idp_signing_cert.pem"
keyfile: "/etc/keystone/ssl/idp_signing_key.pem"
self_signed_cert_subject: "/C=US/ST=Texas/L=San Antonio/O=IT/CN={{ external_lb_vip_address }}"
regen_cert: false
idp_entity_id: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/idp"
idp_sso_endpoint: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/sso"
idp_metadata_path: /etc/keystone/saml2_idp_metadata.xml
service_providers:
- id: "sp_1"
auth_url: https://example.com:5000/v3/OS-FEDERATION/identity_providers/idp/protocols/saml2/auth
sp_url: https://example.com:5000/Shibboleth.sso/SAML2/ECP
organization_name: example_company
organization_display_name: Example Corp.
organization_url: example.com
contact_company: example_company
contact_name: John
contact_surname: Smith
contact_email: jsmith@example.com
contact_telephone: 555-55-5555
contact_type: technical
The following list is a reference of allowed settings:
* ``certfile`` defines the location and filename of the SSL certificate that
the IdP uses to sign assertions. This file must be in a location that is
accessible to the keystone system user.
* ``keyfile`` defines the location and filename of the SSL private key that
the IdP uses to sign assertions. This file must be in a location that is
accessible to the keystone system user.
* ``self_signed_cert_subject`` is the subject in the SSL signing
certificate. The common name of the certificate
must match the hostname configuration in the service provider(s) for
this IdP.
* ``regen_cert`` by default is set to ``False``. When set to ``True``, the
next Ansible run replaces the existing signing certificate with a new one.
This setting is added as a convenience mechanism to renew a certificate when
it is close to its expiration date.
* ``idp_entity_id`` is the entity ID. The service providers
use this as a unique identifier for each IdP.
``<keystone-public-endpoint>/OS-FEDERATION/saml2/idp`` is the value we
recommend for this setting.
* ``idp_sso_endpoint`` is the single sign-on endpoint for this IdP.
``<keystone-public-endpoint>/OS-FEDERATION/saml2/sso`` is the value
we recommend for this setting.
* ``idp_metadata_path`` is the location and filename where the metadata for
this IdP is cached. The keystone system user must have access to this
location.
* ``service_providers`` is a list of the known service providers (SP) that
use the keystone instance as identity provider. For each SP, provide
three values: ``id`` as a unique identifier,
``auth_url`` as the authentication endpoint of the SP, and ``sp_url``
endpoint for posting SAML2 assertions.
* ``organization_name``, ``organization_display_name``, ``organization_url``,
``contact_company``, ``contact_name``, ``contact_surname``,
``contact_email``, ``contact_telephone`` and ``contact_type`` are
settings that describe the identity provider. These settings are all
optional.
--------------
.. include:: navigation.txt

View File

@ -1,164 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configure Identity Service (keystone) Domain-Project-Group-Role mappings
========================================================================
The following is an example service provider (SP) mapping configuration
for an ADFS identity provider (IdP):
.. code-block:: yaml
federated_identities:
- domain: Default
project: fedproject
group: fedgroup
role: _member_
Each IdP trusted by an SP must have the following configuration:
#. ``project``: The project that federation users have access to.
If the project does not already exist, create it in the
domain named by ``domain``.
#. ``group``: The keystone group that federation users belong to.
If the group does not already exist, create it in
the domain named by ``domain``.
#. ``role``: The role that federation users use in that project.
Create the role if it does not already exist.
#. ``domain``: The domain where the ``project`` lives, and where
you assign roles. Create the domain if it does not already exist.
Ansible implements the equivalent of the following OpenStack CLI commands:
.. code-block:: shell-session
# if the domain does not already exist
openstack domain create Default
# if the group does not already exist
openstack group create fedgroup --domain Default
# if the role does not already exist
openstack role create _member_
# if the project does not already exist
openstack project create --domain Default fedproject
# map the role to the project and user group in the domain
openstack role add --project fedproject --group fedgroup _member_
To add more mappings, add options to the list.
For example:
.. code-block:: yaml
federated_identities:
- domain: Default
project: fedproject
group: fedgroup
role: _member_
- domain: Default
project: fedproject2
group: fedgroup2
role: _member_
Identity service federation attribute mapping
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Attribute mapping adds a set of rules that map federation attributes to keystone
users and groups. Each IdP specifies one mapping per protocol, and the same
mapping object can be reused by different combinations of
IdP and protocol.
The details of how the mapping engine works, the schema, and various rule
examples are in the `keystone developer documentation <http://docs.openstack.org/developer/keystone/mapping_combinations.html>`_.
For example, SP attribute mapping configuration for an ADFS IdP:
.. code-block:: yaml
mapping:
name: adfs-IdP-mapping
rules:
- remote:
- type: upn
local:
- group:
name: fedgroup
domain:
name: Default
- user:
name: '{0}'
attributes:
- name: 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn'
id: upn
Each IdP for an SP needs to be set up with a mapping. This tells the SP how
to interpret the attributes provided to the SP from the IdP.
In this example, the IdP publishes the ``upn`` attribute. As this
is not in the standard Shibboleth attribute map (see
``/etc/shibboleth/attribute-map.xml`` in the keystone containers), the configuration
of the IdP has extra mapping through the ``attributes`` dictionary.
The ``mapping`` dictionary is a YAML representation similar to the
keystone mapping property which Ansible uploads. The above mapping
produces the following in keystone:
.. code-block:: shell-session
root@aio1_keystone_container-783aa4c0:~# openstack mapping list
+------------------+
| ID |
+------------------+
| adfs-IdP-mapping |
+------------------+
root@aio1_keystone_container-783aa4c0:~# openstack mapping show adfs-IdP-mapping
+-------+---------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------------------------------------------------------------+
| id | adfs-IdP-mapping |
| rules | [{"remote": [{"type": "upn"}], "local": [{"group": {"domain": {"name": "Default"}, "name": "fedgroup"}}, {"user": {"name": "{0}"}}]}] |
+-------+---------------------------------------------------------------------------------------------------------------------------------------+
root@aio1_keystone_container-783aa4c0:~# openstack mapping show adfs-IdP-mapping | awk -F\| '/rules/ {print $3}' | python -mjson.tool
[
{
"remote": [
{
"type": "upn"
}
],
"local": [
{
"group": {
"domain": {
"name": "Default"
},
"name": "fedgroup"
}
},
{
"user": {
"name": "{0}"
}
}
]
}
]
The interpretation of the above mapping rule is that any federation user
authenticated by the IdP maps to an ``ephemeral`` (non-existent) user in
keystone. The user is a member of a group named ``fedgroup``. This is
in a domain called ``Default``. The user's ID and Name (federation uses
the same value for both properties) for all OpenStack services is
the value of ``upn``.
--------------
.. include:: navigation.txt

View File

@ -1,63 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Identity Service (keystone) service provider background
=======================================================
In OpenStack-Ansible, the Identity Service (keystone) is set up to
use Apache with ``mod_wsgi``. The additional configuration of
keystone as a federation service provider adds Apache ``mod_shib``
and configures it to respond to specific locations requests
from a client.
.. note::
There are alternative methods of implementing
federation, but at this time only SAML2-based federation using
the Shibboleth SP is instrumented in OpenStack-Ansible.
When requests are sent to those locations, Apache hands off the
request to the ``shibd`` service.
.. note::
Handing off happens only with requests pertaining to authentication.
Handle the ``shibd`` service configuration through
the following files in ``/etc/shibboleth/`` in the keystone
containers:
* ``sp-cert.pem``, ``sp-key.pem``: The ``os-keystone-install.yml`` playbook
uses these files generated on the first keystone container to replicate
them to the other keystone containers. The SP and the IdP use these files
as signing credentials in communications.
* ``shibboleth2.xml``: The ``os-keystone-install.yml`` playbook writes the
file's contents, based on the structure of the configuration
of the ``keystone_sp`` attribute in the
``/etc/openstack_deploy/user_variables.yml`` file. It contains
the list of trusted IdPs, the entityID by which the SP is known,
and other facilitating configurations.
* ``attribute-map.xml``: The ``os-keystone-install.yml`` playbook writes
the file's contents, based on the structure of the configuration
of the ``keystone_sp`` attribute in the
``/etc/openstack_deploy/user_variables.yml`` file. It contains
the default attribute mappings that work for any basic
Shibboleth-type IdP setup, but also contains any additional
attribute mappings set out in the structure of the ``keystone_sp``
attribute.
* ``shibd.logger``: This file is left alone by Ansible. It is useful
when troubleshooting issues with federated authentication, or
when discovering what attributes published by an IdP
are not currently being understood by your SP's attribute map.
To enable debug logging, change ``log4j.rootCategory=INFO`` to
``log4j.rootCategory=DEBUG`` at the top of the file. The
log file is output to ``/var/log/shibboleth/shibd.log``.
References
----------
* http://docs.openstack.org/developer/keystone/configure_federation.html
* http://docs.openstack.org/developer/keystone/extensions/shibboleth.html
* https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPConfiguration
--------------
.. include:: navigation.txt

View File

@ -1,125 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configure Identity Service (keystone) as a federated service provider
=====================================================================
The following settings must be set to configure a service provider (SP):
#. ``keystone_public_endpoint`` is automatically set by default
to the public endpoint's URI. This performs redirections and
ensures token references refer to the public endpoint.
#. ``horizon_keystone_endpoint`` is automatically set by default
to the public v3 API endpoint URL for keystone. Web-based single
sign-on for horizon requires the use of the keystone v3 API.
The value for this must use the same DNS name or IP address
registered in the SSL certificate used for the endpoint.
#. It is a requirement to have an HTTPS public endpoint for the
keystone endpoint if the IdP is ADFS.
Keystone or an SSL offloading load balancer provides the endpoint.
#. Set ``keystone_service_publicuri_proto`` to https.
This ensures keystone publishes https in its references
and ensures that Shibboleth is configured to know that it
expects SSL URLs in the assertions (otherwise it will invalidate
the assertions).
#. ADFS requires that a trusted SP have a trusted certificate that
is not self-signed.
#. Ensure the endpoint URI and the certificate match when using SSL for the
keystone endpoint. For example, if the certificate does not have
the IP address of the endpoint, then the endpoint must be published with
the appropriate name registered on the certificate. When
using a DNS name for the keystone endpoint, both
``keystone_public_endpoint`` and ``horizon_keystone_endpoint`` must
be set to use the DNS name.
#. ``horizon_endpoint_type`` must be set to ``publicURL`` to ensure that
horizon uses the public endpoint for all its references and
queries.
#. ``keystone_sp`` is a dictionary attribute which contains various
settings that describe both the SP and the IDP's it trusts. For example:
.. code-block:: yaml
keystone_sp:
cert_duration_years: 5
trusted_dashboard_list:
- "https://{{ external_lb_vip_address }}/auth/websso/"
trusted_idp_list:
- name: 'testshib-idp'
entity_ids:
- 'https://idp.testshib.org/idp/shibboleth'
metadata_uri: 'http://www.testshib.org/metadata/testshib-providers.xml'
metadata_file: 'metadata-testshib-idp.xml'
metadata_reload: 1800
federated_identities:
- domain: Default
project: fedproject
group: fedgroup
role: _member_
protocols:
- name: saml2
mapping:
name: testshib-idp-mapping
rules:
- remote:
- type: eppn
local:
- group:
name: fedgroup
domain:
name: Default
- user:
name: '{0}'
#. ``cert_duration_years`` designates the valid duration for the SP's
signing certificate (for example, ``/etc/shibboleth/sp-key.pem``).
#. ``trusted_dashboard_list`` designates the list of trusted URLs to which
keystone accepts redirects for web single sign-on. This
list contains all URLs that horizon is presented on,
suffixed by ``/auth/websso/``. This is the path for horizon's WebSSO
component.
#. ``trusted_idp_list`` is a dictionary attribute containing the list
of settings which pertain to each trusted IdP for the SP.
#. ``trusted_idp_list.name`` is the IdP's name. This is configured
in keystone and listed in horizon's login selection.
#. ``entity_ids`` is a list of reference entity IDs. This specifies the
redirection of the login request to the SP when authenticating to the
IdP.
#. ``metadata_uri`` is the location of the IdP's metadata. This provides
the SP with the signing key and all the IdP's supported endpoints.
#. ``metadata_file`` is the file name of the local cached version of
the metadata which will be stored in ``/var/cache/shibboleth/``.
#. ``metadata_reload`` is the number of seconds between metadata
refresh polls.
#. ``federated_identities`` is a mapping list of domain, project, group, and
users. See
`Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
for more information.
#. ``protocols`` is a list of protocols supported for the IdP and the set
of mappings and attributes for each protocol. This only supports protocols
with the name ``saml2``.
#. ``mapping`` is the local to remote mapping configuration for federated
users. See
`Configure Identity Service (keystone) Domain-Project-Group-Role mappings`_
for more information.
.. _Configure Identity Service (keystone) Domain-Project-Group-Role mappings: configure-federation-mapping.html
--------------
.. include:: navigation.txt

View File

@ -1,326 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Identity Service to Identity Service federation example use-case
================================================================
The following is the configuration steps necessary to reproduce the
federation scenario described below:
* Federate Cloud 1 and Cloud 2.
* Create mappings between Cloud 1 Group A and Cloud 2 Project X and Role R.
* Create mappings between Cloud 1 Group B and Cloud 2 Project Y and Role S.
* Create User U in Cloud 1, assign to Group A.
* Authenticate with Cloud 2 and confirm scope to Role R in Project X.
* Assign User U to Group B, confirm scope to Role S in Project Y.
Keystone identity provider (IdP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is the configuration for the keystone IdP instance:
.. code::
keystone_idp:
certfile: "/etc/keystone/ssl/idp_signing_cert.pem"
keyfile: "/etc/keystone/ssl/idp_signing_key.pem"
self_signed_cert_subject: "/C=US/ST=Texas/L=San Antonio/O=IT/CN={{ external_lb_vip_address }}"
regen_cert: false
idp_entity_id: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/idp"
idp_sso_endpoint: "{{ keystone_service_publicurl_v3 }}/OS-FEDERATION/saml2/sso"
idp_metadata_path: /etc/keystone/saml2_idp_metadata.xml
service_providers:
- id: "cloud2"
auth_url: https://cloud2.com:5000/v3/OS-FEDERATION/identity_providers/cloud1/protocols/saml2/auth
sp_url: https://cloud2.com:5000/Shibboleth.sso/SAML2/ECP
In this example, the last three lines are specific to a particular
installation, as they reference the service provider cloud (referred to as
"Cloud 2" in the original scenario). In the example, the
cloud is located at https://cloud2.com, and the unique ID for this cloud
is "cloud2".
.. note::
In the ``auth_url`` there is a reference to the IdP cloud (or
"Cloud 1"), as known by the service provider (SP). The ID used for the IdP
cloud in this example is "cloud1".
Keystone service provider (SP) configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The configuration for keystone SP needs to define the remote-to-local user
mappings. The following is the complete configuration:
.. code::
keystone_sp:
cert_duration_years: 5
trusted_dashboard_list:
- "https://{{ external_lb_vip_address }}/auth/websso/"
trusted_idp_list:
- name: "cloud1"
entity_ids:
- 'https://cloud1.com:5000/v3/OS-FEDERATION/saml2/idp'
metadata_uri: 'https://cloud1.com:5000/v3/OS-FEDERATION/saml2/metadata'
metadata_file: 'metadata-cloud1.xml'
metadata_reload: 1800
federated_identities:
- domain: Default
project: X
role: R
group: federated_group_1
- domain: Default
project: Y
role: S
group: federated_group_2
protocols:
- name: saml2
mapping:
name: cloud1-mapping
rules:
- remote:
- any_one_of:
- A
type: openstack_project
local:
- group:
name: federated_group_1
domain:
name: Default
- remote:
- any_one_of:
- B
type: openstack_project
local:
- group:
name: federated_group_2
domain:
name: Default
attributes:
- name: openstack_user
id: openstack_user
- name: openstack_roles
id: openstack_roles
- name: openstack_project
id: openstack_project
- name: openstack_user_domain
id: openstack_user_domain
- name: openstack_project_domain
id: openstack_project_domain
``cert_duration_years`` is for the self-signed certificate used by
Shibboleth. Only implement the ``trusted_dashboard_list`` if horizon SSO
login is necessary. When given, it works as a security measure,
as keystone will only redirect to these URLs.
Configure the IdPs known to SP in ``trusted_idp_list``. In
this example there is only one IdP, the "Cloud 1". Configure "Cloud 1" with
the ID "cloud1". This matches the reference in the IdP configuration shown in the
previous section.
The ``entity_ids`` is given the unique URL that represents the "Cloud 1" IdP.
For this example, it is hosted at: https://cloud1.com.
The ``metadata_file`` needs to be different for each IdP. This is
a filename in the keystone containers of the SP cloud that holds cached
metadata for each registered IdP.
The ``federated_identities`` list defines the sets of identities in use
for federated users. In this example there are two sets, Project X/Role R
and Project Y/Role S. A user group is created for each set.
The ``protocols`` section is where the federation protocols are specified.
The only supported protocol is ``saml2``.
The ``mapping`` dictionary is where the assignments of remote to local
users is defined. A keystone mapping is given a ``name`` and a set of
``rules`` that keystone applies to determine how to map a given user. Each
mapping rule has a ``remote`` and a ``local`` component.
The ``remote`` part of the mapping rule specifies the criteria for the remote
user based on the attributes exposed by the IdP in the SAML2 assertion. The
use case for this scenario calls for mapping users in "Group A" and "Group B",
but the group or groups a user belongs to are not exported in the SAML2
assertion. To make the example work, the groups A and B in the use case are
projects. Export projects A and B in the assertion under the
``openstack_project`` attribute. The two rules above select the corresponding
project using the ``any_one_of`` selector.
The ``local`` part of the mapping rule specifies how keystone represents
the remote user in the local SP cloud. Configuring the two federated identities
with their own user group maps the user to the
corresponding group. This exposes the correct domain, project, and
role.
.. note::
Keystone creates an ephemeral user in the specified group as
you cannot specify user names.
The final setting of the configuration defines the SAML2 ``attributes``
that the IdP exports. For a keystone IdP, these are the five attributes shown
above. Configure the attributes above in the Shibboleth service. This ensures
they are available to use in the mappings.
Reviewing or modifying the configuration with the OpenStack client
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use the OpenStack command-line client to review or modify an
existing federation configuration. The following commands can be used for
the previous configuration.
Service providers on the identity provider
------------------------------------------
To see the list of known SPs:
.. code::
$ openstack service provider list
+--------+---------+-------------+-----------------------------------------------------------------------------------------+
| ID | Enabled | Description | Auth URL |
+--------+---------+-------------+-----------------------------------------------------------------------------------------+
| cloud2 | True | None | https://cloud2.com:5000/v3/OS-FEDERATION/identity_providers/cloud1/protocols/saml2/auth |
+--------+---------+-------------+-----------------------------------------------------------------------------------------+
To view the information for a specific SP:
.. code::
$ openstack service provider show cloud2
+--------------------+----------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------+----------------------------------------------------------------------------------------------+
| auth_url | http://cloud2.com:5000/v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2/auth |
| description | None |
| enabled | True |
| id | cloud2 |
| relay_state_prefix | ss:mem: |
| sp_url | http://cloud2.com:5000/Shibboleth.sso/SAML2/ECP |
+--------------------+----------------------------------------------------------------------------------------------+
To make modifications, use the ``set`` command. The following are the available
options for this command:
.. code::
$ openstack service provider set
usage: openstack service provider set [-h] [--auth-url <auth-url>]
[--description <description>]
[--service-provider-url <sp-url>]
[--enable | --disable]
<service-provider>
Identity providers on the service provider
------------------------------------------
To see the list of known IdPs:
.. code::
$ openstack identity provider list
+----------------+---------+-------------+
| ID | Enabled | Description |
+----------------+---------+-------------+
| cloud1 | True | None |
+----------------+---------+-------------+
To view the information for a specific IdP:
.. code::
$ openstack identity provider show keystone-idp
+-------------+--------------------------------------------------------+
| Field | Value |
+-------------+--------------------------------------------------------+
| description | None |
| enabled | True |
| id | cloud1 |
| remote_ids | [u'http://cloud1.com:5000/v3/OS-FEDERATION/saml2/idp'] |
+-------------+--------------------------------------------------------+
To make modifications, use the ``set`` command. The following are the available
options for this command:
.. code::
$ openstack identity provider set
usage: openstack identity provider set [-h]
[--remote-id <remote-id> | --remote-id-file <file-name>]
[--enable | --disable]
<identity-provider>
Federated identities on the service provider
--------------------------------------------
You can use the OpenStack command-line client to view or modify
the created domain, project, role, group, and user entities for the
purpose of federation, as these are regular keystone entities. For example:
.. code::
$ openstack domain list
$ openstack project list
$ openstack role list
$ openstack group list
$ openstack user list
Add the ``--domain`` option when using a domain other than the default.
Use the ``set`` option to modify these entities.
Federation mappings
-------------------
To view the list of mappings:
.. code::
$ openstack mapping list
+------------------+
| ID |
+------------------+
| cloud1-mapping |
+------------------+
To view a mapping in detail:
.. code::
$ openstack mapping show cloud1-mapping
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------+
| id | keystone-idp-mapping-2 |
| rules | [{"remote": [{"type": "openstack_project", "any_one_of": ["A"]}], "local": [{"group": {"domain": {"name": "Default"}, "name": |
| | "federated_group_1"}}]}, {"remote": [{"type": "openstack_project", "any_one_of": ["B"]}], "local": [{"group": {"domain": {"name": "Default"}, |
| | "name": "federated_group_2"}}]}] |
+-------+--------------------------------------------------------------------------------------------------------------------------------------------------+
To edit a mapping, use an auxiliary file. Save the JSON mapping shown above
and make the necessary modifications. Use the ``set`` command to trigger
an update. For example:
.. code::
$ openstack mapping show cloud1-mapping -c rules -f value | python -m json.tool > rules.json
$ vi rules.json # <--- make any necessary changes
$ openstack mapping set cloud1-mapping --rules rules.json
Federation protocols
--------------------
To view or change the association between a federation
protocol and a mapping, use the following command:
.. code::
$ openstack federation protocol list --identity-provider keystone-idp
+-------+----------------+
| id | mapping |
+-------+----------------+
| saml2 | cloud1-mapping |
+-------+----------------+
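For example, to associate a different mapping with the ``saml2`` protocol
(``new-mapping`` is a hypothetical mapping name):
.. code::
$ openstack federation protocol set --identity-provider keystone-idp --mapping new-mapping saml2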
--------------
.. include:: navigation.txt
View File
@ -1,93 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Using Identity service to Identity service federation
=====================================================
In Identity service (keystone) to Identity service (keystone)
federation (K2K) the identity provider (IdP) and service provider (SP)
keystone instances exchange information securely to enable a user on
the IdP cloud to access resources of the SP cloud.
.. important::
This section applies only to federation between keystone IdP
and keystone SP. It does not apply to non-keystone IdP.
.. note::
For the Kilo release of OpenStack, K2K is only partially supported.
It is possible to perform a federated login using command line clients and
scripting. However, horizon does not support this functionality.
The K2K authentication flow involves the following steps:
#. You log in to the IdP with your credentials.
#. You send a request to the IdP to generate an assertion for a given
SP. An assertion is a cryptographically signed XML document that identifies
the user to the SP.
#. You submit the assertion to the SP on the configured ``sp_url``
endpoint. The Shibboleth service running on the SP receives the assertion
and verifies it. If it is valid, a session with the client starts and
returns the session ID in a cookie.
#. You now connect to the SP on the configured ``auth_url`` endpoint,
providing the Shibboleth cookie with the session ID. The SP responds with
an unscoped token that you use to access the SP.
#. You connect to the keystone service on the SP with the unscoped
token, and the desired domain and project, and receive a scoped token
and the service catalog.
#. You, now in possession of a token, can make API requests to the
endpoints in the catalog.
Identity service to Identity service federation authentication wrapper
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The steps above involve manually sending API requests.
.. note::
The infrastructure for command line utilities that perform these steps
for the user does not exist.
To obtain access to an SP cloud, OpenStack-Ansible provides a script that wraps
the above steps. The script is called ``federated-login.sh`` and is
used as follows:
.. code::
# ./scripts/federated-login.sh -p project [-d domain] sp_id
* ``project`` is the project in the SP cloud that you want to access.
* ``domain`` is the domain in which the project lives (the default domain is
used if this argument is not given).
* ``sp_id`` is the unique ID of the SP. This is given in the IdP configuration.
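For example, to log in to the ``demo`` project on the ``cloud2`` service
provider (both values are illustrative):
.. code::
# ./scripts/federated-login.sh -p demo cloud2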
The script outputs the results of all the steps in the authentication flow to
the console. At the end, it prints the available endpoints from the catalog
and the scoped token provided by the SP.
Use the endpoints and token with the openstack command line client as follows:
.. code::
# openstack --os-token=<token> --os-url=<service-endpoint> [options]
Or, alternatively:
.. code::
# export OS_TOKEN=<token>
# export OS_URL=<service-endpoint>
# openstack [options]
Ensure you select the appropriate endpoint for your operation.
For example, if you want to work with servers, the ``OS_URL``
argument must be set to the compute endpoint.
.. note::
At this time, the OpenStack client is unable to find endpoints in
the service catalog when using a federated login.
--------------
.. include:: navigation.txt
View File
@ -1,50 +0,0 @@
`Home <index.html>`__ OpenStack-Ansible Installation Guide
Configuring Identity service (keystone) federation (optional)
=============================================================
.. toctree::
configure-federation-wrapper
configure-federation-sp-overview.rst
configure-federation-sp.rst
configure-federation-idp.rst
configure-federation-idp-adfs.rst
configure-federation-mapping.rst
configure-federation-use-case.rst
In keystone federation, the identity provider (IdP) and service
provider (SP) exchange information securely to enable a user on the IdP cloud
to access resources of the SP cloud.
.. note::
For the Kilo release of OpenStack, federation is only partially supported.
It is possible to perform a federated login using command line clients and
scripting, but Dashboard (horizon) does not support this functionality.
The following procedure describes how to set up federation.
#. `Configure Identity Service (keystone) service providers. <configure-federation-sp.html>`_
#. Configure the identity provider:
* `Configure Identity Service (keystone) as an identity provider. <configure-federation-idp.html>`_
* `Configure Active Directory Federation Services (ADFS) 3.0 as an identity provider. <configure-federation-idp-adfs.html>`_
#. Configure the service provider:
* `Configure Identity Service (keystone) as a federated service provider. <configure-federation-sp.html>`_
* `Configure Identity Service (keystone) Domain-Project-Group-Role mappings. <configure-federation-mapping.html>`_
#. `Run the authentication wrapper to use Identity Service to Identity Service federation. <configure-federation-wrapper.html>`_
For examples of how to set up keystone to keystone federation,
see the `Identity Service to Identity Service
federation example use-case. <configure-federation-use-case.html>`_
--------------
.. include:: navigation.txt
View File
@ -1,172 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring HAProxy (optional)
==============================
HAProxy provides load balancing services and SSL termination when hardware
load balancers are not available for high availability architectures deployed
by OpenStack-Ansible. The default HAProxy configuration provides highly
available load balancing services via keepalived if there is more than one
host in the ``haproxy_hosts`` group.
.. important::
Ensure you review the services exposed by HAProxy and limit access
to these services to trusted users and networks only. For more details,
refer to the :ref:`least-access-openstack-services` section.
.. note::
For a successful installation, you require a load balancer. You may
prefer to make use of hardware load balancers instead of HAProxy. If hardware
load balancers are in use, then implement the load balancing configuration for
services prior to executing the deployment.
To deploy HAProxy within your OpenStack-Ansible environment, define target
hosts to run HAProxy:
.. code-block:: yaml
haproxy_hosts:
infra1:
ip: 172.29.236.101
infra2:
ip: 172.29.236.102
infra3:
ip: 172.29.236.103
There is an example configuration file already provided in
``/etc/openstack_deploy/conf.d/haproxy.yml.example``. Rename the file to
``haproxy.yml`` and configure it with the correct target hosts to use HAProxy
in an OpenStack-Ansible deployment.
Making HAProxy highly-available
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If multiple hosts are found in the inventory, deploy
HAProxy in a highly-available manner by installing keepalived.
Edit the ``/etc/openstack_deploy/user_variables.yml`` file to skip the
deployment of keepalived alongside HAProxy when installing HAProxy on
multiple hosts. To do this, set the following:
.. code-block:: yaml
haproxy_use_keepalived: False
To make keepalived work, edit at least the following variables
in ``user_variables.yml``:
.. code-block:: yaml
haproxy_keepalived_external_vip_cidr: 192.168.0.4/25
haproxy_keepalived_internal_vip_cidr: 172.29.236.54/16
haproxy_keepalived_external_interface: br-flat
haproxy_keepalived_internal_interface: br-mgmt
- ``haproxy_keepalived_internal_interface`` and
``haproxy_keepalived_external_interface`` represent the interfaces on the
deployed node where the keepalived nodes bind the internal and external
vip. By default, use ``br-mgmt``.
- On the interface listed above, ``haproxy_keepalived_internal_vip_cidr`` and
``haproxy_keepalived_external_vip_cidr`` represent the internal and
external (respectively) vips (with their prefix length).
- Set additional variables to adapt keepalived in your deployment.
Refer to the ``user_variables.yml`` for more descriptions.
To always deploy (or upgrade to) the latest stable version of keepalived,
edit the ``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
keepalived_use_latest_stable: True
The HAProxy playbook reads the ``vars/configs/keepalived_haproxy.yml``
variable file and provides content to the keepalived role for
keepalived master and backup nodes.
Keepalived pings a public IP address to check its status. The default
address is ``193.0.14.129``. To change this default,
set the ``keepalived_ping_address`` variable in the
``user_variables.yml`` file.
.. note::
The keepalived test works with IPv4 addresses only.
You can define additional variables to adapt keepalived to your
deployment. Refer to the ``user_variables.yml`` file for
more information. Optionally, you can use your own variable file.
For example:
.. code-block:: yaml
haproxy_keepalived_vars_file: /path/to/myvariablefile.yml
Configuring keepalived ping checks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible configures keepalived with a check script that pings an
external resource and uses that ping to determine if a node has lost network
connectivity. If the pings fail, keepalived fails over to another node and
HAProxy serves requests there.
The destination address, ping count and ping interval are configurable via
Ansible variables in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
keepalived_ping_address: # IP address to ping
keepalived_ping_count: # ICMP packets to send (per interval)
keepalived_ping_interval: # How often ICMP packets are sent
By default, OpenStack-Ansible configures keepalived to ping one of the root
DNS servers operated by RIPE. You can change this IP address to a different
external address or another address on your internal network.
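For example, to ping an address on your internal network instead (the
address shown is illustrative):
.. code-block:: yaml
   keepalived_ping_address: 172.29.236.1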
Securing HAProxy communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure HAProxy
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are used with HAProxy. However, you can
provide your own certificates by using the following Ansible variables:
.. code-block:: yaml
haproxy_user_ssl_cert: # Path to certificate
haproxy_user_ssl_key: # Path to private key
haproxy_user_ssl_ca_cert: # Path to CA certificate
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with HAProxy.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Configuring additional services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Additional haproxy service entries can be configured by setting
``haproxy_extra_services`` in ``/etc/openstack_deploy/user_variables.yml``.
For more information on the service dict syntax, please reference
``playbooks/vars/configs/haproxy_config.yml``.
An example HTTP service could look like:
.. code-block:: yaml
haproxy_extra_services:
- service:
haproxy_service_name: extra-web-service
haproxy_backend_nodes: "{{ groups['service_group'] | default([]) }}"
haproxy_ssl: "{{ haproxy_ssl }}"
haproxy_port: 10000
haproxy_balance_type: http
--------------
.. include:: navigation.txt
View File
@ -1,35 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Dashboard (horizon) (optional)
==============================================
Customize your horizon deployment in
``/etc/openstack_deploy/user_variables.yml``.
Securing horizon communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure Dashboard
(horizon) communications with self-signed or user-provided SSL certificates.
Refer to `Securing services with SSL certificates`_ for available configuration
options.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Configuring a horizon customization module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-Ansible supports deployment of a horizon `customization module`_.
After building your customization module, configure the
``horizon_customization_module`` variable with a path to your module.
.. code-block:: yaml
horizon_customization_module: /path/to/customization_module.py
.. _customization module: http://docs.openstack.org/developer/horizon/topics/customizing.html#horizon-customization-module-overrides
--------------
.. include:: navigation.txt
View File
@ -1,18 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the hypervisor (optional)
=====================================
By default, the KVM hypervisor is used. If you are deploying to a host
that does not support KVM hardware acceleration extensions, select a
suitable hypervisor type such as ``qemu`` or ``lxc``. To change the
hypervisor type, uncomment and edit the following line in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
# nova_virt_type: kvm
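For example, to use QEMU software emulation instead of KVM:
.. code-block:: yaml
   nova_virt_type: qemu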
--------------
.. include:: navigation.txt
View File
@ -1,221 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Bare Metal (ironic) service (optional)
======================================================
.. note::
This feature is experimental at this time and it has not been fully
production tested yet. These implementation instructions assume that
ironic is being deployed as the sole hypervisor for the region.
Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines by leveraging common technologies such as PXE boot and IPMI
to cover a wide range of hardware, while supporting pluggable drivers to allow
vendor-specific functionality to be added.
OpenStack's ironic project makes physical servers as easy to provision as
virtual machines in a cloud.
OpenStack-Ansible deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Modify the environment files and force ``nova-compute`` to run from
within a container:
.. code-block:: bash
sed -i '/is_metal.*/d' /etc/openstack_deploy/env.d/nova.yml
Setup a neutron network for use by ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In a general case, neutron networking can be a simple flat network. However,
in a complex case, this can be whatever you need and want. Ensure
you adjust the deployment accordingly. The following is an example:
.. code-block:: bash
neutron net-create cleaning-net --shared \
--provider:network_type flat \
--provider:physical_network ironic-net
neutron subnet-create cleaning-net 172.19.0.0/22 --name ironic-subnet \
--ip-version=4 \
--allocation-pool start=172.19.1.100,end=172.19.1.200 \
--enable-dhcp \
--dns-nameservers list=true 8.8.4.4 8.8.8.8
Building ironic images
~~~~~~~~~~~~~~~~~~~~~~
Images using the ``diskimage-builder`` must be built outside of a container.
For this process, use one of the physical hosts within the environment.
#. Install the necessary packages:
.. code-block:: bash
apt-get install -y qemu uuid-runtime curl
#. Install the ``diskimage-builder`` package:
.. code-block:: bash
pip install diskimage-builder --isolated
.. important::
Only use the ``--isolated`` flag if you are building on a node
deployed by OpenStack-Ansible, otherwise pip will not
resolve the external package.
#. Optional: Force the ubuntu ``image-create`` process to use a modern kernel:
.. code-block:: bash
echo 'linux-image-generic-lts-xenial:' > \
/usr/local/share/diskimage-builder/elements/ubuntu/package-installs.yaml
#. Create Ubuntu ``initramfs``:
.. code-block:: bash
disk-image-create ironic-agent ubuntu -o ${IMAGE_NAME}
#. Upload the created deploy images into the Image (glance) Service:
.. code-block:: bash
# Upload the deploy image kernel
glance image-create --name ${IMAGE_NAME}.kernel --visibility public \
--disk-format aki --container-format aki < ${IMAGE_NAME}.kernel
# Upload the deploy image initramfs
glance image-create --name ${IMAGE_NAME}.initramfs --visibility public \
--disk-format ari --container-format ari < ${IMAGE_NAME}.initramfs
#. Create Ubuntu user image:
.. code-block:: bash
disk-image-create ubuntu baremetal localboot local-config dhcp-all-interfaces grub2 -o ${IMAGE_NAME}
#. Upload the created user images into the Image (glance) Service:
.. code-block:: bash
# Upload the user image vmlinuz and store uuid
VMLINUZ_UUID="$(glance image-create --name ${IMAGE_NAME}.vmlinuz --visibility public --disk-format aki --container-format aki < ${IMAGE_NAME}.vmlinuz | awk '/\| id/ {print $4}')"
# Upload the user image initrd and store uuid
INITRD_UUID="$(glance image-create --name ${IMAGE_NAME}.initrd --visibility public --disk-format ari --container-format ari < ${IMAGE_NAME}.initrd | awk '/\| id/ {print $4}')"
# Create image
glance image-create --name ${IMAGE_NAME} --visibility public --disk-format qcow2 --container-format bare --property kernel_id=${VMLINUZ_UUID} --property ramdisk_id=${INITRD_UUID} < ${IMAGE_NAME}.qcow2
Creating an ironic flavor
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Create a new flavor called ``my-baremetal-flavor``.
.. note::
The following example sets the CPU architecture for the newly created
flavor to be `x86_64`.
.. code-block:: bash
nova flavor-create ${FLAVOR_NAME} ${FLAVOR_ID} ${FLAVOR_RAM} ${FLAVOR_DISK} ${FLAVOR_CPU}
nova flavor-key ${FLAVOR_NAME} set cpu_arch=x86_64
nova flavor-key ${FLAVOR_NAME} set capabilities:boot_option="local"
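For example, the following illustrative values match the node properties
used when enrolling a node later in this section (48 CPUs, 254802 MB of
RAM, 80 GB of disk):
.. code-block:: bash
   nova flavor-create my-baremetal-flavor auto 254802 80 48
   nova flavor-key my-baremetal-flavor set cpu_arch=x86_64
   nova flavor-key my-baremetal-flavor set capabilities:boot_option="local"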
.. note::
Ensure the flavor and nodes match when enrolling into ironic.
See the documentation on flavors for more information:
http://docs.openstack.org/openstack-ops/content/flavors.html
After successfully deploying the ironic node, on subsequent boots the instance
boots from your local disk as first preference. This speeds up the deployed
node's boot time. Alternatively, if this is not set, the ironic node PXE boots
first and allows for operator-initiated image updates and other operations.
.. note::
The operational reasoning and building an environment to support this
use case is not covered here.
Enroll ironic nodes
-------------------
#. From the utility container, enroll a new baremetal node by executing the
following:
.. code-block:: bash
# Source credentials
. ~/openrc
# Create the node
NODE_HOSTNAME="myfirstnodename"
IPMI_ADDRESS="10.1.2.3"
IPMI_USER="my-ipmi-user"
IPMI_PASSWORD="my-ipmi-password"
KERNEL_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.kernel/ {print \$2}")
INITRAMFS_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.initramfs/ {print \$2}")
ironic node-create \
-d agent_ipmitool \
-i ipmi_address="${IPMI_ADDRESS}" \
-i ipmi_username="${IPMI_USER}" \
-i ipmi_password="${IPMI_PASSWORD}" \
-i deploy_ramdisk="${INITRAMFS_IMAGE}" \
-i deploy_kernel="${KERNEL_IMAGE}" \
-n ${NODE_HOSTNAME}
# Create a port for the node
NODE_MACADDRESS="aa:bb:cc:dd:ee:ff"
ironic port-create \
-n $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") \
-a ${NODE_MACADDRESS}
# Associate an image to the node
ROOT_DISK_SIZE_GB=40
ironic node-update $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") add \
driver_info/deploy_kernel=$KERNEL_IMAGE \
driver_info/deploy_ramdisk=$INITRAMFS_IMAGE \
instance_info/deploy_kernel=$KERNEL_IMAGE \
instance_info/deploy_ramdisk=$INITRAMFS_IMAGE \
instance_info/root_gb=${ROOT_DISK_SIZE_GB}
# Add node properties
# The property values used here should match the hardware used
ironic node-update $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") add \
properties/cpus=48 \
properties/memory_mb=254802 \
properties/local_gb=80 \
properties/size=3600 \
properties/cpu_arch=x86_64 \
properties/capabilities=memory_mb:254802,local_gb:80,cpu_arch:x86_64,cpus:48,boot_option:local
Deploy a baremetal node kicked with ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. important::
You will not have access unless you have a key set within nova before
your ironic deployment. If you do not have an ssh key readily
available, set one up with ``ssh-keygen``.
.. code-block:: bash
nova keypair-add --pub-key ~/.ssh/id_rsa.pub admin
Now boot a node:
.. code-block:: bash
nova boot --flavor ${FLAVOR_NAME} --image ${IMAGE_NAME} --key-name admin ${NODE_NAME}
View File
@ -1,122 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Identity service (keystone) (optional)
======================================================
Customize your keystone deployment in
``/etc/openstack_deploy/user_variables.yml``.
Securing keystone communication with SSL certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure keystone
communications with self-signed or user-provided SSL certificates. By default,
self-signed certificates are in use. However, you can
provide your own certificates by using the following Ansible variables in
``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
keystone_user_ssl_cert: # Path to certificate
keystone_user_ssl_key: # Path to private key
keystone_user_ssl_ca_cert: # Path to CA certificate
.. note::
If you are providing certificates, keys, and CA file for a
CA without chain of trust (or an invalid/self-generated ca), the variables
``keystone_service_internaluri_insecure`` and
``keystone_service_adminuri_insecure`` should be set to ``True``.
Refer to `Securing services with SSL certificates`_ for more information on
these configuration options and how you can provide your own
certificates and keys to use with keystone.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Implementing LDAP (or Active Directory) backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use the built-in keystone support for services if you already have
LDAP or Active Directory (AD) infrastructure on your deployment.
Keystone uses the existing users, groups, and user-group relationships to
handle authentication and access control in an OpenStack deployment.
.. note::
We do not recommend configuring the default domain in keystone to use
LDAP or AD identity backends. Create additional domains
in keystone and configure either LDAP or active directory backends for
that domain.
This is critical in situations where the identity backend cannot
be reached due to network issues or other problems. In those situations,
the administrative users in the default domain would still be able to
authenticate to keystone using the default domain which is not backed by
LDAP or AD.
You can add domains with LDAP backends by adding variables in
``/etc/openstack_deploy/user_variables.yml``. For example, this dictionary
adds a new keystone domain called ``Users`` that is backed by an LDAP server:
.. code-block:: yaml
keystone_ldap:
Users:
url: "ldap://10.10.10.10"
user: "root"
password: "secrete"
Adding the YAML block above causes the keystone playbook to create a
``/etc/keystone/domains/keystone.Users.conf`` file within each keystone service
container that configures the LDAP-backed domain called ``Users``.
You can create more complex configurations that use LDAP filtering and
consume LDAP as a read-only resource. The following example shows how to apply
these configurations:
.. code-block:: yaml
keystone_ldap:
MyCorporation:
url: "ldaps://ldap.example.com"
user_tree_dn: "ou=Users,o=MyCorporation"
group_tree_dn: "cn=openstack-users,ou=Users,o=MyCorporation"
user_objectclass: "inetOrgPerson"
user_allow_create: "False"
user_allow_update: "False"
user_allow_delete: "False"
group_allow_create: "False"
group_allow_update: "False"
group_allow_delete: "False"
user_id_attribute: "cn"
user_name_attribute: "uid"
user_filter: "(groupMembership=cn=openstack-users,ou=Users,o=MyCorporation)"
In the `MyCorporation` example above, keystone uses the LDAP server as a
read-only resource. The configuration also ensures that keystone filters the
list of possible users to the ones that exist in the
``cn=openstack-users,ou=Users,o=MyCorporation`` group.
Horizon offers multi-domain support that can be enabled with an Ansible
variable during deployment:
.. code-block:: yaml
horizon_keystone_multidomain_support: True
Enabling multi-domain support in horizon adds the ``Domain`` input field on
the horizon login page and it adds other domain-specific features in the
keystone section.
More details regarding valid configuration for the LDAP Identity backend can
be found in the `Keystone Developer Documentation`_ and the
`OpenStack Administrator Guide`_.
.. _Keystone Developer Documentation: http://docs.openstack.org/developer/keystone/configuration.html#configuring-the-ldap-identity-provider
.. _OpenStack Administrator Guide: http://docs.openstack.org/admin-guide/keystone_integrate_identity_backend_ldap.html
--------------
.. include:: navigation.txt
View File
@ -1,188 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Networking service (neutron) (optional)
=======================================================
The OpenStack Networking service (neutron) includes the following services:
Firewall as a Service (FWaaS)
Provides a software-based firewall that filters traffic from the router.
Load Balancer as a Service (LBaaS)
Provides load balancers that direct traffic to OpenStack instances or other
servers outside the OpenStack deployment.
VPN as a Service (VPNaaS)
Provides a method for extending a private network across a public network.
Firewall service (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable FWaaS.
#. Override the default list of neutron plugins to include
``firewall``:
.. code-block:: yaml
neutron_plugin_base:
- firewall
- ...
#. The resulting ``neutron_plugin_base`` is as follows:
.. code-block:: yaml
neutron_plugin_base:
- router
- firewall
- lbaas
- vpnaas
- metering
- qos
#. Execute the neutron install playbook in order to update the configuration:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml
#. Execute the horizon install playbook to show the FWaaS panels:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-horizon-install.yml
The FWaaS default configuration options may be changed through the
`conf override`_ mechanism using the ``neutron_neutron_conf_overrides``
dict.
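The following sketch shows the override format only; the ``DEFAULT``
section and ``debug`` option are illustrative and not FWaaS-specific
settings:
.. code-block:: yaml
   neutron_neutron_conf_overrides:
     DEFAULT:
       debug: True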
Load balancing service (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The `neutron-lbaas`_ plugin for neutron provides a software load balancer
service and can direct traffic to multiple servers. The service runs as an
agent and it manages `HAProxy`_ configuration files and daemons.
The Newton release contains only the LBaaS v2 API. For more details about
transitioning from LBaaS v1 to v2, review the :ref:`lbaas-special-notes`
section below.
Deployers can make changes to the LBaaS default configuration options via the
``neutron_lbaas_agent_ini_overrides`` dictionary. Review the documentation on
the `conf override`_ mechanism for more details.
.. _neutron-lbaas: https://wiki.openstack.org/wiki/Neutron/LBaaS
.. _HAProxy: http://www.haproxy.org/
Deploying LBaaS v2
------------------
#. Add the LBaaS v2 plugin to the ``neutron_plugin_base`` variable
in ``/etc/openstack_deploy/user_variables.yml``:
.. code-block:: yaml
neutron_plugin_base:
- router
- metering
- neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
Ensure that ``neutron_plugin_base`` includes all of the plugins that you
want to deploy with neutron in addition to the LBaaS plugin.
#. Run the neutron and horizon playbooks to deploy the LBaaS v2 agent and
enable the LBaaS v2 panels in horizon:
.. code-block:: console
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml
# openstack-ansible os-horizon-install.yml
.. _lbaas-special-notes:
Special notes about LBaaS
-------------------------
**LBaaS v1 was deprecated in the Mitaka release and is not available in the
Newton release.**
LBaaS v1 and v2 agents are unable to run at the same time. If you switch
LBaaS v1 to v2, the v2 agent is the only agent running. The LBaaS v1 agent
stops along with any load balancers provisioned under the v1 agent.
Load balancers are not migrated between LBaaS v1 and v2 automatically. Each
implementation has different code paths and database tables. You need
to manually delete load balancers, pools, and members before switching LBaaS
versions. Recreate these objects afterwards.
Virtual private network service (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure describes how to modify the
``/etc/openstack_deploy/user_variables.yml`` file to enable VPNaaS.
#. Override the default list of neutron plugins to include
``vpnaas``:
.. code-block:: yaml
neutron_plugin_base:
- router
- metering
#. The resulting ``neutron_plugin_base`` is as follows:
.. code-block:: yaml
neutron_plugin_base:
- router
- metering
- vpnaas
#. Override the default list of specific kernel modules
in order to include the necessary modules to run ipsec:
.. code-block:: yaml
openstack_host_specific_kernel_modules:
- { name: "ebtables", pattern: "CONFIG_BRIDGE_NF_EBTABLES=", group: "network_hosts" }
- { name: "af_key", pattern: "CONFIG_NET_KEY=", group: "network_hosts" }
- { name: "ah4", pattern: "CONFIG_INET_AH=", group: "network_hosts" }
- { name: "ipcomp", pattern: "CONFIG_INET_IPCOMP=", group: "network_hosts" }
#. Execute the openstack hosts setup playbook in order to load the kernel
modules at boot and runtime on the network hosts:
.. code-block:: shell-session
# openstack-ansible openstack-hosts-setup.yml --limit network_hosts \
--tags "openstack_hosts-config"
#. Execute the neutron install playbook in order to update the configuration:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-neutron-install.yml
#. Execute the horizon install playbook to show the VPNaaS panels:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-horizon-install.yml
The VPNaaS default configuration options are changed through the
`conf override`_ mechanism using the ``neutron_neutron_conf_overrides``
dict.
.. _conf override: http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-openstack.html
--------------
.. include:: navigation.txt
View File
@ -1,172 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the Compute (nova) service (optional)
=================================================
The Compute service (nova) handles the creation of virtual machines within an
OpenStack environment. Many of the default options used by OpenStack-Ansible
are found within ``defaults/main.yml`` within the nova role.
Availability zones
~~~~~~~~~~~~~~~~~~
Deployers with multiple availability zones can set the
``nova_default_schedule_zone`` Ansible variable to specify an availability zone
for new requests. This is useful in environments with different types
of hypervisors, where builds are sent to certain hardware types based on
their resource requirements.
For example, you may have servers running on two racks that do not share a
PDU. These two racks can be grouped into two availability zones.
When one rack loses power, the other one still works. By spreading
your containers across the two racks (availability zones), you
improve your service availability.
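A minimal sketch, assuming an availability zone named after one of the
racks (the zone name is hypothetical):
.. code-block:: yaml
   nova_default_schedule_zone: rack1_az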
Block device tuning for Ceph (RBD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enabling Ceph and defining ``nova_libvirt_images_rbd_pool`` changes two
libvirt configurations by default:
* hw_disk_discard: ``unmap``
* disk_cachemodes: ``network=writeback``
Setting ``hw_disk_discard`` to ``unmap`` in libvirt enables
discard (sometimes called TRIM) support for the underlying block device. This
allows reclaiming of unused blocks on the underlying disks.
Setting ``disk_cachemodes`` to ``network=writeback`` allows data to be written
into a cache on each change, but those changes are flushed to disk at a regular
interval. This can increase write performance on Ceph block devices.
You have the option to customize these settings using two Ansible
variables (defaults shown here):
.. code-block:: yaml
nova_libvirt_hw_disk_discard: 'unmap'
nova_libvirt_disk_cachemodes: 'network=writeback'
You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
``ignore``. The ``nova_libvirt_disk_cachemodes`` can be set to an empty
string to disable ``network=writeback``.
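For example, to disable both behaviors described above:
.. code-block:: yaml
   nova_libvirt_hw_disk_discard: 'ignore'
   nova_libvirt_disk_cachemodes: ''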
The following minimal example configuration sets nova to use the
``ephemeral-vms`` Ceph pool. The following example uses cephx authentication,
and requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:
.. code-block:: yaml
nova_libvirt_images_rbd_pool: ephemeral-vms
ceph_mons:
- 172.29.244.151
- 172.29.244.152
- 172.29.244.153
If you have a different Ceph username for the pool, use it as:
.. code-block:: yaml
cinder_ceph_client: <ceph-username>
* The `Ceph documentation for OpenStack`_ has additional information about
these settings.
* `OpenStack-Ansible and Ceph Working Example`_
.. _Ceph documentation for OpenStack: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/
Config drive
~~~~~~~~~~~~
By default, OpenStack-Ansible does not configure nova to force config drives
to be provisioned with every instance that nova builds. The metadata service
provides configuration information that is used by ``cloud-init`` inside the
instance. Config drives are only necessary when an instance does not have
``cloud-init`` installed or does not have support for handling metadata.
A deployer can set an Ansible variable to force config drives to be deployed
with every virtual machine:
.. code-block:: yaml
nova_force_config_drive: True
Certain formats of config drives can prevent instances from migrating properly
between hypervisors. If you need forced config drives and the ability
to migrate instances, set the config drive format to ``vfat`` using
the ``nova_nova_conf_overrides`` variable:
.. code-block:: yaml
nova_nova_conf_overrides:
DEFAULT:
config_drive_format: vfat
force_config_drive: True
Libvirtd connectivity and authentication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, OpenStack-Ansible configures the libvirt daemon in the following
way:
* TLS connections are enabled
* TCP plaintext connections are disabled
* Authentication over TCP connections uses SASL
You can customize these settings using the following Ansible variables:
.. code-block:: yaml
# Enable libvirtd's TLS listener
nova_libvirtd_listen_tls: 1
# Disable libvirtd's plaintext TCP listener
nova_libvirtd_listen_tcp: 0
# Use SASL for authentication
nova_libvirtd_auth_tcp: sasl
Multipath
~~~~~~~~~
Nova supports multipath for iSCSI-based storage. Enable multipath support in
nova through a configuration override:
.. code-block:: yaml
nova_nova_conf_overrides:
libvirt:
iscsi_use_multipath: true
Shared storage and synchronized UID/GID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Specify a custom UID for the nova user and GID for the nova group
to ensure they are identical on each host. This is helpful when using shared
storage on Compute nodes because it allows instances to migrate without
filesystem ownership failures.
By default, Ansible creates the nova user and group without specifying the
UID or GID. To specify custom values for the UID or GID, set the following
Ansible variables:
.. code-block:: yaml
nova_system_user_uid: <specify a UID>
nova_system_group_gid: <specify a GID>
.. warning::
Setting this value after deploying an environment with
OpenStack-Ansible can cause failures, errors, and general instability. These
values should only be set once before deploying an OpenStack environment
and then never changed.
--------------
.. include:: navigation.txt
View File
@ -1,43 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring RabbitMQ (optional)
===============================
RabbitMQ provides the messaging broker for various OpenStack services. The
OpenStack-Ansible project configures a plaintext listener on port 5672 and
a SSL/TLS encrypted listener on port 5671.
Customize your RabbitMQ deployment in
``/etc/openstack_deploy/user_variables.yml``.
Add a TLS encrypted listener to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OpenStack-Ansible project provides the ability to secure RabbitMQ
communications with self-signed or user-provided SSL certificates. Refer to
`Securing services with SSL certificates`_ for available configuration
options.
.. _Securing services with SSL certificates: configure-sslcertificates.html
Enable encrypted connections to RabbitMQ
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The control of SSL communication between various OpenStack services and
RabbitMQ is via the Ansible variable ``rabbitmq_use_ssl``:
.. code-block:: yaml
rabbitmq_use_ssl: true
Setting this variable to ``true`` adjusts the RabbitMQ port to 5671 (the
default SSL/TLS listener port) and enables SSL connectivity between each
OpenStack service and RabbitMQ.
Setting this variable to ``false`` disables SSL encryption between
OpenStack services and RabbitMQ. Use the plaintext port for RabbitMQ, 5672,
for all services.
--------------
.. include:: navigation.txt
View File
@ -1,38 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Add to existing deployment
==========================
Complete the following procedure to deploy swift on an
existing deployment.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all keystone users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True`` (see the example after this procedure). Any users with the
``_member_`` role (all authorized
keystone users) can create containers and upload objects
to swift.
If this value is ``False``, by default only users with the
``admin`` or ``swiftoperator`` role can create containers or
manage tenants.
When the backend type for glance is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
#. Run the swift play:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-swift-install.yml
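The optional ``swift_allow_all_users`` setting mentioned above is placed in
the ``/etc/openstack_deploy/user_variables.yml`` file. For example:
.. code-block:: yaml
   swift_allow_all_users: True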
--------------
.. include:: navigation.txt
View File
@ -1,325 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Configuring the service
=======================
**Procedure 5.2. Updating the Object Storage configuration ``swift.yml``
file**
#. Copy the ``/etc/openstack_deploy/conf.d/swift.yml.example`` file to
``/etc/openstack_deploy/conf.d/swift.yml``:
.. code-block:: shell-session
# cp /etc/openstack_deploy/conf.d/swift.yml.example \
/etc/openstack_deploy/conf.d/swift.yml
#. Update the global override values:
.. code-block:: yaml
# global_overrides:
# swift:
# part_power: 8
# weight: 100
# min_part_hours: 1
# repl_number: 3
# storage_network: 'br-storage'
# replication_network: 'br-repl'
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# mount_point: /srv/node
# account:
# container:
# storage_policies:
# - policy:
# name: gold
# index: 0
# default: True
# - policy:
# name: silver
# index: 1
# repl_number: 3
# deprecated: True
# statsd_host: statsd.example.com
# statsd_port: 8125
# statsd_metric_prefix:
# statsd_default_sample_rate: 1.0
# statsd_sample_rate_factor: 1.0
``part_power``
Set the partition power value based on the total amount of
storage the entire ring uses.
Multiply the maximum number of drives ever used with the swift
installation by 100 and round that value up to the
closest power of two value. For example, a maximum of six drives,
times 100, equals 600. The nearest power of two above 600 is two
to the power of nine, so the partition power is nine. The
partition power cannot be changed after the swift rings
are built.
``weight``
The default weight is 100. If the drives are different sizes, set
the weight value to avoid uneven distribution of data. For
example, a 1 TB disk would have a weight of 100, while a 2 TB
drive would have a weight of 200.
``min_part_hours``
The default value is 1. Set the minimum partition hours to the
amount of time to lock a partition's replicas after moving a partition.
Moving multiple replicas at the same time
makes data inaccessible. This value can be set separately in the
swift, container, account, and policy sections with the value in
lower sections superseding the value in the swift section.
``repl_number``
The default value is 3. Set the replication number to the number
of replicas of each object. This value can be set separately in
the swift, container, account, and policy sections with the value
in the more granular sections superseding the value in the swift
section.
``storage_network``
By default, the swift services listen on the default
management IP. Optionally, specify the interface of the storage
network.
If the ``storage_network`` is not set, but the ``storage_ips``
per host are set (or the ``storage_ip`` is not on the
``storage_network`` interface) the proxy server is unable
to connect to the storage services.
``replication_network``
Optionally, specify a dedicated replication network interface, so
dedicated replication can be set up. If this value is not
specified, no dedicated ``replication_network`` is set.
Replication does not work properly if the ``repl_ip`` is not set on
the ``replication_network`` interface.
``drives``
Set the default drives per host. This is useful when all hosts
have the same drives. These can be overridden on a per host
basis.
``mount_point``
Set the ``mount_point`` value to the location where the swift
drives are mounted. For example, with a mount point of ``/srv/node``
and a drive of ``sdc``, a drive is mounted at ``/srv/node/sdc`` on the
``swift_host``. This can be overridden on a per-host basis.
``storage_policies``
Storage policies determine on which hardware data is stored, how
the data is stored across that hardware, and in which region the
data resides. Each storage policy must have an unique ``name``
and a unique ``index``. There must be a storage policy with an
index of 0 in the ``swift.yml`` file to use any legacy containers
created before storage policies were instituted.
``default``
Set the default value to ``yes`` for at least one policy. This is
the default storage policy for any non-legacy containers that are
created.
``deprecated``
Set the deprecated value to ``yes`` to turn off storage policies.
For account and container rings, ``min_part_hours`` and
``repl_number`` are the only values that can be set. Setting them
in this section overrides the defaults for the specific ring.
``statsd_host``
Swift supports sending extra metrics to a ``statsd`` host. This option
sets the ``statsd`` host to receive ``statsd`` metrics. Specifying
this here applies to all hosts in the cluster.
If ``statsd_host`` is left blank or omitted, then ``statsd`` metrics are
disabled.
All ``statsd`` settings can be overridden, or specified deeper in the
structure, if you want to only catch ``statsd`` metrics on certain hosts.
``statsd_port``
Optionally, use this to specify the ``statsd`` server's port you are
sending metrics to. Defaults to 8125 if omitted.
``statsd_default_sample_rate`` and ``statsd_sample_rate_factor``
These ``statsd`` related options are more complex and are
used to tune how many samples are sent to ``statsd``. Omit them unless
you need to tweak these settings. If so, first read:
http://docs.openstack.org/developer/swift/admin_guide.html
#. Update the swift proxy hosts values:
.. code-block:: yaml
# swift-proxy_hosts:
# infra-node1:
# ip: 192.0.2.1
# statsd_metric_prefix: proxy01
# infra-node2:
# ip: 192.0.2.2
# statsd_metric_prefix: proxy02
# infra-node3:
# ip: 192.0.2.3
# statsd_metric_prefix: proxy03
``swift-proxy_hosts``
Set the ``IP`` address of the hosts to which Ansible connects
to deploy the ``swift-proxy`` containers. The ``swift-proxy_hosts``
value matches the infra nodes.
``statsd_metric_prefix``
This option is optional, and is only evaluated if you have defined
``statsd_host`` somewhere. It allows you to define a prefix to add to
all ``statsd`` metrics sent from this host. If omitted, the node name is used.
#. Update the swift hosts values:
.. code-block:: yaml
# swift_hosts:
# swift-node1:
# ip: 192.0.2.4
# container_vars:
# swift_vars:
# zone: 0
# statsd_metric_prefix: node1
# swift-node2:
# ip: 192.0.2.5
# container_vars:
# swift_vars:
# zone: 1
# statsd_metric_prefix: node2
# swift-node3:
# ip: 192.0.2.6
# container_vars:
# swift_vars:
# zone: 2
# statsd_metric_prefix: node3
# swift-node4:
# ip: 192.0.2.7
# container_vars:
# swift_vars:
# zone: 3
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
``swift_hosts``
Specify the hosts to be used as the storage nodes. The ``ip`` is
the address of the host to which Ansible connects. Set the name
and IP address of each swift host. The ``swift_hosts``
section is not required.
``swift_vars``
Contains the swift host specific values.
``storage_ip`` and ``repl_ip``
Base these values on the IP addresses of the host's
``storage_network`` or ``replication_network``. For example, if
the ``storage_network`` is ``br-storage`` and host1 has an IP
address of 1.1.1.1 on ``br-storage``, then this is the IP address
in use for ``storage_ip``. If only the ``storage_ip``
is specified, then the ``repl_ip`` defaults to the ``storage_ip``.
If neither are specified, both default to the host IP
address.
Overriding these values on a host or drive basis can cause
problems if the IP address that the service listens on is based
on a specified ``storage_network`` or ``replication_network`` and
the ring is set to a different IP address.
``zone``
The default is 0. Optionally, set the swift zone for the
ring.
``region``
Optionally, set the swift region for the ring.
``weight``
The default weight is 100. If the drives are different sizes, set
the weight value to avoid uneven distribution of data. This value
can be specified on a host or drive basis (if specified at both,
the drive setting takes precedence).
``groups``
Set the groups to list the rings to which a host's drive belongs.
This can be set on a per drive basis which overrides the host
setting.
``drives``
Set the names of the drives on the swift host. Specify at least
one name.
``statsd_metric_prefix``
This option is optional, and is only evaluated if ``statsd_host`` is defined
somewhere. This allows you to define a prefix to add to
all ``statsd`` metrics sent from this host. If omitted, the node name is used.
In the following example, ``swift-node5`` shows values in the
``swift_hosts`` section that override the global values. Groups
are set, which overrides the global settings for drive ``sdb``. The
weight is overridden for the host and specifically adjusted on drive
``sdb``. Also, the ``storage_ip`` and ``repl_ip`` are set differently
for ``sdb``.
.. code-block:: yaml
# swift-node5:
# ip: 192.0.2.8
# container_vars:
# swift_vars:
# storage_ip: 198.51.100.8
# repl_ip: 203.0.113.8
# zone: 4
# region: 3
# weight: 200
# groups:
# - account
# - container
# - silver
# drives:
# - name: sdb
# storage_ip: 198.51.100.9
# repl_ip: 203.0.113.9
# weight: 75
# groups:
# - gold
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
#. Ensure the ``swift.yml`` is in the ``/etc/openstack_deploy/conf.d/``
folder.
--------------
.. include:: navigation.txt
View File
@ -1,104 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Storage devices
===============
This section offers a set of prerequisite instructions for setting up
Object Storage (swift) storage devices. The storage devices must be set up
before installing swift.
**Procedure 5.1. Configuring and mounting storage devices**
Object Storage recommends a minimum of three swift hosts
with five storage disks. The example commands in this procedure
use the storage devices ``sdc`` through to ``sdg``.
#. Determine the storage devices on the node to be used for swift.
#. Format each device on the node used for storage with XFS. While
formatting the devices, add a unique label for each device.
Without labels, a failed drive causes mount points to shift and
data to become inaccessible.
For example, create the file systems on the devices using the
``mkfs`` command:
.. code-block:: shell-session
# apt-get install xfsprogs
# mkfs.xfs -f -i size=1024 -L sdc /dev/sdc
# mkfs.xfs -f -i size=1024 -L sdd /dev/sdd
# mkfs.xfs -f -i size=1024 -L sde /dev/sde
# mkfs.xfs -f -i size=1024 -L sdf /dev/sdf
# mkfs.xfs -f -i size=1024 -L sdg /dev/sdg
#. Add the mount locations to the ``fstab`` file so that the storage
devices are remounted on boot. The following example mount options
are recommended when using XFS:
.. code-block:: shell-session
LABEL=sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdd /srv/node/sdd xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sde /srv/node/sde xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
LABEL=sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
#. Create the mount points for the devices using the ``mkdir`` command:
.. code-block:: shell-session
# mkdir -p /srv/node/sdc
# mkdir -p /srv/node/sdd
# mkdir -p /srv/node/sde
# mkdir -p /srv/node/sdf
# mkdir -p /srv/node/sdg
The mount point is referenced as the ``mount_point`` parameter in
the ``swift.yml`` file (``/etc/openstack_deploy/conf.d/swift.yml``):
.. code-block:: shell-session
# mount /srv/node/sdc
# mount /srv/node/sdd
# mount /srv/node/sde
# mount /srv/node/sdf
# mount /srv/node/sdg
To view an annotated example of the ``swift.yml`` file, see `Appendix A,
*OSA configuration files* <app-configfiles.html>`_.
For the following mounted devices:
+--------------------------------------+--------------------------------------+
| Device | Mount location |
+======================================+======================================+
| /dev/sdc | /srv/node/sdc |
+--------------------------------------+--------------------------------------+
| /dev/sdd | /srv/node/sdd |
+--------------------------------------+--------------------------------------+
| /dev/sde | /srv/node/sde |
+--------------------------------------+--------------------------------------+
| /dev/sdf | /srv/node/sdf |
+--------------------------------------+--------------------------------------+
| /dev/sdg | /srv/node/sdg |
+--------------------------------------+--------------------------------------+
Table 5.1. Mounted devices
The entry in the ``swift.yml``:
.. code-block:: yaml
# drives:
# - name: sdc
# - name: sdd
# - name: sde
# - name: sdf
# - name: sdg
# mount_point: /srv/node
--------------
.. include:: navigation.txt
View File
@ -1,69 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Integrate with the Image Service (glance)
=========================================
As an option, you can create images in Image Service (glance) and
store them using Object Storage (swift).
If there is an existing glance backend (for example,
cloud files) but you want to use swift as the glance backend instead,
you can re-add any images from glance after moving
to swift. Images are no longer available if the
glance variables change when you begin using swift.
**Procedure 5.3. Integrating Object Storage with Image Service**
This procedure requires the following:
- OSA Kilo (v11)
- Object Storage v2.2.0
#. Update the glance options in the
``/etc/openstack_deploy/user_variables.yml`` file:
.. code-block:: yaml
# Glance Options
glance_default_store: swift
glance_swift_store_auth_address: '{{ auth_identity_uri }}'
glance_swift_store_container: glance_images
glance_swift_store_endpoint_type: internalURL
glance_swift_store_key: '{{ glance_service_password }}'
glance_swift_store_region: RegionOne
glance_swift_store_user: 'service:glance'
- ``glance_default_store``: Set the default store to ``swift``.
- ``glance_swift_store_auth_address``: Set to the local
authentication address using the
``'{{ auth_identity_uri }}'`` variable.
- ``glance_swift_store_container``: Set the container name.
- ``glance_swift_store_endpoint_type``: Set the endpoint type to
``internalURL``.
- ``glance_swift_store_key``: Set the glance password using
the ``{{ glance_service_password }}`` variable.
- ``glance_swift_store_region``: Set the region. The default value
is ``RegionOne``.
- ``glance_swift_store_user``: Set the tenant and user name to
``'service:glance'``.
#. Rerun the glance configuration plays.
#. Run the glance playbook:
.. code-block:: shell-session
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-glance-install.yml --tags "glance-config"
--------------
.. include:: navigation.txt
View File
@ -1,51 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
Storage policies
================
Storage policies allow segmenting the cluster for various purposes
through the creation of multiple object rings. Using policies, different
devices can belong to different rings with varying levels of
replication. By supporting multiple object rings, swift can
segregate the objects within a single cluster.
Use storage policies for the following situations:
- Differing levels of replication: A provider may want to offer 2x
replication and 3x replication, but does not want to maintain two
separate clusters. They can set up a 2x policy and a 3x policy and
assign the nodes to their respective rings.
- Improving performance: Just as solid state drives (SSD) can be used
as the exclusive members of an account or database ring, an SSD-only
object ring can be created to implement a low-latency or high
performance policy.
- Collecting nodes into groups: Different object rings can have
different physical servers so that objects in specific storage
policies are always placed in a specific data center or geography.
- Differing storage implementations: A policy can be used to direct
traffic to collected nodes that use a different disk file (for
example: Kinetic, GlusterFS).
Most storage clusters do not require more than one storage policy. The
following problems can occur if using multiple storage policies per
cluster:
- Creating a second storage policy without any specified drives (all
drives are part of only the account, container, and default storage
policy groups) creates an empty ring for that storage policy.
- A non-default storage policy is used only if specified when creating
a container, using the ``X-Storage-Policy: <policy-name>`` header
(as shown in the example after this list).
After creating the container, it uses the storage policy.
Other containers continue using the default or another specified
storage policy.
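For example, using the ``swift`` command-line client to create a container
that uses the ``silver`` policy from the earlier configuration example (the
container name is illustrative):
.. code-block:: shell-session
   # swift post --header "X-Storage-Policy: silver" my-container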
For more information about storage policies, see: `Storage
Policies <http://docs.openstack.org/developer/swift/overview_policies.html>`_
--------------
.. include:: navigation.txt
View File
@ -1,65 +0,0 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide
.. _configure-swift:
Configuring the Object Storage (swift) service (optional)
=========================================================
.. toctree::
configure-swift-devices.rst
configure-swift-config.rst
configure-swift-glance.rst
configure-swift-add.rst
configure-swift-policies.rst
Object Storage (swift) is a multi-tenant Object Storage system. It is
highly scalable, can manage large amounts of unstructured data, and
provides a RESTful HTTP API.
The following procedure describes how to set up storage devices and
modify the Object Storage configuration files to enable swift
usage.
#. `The section called "Configure and mount storage
devices" <configure-swift-devices.html>`_
#. `The section called "Configure an Object Storage
deployment" <configure-swift-config.html>`_
#. Optionally, allow all Identity (keystone) users to use swift by setting
``swift_allow_all_users`` in the ``user_variables.yml`` file to
``True``. Any users with the ``_member_`` role (all authorized
keystone users) can create containers and upload objects
to Object Storage.
If this value is ``False``, then by default, only users with the
admin or ``swiftoperator`` role are allowed to create containers or
manage tenants.
When the backend type for the Image Service (glance) is set to
``swift``, glance can access the swift cluster
regardless of whether this value is ``True`` or ``False``.
Overview
~~~~~~~~
Object Storage (swift) is configured using the
``/etc/openstack_deploy/conf.d/swift.yml`` file and the
``/etc/openstack_deploy/user_variables.yml`` file.
When installing swift, use the group variables in the
``/etc/openstack_deploy/conf.d/swift.yml`` file for the Ansible
playbooks. Some variables cannot
be changed after they are set, while some changes require re-running the
playbooks. The values in the ``swift_hosts`` section supersede values in
the ``swift`` section.
To view the configuration files, including information about which
variables are required and which are optional, see `Appendix A, *OSA
configuration files* <app-configfiles.html>`_.
--------------
.. include:: navigation.txt