DOCS: Configuration section - cleanup
As per discussion in the OSA docs summit session, clean up the installation guide. This fixes typos, minor RST markup issues, and passive voice. This patch also merges some of the sections into the larger chapter, in an effort to remove multiple smaller files. This patch is the first of many, to avoid major conflicts. Change-Id: I2b1582812e638e2b3b455b7c34b93d13e08a168a
parent 7fd7978e8e
commit 55155f301e
@ -1,29 +1,195 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Block (cinder) storage service (optional)
=========================================================

By default, the Block (cinder) storage service installs on the host itself,
using the LVM backend.

.. note::

   While this is the default for cinder, using the LVM backend results in a
   Single Point of Failure. As a result of the volume service being deployed
   directly to the host, ``is_metal`` is ``true`` when using LVM.

NFS backend
~~~~~~~~~~~

If the NetApp backend is configured to use an NFS storage protocol, edit
``/etc/openstack_deploy/openstack_user_config.yml`` and configure the NFS
client on each storage node.

#. Add the ``cinder_backends`` stanza (which includes
   ``cinder_nfs_client``) under the ``container_vars`` stanza for
   each storage node:

   .. code-block:: yaml

      container_vars:
        cinder_backends:
          cinder_nfs_client:

#. Configure the location of the file that lists the shares available to the
   block storage service. This configuration file must include
   ``nfs_shares_config``:

   .. code-block:: yaml

      nfs_shares_config: SHARE_CONFIG

   Replace ``SHARE_CONFIG`` with the location of the share
   configuration file. For example, ``/etc/cinder/nfs_shares``.

#. Configure one or more NFS shares:

   .. code-block:: yaml

      shares:
        - { ip: "NFS_HOST", share: "NFS_SHARE" }

   Replace ``NFS_HOST`` with the IP address or hostname of the NFS
   server, and ``NFS_SHARE`` with the absolute path to an existing
   and accessible NFS share.
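
Taken together, the three steps above nest inside a single stanza per storage
node. The following is a minimal sketch; the host name, IP address, and share
values are placeholders rather than values from this guide:

.. code-block:: yaml

   storage_hosts:
     nfs-storage1:
       ip: 172.29.236.16
       container_vars:
         cinder_backends:
           cinder_nfs_client:
             nfs_shares_config: /etc/cinder/nfs_shares
             shares:
               - { ip: "203.0.113.50", share: "/vol/cinder" }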

Backup
~~~~~~

You can configure cinder to back up volumes to Object Storage (swift).
Enable the default configuration to back up volumes to a swift installation
accessible within your environment. Alternatively, you can set
``cinder_service_backup_swift_url`` and other variables to
back up to an external swift installation.

#. Add or edit the following line in the
   ``/etc/openstack_deploy/user_variables.yml`` file and set the value
   to ``True``:

   .. code-block:: yaml

      cinder_service_backup_program_enabled: True

#. By default, cinder uses the access credentials of the user
   initiating the backup. Default values are set in the
   ``/opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml``
   file. You can override those defaults by setting variables in
   ``/etc/openstack_deploy/user_variables.yml`` to change how cinder
   performs backups. Add and edit any of the following variables in
   that file:

   .. code-block:: yaml

      ...
      cinder_service_backup_swift_auth: per_user
      # Options include 'per_user' or 'single_user'. We default to
      # 'per_user' so that backups are saved to a user's swift
      # account.
      cinder_service_backup_swift_url:
      # This is your swift storage url when using 'per_user', or keystone
      # endpoint when using 'single_user'. When using 'per_user', you
      # can leave this as empty or as None to allow cinder-backup to
      # obtain a storage url from the environment.
      cinder_service_backup_swift_auth_version: 2
      cinder_service_backup_swift_user:
      cinder_service_backup_swift_tenant:
      cinder_service_backup_swift_key:
      cinder_service_backup_swift_container: volumebackups
      cinder_service_backup_swift_object_size: 52428800
      cinder_service_backup_swift_retry_attempts: 3
      cinder_service_backup_swift_retry_backoff: 2
      cinder_service_backup_compression_algorithm: zlib
      cinder_service_backup_metadata_version: 2

During installation of cinder, the backup service is configured.
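
To back up to an external swift installation with one dedicated account, the
``single_user`` mode pairs the keystone endpoint with explicit credentials. A
hedged sketch; the URL and credential values below are illustrative
placeholders, not defaults:

.. code-block:: yaml

   cinder_service_backup_program_enabled: True
   cinder_service_backup_swift_auth: single_user
   cinder_service_backup_swift_url: https://203.0.113.10:5000/v2.0
   cinder_service_backup_swift_auth_version: 2
   cinder_service_backup_swift_user: cinder-backup-user
   cinder_service_backup_swift_tenant: cinder-backup-tenant
   cinder_service_backup_swift_key: SECRET_KEY
   cinder_service_backup_swift_container: volumebackups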

Using Ceph for cinder backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can deploy Ceph to hold cinder volume backups.
To get started, set the ``cinder_service_backup_driver`` Ansible
variable:

.. code-block:: yaml

   cinder_service_backup_driver: cinder.backup.drivers.ceph

Configure the Ceph user and the pool to use for backups. The defaults
are shown here:

.. code-block:: yaml

   cinder_service_backup_ceph_user: cinder-backup
   cinder_service_backup_ceph_pool: backups
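
Putting these together, backing up to Ceph is a small ``user_variables.yml``
change; this sketch simply combines the variables shown above with their
documented defaults:

.. code-block:: yaml

   cinder_service_backup_program_enabled: True
   cinder_service_backup_driver: cinder.backup.drivers.ceph
   cinder_service_backup_ceph_user: cinder-backup
   cinder_service_backup_ceph_pool: backups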

Availability zones
~~~~~~~~~~~~~~~~~~

Create multiple availability zones to manage cinder storage hosts. Edit the
``/etc/openstack_deploy/openstack_user_config.yml`` and
``/etc/openstack_deploy/user_variables.yml`` files to set up
availability zones.

#. For each cinder storage host, configure the availability zone under
   the ``container_vars`` stanza:

   .. code-block:: yaml

      cinder_storage_availability_zone: CINDERAZ

   Replace ``CINDERAZ`` with a suitable name. For example,
   ``cinderAZ_2``.
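
   As a sketch of where this sits in ``openstack_user_config.yml``, the
   variable nests under each host's ``container_vars``; the host name and IP
   below are placeholders:

   .. code-block:: yaml

      storage_hosts:
        storage1:
          ip: 172.29.236.121
          container_vars:
            cinder_storage_availability_zone: cinderAZ_2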

#. If more than one availability zone is created, configure the default
   availability zone for all the hosts by creating a
   ``cinder_default_availability_zone`` in your
   ``/etc/openstack_deploy/user_variables.yml``:

   .. code-block:: yaml

      cinder_default_availability_zone: CINDERAZ_DEFAULT

   Replace ``CINDERAZ_DEFAULT`` with a suitable name. For example,
   ``cinderAZ_1``. The default availability zone should be the same
   for all cinder hosts.

OpenStack Dashboard (horizon) configuration for cinder
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can configure variables to set the behavior for cinder
volume management in the OpenStack Dashboard (horizon).
By default, no horizon configuration is set.

#. The default destination availability zone is ``nova`` if you use
   multiple availability zones and ``cinder_default_availability_zone``
   is not defined. Volume creation with
   horizon might fail if there is no availability zone named ``nova``.
   Set ``cinder_default_availability_zone`` to an appropriate
   availability zone name so that :guilabel:`Any availability zone`
   works in horizon.

#. horizon does not populate the volume type by default. On the new
   volume page, a request for the creation of a volume with the
   default parameters fails. Set ``cinder_default_volume_type`` so
   that a volume creation request without an explicit volume type
   succeeds.
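
A minimal ``user_variables.yml`` sketch covering both points; the zone and
volume type names are examples only:

.. code-block:: yaml

   cinder_default_availability_zone: cinderAZ_1
   cinder_default_volume_type: lvm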

Configuring cinder to use LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. List the ``container_vars`` that contain the storage options for the
   target host.

   .. note::

      The vars related to the cinder availability zone and
      ``limit_container_types`` are optional.

   To configure an LVM backend, use the following example:

   .. code-block:: yaml
@ -41,22 +207,22 @@ Configuring Cinder to use LVM
            iscsi_ip_address: "{{ storage_address }}"
            limit_container_types: cinder_volume

To use another backend (such as Ceph or NetApp) in a container instead of
bare metal, edit the ``/etc/openstack_deploy/env.d/cinder.yml`` file and
remove the ``is_metal: true`` stanza under the ``cinder_volumes_container``
properties.

Configuring cinder to use Ceph
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order for cinder to use Ceph, it is necessary to configure both
the API and the backend. When using any form of network storage
(iSCSI, NFS, Ceph) for cinder, the API containers can be considered
as backend servers. A separate storage host is not required.

In ``env.d/cinder.yml``, remove ``is_metal: true``.

#. List the target hosts on which to deploy the cinder API. We recommend
   using a minimum of three target hosts for this service.

   .. code-block:: yaml
@ -70,7 +236,7 @@ In ``env.d/cinder.yml`` remove/disable ``is_metal: true``
        ip: 172.29.236.103

To configure an RBD backend, utilize the following example:

.. code-block:: yaml
@ -97,7 +263,7 @@ The example uses cephx authentication and requires existing ``cinder``
account for ``cinder_volumes`` pool.

In ``user_variables.yml``:

.. code-block:: yaml
@ -108,9 +274,7 @@ in ``user_variables.yml``
      - 172.29.244.153

In ``openstack_user_config.yml``:

.. code-block:: yaml
@ -155,23 +319,24 @@ in ``openstack_user_config.yml``
This link provides a complete working example of Ceph setup and
integration with cinder (nova and glance included):

* `OpenStack-Ansible and Ceph Working Example`_

.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/


Configuring cinder to use a NetApp appliance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To use a NetApp storage appliance back end, edit the
``/etc/openstack_deploy/openstack_user_config.yml`` file and configure
each storage node that will use it.

.. note::

   Ensure that the NAS Team enables ``httpd.admin.access``.

#. Add the ``netapp`` stanza under the ``cinder_backends`` stanza for
   each storage node:
@ -183,8 +348,7 @@ Ensure that the NAS Team enables httpd.admin.access.
   The options in subsequent steps fit under the ``netapp`` stanza.

   The backend name is arbitrary and becomes a volume type within cinder.

#. Configure the storage family:
@ -205,9 +369,8 @@ Ensure that the NAS Team enables httpd.admin.access.
   Replace ``STORAGE_PROTOCOL`` with ``iscsi`` for iSCSI or ``nfs``
   for NFS.

   For the NFS protocol, specify the location of the
   configuration file that lists the shares available to cinder:

   .. code-block:: yaml
@ -255,10 +418,10 @@ Ensure that the NAS Team enables httpd.admin.access.
      volume_backend_name: BACKEND_NAME

   Replace ``BACKEND_NAME`` with a value that provides a hint
   for the cinder scheduler. For example, ``NETAPP_iSCSI``.
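
   As a sketch of how the options from the preceding steps nest together,
   the following shows a ``netapp`` stanza for iSCSI; the family, hostname,
   port, and credential values are placeholder assumptions, not defaults
   from this guide:

   .. code-block:: yaml

      netapp:
        netapp_storage_family: ontap_cluster
        netapp_storage_protocol: iscsi
        netapp_server_hostname: NETAPP_HOSTNAME
        netapp_server_port: 443
        netapp_login: NETAPP_ADMIN_USER
        netapp_password: NETAPP_ADMIN_PASSWORD
        volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
        volume_backend_name: NETAPP_iSCSI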

#. Ensure the ``openstack_user_config.yml`` configuration is
   accurate:

   .. code-block:: yaml
@ -288,6 +451,8 @@ Ensure that the NAS Team enables httpd.admin.access.
   ``nfs-common`` file across the hosts, transitioning from an LVM to a
   NetApp back end.

--------------

.. include:: navigation.txt
@ -1,7 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring service credentials
===============================

Configure credentials for each service in the
``/etc/openstack_deploy/*_secrets.yml`` files. Consider using `Ansible
@ -17,17 +17,20 @@ interfaces:
- ``keystone_auth_admin_password`` configures the ``admin`` tenant
  password for both the OpenStack API and dashboard access.

.. note::

   We recommend using the ``pw-token-gen.py`` script to generate random
   values for the variables in each file that contains service credentials:

   .. code-block:: shell-session

      # cd /opt/openstack-ansible/scripts
      # python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

To regenerate existing passwords, add the ``--regen`` flag.

.. warning::

   The playbooks do not currently manage changing passwords in an existing
   environment. Changing passwords and re-running the playbooks will fail
   and may break your OpenStack environment.
@ -1,24 +1,22 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Image (glance) service
======================================

In an all-in-one deployment with a single infrastructure node, the Image
(glance) service uses the local file system on the target host to store
images. When deploying production clouds, we recommend backing glance with a
swift backend or some other form of shared storage.

Configuring default and additional stores
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack-Ansible provides two configurations for controlling where glance
stores files: the default store and additional stores. glance stores images
in file-based storage by default. Two additional stores, ``http`` and
``cinder`` (Block Storage), are also enabled by default.

You can choose alternative default stores and alternative additional stores.
For example, a deployer that uses Ceph may configure the following Ansible
variables:
@ -30,17 +28,17 @@ variables:
     - http
     - cinder

The configuration above configures glance to use ``rbd`` (Ceph) by
default, but the ``glance_additional_stores`` list enables the ``swift``,
``http``, and ``cinder`` stores in the glance configuration files.
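
In full, the variable pair described above would read as follows (the list
order is illustrative):

.. code-block:: yaml

   glance_default_store: rbd
   glance_additional_stores:
     - swift
     - http
     - cinder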

The following example sets glance to use the ``images`` pool.
This example uses cephx authentication and requires an existing ``glance``
account for the ``images`` pool.

In ``user_variables.yml``:

.. code-block:: yaml
@ -81,8 +79,7 @@ usage.
      glance_swift_store_auth_version: 2
      glance_swift_store_auth_address: https://127.0.0.1/v2.0

#. Set the swift account credentials:

   .. code-block:: yaml
@ -104,7 +101,7 @@ usage.
      glance_swift_store_container: STORE_NAME

   Replace ``STORE_NAME`` with the container name in swift to be
   used for storing images. If the container does not exist, it is
   created automatically.

#. Define the store region:
@ -121,10 +118,10 @@ usage.
      glance_flavor: GLANCE_FLAVOR

   By default, glance uses caching and authenticates with the
   Identity (keystone) service. The default maximum size of the image cache
   is 10 GB. The default glance container size is 12 GB. In some
   configurations, glance attempts to cache an image
   which exceeds the available disk space. If necessary, you can disable
   caching. For example, to use Identity without caching, replace
   ``GLANCE_FLAVOR`` with ``keystone``:
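
   .. code-block:: yaml

      # Sketch: direct substitution of the value named above
      glance_flavor: keystone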
@ -163,12 +160,12 @@ usage.
   - ``trusted-auth+cachemanagement``

Special considerations
~~~~~~~~~~~~~~~~~~~~~~

If the swift password or key contains a dollar sign (``$``), it must
be escaped with an additional dollar sign (``$$``). For example, a password of
``super$ecure`` would need to be entered as ``super$$ecure``. This is necessary
due to the way `oslo.config formats strings`_.

.. _oslo.config formats strings: https://bugs.launchpad.net/oslo-incubator/+bug/1259729
@ -1,7 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring target hosts
========================

Modify the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure the target hosts.
@ -70,7 +70,7 @@ the ``br-mgmt`` container management bridge on each target host.
      network02: ...

Providing more than one network host in the ``network_hosts`` block will
enable `L3HA support using VRRP`_ in the ``neutron-agent`` containers.

.. _L3HA support using VRRP: http://docs.openstack.org/liberty/networking-guide/scenario_l3ha_lb.html
@ -120,12 +120,12 @@ the ``br-mgmt`` container management bridge on each target host.
      ip: STORAGE01_IP_ADDRESS
    storage02: ...

Each storage host requires additional configuration to define the back end
driver.

The default configuration includes an optional storage host. To
install without storage hosts, comment out the stanza beginning with
the ``storage_hosts:`` line.

--------------
@ -1,7 +1,7 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the hypervisor (optional)
=====================================

By default, the KVM hypervisor is used. If you are deploying to a host
that does not support KVM hardware acceleration extensions, select a
@ -1,12 +1,12 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Initial environment configuration
=================================

OpenStack-Ansible depends on various files that are used to build an inventory
for Ansible. Start by getting those files into the correct places:

#. Copy the contents of the
   ``/opt/openstack-ansible/etc/openstack_deploy`` directory to the
   ``/etc/openstack_deploy`` directory.
@ -15,30 +15,32 @@ for Ansible. Start by getting those files into the correct places:
#. Copy the ``openstack_user_config.yml.example`` file to
   ``/etc/openstack_deploy/openstack_user_config.yml``.

You can review the ``openstack_user_config.yml`` file and make changes
to the deployment of your OpenStack environment.

.. note::

   The file is heavily commented with details about the various options.

Various types of physical hardware can host the containers that
OpenStack-Ansible deploys. For example, hosts listed in
`shared-infra_hosts` run containers for many of the shared services that
your OpenStack environment requires, including databases,
memcached, and RabbitMQ. Several other host types contain
other types of containers, and all of these are listed in
``openstack_user_config.yml``.

For details about how the inventory is generated from the environment
configuration, see :ref:`developer-inventory`.
Affinity
~~~~~~~~

OpenStack-Ansible's dynamic inventory generation has a concept called
*affinity*. This determines how many containers of a similar type are deployed
onto a single physical host.

Using `shared-infra_hosts` as an example, consider this
``openstack_user_config.yml``:

.. code-block:: yaml
@ -50,15 +52,15 @@ Using `shared-infra_hosts` as an example, let's consider a
     infra3:
       ip: 172.29.236.103

Three hosts are assigned to the `shared-infra_hosts` group, so
OpenStack-Ansible ensures that each host runs a single database container,
a single memcached container, and a single RabbitMQ container. Each host has
an affinity of 1 by default, which means each host runs one of each
container type.
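
As a hedged sketch of tuning that behavior, an ``affinity`` block under a
host sets a per-container-type count; this layout mirrors the RabbitMQ
example that follows, with the host name and IP as placeholders:

.. code-block:: yaml

   shared-infra_hosts:
     infra1:
       affinity:
         galera_container: 1
         rabbit_mq_container: 0
       ip: 172.29.236.101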

You can skip the deployment of RabbitMQ altogether. This is
helpful when deploying a standalone swift environment. If you need
this configuration, your ``openstack_user_config.yml`` would look like this:

.. code-block:: yaml
@ -76,14 +78,14 @@ this configuration, their ``openstack_user_config.yml`` would look like this:
         rabbit_mq_container: 0
       ip: 172.29.236.103

The configuration above deploys a memcached container and a database
container on each host, but no RabbitMQ containers.


.. _security_hardening:

Security hardening
~~~~~~~~~~~~~~~~~~

OpenStack-Ansible automatically applies host security hardening configurations
using the `openstack-ansible-security`_ role. The role uses a version of the
@ -91,8 +93,8 @@ using the `openstack-ansible-security`_ role. The role uses a version of the
Ubuntu 14.04 and OpenStack.

The role is applicable to physical hosts within an OpenStack-Ansible
deployment that are operating as any type of node, infrastructure or compute.
By default, the role is enabled. You can disable it by changing a variable
within ``user_variables.yml``:

.. code-block:: yaml
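
   # Sketch only: the flag name is assumed from the
   # openstack-ansible-security role; set it to false to skip hardening.
   apply_security_hardening: false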
@ -102,7 +104,7 @@ within ``user_variables.yml``:

When the variable is set to ``true``, the ``setup-hosts.yml`` playbook applies
the role during deployments.

You can apply security configurations to an existing environment or audit
an environment using a playbook supplied with OpenStack-Ansible:

.. code-block:: bash
@ -1,28 +1,220 @@
`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Bare Metal (ironic) service (optional)
======================================================

.. note::

   This feature is experimental at this time and has not been fully
   production tested yet. These implementation instructions assume that
   ironic is being deployed as the sole hypervisor for the region.

Ironic is an OpenStack project which provisions bare metal (as opposed to
virtual) machines by leveraging common technologies such as PXE boot and IPMI
to cover a wide range of hardware, while supporting pluggable drivers to
allow vendor-specific functionality to be added.

OpenStack's ironic project makes physical servers as easy to provision as
virtual machines in a cloud.

OpenStack-Ansible deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Modify the environment files and force ``nova-compute`` to run from
   within a container:

   .. code-block:: bash

      sed -i '/is_metal.*/d' /etc/openstack_deploy/env.d/nova.yml

Setting up a neutron network for use by ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the general case, the neutron network can be a simple flat network.
However, in a complex case, this can be whatever you need. Ensure you adjust
the deployment accordingly. The following is an example:

.. code-block:: bash

   neutron net-create cleaning-net --shared \
       --provider:network_type flat \
       --provider:physical_network ironic-net

   neutron subnet-create cleaning-net 172.19.0.0/22 --name ironic-subnet \
       --ip-version=4 \
       --allocation-pool start=172.19.1.100,end=172.19.1.200 \
       --enable-dhcp \
       --dns-nameservers list=true 8.8.4.4 8.8.8.8

Building ironic images
~~~~~~~~~~~~~~~~~~~~~~

Images built with ``diskimage-builder`` must be created outside of a
container. For this process, use one of the physical hosts within the
environment.

#. Install the necessary packages:

   .. code-block:: bash

      apt-get install -y qemu uuid-runtime curl

#. Install the ``diskimage-builder`` package:

   .. code-block:: bash

      pip install diskimage-builder --isolated

   .. important::

      Only use the ``--isolated`` flag if you are building on a node
      deployed by OpenStack-Ansible, otherwise pip will not
      resolve the external package.

#. Optional: Force the ubuntu ``image-create`` process to use a modern kernel:

   .. code-block:: bash

      echo 'linux-image-generic-lts-xenial:' > \
        /usr/local/share/diskimage-builder/elements/ubuntu/package-installs.yaml

#. Create the Ubuntu ``initramfs``:

   .. code-block:: bash

      disk-image-create ironic-agent ubuntu -o ${IMAGE_NAME}

#. Upload the created deploy images into the Image (glance) service:

   .. code-block:: bash

      # Upload the deploy image kernel
      glance image-create --name ${IMAGE_NAME}.kernel --visibility public \
        --disk-format aki --container-format aki < ${IMAGE_NAME}.kernel

      # Upload the user image initramfs
      glance image-create --name ${IMAGE_NAME}.initramfs --visibility public \
        --disk-format ari --container-format ari < ${IMAGE_NAME}.initramfs

#. Create the Ubuntu user image:

   .. code-block:: bash

      disk-image-create ubuntu baremetal localboot local-config dhcp-all-interfaces grub2 -o ${IMAGE_NAME}

#. Upload the created user images into the Image (glance) service:

   .. code-block:: bash

      # Upload the user image vmlinuz and store uuid
      VMLINUZ_UUID="$(glance image-create --name ${IMAGE_NAME}.vmlinuz --visibility public --disk-format aki --container-format aki < ${IMAGE_NAME}.vmlinuz | awk '/\| id/ {print $4}')"

      # Upload the user image initrd and store uuid
      INITRD_UUID="$(glance image-create --name ${IMAGE_NAME}.initrd --visibility public --disk-format ari --container-format ari < ${IMAGE_NAME}.initrd | awk '/\| id/ {print $4}')"

      # Create image
      glance image-create --name ${IMAGE_NAME} --visibility public --disk-format qcow2 --container-format bare --property kernel_id=${VMLINUZ_UUID} --property ramdisk_id=${INITRD_UUID} < ${IMAGE_NAME}.qcow2

Creating an ironic flavor
~~~~~~~~~~~~~~~~~~~~~~~~~

#. Create a new flavor called ``my-baremetal-flavor``.

   .. note::

      The following example sets the CPU architecture for the newly created
      flavor to be `x86_64`.

   .. code-block:: bash

      nova flavor-create ${FLAVOR_NAME} ${FLAVOR_ID} ${FLAVOR_RAM} ${FLAVOR_DISK} ${FLAVOR_CPU}
      nova flavor-key ${FLAVOR_NAME} set cpu_arch=x86_64
      nova flavor-key ${FLAVOR_NAME} set capabilities:boot_option="local"

   .. note::

      Ensure the flavor and nodes match when enrolling into ironic.
      See the documentation on flavors for more information:
      http://docs.openstack.org/openstack-ops/content/flavors.html

After the ironic node is successfully deployed, the instance boots from your
local disk as first preference on subsequent boots. This speeds up the
deployed node's boot time. Alternatively, if this is not set, the ironic node
PXE boots first, which allows for operator-initiated image updates and other
operations.

.. note::

   The operational reasoning and building an environment to support this
   use case is not covered here.

Enrolling ironic nodes
~~~~~~~~~~~~~~~~~~~~~~

#. From the utility container, enroll a new baremetal node by executing the
   following:

   .. code-block:: bash

      # Source credentials
      . ~/openrc

      # Create the node
      NODE_HOSTNAME="myfirstnodename"
      IPMI_ADDRESS="10.1.2.3"
      IPMI_USER="my-ipmi-user"
      IPMI_PASSWORD="my-ipmi-password"
      KERNEL_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.kernel/ {print \$2}")
      INITRAMFS_IMAGE=$(glance image-list | awk "/${IMAGE_NAME}.initramfs/ {print \$2}")
      ironic node-create \
          -d agent_ipmitool \
          -i ipmi_address="${IPMI_ADDRESS}" \
          -i ipmi_username="${IPMI_USER}" \
          -i ipmi_password="${IPMI_PASSWORD}" \
          -i deploy_ramdisk="${INITRAMFS_IMAGE}" \
          -i deploy_kernel="${KERNEL_IMAGE}" \
          -n ${NODE_HOSTNAME}

      # Create a port for the node
      NODE_MACADDRESS="aa:bb:cc:dd:ee:ff"
      ironic port-create \
          -n $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") \
          -a ${NODE_MACADDRESS}

      # Associate an image to the node
      ROOT_DISK_SIZE_GB=40
      ironic node-update $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") add \
          driver_info/deploy_kernel=$KERNEL_IMAGE \
          driver_info/deploy_ramdisk=$INITRAMFS_IMAGE \
          instance_info/deploy_kernel=$KERNEL_IMAGE \
          instance_info/deploy_ramdisk=$INITRAMFS_IMAGE \
          instance_info/root_gb=${ROOT_DISK_SIZE_GB}

      # Add node properties
      # The property values used here should match the hardware used
      ironic node-update $(ironic node-list | awk "/${NODE_HOSTNAME}/ {print \$2}") add \
          properties/cpus=48 \
          properties/memory_mb=254802 \
          properties/local_gb=80 \
          properties/size=3600 \
          properties/cpu_arch=x86_64 \
          properties/capabilities=memory_mb:254802,local_gb:80,cpu_arch:x86_64,cpus:48,boot_option:local

Deploying a baremetal node with ironic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

   You will not have access unless you have a key set within nova before
   your ironic deployment. If you do not have an ssh key readily
   available, set one up with ``ssh-keygen``.

   .. code-block:: bash

      nova keypair-add --pub-key ~/.ssh/id_rsa.pub admin

Now boot a node:

.. code-block:: bash

   nova boot --flavor ${FLAVOR_NAME} --image ${IMAGE_NAME} --key-name admin ${NODE_NAME}
@ -3,7 +3,7 @@
.. _network_configuration:

Configuring target host networking
==================================

Edit the ``/etc/openstack_deploy/openstack_user_config.yml`` file to
configure target host networking.
@ -26,10 +26,10 @@ configure target host networking.
   notation. For example, 203.0.113.0/24.

   Use the same IP address ranges as the underlying physical network
   interfaces or bridges in `the section called "Configuring
   the network" <targethosts-network.html>`_. For example, if the
   container network uses 203.0.113.0/24, the ``CONTAINER_MGMT_CIDR``
   also uses 203.0.113.0/24.

   The default configuration includes the optional storage and service
   networks. To remove one or both of them, comment out the appropriate
@ -44,11 +44,10 @@ configure target host networking.
   Replace ``EXISTING_IP_ADDRESSES`` with a list of existing IP
   addresses in the ranges defined in the previous step. This list
   should include all IP addresses manually configured on target hosts,
   internal load balancers, service network bridge, deployment hosts, and
   any other devices, to avoid conflicts during the automatic IP address
   generation process.

   Add individual IP addresses on separate lines. For example, to
   prevent use of 203.0.113.101 and 201:
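
   A hedged sketch of the resulting entries; the ``used_ips`` key name is an
   assumption based on the example configuration file, and the addresses are
   the two named above:

   .. code-block:: yaml

      used_ips:
        - 203.0.113.101
        - 203.0.113.201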
@ -132,7 +131,7 @@ configure target host networking.
   The default configuration includes the optional storage and service
   networks. To remove one or both of them, comment out the entire
   associated stanza beginning with the ``- network:`` line.

#. Configure OpenStack Networking VXLAN tunnel/overlay networks in the
   ``provider_networks`` subsection:
@ -185,16 +184,16 @@ configure target host networking.
   additional network.

   Replace ``PHYSICAL_NETWORK_INTERFACE`` with the network interface used for
   flat networking. Ensure this is a physical interface on the same L2
   network being used with the ``br-vlan`` devices. If no additional network
   interface is available, a veth pair plugged into the ``br-vlan`` bridge
   can provide the necessary interface.

   The following is an example of creating a ``veth-pair`` within an existing
   bridge:

   .. code-block:: text

      # Create veth pair, do not abort if already exists
      pre-up ip link add br-vlan-veth type veth peer name PHYSICAL_NETWORK_INTERFACE || true
      # Set both ends UP
      pre-up ip link set br-vlan-veth up
@ -1,44 +1,45 @@

`Home <index.html>`_ OpenStack-Ansible Installation Guide

Configuring the Compute (Nova) Service (optional)
-------------------------------------------------
Configuring the Compute (nova) service (optional)
=================================================

The compute service (nova) handles the creation of virtual machines within an
The Compute service (nova) handles the creation of virtual machines within an
OpenStack environment. Many of the default options used by OpenStack-Ansible
are found within `defaults/main.yml` within the nova role.
are found within ``defaults/main.yml`` within the nova role.

Availability zones
~~~~~~~~~~~~~~~~~~

Deployers with multiple availability zones (AZ's) can set the
``nova_default_schedule_zone`` Ansible variable to specify an AZ to use for
instance build requests where an AZ is not provided. This could be useful in
environments with different types of hypervisors where builds are sent to
certain hardware types based on their resource requirements.
Deployers with multiple availability zones can set the
``nova_default_schedule_zone`` Ansible variable to specify an availability zone
for new requests. This is useful in environments with different types
of hypervisors, where builds are sent to certain hardware types based on
their resource requirements.

For example, if a deployer has some servers with spinning hard disks and others
with SSDs, they can set the default AZ to one that uses only spinning disks (to
save costs). To build an instance using SSDs, users must select an AZ that
includes SSDs and provide that AZ in their instance build request.
For example, you might have servers running on two racks that do not share a
PDU. You can group these two racks into two availability zones. When one rack
loses power, the other still works. By spreading your containers across the
two racks (availability zones), you improve your service availability.

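A minimal sketch of this setting in
``/etc/openstack_deploy/user_variables.yml``; the zone name ``rack1`` is
only an illustration:

.. code-block:: yaml

   nova_default_schedule_zone: rack1
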

Block device tuning for Ceph (RBD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Ceph is enabled and ``nova_libvirt_images_rbd_pool`` is defined, two
libvirt configurations will be changed by default:
Enabling Ceph and defining ``nova_libvirt_images_rbd_pool`` changes two
libvirt configurations by default:

* hw_disk_discard: ``unmap``
* disk_cachemodes: ``network=writeback``

Setting ``hw_disk_discard`` to ``unmap`` in libvirt will enable
Setting ``hw_disk_discard`` to ``unmap`` in libvirt enables
discard (sometimes called TRIM) support for the underlying block device. This
allows for unused blocks to be reclaimed on the underlying disks.
allows reclaiming of unused blocks on the underlying disks.

Setting ``disk_cachemodes`` to ``network=writeback`` allows data to be written
into a cache on each change, but those changes are flushed to disk at a regular
interval. This can increase write performance on Ceph block devices.

Deployers have the option to customize these settings using two Ansible
You have the option to customize these settings using two Ansible
variables (defaults shown here):

.. code-block:: yaml

@ -46,15 +47,13 @@ variables (defaults shown here):
nova_libvirt_hw_disk_discard: 'unmap'
nova_libvirt_disk_cachemodes: 'network=writeback'

Deployers can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
You can disable discard by setting ``nova_libvirt_hw_disk_discard`` to
``ignore``. The ``nova_libvirt_disk_cachemodes`` can be set to an empty
string to disable ``network=writeback``.

The following minimal example configuration sets nova to use the
``ephemeral-vms`` ceph pool.The example uses cephx authentication, and
requires an existing ``cinder`` account for the ``ephemeral-vms`` pool.
``ephemeral-vms`` Ceph pool. The following example uses cephx authentication, and
requires an existing ``cinder`` account for the ``ephemeral-vms`` pool:

.. code-block:: console

@ -65,33 +64,29 @@ requires an existing ``cinder`` account for the ``ephemeral-vms`` pool.
- 172.29.244.153

If you have a different ceph username for the pool, you can use it as
If you have a different Ceph username for the pool, use it as:

.. code-block:: console

   cinder_ceph_client: <ceph-username>

* The `Ceph documentation for OpenStack`_ has additional information about these settings.
* `OpenStack-Ansible and Ceph Working Example`_

.. _Ceph documentation for OpenStack: http://docs.ceph.com/docs/master/rbd/rbd-openstack/
.. _OpenStack-Ansible and Ceph Working Example: https://www.openstackfaq.com/openstack-ansible-ceph/

Config Drive
Config drive
~~~~~~~~~~~~

By default, OpenStack-Ansible will not configure Nova to force config drives
to be provisioned with every instance that Nova builds. The metadata service
provides configuration information that can be used by cloud-init inside the
instance. Config drives are only necessary when an instance doesn't have
cloud-init installed or doesn't have support for handling metadata.
By default, OpenStack-Ansible does not configure nova to force config drives
to be provisioned with every instance that nova builds. The metadata service
provides configuration information that is used by ``cloud-init`` inside the
instance. Config drives are only necessary when an instance does not have
``cloud-init`` installed or does not have support for handling metadata.

A deployer can set an Ansible variable to force config drives to be deployed
with every virtual machine:

@ -101,8 +96,8 @@ with every virtual machine:
nova_force_config_drive: True

Certain formats of config drives can prevent instances from migrating properly
between hypervisors. If a deployer needs forced config drives and the ability
to migrate instances, the config drive format should be set to ``vfat`` using
between hypervisors. If you need forced config drives and the ability
to migrate instances, set the config drive format to ``vfat`` using
the ``nova_nova_conf_overrides`` variable:

.. code-block:: yaml

@ -112,7 +107,7 @@ the ``nova_nova_conf_overrides`` variable:
config_drive_format: vfat
force_config_drive: True
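
For context, those two options sit inside the override mapping. A sketch
of the full stanza, with the nesting assumed from the standard
OpenStack-Ansible overrides format rather than taken from this change:

.. code-block:: yaml

   nova_nova_conf_overrides:
     DEFAULT:
       config_drive_format: vfat
       force_config_drive: True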

Libvirtd Connectivity and Authentication
Libvirtd connectivity and authentication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, OpenStack-Ansible configures the libvirt daemon in the following
@ -122,8 +117,7 @@ way:

* TCP plaintext connections are disabled
* Authentication over TCP connections uses SASL

Deployers can customize these settings using the Ansible variables shown
below:
You can customize these settings using the following Ansible variables:

.. code-block:: yaml

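   # Sketch of the variable list elided here (names from the os_nova
   # role; the default values shown are assumptions):
   nova_libvirtd_listen_tls: 1
   nova_libvirtd_listen_tcp: 0
   nova_libvirtd_auth_tcp: "sasl"
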
@ -139,8 +133,8 @@ below:
Multipath
~~~~~~~~~

Nova supports multipath for iSCSI-based storage. Deployers can enable
multipath support in nova through a configuration override:
Nova supports multipath for iSCSI-based storage. Enable multipath support in
nova through a configuration override:

.. code-block:: yaml

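   # One plausible form of the override: iscsi_use_multipath lives
   # under nova's [libvirt] section; the nesting here is assumed from
   # the standard overrides format, not taken from this change.
   nova_nova_conf_overrides:
     libvirt:
       iscsi_use_multipath: true
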
@ -151,13 +145,13 @@ multipath support in nova through a configuration override:
Shared storage and synchronized UID/GID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deployers can specify a custom UID for the nova user and GID for the nova group
Specify a custom UID for the nova user and GID for the nova group
to ensure they are identical on each host. This is helpful when using shared
storage on compute nodes because it allows instances to migrate without
storage on Compute nodes because it allows instances to migrate without
filesystem ownership failures.

By default, Ansible will create the nova user and group without specifying the
UID or GID. To specify custom values for the UID/GID, set the following
By default, Ansible creates the nova user and group without specifying the
UID or GID. To specify custom values for the UID or GID, set the following
Ansible variables:

.. code-block:: yaml
@ -165,9 +159,11 @@ Ansible variables:

   nova_system_user_uid: <specify a UID>
   nova_system_group_gid: <specify a GID>

.. warning:: Setting this value **after** deploying an environment with
.. warning::

   Setting this value after deploying an environment with
OpenStack-Ansible can cause failures, errors, and general instability. These
values should only be set once **before** deploying an OpenStack environment
values should only be set once before deploying an OpenStack environment
and then never changed.

--------------