Configuring the Block (cinder) storage service (optional)
By default, the Block (cinder) storage service installs on the host itself using the LVM backend.
Note
While this is the default for cinder, using the LVM back end results in a single point of failure.
The LVM back end must run on the host; however, most other back ends can be deployed inside a container. If the storage back ends deployed in your environment are able to run inside containers, we recommend setting is_metal: False in the env.d/cinder.yml file.
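For example, a minimal /etc/openstack_deploy/env.d/cinder.yml sketch of that override looks like this (the same stanza appears again in the LVM and Ceph sections below):

container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false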
Note
Due to a limitation of the container system, you must deploy the volume service directly onto the host when using back ends that depend on iSCSI. This is the case, for example, for storage appliances configured to use the iSCSI protocol.
NFS backend
Edit /etc/openstack_deploy/openstack_user_config.yml and configure the NFS client on each storage node if the NetApp back end is configured to use an NFS storage protocol.

For each storage node, add one cinder_backends block underneath a new container_vars section. container_vars are used to allow individualized configuration per container or host. Each cinder back end is defined with a unique key, for example, nfs-volume1. This key later becomes a unique cinder back end and volume type:

container_vars:
  cinder_backends:
    nfs-volume1:
Configure the appropriate cinder volume backend name:
volume_backend_name: NFS_VOLUME1
Configure the appropriate cinder NFS driver:
volume_driver: cinder.volume.drivers.nfs.NfsDriver
Configure the location of the file that lists the shares available to the Block Storage service. This configuration must include the nfs_shares_config option:

nfs_shares_config: FILENAME_NFS_SHARES
Replace FILENAME_NFS_SHARES with the location of the share configuration file, for example, /etc/cinder/nfs_shares_volume1.
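The shares file itself is a plain-text list of NFS exports, one HOST:/PATH entry per line. As an illustrative sketch (the addresses and paths are hypothetical), /etc/cinder/nfs_shares_volume1 might contain:

1.2.3.4:/vol1
1.2.3.5:/vol2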
Define mount options for the NFS mount. For example:
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
Configure one or more NFS shares:
shares:
  - { ip: "HOSTNAME", share: "PATH_TO_NFS_VOLUME" }
Replace HOSTNAME with the IP address or hostname of the NFS server, and PATH_TO_NFS_VOLUME with the absolute path to an existing and accessible NFS share (excluding the IP address or hostname).
The following is a full configuration example of a cinder NFS back end named NFS_VOLUME1. The cinder playbooks automatically add a custom volume type named nfs-volume1, as in this example:
container_vars:
  cinder_backends:
    nfs-volume1:
      volume_backend_name: NFS_VOLUME1
      volume_driver: cinder.volume.drivers.nfs.NfsDriver
      nfs_shares_config: /etc/cinder/nfs_shares_volume1
      nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
      shares:
        - { ip: "1.2.3.4", share: "/vol1" }
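Because each back end is keyed by a unique name, additional shares can sit side by side under the same cinder_backends block. A hypothetical second back end, nfs-volume2, would follow the same pattern:

container_vars:
  cinder_backends:
    nfs-volume1:
      # ... as above ...
    nfs-volume2:
      volume_backend_name: NFS_VOLUME2
      volume_driver: cinder.volume.drivers.nfs.NfsDriver
      nfs_shares_config: /etc/cinder/nfs_shares_volume2
      shares:
        - { ip: "1.2.3.5", share: "/vol2" }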
Backup
You can configure cinder to back up volumes to Object Storage (swift). Enable the default configuration to back up volumes to a swift installation accessible within your environment. Alternatively, you can set cinder_service_backup_swift_url and other variables to back up to an external swift installation.
Add or edit the following line in the /etc/openstack_deploy/user_variables.yml file and set the value to True:

cinder_service_backup_program_enabled: True
By default, cinder uses the access credentials of the user initiating the backup. Default values are set in the /opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml file. You can override those defaults by setting variables in /etc/openstack_deploy/user_variables.yml to change how cinder performs backups. Add and edit any of the following variables in the /etc/openstack_deploy/user_variables.yml file:

...
cinder_service_backup_swift_auth: per_user
# Options include 'per_user' or 'single_user'. We default to
# 'per_user' so that backups are saved to a user's swift
# account.
cinder_service_backup_swift_url:
# This is your swift storage url when using 'per_user', or keystone
# endpoint when using 'single_user'. When using 'per_user', you
# can leave this empty or as None to allow cinder-backup to
# obtain a storage url from the environment.
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2
During installation of cinder, the backup service is configured.
Using Ceph for cinder backups
You can deploy Ceph to hold cinder volume backups. To get started, set the cinder_service_backup_driver Ansible variable:
cinder_service_backup_driver: cinder.backup.drivers.ceph
Configure the Ceph user and the pool to use for backups. The defaults are shown here:
cinder_service_backup_ceph_user: cinder-backup
cinder_service_backup_ceph_pool: backups
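Taken together, a minimal user_variables.yml sketch for Ceph-backed backups (simply restating the defaults above) would be:

cinder_service_backup_program_enabled: True
cinder_service_backup_driver: cinder.backup.drivers.ceph
cinder_service_backup_ceph_user: cinder-backup
cinder_service_backup_ceph_pool: backups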
Availability zones
Create multiple availability zones to manage cinder storage hosts.
Edit the /etc/openstack_deploy/openstack_user_config.yml and /etc/openstack_deploy/user_variables.yml files to set up availability zones.
For each cinder storage host, configure the availability zone under the container_vars stanza:

cinder_storage_availability_zone: CINDERAZ
Replace CINDERAZ with a suitable name, for example, cinderAZ_2.

If you create more than one availability zone, configure the default availability zone for all hosts by setting cinder_default_availability_zone in your /etc/openstack_deploy/user_variables.yml file:
cinder_default_availability_zone: CINDERAZ_DEFAULT
Replace CINDERAZ_DEFAULT with a suitable name, for example, cinderAZ_1. The default availability zone should be the same for all cinder hosts.
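As a minimal sketch (host names and addresses are illustrative), two storage hosts placed in different availability zones would look like this in openstack_user_config.yml, with the shared default set in user_variables.yml as described above:

storage_hosts:
  storage1:
    ip: 172.29.236.16
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
  storage2:
    ip: 172.29.236.17
    container_vars:
      cinder_storage_availability_zone: cinderAZ_2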
OpenStack Dashboard (horizon) configuration for cinder
You can configure variables to set the behavior for cinder volume management in OpenStack Dashboard (horizon). By default, no horizon configuration is set.
- The default destination availability zone is nova if you use multiple availability zones and cinder_default_availability_zone is not defined. Volume creation with horizon might fail if there is no availability zone named nova. Set cinder_default_availability_zone to an appropriate availability zone name so that Any availability zone works in horizon.
- horizon does not populate the volume type by default. On the new volume page, a request to create a volume with the default parameters fails. Set cinder_default_volume_type so that a volume creation request without an explicit volume type succeeds. Both variables are illustrated in the sketch after this list.
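A hypothetical user_variables.yml sketch setting both horizon-related defaults (the zone and type names are illustrative, and the volume type must already exist):

cinder_default_availability_zone: cinderAZ_1
cinder_default_volume_type: lvm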
Configuring cinder to use LVM
List the container_vars that contain the storage options for the target host.

Note

The variables related to the cinder availability zone and limit_container_types are optional.

To configure an LVM back end, use the following example:
storage_hosts:
  Infra01:
    ip: 172.29.236.16
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
      cinder_default_availability_zone: cinderAZ_1
      cinder_backends:
        lvm:
          volume_backend_name: LVM_iSCSI
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
          iscsi_ip_address: "{{ cinder_storage_address }}"
        limit_container_types: cinder_volume
To use another back end in a container instead of on bare metal, copy env.d/cinder.yml to /etc/openstack_deploy/env.d/cinder.yml and change the is_metal: true stanza under the cinder_volumes_container properties to is_metal: false.
Alternatively, you can selectively override it, like this:
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false
Configuring cinder to use Ceph
For cinder to use Ceph, you must configure both the API and the back end. When using any form of network storage (iSCSI, NFS, Ceph) for cinder, the API containers can be considered back-end servers, so a separate storage host is not required.
Copy env.d/cinder.yml to /etc/openstack_deploy/env.d/cinder.yml and change the is_metal: true stanza under the cinder_volumes_container properties to is_metal: false.
Alternatively, you can selectively override it, like this:
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false
List the target hosts on which to deploy the cinder API. We recommend a minimum of three target hosts for this service:
storage-infra_hosts:
  infra1:
    ip: 172.29.236.101
  infra2:
    ip: 172.29.236.102
  infra3:
    ip: 172.29.236.103
To configure an RBD back end, use the following example:
container_vars:
  cinder_storage_availability_zone: cinderAZ_3
  cinder_default_availability_zone: cinderAZ_1
  cinder_backends:
    limit_container_types: cinder_volume
    volumes_hdd:
      volume_driver: cinder.volume.drivers.rbd.RBDDriver
      rbd_pool: volumes_hdd
      rbd_ceph_conf: /etc/ceph/ceph.conf
      rbd_flatten_volume_from_snapshot: 'false'
      rbd_max_clone_depth: 5
      rbd_store_chunk_size: 4
      rados_connect_timeout: -1
      volume_backend_name: volumes_hdd
      rbd_user: "{{ cinder_ceph_client }}"
      rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
The following example sets cinder to use the cinder-volumes pool. The example uses cephx authentication and requires an existing cinder account for the cinder-volumes pool.
In user_variables.yml:
ceph_mons:
  - 172.29.244.151
  - 172.29.244.152
  - 172.29.244.153
In openstack_user_config.yml:
storage_hosts:
  infra1:
    ip: 172.29.236.101
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
  infra2:
    ip: 172.29.236.102
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
  infra3:
    ip: 172.29.236.103
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
This link provides a complete working example of Ceph setup and integration with cinder (nova and glance included):
Configuring cinder to use Dell EqualLogic
To use the Dell EqualLogic volume driver as a back end, edit the /etc/openstack_deploy/openstack_user_config.yml file and configure the storage nodes that will use it.
Define the following parameters.
Add a delleqlx stanza under cinder_backends for each storage node:

cinder_backends:
  delleqlx:
Specify the volume back end name:
volume_backend_name: DellEQLX_iSCSI
Use the Dell EMC PS Series (EqualLogic) iSCSI driver:
volume_driver: cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver
Specify the SAN IP address:
san_ip: ip_of_dell_storage
Specify SAN username (Default: grpadmin):
san_login: grpadmin
Specify the SAN password:
san_password: password
Specify the group name for pools (Default: group-0):
eqlx_group_name: group-0
Specify the pool where cinder creates volumes and snapshots (Default: default):
eqlx_pool: default
Ensure the openstack_user_config.yml configuration is accurate:

storage_hosts:
  Infra01:
    ip: infra_host_ip
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        delleqlx:
          volume_backend_name: DellEQLX_iSCSI
          volume_driver: cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver
          san_ip: ip_of_dell_storage
          san_login: grpadmin
          san_password: password
          eqlx_group_name: group-0
          eqlx_pool: default
Note
For more details about available configuration options, see http://docs.openstack.org/ocata/config-reference/block-storage/drivers/dell-equallogic-driver.html
Configuring cinder to use a NetApp appliance
To use a NetApp storage appliance back end, edit the /etc/openstack_deploy/openstack_user_config.yml file and configure each storage node that will use it.
Note
Ensure that the NAS team enables httpd.admin.access.
Add the netapp stanza under the cinder_backends stanza for each storage node:

cinder_backends:
  netapp:
The options in subsequent steps fit under the netapp stanza. The back end name is arbitrary and becomes a volume type within cinder.
Configure the storage family:
netapp_storage_family: STORAGE_FAMILY
Replace STORAGE_FAMILY with ontap_7mode for Data ONTAP operating in 7-mode or ontap_cluster for Data ONTAP operating as a cluster.

Configure the storage protocol:
netapp_storage_protocol: STORAGE_PROTOCOL
Replace STORAGE_PROTOCOL with iscsi for iSCSI or nfs for NFS.

For the NFS protocol, specify the location of the configuration file that lists the shares available to cinder:
nfs_shares_config: FILENAME_NFS_SHARES
Replace FILENAME_NFS_SHARES with the location of the share configuration file, for example, /etc/cinder/nfs_shares.

Configure the server:
netapp_server_hostname: SERVER_HOSTNAME
Replace SERVER_HOSTNAME with the hostnames of both NetApp controllers.

Configure the server API port:
netapp_server_port: PORT_NUMBER
Replace PORT_NUMBER with 80 for HTTP or 443 for HTTPS.

Configure the server credentials:
netapp_login: USER_NAME
netapp_password: PASSWORD
Replace USER_NAME and PASSWORD with the appropriate values.

Select the NetApp driver:
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
Configure the volume back end name:
volume_backend_name: BACKEND_NAME
Replace BACKEND_NAME with a value that provides a hint to the cinder scheduler, for example, NETAPP_iSCSI.

Ensure the openstack_user_config.yml configuration is accurate:

storage_hosts:
  Infra01:
    ip: 172.29.236.16
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        netapp:
          netapp_storage_family: ontap_7mode
          netapp_storage_protocol: nfs
          netapp_server_hostname: 111.222.333.444
          netapp_server_port: 80
          netapp_login: openstack_cinder
          netapp_password: password
          volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
          volume_backend_name: NETAPP_NFS
For netapp_server_hostname, specify the IP address of the Data ONTAP server. Specify iscsi or nfs for netapp_storage_protocol, depending on the configuration. Use 80 for HTTP or 443 for HTTPS for netapp_server_port.
The cinder-volume.yml playbook automatically installs the nfs-common package across the hosts, transitioning from an LVM to a NetApp back end.
Configuring cinder qos specs
Deployers may optionally define the variable cinder_qos_specs to create qos specs. This variable is a list of dictionaries that contain the options for each qos spec. cinder volume-types may be assigned to a qos spec by defining the key cinder_volume_types in the desired qos spec dictionary. For example:
cinder_qos_specs:
  - name: high-iops
    options:
      consumer: front-end
      read_iops_sec: 2000
      write_iops_sec: 2000
    cinder_volume_types:
      - volumes-1
      - volumes-2
  - name: low-iops
    options:
      consumer: front-end
      write_iops_sec: 100
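As a usage note, the consumer option controls where the limits are enforced. This is general cinder behavior rather than anything specific to these playbooks: valid values are front-end (enforced on the compute side), back-end (enforced by the storage back end), and both. For example, a hypothetical spec enforced by the storage back end:

  - name: backend-limited
    options:
      consumer: back-end
      write_iops_sec: 500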