.. _cinder-guide:

===============
Cinder in Kolla
===============

Overview
========

Cinder can be deployed using Kolla and supports the following storage
backends:

* ceph
* hnas_iscsi
* hnas_nfs
* iscsi
* lvm
* nfs

LVM
===

When using the ``lvm`` backend, a volume group will need to be created on each
storage node. This can either be a real physical volume or a loopback mounted
file for development. Use ``pvcreate`` and ``vgcreate`` to create the volume
group. For example, with the devices ``/dev/sdb`` and ``/dev/sdc``:

.. warning::

   All data on ``/dev/sdb`` and ``/dev/sdc`` will be lost!

::

   pvcreate /dev/sdb /dev/sdc
   vgcreate cinder-volumes /dev/sdb /dev/sdc
|
|
During development, it may be desirable to use file backed block storage. It
|
|
is possible to use a file and mount it as a block device via the loopback
|
|
system. ::
|
|
|
|
|
|
free_device=$(losetup -f)
|
|
fallocate -l 20G /var/lib/cinder_data.img
|
|
losetup $free_device /var/lib/cinder_data.img
|
|
pvcreate $free_device
|
|
vgcreate cinder-volumes $free_device
|
|
|
|
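When a loopback-backed volume group is no longer needed, it can be torn down
again. A sketch, reusing the ``$free_device`` variable and the image path from
the example above::

   vgremove -f cinder-volumes
   losetup -d $free_device
   rm /var/lib/cinder_data.img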
Enable the ``lvm`` backend in ``/etc/kolla/globals.yml``::

   enable_cinder_backend_lvm: "yes"

.. note::

   There are currently issues using the LVM backend in a multi-controller
   setup; see `bug 1571211 <https://launchpad.net/bugs/1571211>`__ for more
   information.
NFS
===

To use the ``nfs`` backend, configure ``/etc/exports`` to contain the mount
where the volumes are to be stored::

   /kolla_nfs 192.168.5.0/24(rw,sync,no_root_squash)

In this example, ``/kolla_nfs`` is the directory on the storage node which will
be ``nfs`` mounted, ``192.168.5.0/24`` is the storage network, and
``rw,sync,no_root_squash`` makes the share read-write, synchronous, and
prevents remote root users from accessing all files.

Then start ``nfsd``::

   systemctl start nfs

On the deploy node, create ``/etc/kolla/config/nfs_shares`` with an entry for
each storage node::

   storage01:/kolla_nfs
   storage02:/kolla_nfs

Finally, enable the ``nfs`` backend in ``/etc/kolla/globals.yml``::

   enable_cinder_backend_nfs: "yes"
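Before deploying, it may help to confirm that the export is actually active on
the storage node. The standard ``exportfs`` and ``showmount`` tools can be used
for this::

   exportfs -ra
   showmount -e localhost

``showmount`` should list ``/kolla_nfs`` with the ``192.168.5.0/24`` network
from the example above.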
Validation
==========

Create a volume as follows::

   $ openstack volume create --size 1 steak_volume
   <bunch of stuff printed>
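Volume creation is asynchronous, so rather than re-running ``openstack volume
list`` by hand, the status can be polled. A small sketch using the CLI's
``-f value -c status`` output options::

   while [ "$(openstack volume show steak_volume -f value -c status)" = "creating" ]; do
       sleep 2
   done
   openstack volume show steak_volume -f value -c status

The final command prints ``available`` on success or ``error`` if volume
creation failed.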
|
Verify it is available. If it says "error" here something went wrong during
|
|
LVM creation of the volume. ::
|
|
|
|
$ openstack volume list
|
|
+--------------------------------------+--------------+-----------+------+-------------+
|
|
| ID | Display Name | Status | Size | Attached to |
|
|
+--------------------------------------+--------------+-----------+------+-------------+
|
|
| 0069c17e-8a60-445a-b7f0-383a8b89f87e | steak_volume | available | 1 | |
|
|
+--------------------------------------+--------------+-----------+------+-------------+
|
|
|
|
Attach the volume to a server using:
|
|
|
|
::
|
|
|
|
openstack server add volume steak_server 0069c17e-8a60-445a-b7f0-383a8b89f87e
|
|
|
|
Check the console log added the disk:
|
|
|
|
::
|
|
|
|
openstack console log show steak_server
|
|
|
|
A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
|
|
If the disk stays in the available state, something went wrong during the
|
|
iSCSI mounting of the volume to the guest VM.
|
|
|
|
Cinder LVM2 back end with iSCSI
===============================

As of the Newton-1 milestone, Kolla supports LVM2 as a Cinder back end. This is
accomplished by introducing two new containers, ``tgtd`` and ``iscsid``. The
``tgtd`` container serves as a bridge between the cinder-volume process and the
server hosting the Logical Volume Group (LVG). The ``iscsid`` container serves
as a bridge between the nova-compute process and the server hosting the LVG.

In order to use Cinder's LVM back end, an LVG named ``cinder-volumes`` must
exist on the server, and the following parameter must be specified in
``globals.yml``::

   enable_cinder_backend_lvm: "yes"
For Ubuntu and LVM2/iSCSI
~~~~~~~~~~~~~~~~~~~~~~~~~

The ``iscsid`` process uses configfs, which is normally mounted at
``/sys/kernel/config``, to store information about discovered targets. On
CentOS/RHEL type systems this special file system is mounted automatically,
which is not the case on Debian/Ubuntu. Since the ``iscsid`` container runs on
every nova compute node, the following steps must be completed on every Ubuntu
server targeted for the nova compute role.

- Add the ``configfs`` module to ``/etc/modules``.

- Rebuild the initramfs using the ``update-initramfs -u`` command.

- Stop the ``open-iscsi`` system service, as it conflicts with the ``iscsid``
  container. On Ubuntu 16.04 (systemd)::

     systemctl stop open-iscsi; systemctl stop iscsid

- Make sure configfs gets mounted during the server boot up process. There are
  multiple ways to accomplish this; one example is to add the mount command to
  ``/etc/rc.local``::

     mount -t configfs none /sys/kernel/config
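To verify the result after a reboot, check that the module is listed and the
file system is mounted; for example::

   grep configfs /etc/modules
   mount | grep /sys/kernel/config

Both commands should produce output on a correctly prepared compute node.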
Cinder back end with external iSCSI storage
===========================================

In order to use an external storage system (such as one from EMC or NetApp),
the following parameter must be specified in ``globals.yml``::

   enable_cinder_backend_iscsi: "yes"

Also, ``enable_cinder_backend_lvm`` should be set to "no" in this case.
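Putting the two settings together, the relevant ``globals.yml`` fragment for an
external iSCSI back end is::

   enable_cinder_backend_iscsi: "yes"
   enable_cinder_backend_lvm: "no"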