Update Cinder guide

Co-Authored-By: Serguei Bezverkhi <sbezverk@cisco.com>
partially-implements: blueprint documentation-rework
Depends-On: I2f709f8589fdbf62e3d0b265452fd58f413bee65
Change-Id: I19fe617b52b5b4a013f0a589ca21c0c932be4ad2
Swapnil Kulkarni (coolsvap) 2016-05-17 08:09:57 +05:30
parent d87b4f63a2
commit 42d3e77e5a


@@ -9,10 +9,10 @@ Overview
Currently Kolla can deploy the following cinder services:
- cinder-api
- cinder-scheduler
- cinder-backup
- cinder-volume
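Deployment of these services is toggled from ``globals.yml``; a minimal
sketch, assuming Kolla's standard ``enable_cinder`` flag: ::

enable_cinder: "yes"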
The cinder implementation defaults to using LVM storage. The default
implementation requires a volume group be set up. This can either be
@@ -20,8 +20,8 @@ a real physical volume or a loopback mounted file for development.
Create a Volume Group
=====================
Use ``pvcreate`` and ``vgcreate`` to create the volume group. For example
with the devices ``/dev/sdb`` and ``/dev/sdc``:
::
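# The command lines are elided by this diff context; a minimal sketch,
# assuming Kolla's default volume group name "cinder-volumes":
pvcreate /dev/sdb /dev/sdc
vgcreate cinder-volumes /dev/sdb /dev/sdc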
@@ -32,9 +32,7 @@ the devices /dev/sdb and /dev/sdc:
During development, it may be desirable to use file backed block storage. It
is possible to use a file and mount it as a block device via the loopback
system. ::
mknod /dev/loop2 b 7 2
dd if=/dev/zero of=/var/lib/cinder_data.img bs=1G count=20
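# The remaining steps are elided by this diff; presumably the image is
# attached to the loop device and initialized as the physical volume
# (the volume group name "cinder-volumes" is assumed from Kolla's defaults):
losetup /dev/loop2 /var/lib/cinder_data.img
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2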
@@ -53,9 +51,7 @@ Create a volume as follows:
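The create command itself is elided by the diff context; a likely form, with
the volume name ``steak_volume`` assumed to match the naming used later in
this guide: ::

openstack volume create --size 1 steak_volume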
<bunch of stuff printed>
Verify it is available. If it says "error" here, something went wrong during
LVM creation of the volume. ::
$ openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
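Attach the volume to a server; the attach step is elided from this diff, but a
minimal sketch, reusing the names assumed above, would be: ::

openstack server add volume steak_server steak_volume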
@@ -76,7 +72,7 @@ Check that the console log added the disk:
openstack console log show steak_server
A ``/dev/vdb`` should appear in the console log, at least when booting cirros.
If the disk stays in the available state, something went wrong during the
iSCSI mounting of the volume to the guest VM.
@@ -84,52 +80,34 @@ Cinder LVM2 backend with iSCSI
==============================
As of the Newton-1 milestone, Kolla supports LVM2 as a cinder backend. It is
accomplished by introducing two new containers ``tgtd`` and ``iscsid``.
The ``tgtd`` container serves as a bridge between the ``cinder-volume`` process
and a server hosting Logical Volume Groups (LVG). The ``iscsid`` container
serves as a bridge between the ``nova-compute`` process and the server hosting
the LVG.
In order to use iSCSI as Cinder's backend, these two parameters must be
specified in ``globals.yml``. ::
cinder_iscsi_ip_address: "X.X.X.X"
cinder_volume_group: << lvg_name >>
Where:
- ``X.X.X.X`` - is the IP address of the server hosting the LVG
- ``lvg_name`` - is the name of the LVG on that server.
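A filled-in example, with illustrative values (``cinder-volumes`` is Kolla's
default volume group name, assumed here): ::

cinder_iscsi_ip_address: "10.0.0.10"
cinder_volume_group: "cinder-volumes"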
NOTE: For Ubuntu and LVM2/iSCSI
The ``iscsid`` process uses configfs, which is normally mounted at
``/sys/kernel/config``, to store discovered targets information. On
CentOS/RHEL type systems this special file system gets mounted automatically,
which is not the case on Debian/Ubuntu. Since the ``iscsid`` container runs on
every nova compute node, the following steps must be completed on every Ubuntu
server targeted for the nova compute role.
- Add the configfs module to ``/etc/modules``
- Rebuild the initramfs using the ``update-initramfs -u`` command
- Make sure configfs gets mounted during the server boot up process. There are
multiple ways to accomplish it; one example is adding
``mount -t configfs configfs /sys/kernel/config`` to ``/etc/rc.local``
(see the sketch after this list).