.. _ceph-guide:

=============
Ceph in Kolla
=============

The out-of-the-box Ceph deployment requires 3 hosts with at least one block
device on each host that can be dedicated for sole use by Ceph. However, with
tweaks to the Ceph cluster you can deploy a "healthy" cluster with a single
host and a single block device.

Requirements
============

* A minimum of 3 hosts for a vanilla deploy
* A minimum of 1 block device per host

Preparation and Deployment
==========================

To prepare a disk for use as a
`Ceph OSD <http://docs.ceph.com/docs/master/man/8/ceph-osd/>`_ you must add a
special partition label to the disk. This partition label is how Kolla detects
the disks to format and bootstrap. Any disk with a matching partition label
will be reformatted, so use caution.

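Because any disk carrying that partition label will be reformatted, it can be
worth listing the labels already present on the host first. A minimal check,
assuming the lsblk utility is available on the storage host:

::

    # List block devices and their GPT partition labels; anything already
    # labelled KOLLA_CEPH_OSD_BOOTSTRAP will be picked up and reformatted.
    lsblk -o NAME,SIZE,PARTLABEL
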
To prepare an OSD as a storage drive, execute the following operations:

::

    # <WARNING ALL DATA ON $DISK will be LOST!>
    # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1

The following shows an example of using parted to configure /dev/sdb for use
with Kolla.

::

    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
    parted /dev/sdb print
    Model: VMware, VMware Virtual S (scsi)
    Disk /dev/sdb: 10.7GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Number  Start   End     Size    File system  Name                       Flags
     1      1049kB  10.7GB  10.7GB               KOLLA_CEPH_OSD_BOOTSTRAP

Edit the [storage] group in the inventory, which contains the hostnames of the
hosts that have the block devices you have prepared as shown above.

::

    [storage]
    controller
    compute1

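For an all-in-one evaluation deployment, the [storage] group would typically
contain only the local host. The entry below is illustrative; match it to how
the rest of your inventory addresses that host:

::

    [storage]
    # hypothetical single-node entry
    localhost       ansible_connection=local
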
Enable Ceph in /etc/kolla/globals.yml:

::

    enable_ceph: "yes"

RadosGW is optional; enable it in /etc/kolla/globals.yml:

::

    enable_ceph_rgw: "yes"

RGW requires a healthy cluster in order to be successfully deployed. On
initial start-up, RGW will create several pools. The first pool should be in
an operational state to proceed with the second one, and so on. So, in the
case of an all-in-one deployment, it is necessary to change the default number
of copies for the pools before deployment. Modify the file
/etc/kolla/config/ceph.conf and add the contents::

    [global]
    osd pool default size = 1
    osd pool default min size = 1

Finally deploy the Ceph-enabled OpenStack:

::

    kolla-ansible deploy -i path/to/inventory

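Once the deploy completes, an optional sanity check is to list the pools from
the monitor container; if RadosGW was enabled, its pools should appear
alongside the OpenStack ones:

::

    docker exec ceph_mon ceph osd lspools
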
Using a Cache Tier
==================

An optional
`cache tier <http://docs.ceph.com/docs/hammer/rados/operations/cache-tiering/>`_
can be deployed by formatting at least one cache device and enabling cache
tiering in the globals.yml configuration file.

To prepare an OSD as a cache device, execute the following operations:

::

    # <WARNING ALL DATA ON $DISK will be LOST!>
    # where $DISK is /dev/sdb or something similar
    parted $DISK -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_CACHE_BOOTSTRAP 1 -1

Enable the Ceph cache tier in /etc/kolla/globals.yml:

::

    enable_ceph: "yes"
    ceph_enable_cache: "yes"
    # Valid options are [ forward, none, writeback ]
    ceph_cache_mode: "writeback"

After this, run the playbooks as you normally would. For example:

::

    kolla-ansible deploy -i path/to/inventory

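After the run finishes, you can optionally confirm that the cache pools were
attached as tiers by inspecting the osdmap; the grep pattern below is just one
way to narrow the output:

::

    docker exec ceph_mon ceph osd dump | grep -E 'pool|tier|cache_mode'
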
Setting up an Erasure Coded Pool
================================

`Erasure code <http://docs.ceph.com/docs/hammer/rados/operations/erasure-code/>`_
is supported by Ceph. Kolla has the ability to set up your Ceph pools as
erasure coded pools. Due to technical limitations with Ceph, using erasure
coded pools as OpenStack uses them requires a cache tier. Additionally, you
must make the choice to use an erasure coded pool or a replicated pool (the
default) when you initially deploy. You cannot change this without completely
removing the pool and recreating it.

To enable erasure coded pools, add the following options to your
/etc/kolla/globals.yml configuration file:

::

    # A requirement for using the erasure-coded pools is you must setup a cache tier
    # Valid options are [ erasure, replicated ]
    ceph_pool_type: "erasure"
    # Optionally, you can change the profile
    #ceph_erasure_profile: "k=4 m=2 ruleset-failure-domain=host"

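If you are unsure what the profile resolves to, the erasure-code profiles
known to the running cluster can be inspected from the monitor container as an
optional check:

::

    docker exec ceph_mon ceph osd erasure-code-profile ls
    # 'default' is the profile name Ceph ships with; substitute your own if set
    docker exec ceph_mon ceph osd erasure-code-profile get default
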
Managing Ceph
=============

Check the Ceph status for more diagnostic information. The sample output below
indicates a healthy cluster:

::

    docker exec ceph_mon ceph -s
        cluster 5fba2fbc-551d-11e5-a8ce-01ef4c5cf93c
         health HEALTH_OK
         monmap e1: 1 mons at {controller=10.0.0.128:6789/0}
                election epoch 2, quorum 0 controller
         osdmap e18: 2 osds: 2 up, 2 in
          pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
                68676 kB used, 20390 MB / 20457 MB avail
                      64 active+clean

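If the status reports something other than HEALTH_OK, more detail is usually
available from the same monitor container, for example:

::

    docker exec ceph_mon ceph health detail
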
If Ceph is run in an all-in-one deployment or with fewer than three storage
nodes, further configuration is required. It is necessary to change the
default number of copies for the pool. The following example demonstrates how
to change the number of copies for the pool to 1:

::

    docker exec ceph_mon ceph osd pool set rbd size 1

All the pools must be modified if Glance, Nova, and Cinder have been deployed.
An example of modifying the pools to have 2 copies:

::

    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p} size 2; done

If using a cache tier, these changes must be made as well:

::

    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool set ${p}-cache size 2; done

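The resulting replication factor can be confirmed per pool, for example:

::

    for p in images vms volumes backups; do docker exec ceph_mon ceph osd pool get ${p} size; done
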
The default pool Ceph creates is named 'rbd'. It is safe to remove this pool:

::

    docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it