OSD Replication Factors, Journal Functions and Storage Tiers
============================================================

OSD Replication Factor
----------------------

==================  ===========================  =====================================
Replication Factor  Hosts per Replication Group  Maximum Replication Groups Supported
==================  ===========================  =====================================
2                   2                            4
3                   3                            3
==================  ===========================  =====================================
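
The replication factor is configured on the Ceph storage backend. As a hedged sketch (the backend name ``ceph-store`` and the exact parameter syntax are assumptions that may vary by release), setting a replication factor of 3 might look like this:

.. code-block:: none

   # List the configured storage backends to confirm the backend name.
   ~(keystone_admin)$ system storage-backend-list

   # Set a replication factor of 3 on the Ceph backend (illustrative syntax).
   ~(keystone_admin)$ system storage-backend-modify ceph-store replication=3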

You can add up to 16 object storage devices (OSDs) per storage host for data storage.

Space on the storage hosts must be configured at installation before you can unlock the hosts. You can change the configuration after installation by adding resources to existing storage hosts or adding more storage hosts. For more information, see the StarlingX Installation and Deployment Guide.
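
For example, a disk can be added to a storage host as an OSD with the ``system`` CLI, roughly as follows (the host name and disk UUID are placeholders; verify the syntax against your release):

.. code-block:: none

   # List the physical disks on the storage host to find the disk UUID.
   ~(keystone_admin)$ system host-disk-list storage-0

   # Add the disk as an OSD (the UUID shown is a placeholder).
   ~(keystone_admin)$ system host-stor-add storage-0 osd 89694799-0dd8-4532-8636-c0d8aabfe215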

Storage hosts can achieve faster data access using SSD-backed transaction journals (journal functions). NVMe-compatible SSDs are supported.

Journal Functions
-----------------

Each OSD on a storage host has an associated Ceph transaction journal, which tracks changes to be committed to disk for data storage and replication, and, if required, for data recovery. This is a full Ceph journal, containing both metadata and data. By default, it is collocated on the OSD, which typically uses slower but less expensive HDD-backed storage. For faster commits and improved reliability, you can use a dedicated solid-state drive (SSD) installed on the host and assigned as a journal function. NVMe-compatible SSDs are also supported. You can dedicate more than one SSD as a journal function.

.. note::

   You can also assign an SSD for use as an OSD, but you cannot assign the
   same SSD as a journal function.
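
For example, an SSD can be designated as a journal function roughly as follows (a sketch; the host name and disk UUID are placeholders):

.. code-block:: none

   # Assign an SSD on the host as a dedicated journal function
   # (the disk UUID is a placeholder for the SSD's UUID).
   ~(keystone_admin)$ system host-stor-add storage-0 journal fd6c0f0a-6a2e-4c89-bdfc-f1c2a1f6b4e5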

If a journal function is available, you can configure individual OSDs to use journals located on the journal function. Each journal is implemented as a partition. You can adjust the size and location of the journals.
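
For instance, an OSD can be created with its journal placed on the journal function and an explicit journal size, roughly as follows (the UUIDs are placeholders, and the ``--journal-location`` and ``--journal-size`` options are assumed to match your release of ``system host-stor-add``):

.. code-block:: none

   # Add an OSD whose journal lives on the dedicated journal function,
   # with a 2 GiB journal partition (UUIDs are placeholders).
   ~(keystone_admin)$ system host-stor-add storage-0 osd 9bb0cb55-7eba-426e-a1d3-aba002c7eebc \
       --journal-location e639f1a2-e71a-4f65-8246-5cd0662d966b --journal-size 2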

For OSDs implemented on rotational disks, the use of an SSD-backed journal function is strongly recommended. For OSDs implemented on SSDs, collocated journals can be used with no performance cost.

For more information, see :ref:`Storage Functions: OSDs and SSD-backed Journals <storage-functions-osds-and-ssd-backed-journals>`.

Storage Tiers
-------------

You can create different tiers of OSD storage to meet different container requirements. For example, to meet the needs of containers with frequent disk access, you can create a tier containing only high-performance OSDs. You can then associate new Persistent Volume Claims with this tier for use with the containers.
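
For example, assuming a storage class named ``gold`` has been created for a high-performance tier (the class name here is hypothetical), a Persistent Volume Claim might reference it as follows:

.. code-block:: none

   # Create a PVC bound to the storage class for the high-performance tier
   # ("gold" is a hypothetical class name created for that tier).
   ~(keystone_admin)$ cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: fast-claim
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 10Gi
     storageClassName: gold
   EOF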

By default, the system is configured with one tier, called the Storage Tier. This tier is created as part of adding the Ceph storage back-end, and uses the first OSD in each peer host of the first replication group.

You can add more tiers as required, limited only by the available hardware.

After adding a tier, you can assign OSDs to it. The OSD assignments must satisfy the replication requirements for the system. That is, in the replication group used to implement a tier, each peer host must contribute the same number of OSDs to the tier.
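
As a hedged sketch (the tier name is illustrative, the UUIDs are placeholders, and the ``--tier-uuid`` option is assumed to match your release), adding a tier and assigning an OSD to it might look like this:

.. code-block:: none

   # Create a new storage tier on the Ceph cluster.
   ~(keystone_admin)$ system storage-tier-add ceph_cluster gold

   # Confirm the tier and note its UUID.
   ~(keystone_admin)$ system storage-tier-list ceph_cluster

   # Assign an OSD on each peer host to the new tier (UUIDs are placeholders).
   ~(keystone_admin)$ system host-stor-add storage-0 osd c7cc08e6-ff18-4229-a79d-a04187de7b8d \
       --tier-uuid 220f17e2-8564-4f4d-8665-681f73d13dfb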

For more information on storage tiers, see :ref:`Add a Storage Tier Using the CLI <add-a-storage-tier-using-the-cli>`.