Ritchie, Frank (fr801x) b2dc921b05 Use upstream rook yamls as base
This is a scaled-back version of:

https://review.opendev.org/c/airship/treasuremap/+/792837

This PS uses kustomize patches to modify the upstream yamls and
produce the base resources, replacing the current practice of
manually editing the upstream yamls. The output of
"kubectl kustomize ." is unchanged from before.
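
For context, the patching approach looks roughly like the
kustomization below; the directory layout and patch file name are
illustrative, not necessarily what this PS uses:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Reference the unmodified upstream Rook yaml as-is...
resources:
  - upstream/pool.yaml
# ...and layer our changes on top as patches instead of editing the file.
patchesStrategicMerge:
  - patches/pool.yaml

Rendering this with "kubectl kustomize ." yields the same base
resources we previously maintained by hand.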

Kpt can be used for syncing to upstream once this issue is resolved:

https://github.com/GoogleContainerTools/kpt/issues/517

Change-Id: Idd7a3f4a61a9dae9ec82dd0a604eb371bdbb4ab4
2021-06-10 09:32:55 -05:00

#################################################################################################################
# Create a Ceph pool with settings for replication in production environments. A minimum of 3 OSDs on
# different hosts are required in this example.
# kubectl create -f pool.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph # namespace:cluster
spec:
  # The failure domain will spread the replicas of the data across different failure zones
  failureDomain: host
  # For a pool based on raw copies, specify the number of copies. A size of 1 indicates no redundancy.
  replicated:
    size: 3
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # The number for replicas per failure domain, the value must be a divisor of the replica count. If specified, the most common value is 2 for stretch clusters, where the replica count would be 4.
    # replicasPerFailureDomain: 2
    # The name of the failure domain to place further down replicas
    # subFailureDomain: host
  # Ceph CRUSH root location of the rule
  # For reference: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#types-and-buckets
  #crushRoot: my-root
  # The Ceph CRUSH device class associated with the CRUSH replicated rule
  # For reference: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#device-classes
  #deviceClass: my-class
  # Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
  # For reference: https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics
  # enableRBDStats: true
  # Set any property on a given pool
  # see https://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
  parameters:
    # Inline compression mode for the data pool
    # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
    compression_mode: none
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #target_size_ratio: ".5"
  mirroring:
    enabled: false
    # mirroring mode: pool level or per image
    # for more details see: https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring
    mode: image
    # specify the schedule(s) on which snapshots should be taken
    # snapshotSchedules:
    #   - interval: 24h # daily snapshots
    #     startTime: 14:00:00-05:00
  # reports pool mirroring status if enabled
  statusCheck:
    mirror:
      disabled: false
      interval: 60s
  # quota in bytes and/or objects, default value is 0 (unlimited)
  # see https://docs.ceph.com/en/latest/rados/operations/pools/#set-pool-quotas
  # quotas:
  #   maxSize: "10Gi" # valid suffixes include k, M, G, T, P, E, Ki, Mi, Gi, Ti, Pi, Ei
  #   maxObjects: 1000000000 # 1 billion objects
  # A key/value list of annotations
  annotations:
    # key: value
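
As an illustration of the patching approach described in the commit
message, a strategic-merge patch that a kustomization could apply on
top of this upstream CephBlockPool might look like the following; the
overridden values here are hypothetical:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  # Hypothetical site-specific overrides; only these fields are changed.
  deviceClass: ssd
  parameters:
    compression_mode: aggressive

Kustomize matches the patch to the upstream resource by
group/version/kind, name, and namespace, and merges only the fields
listed here, so the rest of the upstream yaml stays untouched.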