diff --git a/specs/juno/affinity-antiaffinity-filter.rst b/specs/juno/affinity-antiaffinity-filter.rst
index 18336dbd..de7a3c5c 100644
--- a/specs/juno/affinity-antiaffinity-filter.rst
+++ b/specs/juno/affinity-antiaffinity-filter.rst
@@ -48,6 +48,9 @@ this new filter can be useful.
 volumes as close to each other as possible, ideally on the same storage backend, for the sake of performance.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/cinder-storwize-driver-qos.rst b/specs/juno/cinder-storwize-driver-qos.rst
index ad19ac8a..47176870 100644
--- a/specs/juno/cinder-storwize-driver-qos.rst
+++ b/specs/juno/cinder-storwize-driver-qos.rst
@@ -22,6 +22,9 @@ the amount of I/O for a specific volume.
 QoS has been implemented for some cinder storage drivers, and it is feasible for the storwize driver to add this feature as well.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/configurable-ssh-host-key-policy.rst b/specs/juno/configurable-ssh-host-key-policy.rst
index 54da3792..243b965f 100644
--- a/specs/juno/configurable-ssh-host-key-policy.rst
+++ b/specs/juno/configurable-ssh-host-key-policy.rst
@@ -30,6 +30,8 @@ If a MITM attack were to happen, the users data could be compromised.
 In a worst case scenario, users could be tricked into attaching or even booting spoofed volumes containing malicious code.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/consistency-groups.rst b/specs/juno/consistency-groups.rst
index c72f6a6c..bd44bc50 100644
--- a/specs/juno/consistency-groups.rst
+++ b/specs/juno/consistency-groups.rst
@@ -57,6 +57,9 @@ Assumptions:
 only rely on the storage level quiesce in phase 1 because the freeze feature mentioned above is not ready yet.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/datera-driver.rst b/specs/juno/datera-driver.rst
index eff6211e..58e19bbd 100644
--- a/specs/juno/datera-driver.rst
+++ b/specs/juno/datera-driver.rst
@@ -17,6 +17,9 @@ Problem description
 Integration for Datera storage is not available in OpenStack.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/debug-translation-removal.rst b/specs/juno/debug-translation-removal.rst
index a510c243..ed0e3ef9 100644
--- a/specs/juno/debug-translation-removal.rst
+++ b/specs/juno/debug-translation-removal.rst
@@ -27,6 +27,8 @@ been decided that this is not the case. In order to bring Cinder in line with
 other OpenStack components we need to remove translation from debug level log messages.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/deprecate_v1_api.rst b/specs/juno/deprecate_v1_api.rst
index d61461e4..f52d26b1 100644
--- a/specs/juno/deprecate_v1_api.rst
+++ b/specs/juno/deprecate_v1_api.rst
@@ -24,6 +24,9 @@ assumed there are many clients out there still supporting v1, as well as many
 deployed clouds still using v1 that would need to make some changes to ease the switch.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/emc-vmax-driver-juno-update.rst b/specs/juno/emc-vmax-driver-juno-update.rst
index 7dbc28b2..85194cd1 100644
--- a/specs/juno/emc-vmax-driver-juno-update.rst
+++ b/specs/juno/emc-vmax-driver-juno-update.rst
@@ -26,6 +26,9 @@ features will be added for VMAX. In previous release, masking view, storage
 group, and initiator group need to be created ahead of time. In Juno, this will be automated.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/emc-vnx-direct-driver-juno-update.rst b/specs/juno/emc-vnx-direct-driver-juno-update.rst
index e2320be1..1c160e22 100644
--- a/specs/juno/emc-vnx-direct-driver-juno-update.rst
+++ b/specs/juno/emc-vnx-direct-driver-juno-update.rst
@@ -37,6 +37,8 @@ The following new functionalities will be added in this update:
 * Storage-Assisted Volume Migration
 * SP Toggle for HA

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/hyper-v-smbfs-volume-driver.rst b/specs/juno/hyper-v-smbfs-volume-driver.rst
index 0e93ed03..42d02cdc 100644
--- a/specs/juno/hyper-v-smbfs-volume-driver.rst
+++ b/specs/juno/hyper-v-smbfs-volume-driver.rst
@@ -35,6 +35,9 @@ It will support using any type of SMB share, including:
 - vendor specific hardware exporting SMB shares.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/i18n-enablement.rst b/specs/juno/i18n-enablement.rst
index 99f3ec1c..62e82f4b 100644
--- a/specs/juno/i18n-enablement.rst
+++ b/specs/juno/i18n-enablement.rst
@@ -31,6 +31,8 @@ API messages to also be returned in the language chosen by the user. This
 functionality is important to support the use of OpenStack by the international community.

+Use Cases
+=========
 Problem description
 ===================
diff --git a/specs/juno/limit-volume-copy-bandwidth.rst b/specs/juno/limit-volume-copy-bandwidth.rst
index 6c900eed..3609f41d 100644
--- a/specs/juno/limit-volume-copy-bandwidth.rst
+++ b/specs/juno/limit-volume-copy-bandwidth.rst
@@ -31,6 +31,8 @@ usable. e.g. When instances directly access to the storage and doesn't go
 through I/O scheduler of cinder control node, ionice cannot control I/O priority and instances access may slow down.
+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/oracle-zfssa-cinder-driver.rst b/specs/juno/oracle-zfssa-cinder-driver.rst
index ffc3385a..86d0e3b2 100644
--- a/specs/juno/oracle-zfssa-cinder-driver.rst
+++ b/specs/juno/oracle-zfssa-cinder-driver.rst
@@ -24,6 +24,9 @@ Problem description
 Currently there is no support for ZFS Storage Appliance product line from Openstack Cinder.

+Use Cases
+=========
+
 Proposed change
 ===============
 iSCSI driver uses REST API to communicate out of band with the storage
diff --git a/specs/juno/pool-aware-cinder-scheduler.rst b/specs/juno/pool-aware-cinder-scheduler.rst
index f7b18855..98fe7a3e 100644
--- a/specs/juno/pool-aware-cinder-scheduler.rst
+++ b/specs/juno/pool-aware-cinder-scheduler.rst
@@ -44,6 +44,8 @@ Therefore it is important to extend Cinder so that it is aware of storage
 pools within backend and also use them as finest granularity for resource placement.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/pure-iscsi-volume-driver.rst b/specs/juno/pure-iscsi-volume-driver.rst
index 589cf41d..466bf31a 100644
--- a/specs/juno/pure-iscsi-volume-driver.rst
+++ b/specs/juno/pure-iscsi-volume-driver.rst
@@ -29,6 +29,8 @@ Problem description
 Currently, the Pure Storage FlashArray cannot be used a block storage backend in an OpenStack environment.
+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/restblock-driver.rst b/specs/juno/restblock-driver.rst
index bce0b5e0..71b26d93 100644
--- a/specs/juno/restblock-driver.rst
+++ b/specs/juno/restblock-driver.rst
@@ -45,6 +45,9 @@ Also, we have plans for a few features that should come shortly:
 * Native snapshot management

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/smbfs-volume-driver.rst b/specs/juno/smbfs-volume-driver.rst
index 1c5a1baf..a3553c01 100644
--- a/specs/juno/smbfs-volume-driver.rst
+++ b/specs/juno/smbfs-volume-driver.rst
@@ -36,6 +36,9 @@ It will support using any type of SMB share, including:
 - vendor specific hardware exporting SMB shares.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/support-GPFS-nas-ibmnas-driver.rst b/specs/juno/support-GPFS-nas-ibmnas-driver.rst
index 94c6f909..6a57b70b 100644
--- a/specs/juno/support-GPFS-nas-ibmnas-driver.rst
+++ b/specs/juno/support-GPFS-nas-ibmnas-driver.rst
@@ -24,6 +24,8 @@ exports provided from a gpfs server.
 * Lacking this capability will limit the end users from using remote gpfs NAS servers as a backend in OpenStack environment.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/support-reset-state-for-backup.rst b/specs/juno/support-reset-state-for-backup.rst
index 6ada391c..944664bf 100644
--- a/specs/juno/support-reset-state-for-backup.rst
+++ b/specs/juno/support-reset-state-for-backup.rst
@@ -47,6 +47,9 @@ backup reset state API.
 2. Resetting status from creating/restoring to error
 Directly change the backup status to 'error' without restart cinder-backup.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/support-volume-backup-for-qcow2.rst b/specs/juno/support-volume-backup-for-qcow2.rst
index 0650f286..8acd3789 100644
--- a/specs/juno/support-volume-backup-for-qcow2.rst
+++ b/specs/juno/support-volume-backup-for-qcow2.rst
@@ -22,6 +22,9 @@ Currently, cinder-backup doesn't support qcow2 format disk because the backup
 code assumes the source volume is a raw volume. The destination (i.e. swift, rbd) should absolutely remain universal across all volume back-ends.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/support-volume-num-weighter.rst b/specs/juno/support-volume-num-weighter.rst
index a1baecdc..ecbdfe41 100644
--- a/specs/juno/support-volume-num-weighter.rst
+++ b/specs/juno/support-volume-num-weighter.rst
@@ -42,6 +42,8 @@ for these volumes----three on volume-backend A, three on volume-backend-B.
 So that we can make full use of all volume-backends' IO capabilities to help improve volume-backends' IO balance and volumes' IO performance.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/juno/task-log.rst b/specs/juno/task-log.rst
index a24f06ff..f1821013 100644
--- a/specs/juno/task-log.rst
+++ b/specs/juno/task-log.rst
@@ -33,6 +33,9 @@ first place).
 .. _engine: http://docs.openstack.org/developer/taskflow/engines.html

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/united-policy.json-in-cinder.rst b/specs/juno/united-policy.json-in-cinder.rst
index ed2574ac..8a75a46c 100644
--- a/specs/juno/united-policy.json-in-cinder.rst
+++ b/specs/juno/united-policy.json-in-cinder.rst
@@ -22,6 +22,9 @@ Currently, there is two policy.json files in cinder. One for cinder code,
 one for unit test code. It's not convenient for the developer and easy to miss one.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/vmdk-backup.rst b/specs/juno/vmdk-backup.rst
index fd2d21d1..07b573f1 100644
--- a/specs/juno/vmdk-backup.rst
+++ b/specs/juno/vmdk-backup.rst
@@ -33,6 +33,9 @@ for the ``vmdk`` protocol and hence the attach\\detach fails for volumes
 created by the VMDK driver. This blueprint proposes adding support for ``backup-create``\\ ``backup-restore`` for these volumes.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/juno/volume-replication.rst b/specs/juno/volume-replication.rst
index 0c5a60c9..d5f691dc 100644
--- a/specs/juno/volume-replication.rst
+++ b/specs/juno/volume-replication.rst
@@ -26,6 +26,9 @@ While this blueprint focuses on volume replication, a related blueprint
 focuses on consistency groups, and replication would be extended to support it.

+Use Cases
+=========
+
 Problem description
 ===================
diff --git a/specs/juno/xtremio_cinder_volume_driver.rst b/specs/juno/xtremio_cinder_volume_driver.rst
index 4f77d635..9d61652e 100644
--- a/specs/juno/xtremio_cinder_volume_driver.rst
+++ b/specs/juno/xtremio_cinder_volume_driver.rst
@@ -55,6 +55,9 @@ The following diagram shows the command and data paths.
 +----------------+ +-----------------+

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/abc-volume-drivers.rst b/specs/kilo/abc-volume-drivers.rst
index 26831b1c..2bea28a4 100644
--- a/specs/kilo/abc-volume-drivers.rst
+++ b/specs/kilo/abc-volume-drivers.rst
@@ -25,6 +25,8 @@ the outside (manager layer) it is not visible which functionality a driver
 implements. So the only way to discover that is to try to call a function of a feature set and to see if it raises a ``NotImplementedError``.
+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/add-fibre-channel-support-to-netapp-drivers.rst b/specs/kilo/add-fibre-channel-support-to-netapp-drivers.rst
index 42fcdb20..2841e39a 100644
--- a/specs/kilo/add-fibre-channel-support-to-netapp-drivers.rst
+++ b/specs/kilo/add-fibre-channel-support-to-netapp-drivers.rst
@@ -26,6 +26,9 @@ Other vendors have moved common, protocol-agnostic driver code into library
 modules and called those from the actual driver modules, which then become very thin indirection layers. NetApp will take this approach as well.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/assisted_snapshot_improvements.rst b/specs/kilo/assisted_snapshot_improvements.rst
index 61b0fb0b..6fa939a1 100644
--- a/specs/kilo/assisted_snapshot_improvements.rst
+++ b/specs/kilo/assisted_snapshot_improvements.rst
@@ -30,6 +30,8 @@ This work will also assist in supporting proper transitions between different
 phases of the snapshot create/delete process as we move toward a state machine in Kilo.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/backup-notification.rst b/specs/kilo/backup-notification.rst
index 826de394..e3c5f912 100644
--- a/specs/kilo/backup-notification.rst
+++ b/specs/kilo/backup-notification.rst
@@ -21,6 +21,9 @@ Cinder is supposed to send the notifications to ceilometer to report the
 resource usage status. This notification support has been implemented for the volume and the volume snapshot, but not the backup.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/chiscsi-iscsi-helper.rst b/specs/kilo/chiscsi-iscsi-helper.rst
index 4845240a..ef8676ec 100644
--- a/specs/kilo/chiscsi-iscsi-helper.rst
+++ b/specs/kilo/chiscsi-iscsi-helper.rst
@@ -30,6 +30,9 @@ chiscsi target is not currently supported by openstack
 * Manual intervention is currently required to export volumes, as cinder does not understand chiscsi target implementation.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/cinder-objects.rst b/specs/kilo/cinder-objects.rst
index c7b9b228..3f6296d4 100644
--- a/specs/kilo/cinder-objects.rst
+++ b/specs/kilo/cinder-objects.rst
@@ -31,6 +31,8 @@ There are a few problems that exists today in cinder:
 * There are multiple places where database calls are made in cinder, and this could result in race conditions.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/consistency-groups-kilo-update.rst b/specs/kilo/consistency-groups-kilo-update.rst
index 4918200e..fbc0d699 100644
--- a/specs/kilo/consistency-groups-kilo-update.rst
+++ b/specs/kilo/consistency-groups-kilo-update.rst
@@ -38,6 +38,9 @@ Problem description
 Cinder database. There's a limitation to this approach, however, because the size of the column is fixed.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/create-states.rst b/specs/kilo/create-states.rst
index c2426191..59b2e17d 100644
--- a/specs/kilo/create-states.rst
+++ b/specs/kilo/create-states.rst
@@ -67,6 +67,9 @@ and lock acquisition techniques will likely not end in a correct solution.
 We will have to explore how to do this in a way that is *piecemeal* but also does not destabilize cinder more.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/database-purge.rst b/specs/kilo/database-purge.rst
index fc1d152b..5bb43d3c 100644
--- a/specs/kilo/database-purge.rst
+++ b/specs/kilo/database-purge.rst
@@ -28,7 +28,7 @@ for years and years. To date, there is no "mechanism" to programatically purge
 the deleted data. The archive rows feature doesn't solve this.

 Use Cases
-----------
+=========

 Operators should have the ability to purge deleted rows, possibily on a schedule (cronjob) or as needed (Before an upgrade, prior to maintenance)
diff --git a/specs/kilo/db-archiving.rst b/specs/kilo/db-archiving.rst
index 00960c9a..e8d65d27 100644
--- a/specs/kilo/db-archiving.rst
+++ b/specs/kilo/db-archiving.rst
@@ -28,6 +28,8 @@ performance and maintaining:
 * Storage usage utilization (low priority)

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/db-volume-filtering.rst b/specs/kilo/db-volume-filtering.rst
index d0a7dd19..c7f0054a 100644
--- a/specs/kilo/db-volume-filtering.rst
+++ b/specs/kilo/db-volume-filtering.rst
@@ -34,6 +34,8 @@ support additional filtering.
 The purpose of this blueprint is to make the filtering support consistent across these APIs.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/db2-database.rst b/specs/kilo/db2-database.rst
index 20d39c83..1e8f7796 100644
--- a/specs/kilo/db2-database.rst
+++ b/specs/kilo/db2-database.rst
@@ -32,6 +32,8 @@ Problem description
 since the majority of core projects support DB2 but Cinder does not yet.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/driver-private-data.rst b/specs/kilo/driver-private-data.rst
index b189ffb8..498f4d4c 100644
--- a/specs/kilo/driver-private-data.rst
+++ b/specs/kilo/driver-private-data.rst
@@ -30,6 +30,8 @@ backends have this capability.
 A use case for this is storing target CHAP secrets so iSCSI initiators can do CHAP authentication against targets.
+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/extract-brick.rst b/specs/kilo/extract-brick.rst
index 871859bf..08b35863 100644
--- a/specs/kilo/extract-brick.rst
+++ b/specs/kilo/extract-brick.rst
@@ -24,6 +24,8 @@ it was meant to be a standalone library that Cinder, Nova and any other
 project in OpenStack could use. Currently brick lives in a directory inside of Cinder, which means that only Cinder can use it.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/filtering-weighing-with-driver-supplied-functions.rst b/specs/kilo/filtering-weighing-with-driver-supplied-functions.rst
index c140f35e..ecd69e73 100644
--- a/specs/kilo/filtering-weighing-with-driver-supplied-functions.rst
+++ b/specs/kilo/filtering-weighing-with-driver-supplied-functions.rst
@@ -39,6 +39,8 @@ actually used. There is also a maximum volume size of 500 GB. These details
 can be highly variable between vendors and even within different backend arrays from the same vendor.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/generic-volume-migration.rst b/specs/kilo/generic-volume-migration.rst
index 8fd1bf50..3e35baf3 100644
--- a/specs/kilo/generic-volume-migration.rst
+++ b/specs/kilo/generic-volume-migration.rst
@@ -29,6 +29,9 @@ local cinder-volume instance. This is technically not necessary for local
 volumes and also prevents drivers such as Ceph from participating in volume migration operations.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/get-vol-type-extra-specs.rst b/specs/kilo/get-vol-type-extra-specs.rst
index e8027c99..91e2fccd 100755
--- a/specs/kilo/get-vol-type-extra-specs.rst
+++ b/specs/kilo/get-vol-type-extra-specs.rst
@@ -25,6 +25,9 @@ values are. Having the ability to obtain the permissible volume type extra
 specs while creating or editing the extra specs will make this task easier and more user-friendly.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/huawei-dsware-driver.rst b/specs/kilo/huawei-dsware-driver.rst
index c70f464b..d2fc768c 100644
--- a/specs/kilo/huawei-dsware-driver.rst
+++ b/specs/kilo/huawei-dsware-driver.rst
@@ -25,6 +25,8 @@ Problem description
 Currently, user can't access Huawei Dsware by Openstack Cinder.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/huawei-sdshypervisor-driver.rst b/specs/kilo/huawei-sdshypervisor-driver.rst
index 7a3768dc..dcf2f59b 100644
--- a/specs/kilo/huawei-sdshypervisor-driver.rst
+++ b/specs/kilo/huawei-sdshypervisor-driver.rst
@@ -49,6 +49,8 @@ Problem description
 Currently, user can't access Huawei SDShypervisor by Openstack Cinder.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/incremental-backup.rst b/specs/kilo/incremental-backup.rst
index 3061aec6..b199f340 100644
--- a/specs/kilo/incremental-backup.rst
+++ b/specs/kilo/incremental-backup.rst
@@ -21,6 +21,9 @@ entire volumes during backups will be resource intensive and do not scale
 well for larger deployments. This specification discusses implementation of incremental backup feature in detail.

+Use Cases
+=========
+
 Proposed change
 ================
 Cinder backup API, by default uses Swift as its backend. When a volume is
diff --git a/specs/kilo/iscsi-alternative-portal.rst b/specs/kilo/iscsi-alternative-portal.rst
index d92e27bb..6278d1b2 100644
--- a/specs/kilo/iscsi-alternative-portal.rst
+++ b/specs/kilo/iscsi-alternative-portal.rst
@@ -23,6 +23,9 @@ attach/detach will fail even though the other portal addresses is reachable.
 To enable nova-compute to fall-back to alternative portal addresses, cinder should tell the alternative portal addresses to nova.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/iscsi-multipath-enhancement.rst b/specs/kilo/iscsi-multipath-enhancement.rst
index 5506640b..9ca7ef66 100644
--- a/specs/kilo/iscsi-multipath-enhancement.rst
+++ b/specs/kilo/iscsi-multipath-enhancement.rst
@@ -37,6 +37,9 @@ portal is unaccessible due to network trouble.
 that the session to the main portal can be re-established when the network is recovered.)

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/limit-volume-copy-bps-per-backend.rst b/specs/kilo/limit-volume-copy-bps-per-backend.rst
index 47c7e960..686d2ccf 100644
--- a/specs/kilo/limit-volume-copy-bps-per-backend.rst
+++ b/specs/kilo/limit-volume-copy-bps-per-backend.rst
@@ -29,6 +29,8 @@ The current global config has some issues.
 consumed. From the viewpoint of QoS (to keep bandwidth for access from instances), we should limit total volume copy bandwidth per backend.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/linux-systemz.rst b/specs/kilo/linux-systemz.rst
index 9fa80164..e1357296 100644
--- a/specs/kilo/linux-systemz.rst
+++ b/specs/kilo/linux-systemz.rst
@@ -39,6 +39,9 @@ respects:
 * vHBAs may be turned online, or offline. Offline vHBAs need to be ignored.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/multi-attach-volume.rst b/specs/kilo/multi-attach-volume.rst
index bb58f7a4..ad483aea 100644
--- a/specs/kilo/multi-attach-volume.rst
+++ b/specs/kilo/multi-attach-volume.rst
@@ -28,7 +28,7 @@ that assumes the limitation of a single volume to a single instance.
 see nova/volume/cinder.py: check_attach()

 Use Cases
----------
+=========
 Allow users to share volumes between multiple guests using either read-write or read-only attachments.
 Clustered applications with two nodes where one is active and one is passive.
 Both
diff --git a/specs/kilo/nfs-backup.rst b/specs/kilo/nfs-backup.rst
index 9eb96db1..d9a52680 100644
--- a/specs/kilo/nfs-backup.rst
+++ b/specs/kilo/nfs-backup.rst
@@ -36,6 +36,8 @@ Problem description
 by the POSIX filesystem driver (currently in review, see [3] and [4]).

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/nfs-snapshots.rst b/specs/kilo/nfs-snapshots.rst
index a6b5b65f..a57b071b 100644
--- a/specs/kilo/nfs-snapshots.rst
+++ b/specs/kilo/nfs-snapshots.rst
@@ -24,6 +24,9 @@ driver.
 * delete snapshot
 * clone from snapshot

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/optimze-rbd-copy-volume-to-image.rst b/specs/kilo/optimze-rbd-copy-volume-to-image.rst
index 74de1bcf..4c820817 100644
--- a/specs/kilo/optimze-rbd-copy-volume-to-image.rst
+++ b/specs/kilo/optimze-rbd-copy-volume-to-image.rst
@@ -44,6 +44,9 @@ Benefits:
 * Reduce the IO pressure on cinder-volume host when doing the copy_volume_to_image operation.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/over-subscription-in-thin-provisioning.rst b/specs/kilo/over-subscription-in-thin-provisioning.rst
index b6d7639d..d0bc28f2 100644
--- a/specs/kilo/over-subscription-in-thin-provisioning.rst
+++ b/specs/kilo/over-subscription-in-thin-provisioning.rst
@@ -92,6 +92,8 @@ subscription ratio is 2.0. If no volumes have been provisioned yet, the
 apparent_available_capacity is 100 x 2.0 = 200. If 50G volumes have already been provisioned, the apparent_available_capacity is 200 - 50 = 150.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/private-volume-types.rst b/specs/kilo/private-volume-types.rst
index 438a6d31..e80b9a4c 100644
--- a/specs/kilo/private-volume-types.rst
+++ b/specs/kilo/private-volume-types.rst
@@ -22,6 +22,8 @@ Some volume types should only be restricted.
 Examples are test volume types where a new technology is being tried out or ultra high performance volumes for special needs where most users should not be able to select these volumes.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/remotefs-cfg-improvements.rst b/specs/kilo/remotefs-cfg-improvements.rst
index a4cd1071..5c27f2df 100644
--- a/specs/kilo/remotefs-cfg-improvements.rst
+++ b/specs/kilo/remotefs-cfg-improvements.rst
@@ -27,6 +27,8 @@ The configuration system for NFS/GlusterFS/etc drivers:
 * is more complex than necessary
 * limits functionality such as migration

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/support-import-export-snapshots.rst b/specs/kilo/support-import-export-snapshots.rst
index 2ca7dfb3..23b354cd 100644
--- a/specs/kilo/support-import-export-snapshots.rst
+++ b/specs/kilo/support-import-export-snapshots.rst
@@ -41,6 +41,9 @@ Benefits:
 By using import snapshots function to import snapshots, we could first delete the import snapshots, and then delete the import volumes.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/support-iscsi-driver.rst b/specs/kilo/support-iscsi-driver.rst
index fb7af65c..88a24210 100644
--- a/specs/kilo/support-iscsi-driver.rst
+++ b/specs/kilo/support-iscsi-driver.rst
@@ -23,6 +23,9 @@ Problem description
 This code duplication causes instability in the iSER driver code, when new features or changes are added to the iSCSI driver flow.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/support-modify-volume-image-metadata.rst b/specs/kilo/support-modify-volume-image-metadata.rst
index 062a1b10..0b31ae77 100644
--- a/specs/kilo/support-modify-volume-image-metadata.rst
+++ b/specs/kilo/support-modify-volume-image-metadata.rst
@@ -85,6 +85,9 @@ looking up rich information about the metadata from the definition catalog
 to display information to users and admins.
 This can include metadata about software on the volume.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/support-volume-backup-quota.rst b/specs/kilo/support-volume-backup-quota.rst
index 30c1f3a5..e4da8b2e 100644
--- a/specs/kilo/support-volume-backup-quota.rst
+++ b/specs/kilo/support-volume-backup-quota.rst
@@ -27,6 +27,8 @@ to take backup into account:
 of backup storage back-end, it would cause cinder-backup in the state of rejecting service.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/unit-test-cases-for-cinder-scripts.rst b/specs/kilo/unit-test-cases-for-cinder-scripts.rst
index d1b4df58..3bf1613c 100644
--- a/specs/kilo/unit-test-cases-for-cinder-scripts.rst
+++ b/specs/kilo/unit-test-cases-for-cinder-scripts.rst
@@ -24,6 +24,9 @@ tests for these scripts can help prevent issues similar to
 https://review.openstack.org/#/c/79791/, where a non-existent module was imported. Furthermore, it increases the test coverage for each cinder script.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/valid-states-api.rst b/specs/kilo/valid-states-api.rst
index d95d7df3..e23680b6 100644
--- a/specs/kilo/valid-states-api.rst
+++ b/specs/kilo/valid-states-api.rst
@@ -23,6 +23,9 @@ horizon in a meaningful way by restricting the set of permissible states
 that the administrator can specify for a volume. There is no API for this, and it is undesirable to hardcode this information into horizon.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/vhost-support.rst b/specs/kilo/vhost-support.rst
index 87ea2e28..51e4d658 100644
--- a/specs/kilo/vhost-support.rst
+++ b/specs/kilo/vhost-support.rst
@@ -24,6 +24,9 @@ This means the data plane does not go through emulations, which can slow down
 I/O performance. Cinder today does not provide an option for taking advantage of the Linux vHost driver.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/vmdk-oslo.vmware.rst b/specs/kilo/vmdk-oslo.vmware.rst
index 31d517e5..2054add1 100644
--- a/specs/kilo/vmdk-oslo.vmware.rst
+++ b/specs/kilo/vmdk-oslo.vmware.rst
@@ -23,6 +23,9 @@ upload/download of virtual disks. The VMware drivers for nova, glance and
 ceilometer have already integrated with oslo.vmware. This spec proposes the integration of VMDK driver with oslo.vmware.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/volume-sorting.rst b/specs/kilo/volume-sorting.rst
index 6e2aaca4..7b8a1099 100644
--- a/specs/kilo/volume-sorting.rst
+++ b/specs/kilo/volume-sorting.rst
@@ -30,6 +30,8 @@ has retrieved from the server. The items in this table need to be sorted by
 status first and by display name second. In order to retrieve data in this order, the APIs must accept multiple sort keys/directions.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/volume-type-description.rst b/specs/kilo/volume-type-description.rst
index 340464a1..7a6f045b 100644
--- a/specs/kilo/volume-type-description.rst
+++ b/specs/kilo/volume-type-description.rst
@@ -29,6 +29,8 @@ Problem description
 default volume type or None. There is no way for a user to find out what is the default volume type.

+Use Cases
+=========
 Proposed change
 ===============
diff --git a/specs/kilo/xio-iscsi-fc-volume-driver.rst b/specs/kilo/xio-iscsi-fc-volume-driver.rst
index ddd6b21d..36a37440 100644
--- a/specs/kilo/xio-iscsi-fc-volume-driver.rst
+++ b/specs/kilo/xio-iscsi-fc-volume-driver.rst
@@ -22,6 +22,9 @@ Problem description
 Currently no volume driver for X-IO ISE storage available in any release branch.
+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/kilo/xio-volume-driver-1-1.rst b/specs/kilo/xio-volume-driver-1-1.rst
index 14dd8b9c..3cfb08f8 100644
--- a/specs/kilo/xio-volume-driver-1-1.rst
+++ b/specs/kilo/xio-volume-driver-1-1.rst
@@ -34,6 +34,9 @@ The volume will be updated to align with the new type accordingly.
 Thin allocation support: Allow the end user to specify that the volume should be thinly allocated.

+Use Cases
+=========
+
 Proposed change
 ===============
diff --git a/specs/template.rst b/specs/template.rst
index 8939323f..12190a93 100644
--- a/specs/template.rst
+++ b/specs/template.rst
@@ -52,14 +52,15 @@ Some notes about using this template:
 Problem description
 ===================

-A detailed description of the problem:
+A detailed description of the problem. What problem is this blueprint
+addressing?

-* For a new feature this might be use cases. Ensure you are clear about the
-  actors in each use case: End User vs Deployer
-
-* For a major reworking of something existing it would describe the
-  problems in that feature that are being addressed.
+Use Cases
+=========
+What use cases does this address? What impact on actors does this change have?
+Ensure you're clear about the actors in each use case: Developer, end user,
+deployer, etc.

 Proposed change
 ===============
diff --git a/tests/test_titles.py b/tests/test_titles.py
index ff471645..8f0ee800 100644
--- a/tests/test_titles.py
+++ b/tests/test_titles.py
@@ -39,11 +39,13 @@ class TestTitles(testtools.TestCase):
         return titles

     def _check_titles(self, spec, titles):
-        self.assertEqual(7, len(titles),
+        self.assertEqual(8, len(titles),
                          "Titles count in '%s' doesn't match expected" % spec)
         problem = 'Problem description'
         self.assertIn(problem, titles)

+        self.assertIn('Use Cases', titles)
+
         proposed = 'Proposed change'
         self.assertIn(proposed, titles)
         self.assertIn('Alternatives', titles[proposed])
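The tests/test_titles.py hunk above is the enforcement piece of this patch: after it lands, every spec must contain eight top-level titles, and `Use Cases` must be one of them. The following is a standalone sketch of that check, not the real test module; `check_titles` is a hypothetical helper, and the five section names beyond the three the test asserts explicitly are illustrative placeholders, not taken from this patch.

```python
# Standalone sketch of the title check the patched tests/test_titles.py
# performs. `check_titles` is a hypothetical helper for illustration only.

def check_titles(spec, titles):
    """Validate the top-level section titles parsed from one spec file."""
    # The patch bumps the expected title count from 7 to 8.
    assert len(titles) == 8, (
        "Titles count in '%s' doesn't match expected" % spec)
    # These three sections are asserted explicitly by the test suite;
    # 'Use Cases' is the one this patch adds.
    for required in ('Problem description', 'Use Cases', 'Proposed change'):
        assert required in titles, "missing section: %s" % required

# Example spec layout: the last five names are illustrative placeholders.
example = {
    'Problem description': {}, 'Use Cases': {}, 'Proposed change': {},
    'Data model impact': {}, 'Security impact': {}, 'Implementation': {},
    'Dependencies': {}, 'References': {},
}
check_titles('example-spec.rst', example)  # passes: 8 titles, all required present
```

A spec that keeps eight sections but drops `Use Cases` now fails the membership assertion, which is why every existing spec in this patch gains the heading even when its body is still empty.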