Remove hardcoded releases list from unit tests
test_titles had a hardcoded releases list, and the 'liberty' folder was out of scope for this test. All specs were also changed to fit the template and pass the tests.

Change-Id: Ib4c6cbb96a9f9c96dd54cff3b92a35c35df72fff
parent
b63b22c9bf
commit
fa11db5278
@@ -34,8 +34,8 @@ Proposed change
 All cinder volume drivers need to be updated with the following approach::

   class FooDriver(driver.RetypeVD, driver.TransferVD, driver.ExtendVD,
-                  driver.CloneableVD, driver.CloneableImageVD, driver.SnapshotVD,
-                  driver.BaseVD)
+                  driver.CloneableVD, driver.CloneableImageVD,
+                  driver.SnapshotVD, driver.BaseVD)

 A driver must inherit from BaseVD and implement the basic functions. In order
 to mark that a driver does implement further feature sets it must inherit from
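As a hedged aside on the interface-segregation approach above: the class and method names below are illustrative assumptions, not cinder's actual driver API, but the pattern reduces to Python abstract base classes::

  import abc


  class BaseVD(abc.ABC):
      """Minimum feature set every volume driver must implement."""

      @abc.abstractmethod
      def create_volume(self, volume):
          """Create a raw volume on the backend."""


  class SnapshotVD(abc.ABC):
      """Marker interface for drivers that support snapshots."""

      @abc.abstractmethod
      def create_snapshot(self, snapshot):
          """Snapshot an existing volume."""


  class FooDriver(SnapshotVD, BaseVD):
      """Instantiable only because it implements both interfaces."""

      def create_volume(self, volume):
          return {'name': volume}

      def create_snapshot(self, snapshot):
          return {'name': snapshot}

A driver missing create_snapshot would raise TypeError at instantiation, which is how the declared feature set gets enforced.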
@@ -17,7 +17,7 @@ threads and configuration, among other things, to developers, operators,
 and tech support.


-Problem Description
+Problem description
 ===================

 Currently Cinder does not have a way to gather runtime data from active
@@ -33,13 +33,16 @@ information to any service. Report generation is triggered by sending a special
 (USR1) signal to a service. Reports are generated on stderr, which can be piped
 into system log, if needed.

+Use Cases
+=========
+
 Guru reports are extensible, meaning that we will be able to add more
 information to those reports in case we see it needed.

 Guru reports have been supported by Nova.


-Proposed Change
+Proposed change
 ===============

 First, a new oslo-incubator module (reports.*) should be synchronized into
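For illustration only: a hedged, minimal stand-in for the reporting hook described above (this is not the oslo-incubator reports API; the handler body is an assumption)::

  import signal
  import sys
  import traceback


  def _dump_report(signum, frame):
      # Write a per-thread stack summary to stderr, where it can be
      # piped into syslog by the deployer.
      for thread_id, stack in sys._current_frames().items():
          sys.stderr.write('Thread %s:\n' % thread_id)
          traceback.print_stack(stack, file=sys.stderr)


  # Registered once at service startup, before the real main() runs.
  signal.signal(signal.SIGUSR1, _dump_report)

Sending ``kill -USR1 <pid>`` to the service would then emit the report without interrupting normal operation.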
@@ -47,17 +50,17 @@ cinder tree. Then, each service entry point should be extended to register
 reporting functionality before proceeding to its real main().


-Data Model Impact
+Data model impact
 -----------------
 None.


-REST API Impact
+REST API impact
 ---------------
 None.


-Security Impact
+Security impact
 ---------------
 In theory, the change could expose service internals to someone who is able to
 send the needed signal to a service. That said, we can probably assume that the
@@ -69,12 +72,12 @@ is channeled into safe place.
 Because this report is triggered by user, there is no need to add config option
 to turn on/off this feature.

-Notifications Impact
+Notifications impact
 --------------------
 None.


-Other End User Impact
+Other end user impact
 ---------------------
 None.
@@ -93,13 +96,13 @@ IPv6 Impact
 None.


-Other Deployer Impact
+Other deployer impact
 ---------------------
 Deployers may be interested in making sure those reports are collected
 somewhere (e.g. stderr should be captured by syslog).


-Developer Impact
+Developer impact
 ----------------
 None.
@@ -25,6 +25,9 @@ vendors that currently provide hardware interfaces for open-iscsi cannot use
 them in openstack as the interfaces (iface) argument is not currently
 supported in the iSCSI driver.

+Use Cases
+=========
+
 Use of such iSCSI hardware transports requires providing a corresponding
 interface file (referred to as iface), which can be autogenerated via iscsiadm
 but needs correctness checks or can also be generated on the fly, given the
@@ -25,9 +25,9 @@ OpenStack is moving towards support for hierarchical ownership of projects.
 In this regard, Keystone will change the organizational structure of
 OpenStack, creating nested projects.

-The existing Quota Driver in Cinder called ``DbQuotaDriver`` is useful to enforce
-quotas at both the project and the project-user level provided that all the
-projects are at the same level (i.e. hierarchy level cannot be greater
+The existing Quota Driver in Cinder called ``DbQuotaDriver`` is useful to
+enforce quotas at both the project and the project-user level provided that all
+the projects are at the same level (i.e. hierarchy level cannot be greater
 than 1).

 The proposal is to develop a new Quota Driver called ``NestedQuotaDriver``,
@@ -47,7 +47,7 @@ tree below.


 Use Cases
----------
+=========

 **Actors**
@@ -85,7 +85,8 @@ Xing and Eric respectively.
 1. Mike needs to be able to set the quotas for both CMS and ATLAS, and also
    manage quotas across the entire projects including the root project,
    ProductionIT.
-2. Jay should be able to set and update the quota of Visualisation and Computing.
+2. Jay should be able to set and update the quota of Visualisation and
+   Computing.
 3. Jay should be able to view the quota of CMS, Visualisation and
    Computing.
 4. Jay should not be able to update the quota of CMS, although he is the
@@ -155,7 +156,8 @@ projects this can be a useful addition to Cinder.
 Proposed change
 ===============

-1. The default quota (hard limit) for any newly created sub-project is set to 0.
+1. The default quota (hard limit) for any newly created sub-project is set
+   to 0.
    The neutral value of zero ensures consistency of data in the case of race
    conditions when several projects are created by admins at the same time.
    Suppose the default value of number of volumes allowed per project is 100,
@@ -392,8 +394,8 @@ operations, the token is scoped to the parent.
 Data model impact
 -----------------

-Create a new column ``allocated`` in table ``quotas`` with default value 0. This
-can be done by adding a migration script to do the same.
+Create a new column ``allocated`` in table ``quotas`` with default value 0.
+This can be done by adding a migration script to do the same.


 REST API impact
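A minimal sketch of the migration script in question, assuming cinder's sqlalchemy-migrate based tooling (the function layout follows that convention loosely and is an assumption)::

  from sqlalchemy import Column, Integer, MetaData, Table


  def upgrade(migrate_engine):
      # Add the 'allocated' column used to track quota handed out to
      # sub-projects; a default of 0 keeps existing rows consistent.
      meta = MetaData(bind=migrate_engine)
      quotas = Table('quotas', meta, autoload=True)
      quotas.create_column(Column('allocated', Integer, default=0))

Note that ``create_column`` comes from sqlalchemy-migrate's changeset extension, which cinder's migration runner loads.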
@@ -526,7 +528,8 @@ added since Kilo release.
 References
 ==========

-* `Hierarchical Projects Wiki <https://wiki.openstack.org/wiki/HierarchicalMultitenancy>`_
+* `Hierarchical Projects Wiki
+  <https://wiki.openstack.org/wiki/HierarchicalMultitenancy>`_

 * `Hierarchical Projects
   <http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html>`_
@@ -33,9 +33,12 @@ The next link generation is already available in volume list, so it is
 straightforward to implement it for other cinder concepts. Please refer to
 the _get_collection_links function in the cinder.api.common.ViewBuilder class.

-Here is one use case to prove the necessity of the pagination keys and the next
-link generation: Suppose the maximum page size 1000 and there are more than
-1000 volume snapshots. If there is no pagination key marker implemented for
+Use Cases
+=========
+
+Suppose the maximum page size is 1000 and there are more than 1000 volume
+snapshots. If there is no pagination key marker implemented for
 snapshot, the maximum snapshots the user can query from the API is 1000. The
 only way to query more than 1000 snapshots is to increase the default maximum
 page size, which is a limitation to the cinder list operations. If no next link
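To make the marker mechanics concrete, a toy sketch of marker/limit paging with next-link detection (names are illustrative, not the ViewBuilder code)::

  def page_snapshots(snapshots, marker=None, limit=1000):
      # Resume after the marker id when one is supplied.
      start = 0
      if marker is not None:
          ids = [snap['id'] for snap in snapshots]
          start = ids.index(marker) + 1
      page = snapshots[start:start + limit]
      # A 'next' link is only generated when a full page came back,
      # using the last item of the page as the next marker.
      next_marker = page[-1]['id'] if len(page) == limit else None
      return page, next_marker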
@@ -27,6 +27,8 @@ inside of Cinder, which means that only Cinder can use it.
 Use Cases
 =========

+Use the same library code both in Nova and Cinder.
+
 Proposed change
 ===============
@@ -46,7 +48,7 @@ Alternatives

 We could simply keep brick inside of Cinder and not share its code. The
 problem with this is that any changes/fixes to brick will then need to be
 backported into the same code in Nova. This is the existing problem.

 Data model impact
 -----------------
@@ -12,8 +12,11 @@ https://blueprints.launchpad.net/cinder/+spec/get-volume-type-extra-specs

 Provide an interface to obtain a volume driver's *capabilities*.

+Problem description
+===================
+
 Definitions
-===========
+-----------

 * *Volume Type:* A group of volume policies.
 * *Extra Specs:* The definition of a volume type. This is a group of policies
@@ -22,9 +25,6 @@ Definitions
 * *Capabilities:* What the current deployed backend in Cinder is able to do.
   These correspond to extra specs.

-Problem description
-===================
-
 The current implementation of *volume type* *extra specs* management process in
 Horizon and the cinder client is error-prone. Operators manage extra specs
 without having guidance on what capabilities are possible with a backend.
@@ -93,7 +93,8 @@ New endpoint GET /v2/tenant/get_capabilities/ubuntu@lvm1_pool::
   "driver_version": "2.0.0",
   "storage_protocol": "iSCSI",
   "display_name": "Capabilities of Cinder LVM driver",
-  "description": "These are volume type options provided by Cinder LVM driver, blah, blah.",
+  "description":
+      "These are volume type options provided by Cinder LVM driver, blah, blah.",
   "visibility": "public",
   "properties": {
       "thin_provisioning": {
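As a hedged consumer-side sketch (the endpoint path comes from the spec; host, port and token handling are assumptions), a tool could render the capabilities as guidance like this::

  import requests

  url = ('http://cinder-api:8776/v2/<tenant_id>/'
         'get_capabilities/ubuntu@lvm1_pool')
  resp = requests.get(url, headers={'X-Auth-Token': '<token>'})
  # 'properties' maps capability names to their metadata, which a UI
  # can show while an operator builds extra specs.
  for name, meta in resp.json()['properties'].items():
      print(name, meta)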
@@ -195,14 +196,16 @@ The qos capability describes some corner cases for us:
 QoS on the same device, so you can specify that with
 <capability-key-name>=true|false.

-If a device doesn't support this (ie always true), then this entry is omitted.
+If a device doesn't support this (ie always true), then this entry is
+omitted.

 The other key piece is ``vendor unique`` keys. For those that allow
 additional special keys to set QoS those key names are provided in list
 format as valid keys that can be specified and set as related to Quality of
 Service.

-The vendor:persona key is another good example of a ``vendor unique`` capability:
+The vendor:persona key is another good example of a ``vendor unique``
+capability:
 This is very much like QoS, and again, note that we're just providing what
 the valid values are.
@@ -233,12 +236,6 @@ operator would fetch a list of capabilities for a particular backend's pool:
 First get list of services::

   $ cinder service-list
-  +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
-  |      Binary      |       Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
-  +------------------+-----------------+------+---------+-------+----------------------------+-----------------+
-  | cinder-scheduler |    controller   | nova | enabled |   up  | 2014-10-18T01:30:54.000000 |       None      |
-  |  cinder-volume   | block1@lvm#pool | nova | enabled |   up  | 2014-10-18T01:30:57.000000 |       None      |
-  +------------------+-----------------+------+---------+-------+----------------------------+-----------------+

 With one of the listed pools, pass that to capabilities-list::
@@ -65,6 +65,9 @@ more aptly named 'force_rollforward_attach', but it seems that simply forcing
 a detach of a volume will put the volume back in a state where the attach can
 be attempted again.

+Use Cases
+=========
+
 UseCase1: Cinder DB 'attaching', storage back end 'available', Nova DB
 does not show block device for this volume.
 An attempt was made to attach a volume using 'nova volume-attach <instance>
@@ -149,7 +152,8 @@ accomplished via manual intervention (i.e. 'cinder force-detach....'
 Cinder force-detach API currently calls:
   volume_api.terminate_connection(...)
   self.volume_api.detach(...)
-This will be modified to call into the VolumeManager with a new force_detach(...)
+This will be modified to call into the VolumeManager with a new
+force_detach(...)

 api/contrib/volume_actions.py: force_detach(...)
   try:
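A condensed sketch of the intended flow; the two calls come from the spec, while the standalone function shape is an assumption::

  def force_detach(volume_api, context, volume):
      # Tear down the initiator/target connection first, then mark the
      # volume detached so a new attach can be attempted.
      volume_api.terminate_connection(context, volume)
      volume_api.detach(context, volume)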
@@ -15,7 +15,7 @@ is_incremental and has_dependent_backups flags to indicate the type of backup
 and enriching the notification system via adding parent_id to report.


-Problem Description
+Problem description
 ===================

 In Kilo release we supported incremental backup, but there are still some
@@ -27,8 +27,8 @@ It's important that Cinder doesn't allow this backup to be deleted since
 'Incremental backups exist for this backup'. Currently, they must have a try to
 know it. So if there is a flag to indicate the backup can't be deleted or not,
 it will bring more convenience to user and reduce API call.
-3.Enriching the notification system via reporting to Ceilometer, add parent_id
-to report
+3. Enriching the notification system via reporting to Ceilometer,
+   add parent_id to report

 Use Cases
 =========
@@ -37,24 +37,28 @@ It's useful for 3rd party billing system to distinguish the full backup and
 incremental backup, as using different size of storage space, they could have
 different fee for full and incremental backups.

-Proposed Change
+Proposed change
 ===============

-1.When show single backup detail, cinder-api needs to judge if this backup is
+1. When showing a single backup detail, cinder-api needs to judge if this backup is
 a full backup or not by checking backup['parent_id'].
-2.If it's an incremental backup, judge if this backup has dependent backups
+2. If it's an incremental backup, judge if this backup has dependent backups
 like we do in the process of deleting a backup.
-3.Then add 'is_incremental=True' and 'has_dependent_backups=True/False' to
+3. Then add 'is_incremental=True' and 'has_dependent_backups=True/False' to
 response body.
-4.Add parent_id to notification system.
+4. Add parent_id to notification system.

 Alternatives
 ------------
 None.


-Data Model Impact
+Data model impact
 -----------------
 None.


-REST API Impact
+REST API impact
 ---------------
 The response body of show incremental backup detail is changed like this:
@@ -73,16 +77,16 @@ If there is full backup, the is_incremental flag will be False.
 And has_dependent_backups will be True if the full backup has dependent
 backups.

-Security Impact
+Security impact
 ---------------
 None

-Notifications Impact
+Notifications impact
 --------------------
 Add parent_id to backup notification.


-Other End User Impact
+Other end user impact
 ---------------------
 End user can get more info about incremental backup. Enhance user experience.
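A rough sketch of how the two flags could be derived (the field names come from the spec; having the backup list in memory is an assumption)::

  def backup_flags(backup, all_backups):
      # A backup with a parent is, by definition, incremental.
      is_incremental = backup.get('parent_id') is not None
      # It cannot be deleted while another backup points at it.
      has_dependent_backups = any(
          b.get('parent_id') == backup['id'] for b in all_backups)
      return {'is_incremental': is_incremental,
              'has_dependent_backups': has_dependent_backups}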
@@ -99,12 +103,12 @@ IPv6 Impact
 None.


-Other Deployer Impact
+Other deployer impact
 ---------------------
 None.


-Developer Impact
+Developer impact
 ----------------
 None.
@@ -141,7 +145,7 @@ Unit tests are needed to ensure response is working correctly.

 Documentation Impact
 ====================
 1. Cloud admin documentation will be updated to introduce the changes:
 http://docs.openstack.org/admin-guide-cloud/content/volume-backup-restore.html

 2. API ref will be also updated for backups:
@@ -150,4 +154,4 @@ http://developer.openstack.org/api-ref-blockstorage-v2.html

 References
 ==========
 None
@@ -25,7 +25,8 @@ This spec proposes a use of our entire tool-box to implement replication.
 2. Types/Extra-Specs - provide mechanism for vendor-unique custom info and
 help level out some of the unique aspects among the different back-ends.

-3. API Calls - provide some general API calls for things like enable, disable etc
+3. API Calls - provide some general API calls for things like enable,
+   disable etc

 It would also be preferable to simplify the state management a bit if we can.
@@ -40,6 +41,9 @@ base.
 The existing design is great for some backends, but is challenging for many
 devices to fit in to.

+Use Cases
+=========
+TBD.

 Proposed change
 ===============
@@ -57,9 +61,11 @@ to indicate pairing. This could look something like this in the conf file:

   [driver-foo]
   volume_driver=xxxx
-  valid_replication_devices='backend=backend_name-a','backend=backend_name-b'....
+  valid_replication_devices='backend=backend_name-a',
+  'backend=backend_name-b'....

-Alternatively the replication target can potentially be a device unknown to Cinder
+Alternatively the replication target can potentially be a device unknown
+to Cinder

   [driver-foo]
   volume_driver=xxxx
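A hedged sketch of parsing such entries, assuming the option is delivered as a list of 'key=value' strings::

  def parse_replication_devices(entries):
      devices = []
      for entry in entries:
          key, _, value = entry.partition('=')
          devices.append({key: value})
      return devices

  # e.g. parse_replication_devices(['backend=backend_name-a',
  #                                 'backend=backend_name-b'])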
@@ -69,7 +75,8 @@ Or a combination of the two even

   [driver-foo]
   volume_driver=xxxx
-  valid_replication_devices='remote_device={'some unique access meta}','backend=backend_name-b'....
+  valid_replication_devices='remote_device={'some unique access meta}',
+  'backend=backend_name-b'....

 NOTE That the remote_device access would have to be handled via the
 configured driver.
@@ -89,20 +96,23 @@ from the create call. The flow would be something like this:
   enable_replication(volume)
   disable_replication(volume)
   failover_replicated_volume(volume)
-  udpate_replication_targets() [mechanism to add tgts external to the conf file * optional]
-  get_replication_targets() [mechanism for an admin to query what a backend has configured]
+  update_replication_targets()
+  [mechanism to add tgts external to the conf file * optional]
+  get_replication_targets()
+  [mechanism for an admin to query what a backend has configured]


 Special considerations
 -----------------
 * volume-types
-There should not be a requirement of an exact match of volume-types between the
-primary and secondary volumes in the replication set. If a backend "can" match
-these exactly, then that's fine, if they can't, that's ok as well.
+There should not be a requirement of an exact match of volume-types between
+the primary and secondary volumes in the replication set. If a backend "can"
+match these exactly, then that's fine, if they can't, that's ok as well.

 Ideally, if the volume fails over the type specifications would match, but if
-this isn't possible it's probably acceptable, and if it needs to be handled by
-the driver via a retype/modification after the failover, that's fine as well.
+this isn't possible it's probably acceptable, and if it needs to be handled
+by the driver via a retype/modification after the failover, that's fine as
+well.

 * async vs sync
 This spec assumes async replication only for now. It can easily be
@@ -211,11 +221,13 @@ We would need to add the API calls mentioned above:
   enable_replication(volume)
   disable_replication(volume)
   failover_replicated_volume(volume)
-  udpate_replication_targets() [mechanism to add tgts external to the conf file * optional]
-  get_replication_targets() [mechanism for an admin to query what a backend has configured]
+  update_replication_targets()
+  [mechanism to add tgts external to the conf file * optional]
+  get_replication_targets()
+  [mechanism for an admin to query what a backend has configured]

-I think augmenting the existing calls is better than reusing them, but we can look at that
-more closely in the submission stage.
+I think augmenting the existing calls is better than reusing them, but we can
+look at that more closely in the submission stage.

 Security impact
 ---------------
@@ -27,7 +27,8 @@ Details of the overhead are covered in [#rw_bp].

 Use Cases
 =========
-This will eliminate bottleneck in large number of concurrent executed operations.
+This will eliminate the bottleneck in large numbers of concurrently executed
+operations.

 Proposed change
 ===============
@@ -88,26 +89,26 @@ required to be run with root privileges. Current state of rootwrap daemon
 in Neutron shows over 10x speedup comparing to usual ``sudo rootwrap`` call.
 Total speedup for Cinder shows impressive results too [#rw_perf]:
 test scenario CinderVolumes.create_and_delete_volume
 Current performance:
-+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
-| action               | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
-+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
-| cinder.create_volume | 2.779     | 5.76      | 14.375    | 12.458        | 13.417        | 100.0%  | 8     |
-| cinder.delete_volume | 13.535    | 24.958    | 32.959    | 32.949        | 32.954        | 100.0%  | 8     |
-| total                | 16.314    | 30.718    | 35.96     | 35.957        | 35.959        | 100.0%  | 8     |
-+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
++----------------------+-----------+-----------+-----------+-------+
+| action               | min (sec) | avg (sec) | max (sec) | count |
++----------------------+-----------+-----------+-----------+-------+
+| cinder.create_volume | 2.779     | 5.76      | 14.375    | 8     |
+| cinder.delete_volume | 13.535    | 24.958    | 32.959    | 8     |
+| total                | 16.314    | 30.718    | 35.96     | 8     |
++----------------------+-----------+-----------+-----------+-------+
 Load duration: 131.423681974
 Full duration: 135.794852018

 With use_rootwrap_daemon enabled:

-+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
-| action               | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
-+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
-| cinder.create_volume | 2.49      | 2.619     | 3.086     | 2.826         | 2.956         | 100.0%  | 8     |
-| cinder.delete_volume | 2.183     | 2.226     | 2.353     | 2.278         | 2.315         | 100.0%  | 8     |
-| total                | 4.673     | 4.845     | 5.3       | 5.06          | 5.18          | 100.0%  | 8     |
-+----------------------+-----------+-----------+-----------+---------------+---------------+---------+-------+
++----------------------+-----------+-----------+-----------+-------+
+| action               | min (sec) | avg (sec) | max (sec) | count |
++----------------------+-----------+-----------+-----------+-------+
+| cinder.create_volume | 2.49      | 2.619     | 3.086     | 8     |
+| cinder.delete_volume | 2.183     | 2.226     | 2.353     | 8     |
+| total                | 4.673     | 4.845     | 5.3       | 8     |
++----------------------+-----------+-----------+-----------+-------+
 Load duration: 19.7548749447
 Full duration: 22.2729279995
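For context, enabling the daemon would be a single cinder.conf setting, per the option name used in this spec::

  [DEFAULT]
  use_rootwrap_daemon = True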
@@ -18,6 +18,11 @@ push capabilities of their pools [1]. Eventually we would want some common
 capabilities to graduate into becoming a ``well defined`` capability. The
 point of this spec is to agree on the initial ``well defined`` capabilities.

+Use Cases
+=========
+
+Having the ``well defined`` capabilities will allow the deployer to see what
+common capabilities are shared beyond their deployed backends in Cinder.

 Proposed change
 ===============
@@ -39,12 +44,6 @@ Alternatives

 None

-Use Cases
----------
-
-Having the ``well defined`` capabilities will allow the deployer to see what
-common capabilities are shared beyond their deployed backends in Cinder.
-
 Data model impact
 -----------------
@@ -129,8 +129,8 @@ None

 Testing
 =======
-Need to test the force delete with an in-progress backup and ensure that it deletes
-successfully and cleans up correctly.
+Need to test the force delete with an in-progress backup and ensure that it
+deletes successfully and cleans up correctly.


 Documentation Impact
@@ -68,7 +68,8 @@ A volume delete operation should handle this by default.

 Phase 1:

-This is the generic/"non-optimized" case which will work with any volume driver.
+This is the generic/"non-optimized" case which will work with any volume
+driver.

 When a volume delete request is received:
 1. Look for snapshots belonging to the volume, set them all to "deleting"
@@ -16,9 +16,15 @@ adding the migration progress indication, enriching the notification system via
 reporting to Ceilometer, guaranteeing the migration quality via tempest tests
 and CI support, etc. It targets to resolve the current issues for the available
 volumes only, since we need to wait until the multiple attached volume
-functionality lands in Nova to resolve the issues related to the attached volumes.
-There is going to be another specification designed to cover the issues regarding
-the attached volumes.
+functionality lands in Nova to resolve the issues related to the attached
+volumes. There is going to be another specification designed to cover the
+issues regarding the attached volumes.
+
+Use Cases
+=========
+
+Having the ``well defined`` capabilities will allow the deployer to see what
+common capabilities are shared beyond their deployed backends in Cinder.

 There are three cases for volume migration. The scope of this spec is for the
 available volumes only and targets to resolve the issues within the following
@@ -29,8 +35,9 @@ Within the scope of this spec
 For example, migration from LVM to LVM, between LVM and vendor driver, and
 between different vendor drivers.

-2) Available volume migration between two pools from the same vendor driver using
-driver-specific way. Storwize is taken as the reference example for this spec.
+2) Available volume migration between two pools from the same vendor driver
+using driver-specific way. Storwize is taken as the reference example for this
+spec.

 Out of the scope of the spec
 3) In-use(attached) volume migration using Cinder generic migration.
@@ -38,45 +45,48 @@ Out of the scope of the spec
 Problem description
 ===================

-Currently, there are quite some straightforward issues about the volume migration.
+Currently, there are quite some straightforward issues about the volume
+migration.
 1. Whether the migration succeeds or fails is not saved anywhere, which is very
-confusing for the administrator. The volume status is still available or in-use,
-even if the administrator mentions --force as a flag for cinder migrate command.
-2. From the API perspective, the administrator is unable to check the status of the
-migration. The only way to check if the migration fails or succeeds is
+confusing for the administrator. The volume status is still available or
+in-use, even if the administrator mentions --force as a flag for cinder migrate
+command.
+2. From the API perspective, the administrator is unable to check the status of
+the migration. The only way to check if the migration fails or succeeds is
 to check the database.
 3. During the migration, there are two volumes appearing in the database record
 via the check by "cinder list" command. One is the source volume and one is the
-destination volume. The latter is actually useless to show and leads to confusion
-for the administrator.
+destination volume. The latter is actually useless to show and leads to
+confusion for the administrator.
 4. When executing the command "cinder migrate", most of the time there is
-nothing returned to the administrator from the terminal, which is unfriendly and needs to
-be improved.
-5. It is probable that the migration process takes a long time to finish. Currently
-the administrator gets nothing from the log about the progress of the migration.
+nothing returned to the administrator from the terminal, which is unfriendly
+and needs to be improved.
+5. It is probable that the migration process takes a long time to finish.
+Currently the administrator gets nothing from the log about the progress of the
+migration.
 6. Nothing reports to the Ceilometer.
 7. There are no tempest test cases and no CI support to make sure the migration
 truly works for any kind of drivers.

 We propose to add the management of the migration status to resolve
 issues 1 to 4, add the migration progress indication to cover Issue 5, add
-the notification to solve Issue 6 and add tempest tests and CI support to tackle
-Issue 7.
+the notification to solve Issue 6 and add tempest tests and CI support to
+tackle Issue 7.

 Proposed change
 ===============

-At the beginning, we declare that all the changes and test cases are dedicated to
-available volumes. For the attached volumes, we will wait until the multiple
+At the beginning, we declare that all the changes and test cases are dedicated
+to available volumes. For the attached volumes, we will wait until the multiple
 attached volume functionality gets merged in Nova.

 * Management of the volume migration status:

-If the migration fails, the migration status is set to "error". If the migration
-succeeds, the migration status is set to "success". If no migration is ever done
-for the volume, the migration status is set to None. The migration status is used
-to record the result of the previous migration. The migration status can be seen
-by the administrator only.
+If the migration fails, the migration status is set to "error". If the
+migration succeeds, the migration status is set to "success". If no migration
+is ever done for the volume, the migration status is set to None. The migration
+status is used to record the result of the previous migration. The migration
+status can be seen by the administrator only.

 The administrator has several ways to check the migration status:
 1) The administrator can do a regular "volume list" with a filter
@@ -86,23 +96,25 @@ will list the migration status.
 2) The administrator can issue a "get" command for a certain volume and the
 migration status can be found in the field 'os-vol-mig-status-attr:migstat'.

-If the administrator issues the migrate command with the --force flag, the volume
-status will be changed to 'maintenance'. Attach or detach will not be allowed
-during migration. If the administrator issues the migrate command without the
---force flag, the volume status will remain unchanged. Attach or detach action issued
-during migration will abort the migration. The status 'maintenance' can be extended
-to use in any other situation, in which the volume service is not available due to
-any kinds of reasons.
+If the administrator issues the migrate command with the --force flag, the
+volume status will be changed to 'maintenance'. Attach or detach will not be
+allowed during migration. If the administrator issues the migrate command
+without the --force flag, the volume status will remain unchanged. Attach or
+detach action issued during migration will abort the migration. The status
+'maintenance' can be extended to use in any other situation, in which the
+volume service is not available due to any kinds of reasons.

-We plan to provide more information when the administrator is running "cinder migrate"
-command. If the migration is able to start, we return a message "Your migration request
-has been received. Please check migration status and the server log to see more
-information." If the migration is rejected by the API, we shall return messages
-like "Your migration request failed to process due to some reasons".
+We plan to provide more information when the administrator is running
+"cinder migrate" command. If the migration is able to start, we return a
+message "Your migration request has been received. Please check migration
+status and the server log to see more information." If the migration is
+rejected by the API, we shall return messages like "Your migration request
+failed to process due to some reasons".

-We plan to remove the redundant information for the dummy destination volume. If
-Cinder Internal Tenant(https://review.openstack.org/#/c/186232/) is successfully
-implemented, we will apply that patch to hide the destination volume.
+We plan to remove the redundant information for the dummy destination volume.
+If Cinder Internal Tenant (https://review.openstack.org/#/c/186232/) is
+successfully implemented, we will apply that patch to hide the destination
+volume.

 * Migration progress indication:
@@ -116,51 +128,54 @@ If the volume copy starts, another thread for the migration progress check will
 start as well. If the volume copy ends, the thread for the migration progress
 check ends.

-A driver capability named migration_progress_report can be added to each driver.
+A driver capability named migration_progress_report can be added to each
+driver.
 It is either True or False. This is for the case that volumes are migrated
 from one pool to another within the same storage back-end. If it is True, the
-loop for the poll mechanism will start. Otherwise, no poll mechanism will start.
+loop for the poll mechanism will start. Otherwise, no poll mechanism will
+start.

-A configuration option named migration_progress_report_interval can be added into
-cinder.conf, specifying how frequent the migration progress needs to be checked.
-For example, if migration_progress_report_interval is set to 30 seconds, the code will
-check the migration progress and report it every 30 seconds.
+A configuration option named migration_progress_report_interval can be added
+into cinder.conf, specifying how frequently the migration progress needs to be
+checked. For example, if migration_progress_report_interval is set to
+30 seconds, the code will check the migration progress and report it every
+30 seconds.

 If the volume is migrated using dd command, e.g. volume migration from LVM to
-LVM, from LVM to vendor driver, from one back-end to another via blockcopy, etc,
-the migration progress can be checked via the position indication of
+LVM, from LVM to vendor driver, from one back-end to another via blockcopy,
+etc, the migration progress can be checked via the position indication of
 /proc/<PID>/fdinfo/1.

 For a volume migrated using file I/O, the current file pointer is able to
-report the position of the transferred data. The migration progress can be checked
-via this position relative to EOF.
+report the position of the transferred data. The migration progress can be
+checked via this position relative to EOF.

-If the volume is migrated within different pools of one back-end, we would like to
-implement this feature by checking the stats of the storage back-end. Storwize
-V7000 is taken as the reference implementation about reporting the migration
-progress. It is possible that some drivers support the progress report and some
-do not. A new key "migration_progress_report" will be added into the driver
-to report the capability. If the back-end driver supports to report the migration
-progress, this key is set to True. Otherwise, this key is set to False and the
-progress report becomes unsupported in this case.
+If the volume is migrated within different pools of one back-end, we would like
+to implement this feature by checking the stats of the storage back-end.
+Storwize V7000 is taken as the reference implementation about reporting the
+migration progress. It is possible that some drivers support the progress
+report and some do not. A new key "migration_progress_report" will be added
+into the driver to report the capability. If the back-end driver supports
+reporting the migration progress, this key is set to True. Otherwise, this key
+is set to False and the progress report becomes unsupported in this case.

-The migration progress can be checked by the administrator only. Since the progress
-is not stored, each time the progress is queried from the API, the request will be
-scheduled to the cinder-volume service, which can get the updated migration
-progress for a specified volume and reports back.
+The migration progress can be checked by the administrator only. Since the
+progress is not stored, each time the progress is queried from the API, the
+request will be scheduled to the cinder-volume service, which can get the
+updated migration progress for a specified volume and report back.

 Alternatives
 ------------

-We can definitely use a hidden flag to indicate if a database row is displayed or
-hidden. However, cinder needs a consistent way to resolve other issues like image
-cache, backup, etc, we reach an agreement that cinder internal tenant is the approach
-to go.
+We can definitely use a hidden flag to indicate if a database row is displayed
+or hidden. However, cinder needs a consistent way to resolve other issues like
+image cache, backup, etc, so we reached an agreement that the cinder internal
+tenant is the approach to go.

-The purpose that we plan to have a better management of the volume migration status,
-add migration progress indication, report the stats to Ceilometer and provide tempest
-tests and CI, is simply to guarantee the migration works in a more robust and stable
-way.
+The purpose that we plan to have a better management of the volume migration
+status, add migration progress indication, report the stats to Ceilometer and
+provide tempest tests and CI, is simply to guarantee the migration works in a
+more robust and stable way.


 Data model impact
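Returning to the /proc/<PID>/fdinfo/1 mechanism above, a minimal sketch of sampling the copy offset (the PID of the copy process and the total size are assumed inputs)::

  def copy_progress(pid, total_bytes):
      # fdinfo exposes the current offset ('pos') of the file
      # descriptor, which tracks how many bytes dd has written.
      with open('/proc/%d/fdinfo/1' % pid) as f:
          for line in f:
              if line.startswith('pos:'):
                  return 100.0 * int(line.split()[1]) / total_bytes
      return 0.0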
@@ -188,8 +203,8 @@ None
 Notifications impact
 --------------------

-The volume migration should send notification to Ceilometer about the start, and
-the progress and the finish.
+The volume migration should send notifications to Ceilometer about the start,
+the progress and the finish.


 Other end user impact
@@ -208,23 +223,24 @@ Other deployer impact
 ---------------------

 If the back-end driver supports the migration progress indication, a new
-configuration option migration_progress_report_interval can be added. The administrator
-can decide how frequent the cinder volume service to report the migration
-progress. For example, if migration_progress_report_interval is set to 30 seconds,
-the cinder volume service will provide the progress information every 30 seconds.
+configuration option migration_progress_report_interval can be added. The
+administrator can decide how frequently the cinder volume service reports the
+migration progress. For example, if migration_progress_report_interval is set
+to 30 seconds, the cinder volume service will provide the progress information
+every 30 seconds.


 Developer impact
 ----------------

 The driver maintainer or developer should be aware that they need to add a new
-capability to indicate whether their driver support the progress report. If yes,
-they need to implement the related method, to be provided in the implementation of
-this specification.
+capability to indicate whether their driver supports the progress report.
+If yes, they need to implement the related method, to be provided in the
+implementation of this specification.

-If their drivers have implemented volume migration, integration tests and driver CI
-are important to ensure the quality. This is something they need to pay attention
-and implement for their drivers as well.
+If their drivers have implemented volume migration, integration tests and
+driver CI are important to ensure the quality. This is something they need to
+pay attention to and implement for their drivers as well.


 Implementation
@@ -252,7 +268,8 @@ migration_status to "success" if the migration succeeds.
 the migration command with --force flag. No attach or detach is allowed during
 this migration. If the administrator executes the migration command without
 --force flag, the volume status will stay unchanged. Attach or detach during
-migration will terminate the migration to ensure the availability of the volume.
+migration will terminate the migration to ensure the availability of
+the volume.
 3) Enrich cinderclient with friendly messages returned for cinder migrate and
 retype command.
 4) Hide the redundant dummy destination volume during migration.
@@ -271,25 +288,28 @@ interval, in which the migration progress is checked.
 the position indication of /proc/<PID>/fdinfo/1 can be checked to get the
 progress of the blockcopy.

-2) If the volume is migrated within different pools of one back-end, we would like
-to check the progress report of the back-end storage in a certain time interval.
+2) If the volume is migrated within different pools of one back-end, we would
+like to check the progress report of the back-end storage in a certain time
+interval.

 The migration percentage will be logged and reported to Ceilometer.

 * Notification:

-Add the code to send the start, progress and end to Ceilometer during migration.
+Add the code to send the start, progress and end to Ceilometer during
+migration.

 * Tempest tests and CI support:

-This work item is planned to finish in two steps. The first step is called manual
-mode, in which the tempest tests are ready and people need to configure the
-OpenStack environment manually to meet the requirements of the tests.
+This work item is planned to finish in two steps. The first step is called
+manual mode, in which the tempest tests are ready and people need to configure
+the OpenStack environment manually to meet the requirements of the tests.

 The second step is called automatic mode, in which the tempest tests can run
-automatically in the gate. With the current state of OpenStack infrastructure, it
-is only possible for us to do the manual mode. The automatic mode needs to
-collaboration with OpenStack-infra team and there is going to be a blueprint for it.
+automatically in the gate. With the current state of OpenStack infrastructure,
+it is only possible for us to do the manual mode. The automatic mode needs
+collaboration with the OpenStack-infra team and there is going to be a
+blueprint for it.

 The following cases will be added:
 1) From LVM(thin) to LVM(thin)
@@ -297,13 +317,13 @@ The following cases will be added:
 3) From Storwize to LVM(thin)
 4) From Storwize Pool 1 to Storwize Pool 2

-Besides, RBD driver is also going to provide the similar test cases from (2) to (4)
-as above.
+Besides, RBD driver is also going to provide the similar test cases from (2)
+to (4) as above.

-We are sure that other drivers can get involved into the tests. This specification
-targets to add the test cases for LVM, Storwize and RBD drivers as the initiative. We
-hope other drivers can take the implementation of LVM, Storwize and RBD as a
-reference in future.
+We are sure that other drivers can get involved into the tests. This
+specification targets to add the test cases for LVM, Storwize and RBD drivers
+as the initiative. We hope other drivers can take the implementation of LVM,
+Storwize and RBD as a reference in future.

 * Documentation:
@@ -315,7 +335,8 @@ Dependencies
 ============

 Cinder Internal Tenant: https://review.openstack.org/#/c/186232/
-Add support for file I/O volume migration: https://review.openstack.org/#/c/187270/
+Add support for file I/O volume migration:
+https://review.openstack.org/#/c/187270/


 Testing
@@ -326,9 +347,11 @@ following scenarios for available volumes will be taken into account:
 1) Migration using Cinder generic migration with LVM(thin) to LVM(thin).
 2) Migration using Cinder generic migration with LVM(thin) to vendor driver.
 3) Migration using Cinder generic migration with vendor driver to LVM(thin).
-4) Migration between two pools from the same vendor driver using driver-specific way.
+4) Migration between two pools from the same vendor driver using
+driver-specific way.

-There are some other scenarios, but for this release we plan to consider the above.
+There are some other scenarios, but for this release we plan to consider the
+above.
 For scenarios 1 to 3, we plan to put test cases into Tempest.
 For Scenario 4, we plan to put the test into CI.
 The reference case for Scenario 2 is migration from LVM to Storwize V7000.
@@ -338,10 +361,10 @@ The reference case for Scenario 3 is migration from Storwize V7000 to LVM.
 Documentation Impact
 ====================

-Documentation should be updated to tell the administrator how to use the migrate
-and retype command. Describe what commands work for what kind of use cases, how
-to check the migration status, how to configure and check the migration indication,
-etc.
+Documentation should be updated to tell the administrator how to use the
+migrate and retype command. Describe what commands work for what kind of use
+cases, how to check the migration status, how to configure and check
+the migration indication, etc.

 Reference will be updated to tell the driver maintainers or developers how to
 change their drivers to adapt this migration improvement via the link
@@ -11,6 +11,7 @@
 # under the License.

+import glob
 import os
 import re

 import docutils.core
@@ -98,7 +99,9 @@ class TestTitles(testtools.TestCase):
         self.assertEqual(len(trailing_spaces), 0, msg)

     def test_template(self):
-        releases = ['juno', 'kilo']
+        # NOTE (e0ne): adding 'template.rst' to ignore dirs to exclude it from
+        # os.listdir output
+        ignored_dirs = {'template.rst', 'api'}

         files = ['specs/template.rst']
@@ -107,6 +110,7 @@ class TestTitles(testtools.TestCase):
         # to test them.
         # files.extend(glob.glob('specs/api/*/*'))

+        releases = set(os.listdir('specs')) - ignored_dirs
         for release in releases:
             specs = glob.glob('specs/%s/*' % release)
             files.extend(specs)
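Pulled out of the diff for readability, the resulting discovery logic amounts to this standalone sketch::

  import glob
  import os

  ignored_dirs = {'template.rst', 'api'}
  files = ['specs/template.rst']
  # Every directory under specs/ except the ignored entries is treated
  # as a release, so new release folders are picked up automatically.
  for release in set(os.listdir('specs')) - ignored_dirs:
      files.extend(glob.glob('specs/%s/*' % release))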