Remove unit testing
Spec repos do not have code to unit test. The gate job definitions for the py27 and py35 jobs skip when there are doc-only changes, which is all we will ever have in the specs repo, so the one "test" we had will never run. We were using that unit test as a check for formatting issues in the rst files, a workaround from before doc8 was available. Now that we can use doc8, we should just switch to running it as part of the pep8 jobs. Also fixes all the errors caught by doc8 that were not caught by our unit test check.

Change-Id: Ida20764edde3a07c89703d82b41958c96548b239
This commit is contained in: parent 2f5701b912, commit 5016627f04
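In practice the switch boils down to adding doc8 to the lint environment. A minimal sketch of the kind of tox stanza this implies (the environment layout and flags here are assumptions, not taken from this commit):

.. code-block:: ini

    [testenv:pep8]
    deps = -r{toxinidir}/test-requirements.txt
    commands =
        flake8 {posargs}
        doc8 specs/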
@@ -5,3 +5,4 @@ testrepository>=0.0.18
 testtools>=0.9.34
 flake8
 yasfb>=0.5.1
+doc8>=0.6.0 # Apache-2.0
@@ -9,29 +9,46 @@ body returns additional information about the fault.

 The following table lists possible fault types with their associated
 error codes and descriptions.

 ======================= ===================== ===============================
 Fault type              Associated error code Description
 ======================= ===================== ===============================
-``badRequest``          400                   The user request contains one or more errors.
-``unauthorized``        401                   The supplied token is not authorized to access the resources, either it's expired or invalid.
-``forbidden``           403                   Access to the requested resource was denied.
-``itemNotFound``        404                   The back-end services did not find anything matching the Request-URI.
-``badMethod``           405                   The request method is not allowed for this resource.
-``overLimit``           413                   Either the number of entities in the request is larger than allowed limits, or the user has exceeded allowable request rate limits. See the ``details`` element for more specifics. Contact your cloud provider if you think you need higher request rate limits.
-``badMediaType``        415                   The requested content type is not supported by this service.
-``unprocessableEntity`` 422                   The requested resource could not be processed on at the moment.
-``instanceFault``       500                   This is a generic server error and the message contains the reason for the error. This error could wrap several error messages and is a catch all.
-``notImplemented``      501                   The requested method or resource is not implemented.
-======================= ===================== ===============================
+``badRequest``          400                   The user request contains one
+                                              or more errors.
+``unauthorized``        401                   The supplied token is not
+                                              authorized to access the
+                                              resources, either it's expired
+                                              or invalid.
+``forbidden``           403                   Access to the requested
+                                              resource was denied.
+``itemNotFound``        404                   The back-end services did not
+                                              find anything matching the
+                                              Request-URI.
+``badMethod``           405                   The request method is not
+                                              allowed for this resource.
+``overLimit``           413                   Either the number of entities
+                                              in the request is larger than
+                                              allowed limits, or the user has
+                                              exceeded allowable request rate
+                                              limits. See the ``details``
+                                              element for more specifics.
+                                              Contact your cloud provider if
+                                              you think you need higher
+                                              request rate limits.
+``badMediaType``        415                   The requested content type is
+                                              not supported by this service.
+``unprocessableEntity`` 422                   The requested resource could
+                                              not be processed at the
+                                              moment.
+``instanceFault``       500                   This is a generic server error
+                                              and the message contains the
+                                              reason for the error. This
+                                              error could wrap several error
+                                              messages and is a catch all.
+``notImplemented``      501                   The requested method or
+                                              resource is not implemented.
+``serviceUnavailable``  503                   Block Storage is not available.
+======================= ===================== ===============================

 The following two ``instanceFault`` examples show errors when the server
 has erred or cannot perform the requested operation:
@@ -83,7 +100,7 @@ is optional and may contain information that is useful for tracking down
 an error, such as a stack trace. The ``details`` element may or may not
 be appropriate for display to an end user, depending on the role and
 experience of the end user.

-`
+
 The fault's root element (for example, ``instanceFault``) may change
 depending on the type of error.
@@ -4,9 +4,9 @@

  http://creativecommons.org/licenses/by/3.0/legalcode

-==========================================
+=====================================================================
 Extending IBMNAS driver to support NAS based GPFS storage deployments
-==========================================
+=====================================================================

 https://blueprints.launchpad.net/cinder/+spec/add-gpfs-nas-to-ibmnas
@@ -137,9 +137,9 @@ Create volume with replication enabled:

 When a replicated volume is created it is expected that the volume dictionary
 will be populated as follows:

-** volume['replication_status'] = 'copying'
-** volume['replication_extended_status'] = <driver specific value>
-** volume['driver_data'] = <driver specific value>
+- volume['replication_status'] = 'copying'
+- volume['replication_extended_status'] = <driver specific value>
+- volume['driver_data'] = <driver specific value>

 The replica volume is hidden from the end user as the end user will
 never need to directly interact with the replica volume. Any interaction
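A sketch of how a backend driver might satisfy this contract when it creates the replica; only the dictionary keys and the ``copying`` status come from the spec, the method and helper names are hypothetical:

.. code-block:: python

    def create_replica(self, volume):
        # _backend_create_mirror stands in for vendor-specific pairing setup.
        pair_id = self._backend_create_mirror(volume['name'])
        return {
            'replication_status': 'copying',
            'replication_extended_status': 'pair:%s' % pair_id,
            'driver_data': pair_id,
        }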
@@ -208,21 +208,28 @@ Re-type volume:

 The steps to implement this would look as follows:

 * Do a diff['extra_specs'] and see if 'replication' is included.

 * If replication was enabled for the original volume_type but is not
   enabled for the new volume_type, then replication should be disabled.

 * The replica should be deleted.

 * The volume dictionary should be updated as follows:

-  ** volume['replication_status'] = 'disabled'
-  ** volume['replication_extended_status'] = None
-  ** volume['driver_data'] = None
+  - volume['replication_status'] = 'disabled'
+  - volume['replication_extended_status'] = None
+  - volume['driver_data'] = None

 * If replication was not enabled for the original volume_type but is
   enabled for the new volume_type, then replication should be enabled.

 * A volume replica should be created and the replication should
   be set up between the volume and the newly created replica.

 * The volume dictionary should be updated as follows:

-  ** volume['replication_status'] = 'copying'
-  ** volume['replication_extended_status'] = <driver specific value>
-  ** volume['driver_data'] = <driver specific value>
+  - volume['replication_status'] = 'copying'
+  - volume['replication_extended_status'] = <driver specific value>
+  - volume['driver_data'] = <driver specific value>

Get Replication Status:
@@ -242,11 +249,13 @@ Get Replication Status:

 * volume['replication_status'] = <error | copying | active | active-stopped |
   inactive>

-  ** 'error' if an error occurred with replication.
-  ** 'copying' replication copying data to secondary (inconsistent)
-  ** 'active' replication copying data to secondary (consistent)
-  ** 'active-stopped' replication data copy on hold (consistent)
-  ** 'inactive' if replication data copy is stopped (inconsistent)
+  - 'error' if an error occurred with replication.
+  - 'copying' replication copying data to secondary (inconsistent)
+  - 'active' replication copying data to secondary (consistent)
+  - 'active-stopped' replication data copy on hold (consistent)
+  - 'inactive' if replication data copy is stopped (inconsistent)

 * volume['replication_extended_status'] = <driver specific value>
 * volume['driver_data'] = <driver specific value>
@@ -266,9 +275,12 @@ Promote replica:

 As with the functions above, the volume driver is expected to update the
 volume dictionary as follows:

 * volume['replication_status'] = <error | inactive>

-  ** 'error' if an error occurred with replication.
-  ** 'inactive' if replication data copy on hold (inconsistent)
+  - 'error' if an error occurred with replication.
+  - 'inactive' if replication data copy on hold (inconsistent)

 * volume['replication_extended_status'] = <driver specific value>
 * volume['driver_data'] = <driver specific value>
@@ -283,13 +295,16 @@ Re-enable replication:

 The backend driver is expected to update the following volume dictionary
 entries:

 * volume['replication_status'] = <error | copying | active | active-stopped |
   inactive>

-  ** 'error' if an error occurred with replication.
-  ** 'copying' replication copying data to secondary (inconsistent)
-  ** 'active' replication copying data to secondary (consistent)
-  ** 'active-stopped' replication data copy on hold (consistent)
-  ** 'inactive' if replication data copy is stopped (inconsistent)
+  - 'error' if an error occurred with replication.
+  - 'copying' replication copying data to secondary (inconsistent)
+  - 'active' replication copying data to secondary (consistent)
+  - 'active-stopped' replication data copy on hold (consistent)
+  - 'inactive' if replication data copy is stopped (inconsistent)

 * volume['replication_extended_status'] = <driver specific value>
 * volume['driver_data'] = <driver specific value>
@@ -331,22 +346,31 @@ admin.

 Data model impact
 -----------------

-* The volumes table will be updated:
-  ** Add replication_status column (string) for indicating the status of
+The volumes table will be updated:
+
+* Add replication_status column (string) for indicating the status of
   replication for a given volume. Possible values are:

-    *** 'copying' - Data is being copied between volumes, the secondary is
+  - 'copying' - Data is being copied between volumes, the secondary is
     inconsistent.
-    *** 'disabled' - Volume replication is disabled.
-    *** 'error' - Replication is in error state.
-    *** 'active' - Data is being copied to the secondary and the secondary is
+
+  - 'disabled' - Volume replication is disabled.
+
+  - 'error' - Replication is in error state.
+
+  - 'active' - Data is being copied to the secondary and the secondary is
     consistent.
-    *** 'active-stopped' - Data is not being copied to the secondary (on hold),
+
+  - 'active-stopped' - Data is not being copied to the secondary (on hold),
     the secondary volume is consistent.
-    *** 'inactive' - Data is not being copied to the secondary, the secondary
+
+  - 'inactive' - Data is not being copied to the secondary, the secondary
     copy is inconsistent.

-  ** Add replication_extended_status column to contain details with regards
+* Add replication_extended_status column to contain details with regards
   to replication status of the primary and secondary volumes.

-  ** Add replication_driver_data column to contain additional details that
+* Add replication_driver_data column to contain additional details that
   may be needed by a vendor's driver to implement replication on a backend.
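A sketch of the corresponding migration in the sqlalchemy-migrate style Cinder used at the time (the runner patches ``Table`` with ``create_column``); column types and lengths are assumptions:

.. code-block:: python

    from sqlalchemy import Column, MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        volumes = Table('volumes', meta, autoload=True)
        # One new column per bullet above.
        volumes.create_column(Column('replication_status', String(36)))
        volumes.create_column(Column('replication_extended_status', String(255)))
        volumes.create_column(Column('replication_driver_data', String(255)))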
@@ -386,6 +410,8 @@ REST API impact

 Create volume API will have "source-replica" added:

+.. code-block:: python
+
     {
         "volume":
         {
@@ -32,7 +32,9 @@ Proposed change
 ===============

 Build a base VolumeDriver class and subclasses that describe feature sets,
-like::
+like:
+
+.. code-block:: console

                              +-------------------------+
             +----------------+    BaseVolumeDriver     +---------------+

@@ -40,10 +42,10 @@ like:
             |                +-----------^-------------+               |
             |                            |                             |
             |                            |                             |
-  +--------+-------------+  +-----------+-------------+  +------------+---------+
+  +-------+--------------+  +-----------+-------------+  +------------+---------+
  |  VolumeDriverBackup  |  |  VolumeDriverSnapshot   |  |  VolumeDriverImport  |
  |      {abstract}      |  |       {abstract}        |  |      {abstract}      |
-  +----------------------+  +-------------------------+  +----------------------+
+  +---------------------+  +-------------------------+  +----------------------+

 If a driver implements the backup functionality and supports volume import it
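A sketch of what the hierarchy in the diagram could look like; the method names are illustrative, not from the spec:

.. code-block:: python

    import abc

    class BaseVolumeDriver(abc.ABC):
        """Mandatory feature set every driver provides."""

        @abc.abstractmethod
        def create_volume(self, volume):
            """Create a volume on the backend."""

    class VolumeDriverSnapshot(BaseVolumeDriver):
        """Opted into by drivers that support snapshots."""

        @abc.abstractmethod
        def create_snapshot(self, snapshot):
            """Create a point-in-time copy of a volume."""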
@@ -17,13 +17,17 @@ Problem description
 ===================

 * Create CG from CG snapshot

 Currently a user can create a Consistency Group and create a snapshot of a
 Consistency Group. To restore from a Cgsnapshot, however, the following
 steps need to be performed:

 1) Create a new Consistency Group.
 2) Do the following for every volume in the Consistency group:

    a) Call "create volume from snapshot" for snapshot associated with every
       volume in the original Consistency Group.

 There's no single API that allows a user to create a Consistency Group from
 a Cgsnapshot.
@@ -45,22 +49,30 @@ Proposed change
 ===============

 * Create CG from CG snapshot

   * Add an API that allows a user to create a Consistency Group from a
     Cgsnapshot.

   * Add a Volume Driver API accordingly.

 * Modify Consistency Group

   * Add an API that adds existing volumes to a CG and removes volumes from a
     CG after it is created.

   * Add a Volume Driver API accordingly.

 * DB Schema Changes

   The following changes are proposed:

   * A new cg_volumetypes table will be created.
   * This new table will contain 3 columns:

     * uuid of a cg_volumetype entry
     * uuid of a consistencygroup
     * uuid of a volume type

   * Upgrade and downgrade functions will be provided for db migrations.

 Alternatives
@@ -75,9 +87,10 @@ Data model impact

 The following changes are proposed:

 * A new cg_volumetypes table will be created.
 * This new table will contain 3 columns:

-  * uuid of a cg_volumetype entry
-  * uuid of a consistencygroup
-  * uuid of a volume type
+  - uuid of a cg_volumetype entry
+  - uuid of a consistencygroup
+  - uuid of a volume type
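A sketch of the proposed table in the same ``Table``/``Column`` style other Cinder specs use (``meta`` is the usual ``MetaData`` instance); the exact column and foreign-key names are assumptions beyond the three columns listed:

.. code-block:: python

    cg_volumetypes = Table(
        'cg_volumetypes', meta,
        Column('id', String(36), primary_key=True),
        Column('consistencygroup_id', String(36),
               ForeignKey('consistencygroups.id')),
        Column('volume_type_id', String(36),
               ForeignKey('volume_types.id')),
        mysql_engine='InnoDB'
    )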

REST API impact
---------------
@@ -133,10 +146,15 @@ New Consistency Group APIs changes

 * Cinder Volume Driver API

   The following new volume driver APIs will be added:

-  * def create_consistencygroup_from_cgsnapshot(self, context,
-  * def modify_consistencygroup(self, context, consistencygroup,
+  .. code-block:: python
+
+      def create_consistencygroup_from_cgsnapshot(self, context,
+                                                  consistencygroup, volumes,
+                                                  cgsnapshot, snapshots)
+
+      def modify_consistencygroup(self, context, consistencygroup,
+                                  old_volumes, new_volumes)

 Security impact
@@ -153,10 +171,16 @@ Other end user impact

 python-cinderclient needs to be changed to support the new APIs.

 * Create CG from CG snapshot

+  .. code-block:: bash
+
      cinder consisgroup-create --name <name> --description <description>
                                --cgsnapshot <cgsnapshot uuid or name>

 * Modify CG

+  .. code-block:: bash
+
      cinder consisgroup-modify <cg uuid or name> --name <new name>
                                --description <new description> --addvolumes
                                <volume uuid> [<volume uuid> ...] --removevolumes
@@ -192,11 +216,15 @@ Work Items
 ----------

 1. API changes:

    * Create CG from CG snapshot API
    * Modify CG API

 2. Volume Driver API changes:

    * Create CG from CG snapshot
    * Modify CG

 3. DB schema changes

 Dependencies
@@ -83,6 +83,8 @@ parameters 'target_alternative_portals' and 'target_alternative_iqns', which
 contain a list of portal IP address:port pairs, a list of alternative iqn's and
 lun's corresponding to each portal address. For example:

+.. code-block:: python
+
     {"connection_info": {"driver_volume_type": "iscsi", ...
                          "data": {"target_portal": "10.0.1.2:3260",
                                   "target_alternative_portals": [
@@ -88,6 +88,8 @@ a list of secondary iqn's and lun's corresponding to each portal address.

 For example:

+.. code-block:: python
+
     {"connection_info": {"driver_volume_type": "iscsi", ...
                          "data": {"target_portals": ["10.0.1.2:3260",
                                                      "10.0.2.2:3260"],
@@ -139,7 +139,8 @@ None
 Testing
 =======

-* With volume_copy_bps_limit = 100MB/s for a fake backend driver,
+* With volume_copy_bps_limit = 100MB/s for a fake backend driver:

   * start a volume copy, then the bps limit is set to 100MB/s
   * start a second volume copy, then the limit is updated to 50MB/s for both
   * finish one of the copies, then the limit is resumed to 100MB/s
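The expected sharing behaviour from that list, as a tiny runnable illustration (not Cinder code):

.. code-block:: python

    TOTAL_BPS = 100 * 1024 * 1024  # volume_copy_bps_limit = 100MB/s

    def per_copy_limit(active_copies):
        return TOTAL_BPS // max(active_copies, 1)

    assert per_copy_limit(1) == TOTAL_BPS        # one copy: 100MB/s
    assert per_copy_limit(2) == TOTAL_BPS // 2   # two copies: 50MB/s each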
@@ -157,8 +157,7 @@ support for the reporting feature.

 References
 ==========
-* oslo-incubator module: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/commo
-  n/report
-* blog about nova guru reports: https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-i
-  ncubator-guru-meditation-reports/
+* oslo-incubator module: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/common/report
+* blog about nova guru reports: https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-incubator-guru-meditation-reports/
 * oslo.reports repo: https://github.com/directxman12/oslo.reports
@@ -178,5 +178,8 @@ References
 ==========

 * http://eavesdrop.openstack.org/meetings/cinder/2015
   /cinder.2015-05-27-16.00.log.html

 * https://blueprints.launchpad.net/cinder/+spec/image-volume-cache
@@ -124,20 +124,31 @@ Xing and Eric respectively.
 +----------------+----------------+----------+--------------+

 Consider the following use cases:

 a. Suppose, Mike (admin of root project or cloud admin) increases the
    ``hard_limit`` of volumes in CMS to 400

 b. Suppose, Mike increases the ``hard_limit`` of volumes in CMS to 500

 c. Suppose, Mike deletes the quota of CMS

 d. Suppose, Mike reduces the ``hard_limit`` of volumes in CMS to 350

 e. Suppose, Mike reduces the ``hard_limit`` of volumes in CMS to 200

 f. Suppose, Jay (Manager of CMS) increases the ``hard_limit`` of
    volumes in CMS to 400

 g. Suppose, Jay tries to view the quota of ATLAS

 h. Suppose, Duncan tries to reduce the ``hard_limit`` of volumes in CMS to
    400.

 i. Suppose, Mike tries to increase the ``hard_limit`` of volumes in
    ProductionIT to 2000.

 j. Suppose, Mike deletes the quota of Visualisation.

 k. Suppose, Mike deletes the project Visualisation.

 9. Suppose the company doesn't want a nested structure and want to
@@ -528,11 +539,9 @@ added since Kilo release.
 References
 ==========

-* `Hierarchical Projects Wiki
-  <https://wiki.openstack.org/wiki/HierarchicalMultitenancy>`_
+* `Hierarchical Projects Wiki <https://wiki.openstack.org/wiki/HierarchicalMultitenancy>`_

-* `Hierarchical Projects
-  <http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html>`_
+* `Hierarchical Projects <http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html>`_

-* `Hierarchical Projects Improvements <https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy-improvements>`_
+* `Hierarchical Projects Improvements
+  <https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy-improvements>`_
@@ -43,6 +43,7 @@ Proposed change

 * The existing Create CG from Source API takes an existing CG snapshot
   as the source.

 * This blueprint proposes to modify the existing API to accept an existing
   CG as the source.
@@ -51,9 +52,12 @@ Alternatives

 Without the proposed changes, we can create a CG from an existing CG
 with these steps:

 * Create an empty CG.

 * Create a cloned volume from an existing volume in an existing CG
   and add to the new CG.

 * Repeat the above step for all volumes in the CG.

 Data model impact
@@ -69,8 +73,11 @@ Consistency Group API change

 * Create Consistency Group from Source
 * V2/<tenant id>/consistencygroups/create_from_src

   * Method: POST
-  * JSON schema definition for V2::
+  * JSON schema definition for V2:

+    .. code-block:: python
+
        {
            "consistencygroup-from-src":
@@ -87,9 +94,12 @@ Consistency Group API change
   of the new CG.

 * Cinder Volume Driver API

   Two new optional parameters will be added to the existing volume driver API:

   * def create_consistencygroup_from_src(self, context, group, volumes,
     cgsnapshot=None, snapshots=None, src_group=None, src_volumes=None)

   * Note only "src_group" and "src_volumes" are new parameters.

 Security impact
@@ -160,4 +160,6 @@ References

 * Nova's spec for db archiving: https://review.openstack.org/#/c/18493/

 * Discussion in openstack-dev mailing list:

   http://lists.openstack.org/pipermail/openstack-dev/2014-March/029952.html
@@ -112,24 +112,29 @@ The following existing v2 GET APIs will support the new sorting parameters:
 {items} will be replaced by the appropriate entities as follows:

 * For snapshots:

-  * /v2/{tenant_id}/snapshots
-  * /v2/{tenant_id}/snapshots/detail
+  - /v2/{tenant_id}/snapshots
+  - /v2/{tenant_id}/snapshots/detail

 * For volume transfers:

-  * /v2/{tenant_id}/os-volume-transfer
-  * /v2/{tenant_id}/os-volume-transfer/detail
+  - /v2/{tenant_id}/os-volume-transfer
+  - /v2/{tenant_id}/os-volume-transfer/detail

 * For consistency group:

-  * /v2/{tenant_id}/consistencygroups
-  * /v2/{tenant_id}/consistencygroups/detail
+  - /v2/{tenant_id}/consistencygroups
+  - /v2/{tenant_id}/consistencygroups/detail

 * For consistency group snapshots:

-  * /v2/{tenant_id}/cgsnapshots
-  * /v2/{tenant_id}/cgsnapshots/detail
+  - /v2/{tenant_id}/cgsnapshots
+  - /v2/{tenant_id}/cgsnapshots/detail

 * For backups:

-  * /v2/{tenant_id}/backups
-  * /v2/{tenant_id}/backups/detail
+  - /v2/{tenant_id}/backups
+  - /v2/{tenant_id}/backups/detail

 The existing API needs to support the following new Request Parameters for
 the above cinder concepts:
@@ -165,30 +170,45 @@ The next link will be put in the response returned from cinder if it is
 necessary.

 * For snapshots, it replies:

+  .. code-block:: python
+
      {
          "snapshots": [<List of snapshots>],
          "snapshots_links": [{'href': '<next_link>', 'rel': 'next'}]
      }

 * For volume transfers, it replies:

+  .. code-block:: python
+
      {
          "transfers": [<List of transfers>],
          "transfers_links": [{'href': '<next_link>', 'rel': 'next'}]
      }

 * For consistency group, it replies:

+  .. code-block:: python
+
      {
          "consistencygroups": [<List of consistencygroups>],
          "consistencygroups_links": [{'href': '<next_link>', 'rel': 'next'}]
      }

 * For consistency group snapshots, it replies:

+  .. code-block:: python
+
      {
          "cgsnapshots": [<List of cgsnapshots>],
          "cgsnapshots_links": [{'href': '<next_link>', 'rel': 'next'}]
      }

-* For backups, it replies::
+* For backups, it replies:

+  .. code-block:: python
+
      {
          "backups": [<List of backups>],
          "backups_links": [{'href': '<next_link>', 'rel': 'next'}]
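A sketch of how a client could walk these links until the ``next`` link disappears; the requests-style ``session`` object and helper name are assumptions:

.. code-block:: python

    def list_all_backups(session, url):
        """Follow 'next' links and accumulate every page of backups."""
        backups = []
        while url:
            body = session.get(url).json()
            backups.extend(body['backups'])
            links = body.get('backups_links', [])
            url = next((l['href'] for l in links if l['rel'] == 'next'), None)
        return backups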
@@ -206,6 +206,7 @@ The qos capability describes some corner cases for us:

 The vendor:persona key is another good example of a ``vendor unique``
 capability:

 This is very much like QoS, and again, note that we're just providing what
 the valid values are.
@@ -150,12 +150,19 @@ accomplished via manual intervention (i.e. 'cinder force-detach....'
 (Links to proposed Nova changes will be provided ASAP)

 Cinder force-detach API currently calls:

+.. code-block:: python
+
     volume_api.terminate_connection(...)
     self.volume_api.detach(...)

 This will be modified to call into the VolumeManager with a new
 force_detach(...)

 api/contrib/volume_actions.py: force_detach(...)

+.. code-block:: python
+
     try:
         volume_rpcapi.force_detach(...)
     except:  # catch and add debug message

@@ -164,6 +171,9 @@ api/contrib/volume_actions.py: force_detach(...)
         self._reset_status(..)  # fix DB if backend cleanup is successful

 volume/manager.py: force_detach(...)

+.. code-block:: python
+
     self.driver.force_detach(..)

 Individual drivers will implement force_detach as needed by the driver, most
@@ -98,7 +98,9 @@ Alternatives
 ------------

 There are a couple of alternatives:

 * Detach the volume and back it up.

 * Take a snapshot of the attached volume, create a volume from the
   snapshot and then back it up.
@@ -146,7 +148,9 @@ By default it is False. The force flag is not needed for

 * Create backup
 * V2/<tenant id>/backups
 * Method: POST
-* JSON schema definition for V2::
+* JSON schema definition for V2:

+  .. code-block:: python
+
      {
          "backup":
@@ -165,18 +169,24 @@ The following driver APIs will be added to support attach snapshot and
 detach snapshot.

 attach snapshot:

-* def _attach_snapshot(self, context, snapshot, properties,
-* def create_export_snapshot(self, conext, snapshot)
-* def initialize_connection_snapshot(self, snapshot, properties,
+.. code-block:: python
+
+   def _attach_snapshot(self, context, snapshot, properties,
+                        remote=False)
+   def create_export_snapshot(self, context, snapshot)
+   def initialize_connection_snapshot(self, snapshot, properties,
+                                      **kwargs)

 detach snapshot:

-* def _detach_snapshot(self, context, attach_info, snapshot,
-* def terminate_connection_snapshot(self, snapshot, properties,
-* def remove_export_snapshot(self, context, snapshot)
+.. code-block:: python
+
+   def _detach_snapshot(self, context, attach_info, snapshot,
+                        properties, force=False, remote=False)
+   def terminate_connection_snapshot(self, snapshot, properties,
+                                     **kwargs)
+   def remove_export_snapshot(self, context, snapshot)

 Alternatively we can use an is_snapshot flag for volume and snapshot
 to share common code without adding new functions, but it will make
@@ -80,7 +80,7 @@ Using a volume created based on "cirros-0.3.0-x86_64-disk" to show
 the time consumed by solution A and solution B.
 * [Common] Create a volume based on image "cirros-0.3.0-x86_64-disk".

-  ::
+  .. code-block:: bash

     root@devaio:/home# cinder list
     +--------------------------------------+-----------+------+------+

@@ -88,11 +88,12 @@ the time consumed by solution A and solution B.
     +--------------------------------------+-----------+------+------+
     | 3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 | available | None |  1   |
     +--------------------------------------+-----------+------+------+

 * [Solution-A-step-1] Copy
   volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 from "volumes" pool to
   "images" pool.

-  ::
+  .. code-block:: bash

     root@devaio:/home# time rbd cp
     volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 images/test

@@ -106,7 +107,7 @@ the time consumed by solution A and solution B.
 * [Solution-B-step-1] Create a snapshot of volume
   3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 and protect it.

-  ::
+  .. code-block:: bash

     root@devaio:/home# time rbd snap create --pool volumes --image
     volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 --snap image_test

@@ -125,7 +126,7 @@ the time consumed by solution A and solution B.
 * [Solution-B-step-2] Do clone operation on
   volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.

-  ::
+  .. code-block:: bash

     root@devaio:/home# time rbd clone
     volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test

@@ -137,7 +138,7 @@ the time consumed by solution A and solution B.

 * [Solution-B-step-3] Flatten the clone image images/snapshot_clone_image_test.

-  ::
+  .. code-block:: bash

     root@devaio:/home# time rbd flatten images/snapshot_clone_image_test

@@ -150,7 +151,7 @@ the time consumed by solution A and solution B.
 * [Solution-B-step-4] Unprotect the snap
   volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.

-  ::
+  .. code-block:: bash

     root@devaio:/home# time rbd snap unprotect
     volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test

@@ -162,7 +163,7 @@ the time consumed by solution A and solution B.
 * [Solution-B-step-5] Delete the now-unreferenced snap
   volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.

-  ::
+  .. code-block:: bash

     root@devaio:/home# time rbd snap rm
     volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
@@ -59,6 +59,8 @@ could easily be extended to be included in future work. Backend devices
 (drivers) would be listed in the cinder.conf file and we add entries
 to indicate pairing. This could look something like this in the conf file:

+.. code-block:: ini
+
     [driver-foo]
     volume_driver=xxxx
     valid_replication_devices='backend=backend_name-a',

@@ -67,12 +69,16 @@ to indicate pairing. This could look something like this in the conf file:
 Alternatively the replication target can potentially be a device unknown
 to Cinder

+.. code-block:: ini
+
     [driver-foo]
     volume_driver=xxxx
     valid_replication_devices='remote_device={'some unique access meta}',...

 Or a combination of the two even

+.. code-block:: ini
+
     [driver-foo]
     volume_driver=xxxx
     valid_replication_devices='remote_device={'some unique access meta}',
|
||||
scheduler.
|
||||
|
||||
* Add the following API calls
|
||||
enable_replication(volume)
|
||||
disable_replication(volume)
|
||||
failover_replicated_volume(volume)
|
||||
udpate_replication_targets()
|
||||
[mechanism to add tgts external to the conf file * optional]
|
||||
get_replication_targets()
|
||||
[mechanism for an admin to query what a backend has configured]
|
||||
- enable_replication(volume)
|
||||
- disable_replication(volume)
|
||||
- failover_replicated_volume(volume)
|
||||
- update_replication_targets()
|
||||
|
||||
+ [mechanism to add tgts external to the conf file * optional]
|
||||
|
||||
- get_replication_targets()
|
||||
+ [mechanism for an admin to query what a backend has configured]
|
||||
|
||||
|
||||
Special considerations
|
||||
-----------------
|
||||
----------------------
|
||||
* volume-types
|
||||
There should not be a requirement of an exact match of volume-types between
|
||||
the primary and secondary volumes in the replication set. If a backend "can"
|
||||
@@ -154,10 +162,12 @@ Special considerations
 Workflow diagram
 -----------------

 Create call on the left:
-No change to workflow
+
+* No change to workflow

 Replication calls on the right:
-Direct to manager then driver via host entry
+
+* Direct to manager then driver via host entry

 .. code-block:: console

     +-----------+
     +--< +Volume API + >---------+  Enable routing directly to
@@ -218,12 +228,12 @@ REST API impact
 ---------------

 We would need to add the API calls mentioned above:

-  enable_replication(volume)
-  disable_replication(volume)
-  failover_replicated_volume(volume)
-  udpate_replication_targets()
+* enable_replication(volume)
+* disable_replication(volume)
+* failover_replicated_volume(volume)
+* update_replication_targets()
   [mechanism to add tgts external to the conf file * optional]
-  get_replication_targets()
+* get_replication_targets()
   [mechanism for an admin to query what a backend has configured]

 I think augmenting the existing calls is better than reusing them, but we can
@@ -90,6 +90,7 @@ in Neutron shows over 10x speedup comparing to usual ``sudo rootwrap`` call.
 Total speedup for Cinder shows impressive results too [#rw_perf]:
 test scenario CinderVolumes.create_and_delete_volume
 Current performance:

 +----------------------+-----------+-----------+-----------+-------+
 | action               | min (sec) | avg (sec) | max (sec) | count |
 +----------------------+-----------+-----------+-----------+-------+

@@ -97,6 +98,7 @@ Current performance:
 | cinder.delete_volume | 13.535    | 24.958    | 32.959    | 8     |
 | total                | 16.314    | 30.718    | 35.96     | 8     |
 +----------------------+-----------+-----------+-----------+-------+

 Load duration: 131.423681974
 Full duration: 135.794852018

@@ -109,6 +111,7 @@ With use_rootwrap_daemon enabled:
 | cinder.delete_volume | 2.183     | 2.226     | 2.353     | 8     |
 | total                | 4.673     | 4.845     | 5.3       | 8     |
 +----------------------+-----------+-----------+-----------+-------+

 Load duration: 19.7548749447
 Full duration: 22.2729279995
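Turning the daemon on would presumably be a one-line cinder.conf change, mirroring the Nova option of the same name (this spec proposes the option; it is not claimed to exist yet):

.. code-block:: ini

    [DEFAULT]
    use_rootwrap_daemon = True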
@@ -115,6 +115,8 @@ Data model impact
 A new table, called service_versions, would be created to track the RPC and
 object versions. The schema would be as follows:

+.. code-block:: python
+
     service_versions = Table(
         'service_versions', meta,
         Column('created_at', DateTime(timezone=False)),

@@ -130,8 +132,8 @@ service_versions = Table(
         mysql_engine='InnoDB'
     )

-The service_id is service name + host. The *_current_version is the version
-a service is currently running and pinned at. The *_available_version is the
+The service_id is service name + host. The \*_current_version is the version
+a service is currently running and pinned at. The \*_available_version is the
 version a service knows about and (if newer) could upgrade to.
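A sketch of how a service could derive its RPC pin from these rows; the column name follows the schema above, the helper itself is hypothetical:

.. code-block:: python

    def compute_rpc_pin(service_rows):
        # Pin to the lowest version still running anywhere, so older
        # services can always understand the messages they receive.
        return min(row['rpc_current_version'] for row in service_rows)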

REST API impact
@@ -63,8 +63,13 @@ None
 REST API impact
 ---------------

-Add a new REST API to delete backup in v2::
-*POST /v2/{tenant_id}/backups/{id}/action
+Add a new REST API to delete backup in v2:
+
+.. code-block:: console
+
+   POST /v2/{tenant_id}/backups/{id}/action

 .. code-block:: python

    {
        "os-force_delete": {}
@@ -35,7 +35,6 @@ Benefits:
   import volumes.

-::

 For those volume drivers which could not delete volume with snapshots,
 we could not delete the import volume that has snapshots.
 By using import snapshots function to import snapshots, we could first
@@ -97,8 +96,13 @@ REST API impact

 1. Add one API "manage_snapshot"

-The rest API look like this in v2::
-/v2/{project_id}/os-snapshot-manage
+The REST API looks like this in v2:
+
+.. code-block:: console
+
+   POST /v2/{project_id}/os-snapshot-manage

 .. code-block:: python

    {
        'snapshot':{
@@ -110,13 +114,17 @@ The REST API looks like this in v2:

 * The <string> 'host' means cinder volume host name.
   No default value.

 * The <string> 'volume_id' means the import snapshot's volume id.
   No default value.

 * The <string> 'ref' means the import snapshot name
   existing in the storage back-end.
   No default value.

-The response body of it is like::
+The response body of it is like:
+
+.. code-block:: python

    {
        "snapshot": {
@@ -142,8 +150,13 @@ The response body of it is like:

 2. Add one API "unmanage_snapshot".

-The rest API look like this in v2::
-/v2/{project_id}/os-snapshot-manage/{id}/action
+The REST API looks like this in v2:
+
+.. code-block:: console
+
+   POST /v2/{project_id}/os-snapshot-manage/{id}/action

 .. code-block:: python

    {
        'os-unmanage':{}
@@ -129,22 +129,37 @@ the operations on image metadata of volume.
 **Common http response code(s)**

 * Modify Success: `200 OK`

 * Failure: `400 Bad Request` with details.
-* Forbidden: `403 Forbidden` e.g. no permission to update the
-  specific metadata
-* Not found: `501 Not Implemented` e.g. The server doesn't recognize
-  the request method
-* Not found: `405 Not allowed` e.g. HEAD is not allowed on the resource

+* Forbidden: `403 Forbidden`
+
+  e.g. no permission to update the specific metadata
+
+* Not found: `501 Not Implemented`
+
+  e.g. The server doesn't recognize the request method
+
+* Not found: `405 Not allowed`
+
+  e.g. HEAD is not allowed on the resource

 **Update volume image metadata**

 * Method type
   PUT

 * API version

+  .. code-block:: console
+
      PUT /v2/{project_id}/volumes/{volume_id}/image_metadata

 * JSON schema definition

+  .. code-block:: python
+
      {
          "image_metadata": {
              "key": "v2"

@@ -174,6 +189,8 @@ Other end user impact

 * Provide Cinder API to allow a user to update an image property.
   CLI-python API that triggers the update.

+  .. code-block:: console
+
      # Sets or deletes volume image metadata
      cinder image-metadata <volume-id> set <property-name = value>
@@ -221,23 +238,30 @@ Changes to Cinder:

 #. Define property protections config files in Cinder
    (Deployers need to keep the files in sync with Glance's)

 #. Sync the properties protection code from Glance into Cinder
    (The common protection code will be shared in Cinder)

 #. Extend existing volume_image_metadatas(VolumeImageMetadataController)
    controller extension to add update capability.

 #. Reuse update_volume_metadata method in volume API for updating image
    metadata and differentiate user/image metadata by introducing a new
    constant "meta_type"

 #. Add update_volume_image_metadata method to volume API.

 #. Check against property protections config files
    (property-protections-policies.conf or property-protections-roles.conf)
    if the property has update protection.

 #. Update DB API and driver to allow image metadata updates.

 Changes to Cinder python client:

 #. Provide Cinder API to allow a user to update an image property.
    CLI-python API that triggers the update.

    # Sets or deletes volume image metadata
    cinder image-metadata <volume-id> set <property-name = value>
@@ -287,9 +311,10 @@ rule context_is_admin defined in policy.json.
 +-------------------------------------------------------------------+

 * property-protections-roles.conf

   This is a template file when property protections is based on user's role.
   Example: Allow both admins and users with the billing role to read and modify
-  properties prefixed with x_billing_code_.
+  properties prefixed with ``x_billing_code_``.

 +-------------------------------------------------------------------+
 | [^x_billing_code_.*]                                              |
@@ -52,7 +52,9 @@ it's reasonable to figure that a user shouldn't have to handle this).

 There are two losses of performance in requiring back and forth between
 cinder-volume and the backend to delete a volume and snapshots:

 A. Extra time spent checking the status of X requests.

 B. Time spent merging snapshot data into another snapshot or volume
    which is going to immediately be deleted.

@@ -95,20 +97,27 @@ This case is for volume drivers that wish to handle mass volume/snapshot
 deletion in an optimized fashion.

 When a volume delete request is received:

 Starting in the volume manager...

 1. Check for a driver capability of 'volume_with_snapshots_delete'.
    (Name TBD.) This will be a new abc driver feature.

 2. If the driver supports this, call driver.delete_volume_and_snapshots().
    This will be passed the volume, and a list of all relevant
    snapshots.

 3. No exception thrown by the driver will indicate that everything
    was successfully deleted. The driver may return information indicating
    that the volume itself is intact, but snapshot operations failed.

 4. Volume manager now moves all snapshots and the volume from 'deleting'
    to deleted. (volume_destroy/snapshot_destroy)

 5. If an exception occurred, set the volume and all snapshots to
    'error_deleting'. We don't have enough information to do anything
    else safely.

 6. The driver returns a list of dicts indicating the new statuses of
    the volume and each snapshot. This allows handling cases where
    deletion of some things succeeded but the process did not complete.
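A sketch of the manager-side flow in steps 1-6; the capability and driver method names follow the text, everything else is illustrative:

.. code-block:: python

    def delete_volume_cascade(self, context, volume, snapshots):
        caps = self.driver.capabilities
        if not caps.get('volume_with_snapshots_delete'):
            raise NotImplementedError()  # fall back to one-by-one deletion
        try:
            self.driver.delete_volume_and_snapshots(volume, snapshots)
        except Exception:
            # Step 5: not enough information to do anything else safely.
            volume['status'] = 'error_deleting'
            for snapshot in snapshots:
                snapshot['status'] = 'error_deleting'
            raise
        # Step 4: everything is gone on the backend; update the database.
        for snapshot in snapshots:
            self.db.snapshot_destroy(context, snapshot['id'])
        self.db.volume_destroy(context, volume['id'])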
@@ -36,7 +36,7 @@ On API nodes given current code we are open to races in the code that affect
 resources on the database, and this will be exacerbated when working with
 Active/Active configurations.

-Manager Local Locks
+Local Manager Locks
 -------------------

 We have multiple local locks in the manager code of the volume nodes to prevent
@@ -84,7 +84,7 @@ Cinder deployments use these drivers. Also some non-RemoteFS-based drivers are
 using local locks too.

 We could also replace current locks with some DB-based locking. This was
-proposed by gegulieo in specs to remove local locks from the maanger_ and from
+proposed by gegulieo in specs to remove local locks from the manager_ and from
 drivers_, but increased the complexity of the solution and potentially required
 more testing than relying on a broadly used DLM software.
@@ -20,8 +20,10 @@ is totally complete.

 Both the dd way and the driver-specific way will fail if we attach a volume
 while it is migrating:

 * If we migrate the volume through the dd way, the source or target volume is
   not usable to the server.

 * If we migrate the volume through the driver-specific way, the migration task
   is issued to the source backend by the scheduler and the volume is owned by
   the source backend during the whole migration. If we attach this volume to a

@@ -62,6 +64,7 @@ These features are:

 * One array can take over other array's LUN as a remote LUN if these two
   arrays are connected with FC or iSCSI fabric.

 * We can migrate a remote LUN to a local LUN, and meanwhile the remote (source)
   LUN is writable and readable while the migration task is under way; after the
   migration is completed, the local (target) LUN is exactly the same as the
@@ -146,6 +146,7 @@ References

 A patch is proposed in Manila to solve a similar problem:
 https://review.openstack.org/#/c/315266/

 Note that capabilities reporting for thin and thick provisioning
 in Manila is different from that in Cinder. In Manila, a driver reports
 `thin_provisioning = [True, False]` if it supports both thin and thick;
@@ -380,13 +380,17 @@ New Group APIs
 * V3/<tenant id>/groups/<group uuid>/action
 * Method: POST (We need to use "POST" not "DELETE" here because the request
   body has a flag and therefore is not empty.)
-* JSON schema definition for V3::
+* JSON schema definition for V3:

+  .. code-block:: python
+
      {
          "delete":
          {
              "delete-volumes": False
          }
      }

 * Set delete-volumes flag to True to delete a group with volumes in it.
   This will delete the group and all the volumes. Deleting an empty
   group does not need the flag to be True.
@@ -50,11 +50,17 @@ corresponding to some existing CG APIs.

 The missing pieces to support the existing CG feature using the generic
 volume constructs are as follows.

 * A group_snapshots table (for cgsnapshots table)

 * Create group snapshot API (for create cgsnapshot)

 * Delete group snapshot API (for delete cgsnapshot)

 * List group snapshots API (for list cgsnapshots)

 * Show group snapshot API (for show cgsnapshot)

 * Create group from source group or source group snapshot API (for
   create CG from cgsnapshot or source CG)
@@ -158,7 +164,11 @@ Proposed change
 type spec, the manager will call create_group in the driver first and will
 call create_consistencygroup in the driver if create_group is not
 implemented.

+.. code-block:: python
+
    {'consistent_group_snapshot_enabled': <is> True}

 Same applies to delete_group, update_group, create_group_snapshot,
 delete_group_snapshot, and create_group_from_src. This way the new APIs
 will work with existing driver implementation of CG functions.
@@ -73,7 +73,7 @@ in Active-Active environments.
 Proposed change
 ===============

-This spec suggests modifying behavior introduced by `Tooz locks for A/A spec`_
+This spec suggests modifying behavior introduced by `Tooz locks for A/A`_
 for the case where the drivers don't need distributed locks. So we would use
 local file locks in the drivers (if they use any) and for the locks in the
 manager we would use a locking mechanism based on the ``workers`` table that
@@ -137,13 +137,16 @@ On a closer look at these 4 locks mentioned before we can classify them in 2
 categories:

 - Locks for the resource of the operation.

   - *${VOL_UUID}-detach_volume* - Used in ``detach_volume`` to prevent
     multiple simultaneous detaches

   - *${VOL_UUID}* - Used in ``attach_volume`` to prevent multiple
     simultaneous attaches

 - Locks that prevent deletion of the source of a volume creation (they are
   created by ``create_volume`` method):

   - *${VOL_UUID}-delete_volume* - Used in ``delete_volume``
   - *${SNAPSHOT_UUID}-delete_snapshot* - Used in ``delete_snapshot``
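A sketch of the two categories using oslo.concurrency, which is what these manager locks are built on; the method bodies are placeholders:

.. code-block:: python

    from oslo_concurrency import lockutils

    def attach_volume(self, context, volume):
        # Category 1: lock on the resource of the operation itself.
        with lockutils.lock(volume.id):
            pass  # perform the attach

    def delete_volume(self, context, volume):
        # Category 2: lock that keeps a creation source from vanishing
        # while a create_volume from it is in flight.
        with lockutils.lock('%s-delete_volume' % volume.id):
            pass  # perform the delete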
@@ -60,10 +60,17 @@ REST API impact

 * Add volume id list into response body of querying a CG detail if specifying
   the argument 'list_volume=True', this will be dependent on the Generic Group
-  spec now, so just leave an example here::
+  spec now, so just leave an example here:

+  .. code-block:: console
+
      GET /v3/{project_id}/consistencygroups/{consistency_group_id}?list_volume=True

-  RESP BODY: {"consistencygroup": {"status": "XXX",
+  RESP BODY:
+
+  .. code-block:: python
+
      {"consistencygroup": {"status": "XXX",
                            "description": "XXX",
                            ...,
                            "volume_list": ['volume_id1',

@@ -72,7 +79,9 @@ RESP BODY:
                           }
      }

-* Add a filter "group_id=xxx" in URL of querying volume list/detail::
+* Add a filter "group_id=xxx" in URL of querying volume list/detail:

+  .. code-block:: console
+
      GET /v3/{project_id}/volumes?group_id=XXX
@@ -188,8 +188,8 @@ In ``os-brick/initiator/linuxfc.py``:

 * new class LinuxFibreChannelECKD

-  ::
+  .. code-block:: python

      def configure_eckd_device(self, device_number):
          """Add the eckd volume to the Linux configuration."""
@@ -91,12 +91,13 @@ Modify REST API to restore backup:

 * Add volume_type and availability_zone in request body
 * JSON request schema definition:

   .. code-block:: python

      'backup-restore': {
          'volume_id': volume_id,
          'volume_type': volume_type,
          'availability_zone': availability_zone,
-         'name': name}
+         'name': name
+     }

 Security impact
 ---------------
@@ -182,3 +182,4 @@ References
 ==========

 None

@@ -169,3 +169,4 @@ _`[1]`: https://review.openstack.org/#/c/45026/
 _`[2]`: https://review.openstack.org/#/c/266688/
 _`[3]`: https://en.wikipedia.org/wiki/ReDoS
 _`[4]`: https://review.openstack.org/#/c/441516/
@@ -24,10 +24,12 @@ Use Cases
 =========

 * Allows developers testing a new configuration option to change what the
   option is set to and test without having to restart all the Cinder services.

 * Allows developers to test new functionality implemented in a driver by
   enabling and disabling configuration options in cinder.conf without
   restarting Cinder services each time the option value is changed from
   'disabled' to 'enabled'.

 * Allows admins to manage ip addresses for various backends in cinder.conf
   and have the connections dynamically update.
@@ -50,25 +52,27 @@ this is a sizable issue that needs to be investigated.
 Alternatives
 ------------

 * Manual Approach: Continue to manually restart Cinder services when changes
-  are made to settings in the cinder.conf file. This is the current approach and
-  is what we are trying to get around.
+  are made to settings in the cinder.conf file. This is the current approach
+  and is what we are trying to get around.

 * Reload & Restart Approach: First each process would need to finish the
   actions that were ongoing. Next the db and RPC connections would need to
   be dropped and then all caches and all driver caches would need to be
   flushed before reloading. This approach adds a lot of complexity and lots of
-  possibilities for failure since each cache has to be flushed- things could get
-  missed or not flushed properly.
+  possibilities for failure since each cache has to be flushed - things could
+  get missed or not flushed properly.

 * File Watcher: Code will be added to the cinder processes to watch for
-  changes to the cinder.conf file. Once the processes see that changes have been
-  made, the process will drain and take the necessary actions to reconfigure
-  itself and then auto-restart. This capability could also be controlled by a
-  configuration option so that if the user didn't want dynamic reconfiguration
-  enabled, they could just disable it in the cinder.conf file. This approach is
-  dangerous because it wouldn't account for configuration options that are saved
-  into variables. To fix these cases, there would be a sizeable impact for
-  developers finding and replacing all instances of configuration variables and
-  in doing so a number of assumptions of deployment, configuration, tools, and
-  packaging would be broken.
+  changes to the cinder.conf file. Once the processes see that changes have
+  been made, the process will drain and take the necessary actions to
+  reconfigure itself and then auto-restart. This capability could also be
+  controlled by a configuration option so that if the user didn't want dynamic
+  reconfiguration enabled, they could just disable it in the cinder.conf file.
+  This approach is dangerous because it wouldn't account for configuration
+  options that are saved into variables. To fix these cases, there would be a
+  sizeable impact for developers finding and replacing all instances of
+  configuration variables and in doing so a number of assumptions of
+  deployment, configuration, tools, and packaging would be broken.

 Data model impact
 -----------------
@@ -118,10 +122,11 @@ Secondary assignee:

 Work Items
 ----------

 * Implement handling of SIGHUP signal

-  **Ensure caches are cleared
-  **Dependent vars are updated
-  **Connections are cleanly dropped
+  - Ensure caches are cleared
+  - Dependent vars are updated
+  - Connections are cleanly dropped

 * Write Unit tests
 * Testing on a variety of drivers
 * Update devref to describe SIGHUP/reload process
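A minimal sketch of the SIGHUP wiring the first work item implies; the ``Service`` class is a stand-in for the real Cinder service object:

.. code-block:: python

    import signal

    class Service(object):
        def reset(self):
            """Clear caches, refresh dependent vars, drop connections."""

    service = Service()

    def _sighup_handler(signum, frame):
        # After cleanup, configuration can safely be re-read from
        # cinder.conf without restarting the process.
        service.reset()

    signal.signal(signal.SIGHUP, _sighup_handler)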
@@ -122,9 +122,12 @@ The proposed changes are:
 VOLUME_BACKUP_001_0002 : ({PROJECT}_{RESOURCE}_{ACTION_ID}_{MESSAGE_ID})

 We could have these benefits:

 1. We don't need to define that many events (we only need to
    define fewer messages).

 2. It's also unique across all of OpenStack.

 3. It's reader friendly and easy to classify or analyze.

 Alternatives
@ -198,40 +198,44 @@ URL for the group action API from the Generic Volume Group design::

* Enable replication

** Method: POST
  - Method: POST

** JSON schema definition for V3::
  - JSON schema definition for V3:

    .. code-block:: python

       {
           'enable_replication':
           {
           }
           'enable_replication': {}
       }

** Normal response codes: 202
  - Normal response codes: 202

** Error response codes: 400, 403, 404
  - Error response codes: 400, 403, 404

* Disable replication

** Method: POST
  - Method: POST

  - JSON schema definition for V3:

    .. code-block:: python

** JSON schema definition for V3::
       {
           'disable_replication':
           {
           }
           'disable_replication': {}
       }

** Normal response codes: 202
  - Normal response codes: 202

** Error response codes: 400, 403, 404
  - Error response codes: 400, 403, 404

* Failover replication

** Method: POST
  - Method: POST

  - JSON schema definition for V3:

    .. code-block:: python

** JSON schema definition for V3::
       {
           'failover_replication':
           {
@ -240,22 +244,26 @@ URL for the group action API from the Generic Volume Group design::
           }
       }

** Normal response codes: 202
  - Normal response codes: 202

** Error response codes: 400, 403, 404
  - Error response codes: 400, 403, 404

* List replication targets

** Method: POST
  - Method: POST

  - JSON schema definition for V3:

    .. code-block:: python

** JSON schema definition for V3::
       {
           'list_replication_targets':
           {
           }
           'list_replication_targets': {}
       }

** Response example::
  - Response example:

    .. code-block:: python

       {
           'replication_targets': [
               {
@ -271,7 +279,10 @@ URL for the group action API from the Generic Volume Group design::
           ]
       }

** Response example for non-admin::
  - Response example for non-admin:

    .. code-block:: python

       {
           'replication_targets': [
               {
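To make the request shape concrete, here is a minimal client-side sketch of
calling one of these group actions; the endpoint, IDs, and token are
placeholders, and the URL follows the group action design referenced above.

.. code-block:: python

   import json
   import urllib.request

   url = ('http://cinder.example.com/v3/PROJECT_ID'
          '/groups/GROUP_ID/action')
   body = json.dumps({'enable_replication': {}}).encode('utf-8')
   req = urllib.request.Request(
       url, data=body, method='POST',
       headers={'Content-Type': 'application/json',
                'X-Auth-Token': 'AUTH_TOKEN'})
   with urllib.request.urlopen(req) as resp:
       # Normal response code for these actions is 202.
       assert resp.getcode() == 202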
@ -28,8 +28,7 @@ Current Workflow
b. Stop cinder volume service
c. Update cinder.conf to have backend A replaced with B and B with C
d. *Hack db to set backend to no longer be in 'failed-over' state*
* This is the step this spec is concerned with
* Example:
   This is the step this spec is concerned with. Example:
   ::

      update services set disabled=0,
@ -37,6 +36,7 @@ Current Workflow
             replication_status='enabled',
             active_backend_id=NULL
             where id=3;

e. Start cinder volume service
f. Unfreeze backend
@ -65,9 +65,15 @@ REST API impact
Add backend_state: up/down into the response body of the service list; this
feature also needs a microversion:

* GET /v3/{project_id}/os-services
.. code-block:: console

   RESP BODY: {"services": [{"host": "host@backend1",
GET /v3/{project_id}/os-services

RESP BODY:

.. code-block:: python

   {"services": [{"host": "host@backend1",
                  ...,
                  "backend_status": "up",
                 },
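A short sketch of how a client might consume the new field from the service
list response (illustrative payload; note the example above uses the key
backend_status):

.. code-block:: python

   import json

   # Stand-in for the body returned by
   # GET /v3/{project_id}/os-services.
   resp_body = json.loads(
       '{"services": [{"host": "host@backend1",'
       ' "backend_status": "up"}]}')

   down_backends = [svc['host'] for svc in resp_body['services']
                    if svc.get('backend_status') != 'up']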
@ -301,10 +301,12 @@ remove the existing validation of parameters which is there inside of
controller methods, which will again break the v2 APIs.

Solution:
1] Do the schema validation for v3 apis using the @validation.schema decorator

1. Do the schema validation for v3 APIs using the @validation.schema decorator
   similar to Nova, and also keep the validation code that is inside the
   method, so that v2 keeps working.
2] Once the decision is made to remove the support to v2 we should remove the

2. Once the decision is made to remove support for v2, we should remove the
   validation code from inside the method.
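A rough sketch of the decorator-based pattern borrowed from Nova (the schema,
controller, and import path are illustrative assumptions, not Cinder's final
code):

.. code-block:: python

   # Assumed to exist, modelled on nova.api.validation.
   from cinder.api import validation

   create = {
       'type': 'object',
       'properties': {
           'volume': {
               'type': 'object',
               'properties': {
                   'size': {'type': 'integer', 'minimum': 1},
               },
               'required': ['size'],
           },
       },
       'required': ['volume'],
   }

   class VolumeController(object):
       @validation.schema(create)
       def create(self, req, body):
           # By the time we get here, 'body' has been validated
           # against the JSON schema for v3 requests; the legacy
           # in-method checks keep v2 working.
           pass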
@ -44,7 +44,7 @@ Some notes about using this template:
as changing parameters which can be returned or accepted, or even
the semantics of what happens when a client calls into the API, then
you should add the APIImpact flag to the commit message. Specifications with
the APIImpact flag can be found with the following query::
the APIImpact flag can be found with the following query:

  https://review.openstack.org/#/q/status:open+project:openstack/cinder-specs+message:apiimpact,n,z
@ -1,129 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import glob
import os
import re

import docutils.core
import testtools


class TestTitles(testtools.TestCase):
    def _get_title(self, section_tree):
        section = {
            'subtitles': [],
        }
        for node in section_tree:
            if node.tagname == 'title':
                section['name'] = node.rawsource
            elif node.tagname == 'section':
                subsection = self._get_title(node)
                section['subtitles'].append(subsection['name'])
        return section

    def _get_titles(self, spec):
        titles = {}
        for node in spec:
            if node.tagname == 'section':
                section = self._get_title(node)
                titles[section['name']] = section['subtitles']
        return titles

    def _check_titles(self, spec, titles):
        self.assertEqual(8, len(titles),
                         "Titles count in '%s' doesn't match expected" % spec)
        problem = 'Problem description'
        self.assertIn(problem, titles)

        self.assertIn('Use Cases', titles)

        proposed = 'Proposed change'
        self.assertIn(proposed, titles)
        self.assertIn('Alternatives', titles[proposed])
        self.assertIn('Data model impact', titles[proposed])
        self.assertIn('REST API impact', titles[proposed])
        self.assertIn('Security impact', titles[proposed])
        self.assertIn('Notifications impact', titles[proposed])
        self.assertIn('Other end user impact', titles[proposed])
        self.assertIn('Performance Impact', titles[proposed])
        self.assertIn('Other deployer impact', titles[proposed])
        self.assertIn('Developer impact', titles[proposed])

        impl = 'Implementation'
        self.assertIn(impl, titles)
        self.assertIn('Assignee(s)', titles[impl])
        self.assertIn('Work Items', titles[impl])

        deps = 'Dependencies'
        self.assertIn(deps, titles)

        testing = 'Testing'
        self.assertIn(testing, titles)

        docs = 'Documentation Impact'
        self.assertIn(docs, titles)

        refs = 'References'
        self.assertIn(refs, titles)

    def _check_lines_wrapping(self, tpl, raw):
        for i, line in enumerate(raw.split("\n")):
            if "http://" in line or "https://" in line:
                continue
            self.assertTrue(
                len(line) < 80,
                msg="%s:%d: Line limited to a maximum of 79 characters." %
                (tpl, i + 1))

    def _check_no_cr(self, tpl, raw):
        matches = re.findall('\r', raw)
        self.assertEqual(
            len(matches), 0,
            "Found %s literal carriage returns in file %s" %
            (len(matches), tpl))

    def _check_trailing_spaces(self, tpl, raw):
        for i, line in enumerate(raw.split("\n")):
            trailing_spaces = re.findall(" +$", line)
            msg = "Found trailing spaces on line %s of %s" % (i + 1, tpl)
            self.assertEqual(len(trailing_spaces), 0, msg)

    def test_template(self):
        # NOTE (e0ne): adding 'template.rst' to ignore dirs to exclude it
        # from os.listdir output
        ignored_dirs = {'template.rst', 'api'}

        files = ['specs/template.rst']

        # NOTE (e0ne): We don't check specs in the 'api' directory because
        # they don't match template.rst. Uncomment the code below if you
        # want to test them.
        # files.extend(glob.glob('specs/api/*/*'))

        releases = set(os.listdir('specs')) - ignored_dirs
        for release in releases:
            specs = glob.glob('specs/%s/*' % release)
            files.extend(specs)

        for filename in files:
            self.assertTrue(filename.endswith(".rst"),
                            "spec's file must use 'rst' extension.")
            with open(filename) as f:
                data = f.read()

            spec = docutils.core.publish_doctree(data)
            titles = self._get_titles(spec)
            self._check_titles(filename, titles)
            self._check_lines_wrapping(filename, data)
            self._check_no_cr(filename, data)
            self._check_trailing_spaces(filename, data)
tox.ini
@ -1,6 +1,6 @@
[tox]
minversion = 1.6
envlist = docs,py27,py35,pep8
envlist = docs,pep8
skipsdist = True

[testenv]
@ -9,20 +9,22 @@ install_command = pip install -U {opts} {packages}
setenv =
   VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'

[testenv:docs]
commands = python setup.py build_sphinx
whitelist_externals = rm
commands =
  rm -fr doc/build
  python setup.py build_sphinx
  doc8 --ignore D001 doc/source

[testenv:pep8]
commands = flake8
commands =
  flake8
  doc8 --ignore D001 specs/

[testenv:venv]
commands = {posargs}

[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'

[flake8]
# E123, E125 skipped as they are invalid PEP-8.