Remove unit testing
Spec repos have no code to unit test. The gate job definitions for the py27 and py35 jobs skip when a change contains only documentation, which is all we will ever have in the specs repo, so the one "test" we had will never be run. That unit test was only a check for formatting issues in the RST files, added before doc8 was available. Now that we can use doc8, switch to running it as part of the pep8 job instead.

Also fixes all the errors caught by doc8 that were not caught by our unit test check.

Change-Id: Ida20764edde3a07c89703d82b41958c96548b239
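The tox.ini change that wires doc8 into the pep8 job is not among the hunks shown below, so the following is only a rough sketch of what such an environment typically looks like; the environment name, deps file, and target paths are assumptions for illustration, not taken from this change:

    [testenv:pep8]
    deps = -r{toxinidir}/test-requirements.txt
    commands =
        flake8
        doc8 specs/ doc/source README.rst

With something like this in place, running tox -e pep8 executes flake8 and doc8 together, which is the behavior the commit message describes.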
commit 5016627f04 (parent 2f5701b912)
@@ -5,3 +5,4 @@ testrepository>=0.0.18
 testtools>=0.9.34
 flake8
 yasfb>=0.5.1
+doc8>=0.6.0 # Apache-2.0
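doc8, pulled in above via test-requirements.txt, can also read its settings from a [doc8] section in tox.ini or setup.cfg. A hypothetical configuration of that sort, with option names and values given only as an example of how the checks could be tuned rather than as part of this change:

    [doc8]
    ignore-path = doc/build
    max-line-length = 79

Ignoring the Sphinx build output and setting the line-length limit are the usual knobs for a specs repo; without such a section doc8 falls back to its defaults.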
@@ -17,7 +17,7 @@ May 19th, 2011 at 8:07:08 AM, GMT-5 has the following format:
 
 2011-05-19T08:07:08-05:00
 
 The following table describes the date time format codes:
 
@@ -9,29 +9,46 @@ body returns additional information about the fault.
 The following table lists possible fault types with their associated
 error codes and descriptions.
 
-Fault type Associated error code Description
-
-``badRequest`` 400 The user request contains one or more errors.
-
-``unauthorized`` 401 The supplied token is not authorized to access the resources, either it's expired or invalid.
-
-``forbidden`` 403 Access to the requested resource was denied.
-
-``itemNotFound`` 404 The back-end services did not find anything matching the Request-URI.
-
-``badMethod`` 405 The request method is not allowed for this resource.
-
-``overLimit`` 413 Either the number of entities in the request is larger than allowed limits, or the user has exceeded allowable request rate limits. See the ``details`` element for more specifics. Contact your cloud provider if you think you need higher request rate limits.
-
-``badMediaType`` 415 The requested content type is not supported by this service.
-
-``unprocessableEntity`` 422 The requested resource could not be processed on at the moment.
-
-``instanceFault`` 500 This is a generic server error and the message contains the reason for the error. This error could wrap several error messages and is a catch all.
-
-``notImplemented`` 501 The requested method or resource is not implemented.
-
-``serviceUnavailable`` 503 Block Storage is not available.
+======================= ===================== ===============================
+Fault type              Associated error code Description
+======================= ===================== ===============================
+``badRequest``          400                   The user request contains one
+                                              or more errors.
+``unauthorized``        401                   The supplied token is not
+                                              authorized to access the
+                                              resources, either it's expired
+                                              or invalid.
+``forbidden``           403                   Access to the requested
+                                              resource was denied.
+``itemNotFound``        404                   The back-end services did not
+                                              find anything matching the
+                                              Request-URI.
+``badMethod``           405                   The request method is not
+                                              allowed for this resource.
+``overLimit``           413                   Either the number of entities
+                                              in the request is larger than
+                                              allowed limits, or the user has
+                                              exceeded allowable request rate
+                                              limits. See the ``details``
+                                              element for more specifics.
+                                              Contact your cloud provider if
+                                              you think you need higher
+                                              request rate limits.
+``badMediaType``        415                   The requested content type is
+                                              not supported by this service.
+``unprocessableEntity`` 422                   The requested resource could
+                                              not be processed on at the
+                                              moment.
+``instanceFault``       500                   This is a generic server error
+                                              and the message contains the
+                                              reason for the error. This
+                                              error could wrap several error
+                                              messages and is a catch all.
+``notImplemented``      501                   The requested method or
+                                              resource is not implemented.
+``serviceUnavailable``  503                   Block Storage is not available.
+======================= ===================== ===============================
 
 The following two ``instanceFault`` examples show errors when the server
 has erred or cannot perform the requested operation:
@@ -54,7 +71,7 @@ has erred or cannot perform the requested operation:
 performing the requested operation. </message>
 </instanceFault>
 
 
 **Example 2.5. Example fault response: JSON**
 
@@ -74,7 +91,7 @@ has erred or cannot perform the requested operation:
 }
 }
 
 
 The error code (``code``) is returned in the body of the response for
 convenience. The ``message`` element returns a human-readable message
@@ -83,7 +100,7 @@ is optional and may contain information that is useful for tracking down
 an error, such as a stack trace. The ``details`` element may or may not
 be appropriate for display to an end user, depending on the role and
 experience of the end user.
 
 The fault's root element (for example, ``instanceFault``) may change
 depending on the type of error.
 
@@ -108,7 +125,7 @@ size is invalid:
 cannot be accepted. </message>
 </badRequest>
 
 
 **Example 2.7. Example badRequest fault on volume size errors: JSON**
 
@@ -128,7 +145,7 @@ size is invalid:
 }
 }
 
 
 The next two examples show ``itemNotFound`` errors:
 
@@ -149,7 +166,7 @@ The next two examples show ``itemNotFound`` errors:
 <message> The resource could not be found. </message>
 </itemNotFound>
 
 
 **Example 2.9. Example itemNotFound fault: JSON**
 
@@ -169,5 +186,5 @@ The next two examples show ``itemNotFound`` errors:
 }
 }
 
 
@@ -72,7 +72,7 @@ Default
 
 100/minute
 
 
 Rate limits are applied in order relative to the verb, going from least
 to most specific. For example, although the threshold for **POST** to
@@ -102,5 +102,5 @@ Maximum amount of block storage
 
 1 TB
 
 
@@ -39,7 +39,7 @@ In the request example below, notice that *``Content-Type``* is set to
 X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
 Accept: application/xml
 
 
 An XML response format is returned:
 
@@ -63,5 +63,5 @@ An XML response format is returned:
 </volume_type>
 </volume_types>
 
 
@@ -4,9 +4,9 @@
 
 http://creativecommons.org/licenses/by/3.0/legalcode
 
-==========================================
+=====================================================================
 Extending IBMNAS driver to support NAS based GPFS storage deployments
-==========================================
+=====================================================================
 
 https://blueprints.launchpad.net/cinder/+spec/add-gpfs-nas-to-ibmnas
 
@@ -137,9 +137,9 @@ Create volume with replication enabled:
 When a replicated volume is created it is expected that the volume dictionary
 will be populated as follows:
 
-** volume['replication_status'] = 'copying'
-** volume['replication_extended_status'] = <driver specific value>
-** volume['driver_data'] = <driver specific value>
+- volume['replication_status'] = 'copying'
+- volume['replication_extended_status'] = <driver specific value>
+- volume['driver_data'] = <driver specific value>
 
 The replica volume is hidden from the end user as the end user will
 never need to directly interact with the replica volume. Any interaction
@@ -208,21 +208,28 @@ Re-type volume:
 
 The steps to implement this would look as follows:
 * Do a diff['extra_specs'] and see if 'replication' is included.
 
 * If replication was enabled for the original volume_type but is not
   not enabled for the new volume_type, then replication should be disabled.
 
 * The replica should be deleted.
 
 * The volume dictionary should be updated as follows:
-** volume['replication_status'] = 'disabled'
-** volume['replication_extended_status'] = None
-** volume['driver_data'] = None
+
+  - volume['replication_status'] = 'disabled'
+  - volume['replication_extended_status'] = None
+  - volume['driver_data'] = None
 
 * If replication was not enabled for the original volume_type but is
   enabled for the new volume_type, then replication should be enabled.
 
 * A volume replica should be created and the replication should
   be set up between the volume and the newly created replica.
 
 * The volume dictionary should be updated as follows:
-** volume['replication_status'] = 'copying'
-** volume['replication_extended_status'] = <driver specific value>
-** volume['driver_data'] = <driver specific value>
+
+  - volume['replication_status'] = 'copying'
+  - volume['replication_extended_status'] = <driver specific value>
+  - volume['driver_data'] = <driver specific value>
 
 Get Replication Status:
 
@@ -241,12 +248,14 @@ Get Replication Status:
 happen:
 
 * volume['replication_status'] = <error | copying | active | active-stopped |
   inactive>
-** 'error' if an error occurred with replication.
-** 'copying' replication copying data to secondary (inconsistent)
-** 'active' replication copying data to secondary (consistent)
-** 'active-stopped' replication data copy on hold (consistent)
-** 'inactive' if replication data copy is stopped (inconsistent)
+
+  - 'error' if an error occurred with replication.
+  - 'copying' replication copying data to secondary (inconsistent)
+  - 'active' replication copying data to secondary (consistent)
+  - 'active-stopped' replication data copy on hold (consistent)
+  - 'inactive' if replication data copy is stopped (inconsistent)
 
 * volume['replication_extended_status'] = <driver specific value>
 * volume['driver_data'] = <driver specific value>
 
@@ -266,9 +275,12 @@ Promote replica:
 
 As with the functions above, the volume driver is expected to update the
 volume dictionary as follows:
 
 * volume['replication_status'] = <error | inactive>
-** 'error' if an error occurred with replication.
-** 'inactive' if replication data copy on hold (inconsistent)
+
+  - 'error' if an error occurred with replication.
+  - 'inactive' if replication data copy on hold (inconsistent)
 
 * volume['replication_extended_status'] = <driver specific value>
 * volume['driver_data'] = <driver specific value>
 
@@ -283,13 +295,16 @@ Re-enable replication:
 
 The backend driver is expected to update the following volume dictionary
 entries:
 
 * volume['replication_status'] = <error | copying | active | active-stopped |
   inactive>
-** 'error' if an error occurred with replication.
-** 'copying' replication copying data to secondary (inconsistent)
-** 'active' replication copying data to secondary (consistent)
-** 'active-stopped' replication data copy on hold (consistent)
-** 'inactive' if replication data copy is stopped (inconsistent)
+
+  - 'error' if an error occurred with replication.
+  - 'copying' replication copying data to secondary (inconsistent)
+  - 'active' replication copying data to secondary (consistent)
+  - 'active-stopped' replication data copy on hold (consistent)
+  - 'inactive' if replication data copy is stopped (inconsistent)
 
 * volume['replication_extended_status'] = <driver specific value>
 * volume['driver_data'] = <driver specific value>
 
@@ -331,23 +346,32 @@ admin.
 Data model impact
 -----------------
 
-* The volumes table will be updated:
-** Add replication_status column (string) for indicating the status of
-   replication for a give volume. Possible values are:
-*** 'copying' - Data is being copied between volumes, the secondary is
-    inconsistent.
-*** 'disabled' - Volume replication is disabled.
-*** 'error' - Replication is in error state.
-*** 'active' - Data is being copied to the secondary and the secondary is
-    consistent.
-*** 'active-stopped' - Data is not being copied to the secondary (on hold),
-    the secondary volume is consistent.
-*** 'inactive' - Data is not being copied to the secondary, the secondary
-    copy is inconsistent.
-** Add replication_extended_status column to contain details with regards
-   to replication status of the primary and secondary volumes.
-** Add replication_driver_data column to contain additional details that
-   may be needed by a vendor's driver to implement replication on a backend.
+The volumes table will be updated:
+
+* Add replication_status column (string) for indicating the status of
+  replication for a give volume. Possible values are:
+
+  - 'copying' - Data is being copied between volumes, the secondary is
+    inconsistent.
+
+  - 'disabled' - Volume replication is disabled.
+
+  - 'error' - Replication is in error state.
+
+  - 'active' - Data is being copied to the secondary and the secondary is
+    consistent.
+
+  - 'active-stopped' - Data is not being copied to the secondary (on hold),
+    the secondary volume is consistent.
+
+  - 'inactive' - Data is not being copied to the secondary, the secondary
+    copy is inconsistent.
+
+- Add replication_extended_status column to contain details with regards
+  to replication status of the primary and secondary volumes.
+
+- Add replication_driver_data column to contain additional details that
+  may be needed by a vendor's driver to implement replication on a backend.
 
 
 State diagram for replication (status)
@@ -386,12 +410,14 @@ REST API impact
 
 Create volume API will have "source-replica" added:
 
-{
-"volume":
-{
-"source-replica": "Volume uuid of primary to clone",
-}
-}
+.. code-block:: python
+
+   {
+       "volume":
+       {
+           "source-replica": "Volume uuid of primary to clone",
+       }
+   }
 
 
 * Promote volume to be the primary volume
@@ -32,7 +32,9 @@ Proposed change
 ===============
 
 Build a base VolumeDriver class and subclasses that describe feature sets,
-like::
+like:
+
+.. code-block:: console
 
 +-------------------------+
 +----------------+ BaseVolumeDriver +---------------+
@@ -40,10 +42,10 @@ like::
 | +-----------^-------------+ |
 | | |
 | | |
-+--------+-------------+ +-----------+-------------+ +------------+---------+
++-------+-------------+ +-----------+-------------+ +------------+---------+
 | VolumeDriverBackup | | VolumeDriverSnapshot | | VolumeDriverImport |
 | {abstract} | | {abstract} | | {abstract} |
-+----------------------+ +-------------------------+ +----------------------+
++---------------------+ +-------------------------+ +----------------------+
 
 
 If a driver implements the backup functionality and supports volume import it
@@ -88,7 +88,7 @@ Other deployer impact
 ---------------------
 
 * No new config options are required besides an extra allowed value for
   'iscsi_helper' that would need to be explicitly set to 'chiscsi'.
 * chiscsi target needs to be installed before it can be used.
 
 Developer impact
@@ -17,13 +17,17 @@ Problem description
 ===================
 
 * Create CG from CG snapshot
 
 Currently a user can create a Consistency Group and create a snapshot of a
 Consistency Group. To restore from a Cgsnapshot, however, the following
 steps need to be performed:
 
 1) Create a new Consistency Group.
 2) Do the following for every volume in the Consistency group:
 
 a) Call "create volume from snapshot" for snapshot associated with every
    volume in the original Consistency Group.
 
 There's no single API that allows a user to create a Consistency Group from
 a Cgsnapshot.
 
@@ -45,22 +49,30 @@ Proposed change
 ===============
 
 * Create CG from CG snapshot
 
 * Add an API that allows a user to create a Consistency Group from a
   Cgsnapshot.
 
 * Add a Volume Driver API accordingly.
 
 * Modify Consistency Group
 
 * Add an API that adds existing volumes to CG and removing volumes from CG
   after it is created.
 
 * Add a Volume Driver API accordingly.
 
 * DB Schema Changes
 
 The following changes are proposed:
 
 * A new cg_volumetypes table will be created.
 * This new table will contain 3 columns:
 
 * uuid of a cg_volumetype entry
 * uuid of a consistencygroup
 * uuid of a volume type
 
 * Upgrade and downgrade functions will be provided for db migrations.
 
 Alternatives
@@ -75,9 +87,10 @@ Data model impact
 The following changes are proposed:
 * A new cg_volumetypes table will be created.
 * This new table will contain 3 columns:
-* uuid of a cg_volumetype entry
-* uuid of a consistencygroup
-* uuid of a volume type
+
+  - uuid of a cg_volumetype entry
+  - uuid of a consistencygroup
+  - uuid of a volume type
 
 REST API impact
 ---------------
@@ -133,11 +146,16 @@ New Consistency Group APIs changes
 
 
 * Cinder Volume Driver API
 
 The following new volume driver APIs will be added:
-* def create_consistencygroup_from_cgsnapshot(self, context,
-  consistencygroup, volumes, cgsnapshot, snapshots)
-* def modify_consistencygroup(self, context, consistencygroup,
-  old_volumes, new_volumes)
+
+.. code-block:: python
+
+   def create_consistencygroup_from_cgsnapshot(self, context,
+       consistencygroup, volumes, cgsnapshot, snapshots)
+
+   def modify_consistencygroup(self, context, consistencygroup,
+       old_volumes, new_volumes)
 
 Security impact
 ---------------
@@ -153,14 +171,20 @@ Other end user impact
 python-cinderclient needs to be changed to support the new APIs.
 
 * Create CG from CG snapshot
-cinder consisgroup-create --name <name> --description <description>
---cgsnapshot <cgsnapshot uuid or name>
+
+  .. code-block:: bash
+
+     cinder consisgroup-create --name <name> --description <description>
+     --cgsnapshot <cgsnapshot uuid or name>
 
 * Modify CG
-cinder consisgroup-modify <cg uuid or name> --name <new name>
---description <new description> --addvolumes
-<volume uuid> [<volume uuid> ...] --removevolumes
-<volume uuid> [<volume uuid> ...]
+
+  .. code-block:: bash
+
+     cinder consisgroup-modify <cg uuid or name> --name <new name>
+     --description <new description> --addvolumes
+     <volume uuid> [<volume uuid> ...] --removevolumes
+     <volume uuid> [<volume uuid> ...]
 
 Performance Impact
 ------------------
@@ -192,11 +216,15 @@ Work Items
 ----------
 
 1. API changes:
 
 * Create CG from CG snapshot API
 * Modify CG API
 
 2. Volume Driver API changes:
 
 * Create CG from CG snapshot
 * Modify CG
 
 3. DB schema changes
 
 Dependencies
@@ -83,6 +83,8 @@ parameters 'target_alternative_portals' and 'target_alternative_iqns', which
 contain a list of portal IP address:port pairs, a list of alternative iqn's and
 lun's corresponding to each portal address. For example:
 
+.. code-block:: python
+
 {"connection_info": {"driver_volume_type": "iscsi", ...
 "data": {"target_portal": "10.0.1.2:3260",
 "target_alternative_portals": [
@@ -88,6 +88,8 @@ a list of secondary iqn's and lun's corresponding to each portal address.
 
 For example:
 
+.. code-block:: python
+
 {"connection_info": {"driver_volume_type": "iscsi", ...
 "data": {"target_portals": ["10.0.1.2:3260",
 "10.0.2.2:3260"],
@@ -139,7 +139,8 @@ None
 Testing
 =======
 
-* With volume_copy_bps_limit = 100MB/s for a fake backend driver,
+* With volume_copy_bps_limit = 100MB/s for a fake backend driver:
+
   * start a volume copy, then the bps limit is set to 100MB/s
   * start a second volume copy, then the limit is updated to 50MB/s for both
   * finish one of the copies, then the limit is resumed to 100MB/s
@@ -157,8 +157,7 @@ support for the reporting feature.
 
 References
 ==========
-* oslo-incubator module: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/commo
-n/report
-* blog about nova guru reports: https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-i
-ncubator-guru-meditation-reports/
-* oslo.reports repo: https://github.com/directxman12/oslo.reports
+* oslo-incubator module: http://git.openstack.org/cgit/openstack/oslo-incubator/tree/openstack/common/report
+* blog about nova guru reports: https://www.berrange.com/posts/2015/02/19/nova-and-its-use-of-olso-incubator-guru-meditation-reports/
+* oslo.reports repo: https://github.com/directxman12/oslo.reports
@@ -178,5 +178,8 @@ References
 ==========
 
 * http://eavesdrop.openstack.org/meetings/cinder/2015
   /cinder.2015-05-27-16.00.log.html
-* https://blueprints.launchpad.net/cinder/+spec/image-volume-cache
+
+* https://blueprints.launchpad.net/cinder/+spec/image-volume-cache
 
@@ -124,20 +124,31 @@ Xing and Eric respectively.
 +----------------+----------------+----------+--------------+
 
 Consider following use-cases :-
 
 a. Suppose, Mike(admin of root project or cloud admin) increases the
    ``hard_limit`` of volumes in CMS to 400
 
 b. Suppose, Mike increases the ``hard_limit`` of volumes in CMS to 500
 
 c. Suppose, Mike delete the quota of CMS
 
 d. Suppose, Mike reduces the ``hard_limit`` of volumes in CMS to 350
 
 e. Suppose, Mike reduces the ``hard_limit`` of volumes in CMS to 200
 
 f. Suppose, Jay(Manager of CMS)increases the ``hard_limit`` of
    volumes in CMS to 400
 
 g. Suppose, Jay tries to view the quota of ATLAS
 
 h. Suppose, Duncan tries to reduce the ``hard_limit`` of volumes in CMS to
    400.
 
 i. Suppose, Mike tries to increase the ``hard_limit`` of volumes in
    ProductionIT to 2000.
 
 j. Suppose, Mike deletes the quota of Visualisation.
 
 k. Suppose, Mike deletes the project Visualisation.
 
 9. Suppose the company doesn't want a nested structure and want to
@@ -528,11 +539,9 @@ added since Kilo release.
 References
 ==========
 
-* `Hierarchical Projects Wiki
-  <https://wiki.openstack.org/wiki/HierarchicalMultitenancy>`_
+* `Hierarchical Projects Wiki <https://wiki.openstack.org/wiki/HierarchicalMultitenancy>`_
 
-* `Hierarchical Projects
-  <http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html>`_
+* `Hierarchical Projects <http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html>`_
 
-* `Hierarchical Projects Improvements
-  <https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy-improvements>`_
+* `Hierarchical Projects Improvements <https://blueprints.launchpad.net/keystone/+spec/hierarchical-multitenancy-improvements>`_
@@ -43,6 +43,7 @@ Proposed change
 
 * The existing Create CG from Source API takes an existing CG snapshot
   as the source.
 
 * This blueprint proposes to modify the existing API to accept an existing
   CG as the source.
 
@@ -51,9 +52,12 @@ Alternatives
 
 Without the proposed changes, we can create a CG from an existing CG
 with these steps:
 
 * Create an empty CG.
 
 * Create a cloned volume from an existing volume in an existing CG
   and add to the new CG.
 
 * Repeat the above step for all volumes in the CG.
 
 Data model impact
@@ -68,9 +72,12 @@ REST API impact
 Consistency Group API change
 
 * Create Consistency Group from Source
 * V2/<tenant id>/consistencygroups/create_from_src
 
 * Method: POST
-* JSON schema definition for V2::
+* JSON schema definition for V2:
+
+.. code-block:: python
 
 {
 "consistencygroup-from-src":
@@ -87,9 +94,12 @@ Consistency Group API change
 of the new CG.
 
 * Cinder Volume Driver API
 
 Two new optional parameters will be added to the existing volume driver API:
 
 * def create_consistencygroup_from_src(self, context, group, volumes,
   cgsnapshot=None, snapshots=None, src_group=None, src_volumes=None)
 
 * Note only "src_group" and "src_volumes" are new parameters.
 
 Security impact
@@ -160,4 +160,6 @@ References
 * Nova's spec for db archiving: https://review.openstack.org/#/c/18493/
 
 * Discussion in openstack-dev mailing list:
-http://lists.openstack.org/pipermail/openstack-dev/2014-March/029952.html
+
+  http://lists.openstack.org/pipermail/openstack-dev/2014-March/029952.html
 
@@ -112,24 +112,29 @@ The following existing v2 GET APIs will support the new sorting parameters:
 {items} will be replaced by the appropriate entities as follows:
 
 * For snapshots:
-* /v2/{tenant_id}/snapshots
-* /v2/{tenant_id}/snapshots/detail
+
+  - /v2/{tenant_id}/snapshots
+  - /v2/{tenant_id}/snapshots/detail
 
 * For volume transfers:
-* /v2/{tenant_id}/os-volume-transfer
-* /v2/{tenant_id}/os-volume-transfer/detail
+
+  - /v2/{tenant_id}/os-volume-transfer
+  - /v2/{tenant_id}/os-volume-transfer/detail
 
 * For consistency group:
-* /v2/{tenant_id}/consistencygroups
-* /v2/{tenant_id}/consistencygroups/detail
+
+  - /v2/{tenant_id}/consistencygroups
+  - /v2/{tenant_id}/consistencygroups/detail
 
 * For consistency group snapshots:
-* /v2/{tenant_id}/cgsnapshots
-* /v2/{tenant_id}/cgsnapshots/detail
+
+  - /v2/{tenant_id}/cgsnapshots
+  - /v2/{tenant_id}/cgsnapshots/detail
 
 * For backups:
-* /v2/{tenant_id}/backups
-* /v2/{tenant_id}/backups/detail
+
+  - /v2/{tenant_id}/backups
+  - /v2/{tenant_id}/backups/detail
 
 The existing API needs to support the following new Request Parameters for
 the above cinder concepts:
@@ -165,34 +170,49 @@ The next link will be put in the response returned from cinder if it is
 necessary.
 
 * For snapshots, it replies:
-{
-"snapshots": [<List of snapshots>],
-"snapshots_links": [{'href': '<next_link>', 'rel': 'next'}]
-}
+
+  .. code-block:: python
+
+     {
+         "snapshots": [<List of snapshots>],
+         "snapshots_links": [{'href': '<next_link>', 'rel': 'next'}]
+     }
 
 * For volume transfers, it replies:
-{
-"transfers": [<List of transfers>],
-"transfers_links": [{'href': '<next_link>', 'rel': 'next'}]
-}
+
+  .. code-block:: python
+
+     {
+         "transfers": [<List of transfers>],
+         "transfers_links": [{'href': '<next_link>', 'rel': 'next'}]
+     }
 
 * For consistency group, it replies:
-{
-"consistencygroups": [<List of consistencygroups>],
-"consistencygroups_links": [{'href': '<next_link>', 'rel': 'next'}]
-}
+
+  .. code-block:: python
+
+     {
+         "consistencygroups": [<List of consistencygroups>],
+         "consistencygroups_links": [{'href': '<next_link>', 'rel': 'next'}]
+     }
 
 * For consistency group snapshots, it replies:
-{
-"cgsnapshots": [<List of cgsnapshots>],
-"cgsnapshots_links": [{'href': '<next_link>', 'rel': 'next'}]
-}
+
+  .. code-block:: python
+
+     {
+         "cgsnapshots": [<List of cgsnapshots>],
+         "cgsnapshots_links": [{'href': '<next_link>', 'rel': 'next'}]
+     }
 
-* For backups, it replies::
-{
-"backups": [<List of backups>],
-"backups_links": [{'href': '<next_link>', 'rel': 'next'}]
-}
+* For backups, it replies:
+
+  .. code-block:: python
+
+     {
+         "backups": [<List of backups>],
+         "backups_links": [{'href': '<next_link>', 'rel': 'next'}]
+     }
 
 
 Security impact
@@ -206,6 +206,7 @@ The qos capability describes some corner cases for us:
 
 The vendor:persona key is another good example of a ``vendor unique``
 capability:
 
 This is very much like QoS, and again, note that we're just providing what
 the valid values are.
 
@@ -50,7 +50,7 @@ level steps to take:
 this volume will henceforth be known as the 'image-volume'. This volume
 would be owned by a special tenant that is controlled by Cinder.
 * Update the cache to know about this new image-volume and its image
   contents.
 * Clone the image-volume.
 
 This new behavior would be enabled via a new volume driver config option
@@ -22,8 +22,8 @@ connected/exported to the compute host and nova instance. If the volume state
 is reset to 'available' and 'detached', it can be attached to a second
 instance (note that this is not multi-attach) and lead to data corruption.
 This spec proposes to plumb cinder APIs that allow an admin to set the state to
 'available' and 'detached' in a safe way, meaning that the backend storage and
 cinder db are synchronized.
 An attempt was made to merge a Nova spec to add a '--force' option, but it has
 stalled and will likely be replaced with some Nova code changes. See:
 https://review.openstack.org/#/c/84048/44
@@ -78,9 +78,9 @@ Current Fix: Using python-cinderclient 'reset-state' to set the Cinder DB to
 Proposed Fix: An attempt to detach from Nova will fail, since Nova does not
 know about this volume.
 Implementing Cinder force_detach and exposing to the cinderclient will allow
 cinder to call the backend to cleanup (default implementation is
 terminate_connection and detach, but can be overridden) and then set Cinder DB
 state to 'available' and 'detached'.
 
 UseCase2: Cinder DB 'attaching', storage back end 'attached', Nova DB
 shows block device for this volume.
@@ -90,11 +90,11 @@ Current Fix: Using python-cinderclient 'reset-state' to set the Cinder DB to
 attached. Nova will not allow this volume to be re-attached.
 Proposed Fix: An attempt to detach from Nova will cleanup on the Nova side
 (after proposed Nova changes) but fail in Cinder since the state is
 'attaching'.
 Implementing Cinder force_detach and exposing to the cinderclient will allow
 cinder to call the backend to cleanup (default implementation is
 terminate_connection and detach, but can be overridden) and then set Cinder DB
 state to 'available' and 'detached'.
 
 UseCase3: Cinder DB 'detaching', storage back end 'available', Nova DB
 does not show block device for this volume.
@@ -106,9 +106,9 @@ Current Fix: Using python-cinderclient 'reset-state' to set the Cinder DB to
 Proposed Fix: An attempt to detach from Nova will fail, since Nova does not
 know about this volume.
 Implementing Cinder force_detach and exposing to the cinderclient will allow
 cinder to call the backend to cleanup (default implementation is
 terminate_connection and detach, but can be overridden) and then set Cinder DB
 state to 'available' and 'detached'.
 
 UseCase4: Cinder DB 'detaching', storage back end 'attached', Nova DB
 has a block device for this volume.
@@ -120,11 +120,11 @@ Current Fix: Using python-cinderclient 'reset-state' to set the Cinder DB to
 attached. Nova will not allow this volume to be re-attached.
 Proposed Fix: An attempt to detach from Nova will cleanup on the Nova side
 (after proposed Nova changes) but fail in Cinder since the state is
 'attaching'.
 Implementing Cinder force_detach and exposing to the cinderclient will allow
 cinder to call the backend to cleanup (default implementation is
 terminate_connection and detach, but can be overridden) and then set Cinder DB
 state to 'available' and 'detached'.
 
 UseCase5: During an attach, initialize_connection() times out. Cinder DB is
 'available', volume is attached, Nova DB does not show the block device.
@@ -150,12 +150,19 @@ accomplished via manual intervention (i.e. 'cinder force-detach....'
 (Links to proposed Nova changes will be provided ASAP)
 
 Cinder force-detach API currently calls:
 
+.. code-block:: python
+
 volume_api.terminate_connection(...)
 self.volume_api.detach(...)
 
 This will be modified to call into the VolumeManager with a new
 force_detach(...)
 
 api/contrib/volume_actions.py: force_detach(...)
 
+.. code-block:: python
+
 try:
 volume_rpcapi.force_detach(...)
 except: #catch and add debug message
@@ -164,6 +171,9 @@ api/contrib/volume_actions.py: force_detach(...)
 self._reset_status(..) #fix DB if backend cleanup is successful
 
 volume/manager.py: force_detach(...)
 
+.. code-block:: python
+
 self.driver.force_detach(..)
 
 Individual drivers will implement force_detach as needed by the driver, most
@@ -98,7 +98,9 @@ Alternatives
 ------------
 
 There are a couple of alternatives:
 
 * Detach the volume and back it up.
 
 * Take a snapshot of the attached volume, create a volume from the
   snapshot and then back it up.
 
@@ -146,7 +148,9 @@ By default it is False. The force flag is not needed for
 * Create backup
 * V2/<tenant id>/backups
 * Method: POST
-* JSON schema definition for V2::
+* JSON schema definition for V2:
+
+.. code-block:: python
 
 {
 "backup":
@@ -165,18 +169,24 @@ The following driver APIs will be added to support attach snapshot and
 detach snapshot.
 
 attach snapshot:
-* def _attach_snapshot(self, context, snapshot, properties,
-  remote=False)
-* def create_export_snapshot(self, conext, snapshot)
-* def initialize_connection_snapshot(self, snapshot, properties,
-  ** kwargs)
+
+.. code-block:: python
+
+   def _attach_snapshot(self, context, snapshot, properties,
+       remote=False)
+   def create_export_snapshot(self, conext, snapshot)
+   def initialize_connection_snapshot(self, snapshot, properties,
+       ** kwargs)
 
 detach snapshot:
-* def _detach_snapshot(self, context, attach_info, snapshot,
-  properties, force=False, remote=False)
-* def terminate_connection_snapshot(self, snapshot, properties,
-  ** kwargs)
-* def remove_export_snapshot(self, context, snapshot)
+
+.. code-block:: python
+
+   def _detach_snapshot(self, context, attach_info, snapshot,
+       properties, force=False, remote=False)
+   def terminate_connection_snapshot(self, snapshot, properties,
+       ** kwargs)
+   def remove_export_snapshot(self, context, snapshot)
 
 Alternatively we can use an is_snapshot flag for volume and snapshot
 to share common code without adding new functions, but it will make

@@ -80,7 +80,7 @@ Using a volume created based on "cirros-0.3.0-x86_64-disk" to show
 the time-consuming between solution A and solution B.
 * [Common]Create a volume based on image "cirros-0.3.0-x86_64-disk".
 
-::
+.. code-block:: bash
 
 root@devaio:/home# cinder list
 +--------------------------------------+-----------+------+------+
@@ -88,11 +88,12 @@ the time-consuming between solution A and solution B.
 +--------------------------------------+-----------+------+------+
 | 3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 | available | None | 1 |
 +--------------------------------------+-----------+------+------+
 
 * [Solution-A-step-1]Copy
   volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 from "volumes" pool to
   "images" pool.
 
-::
+.. code-block:: bash
 
 root@devaio:/home# time rbd cp
 volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 images/test
|
|||||||
* [Solution-B-step-1]Create a snapshot of volume
|
* [Solution-B-step-1]Create a snapshot of volume
|
||||||
3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 and protect it.
|
3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 and protect it.
|
||||||
|
|
||||||
::
|
.. code-block:: bash
|
||||||
|
|
||||||
root@devaio:/home# time rbd snap create --pool volumes --image
|
root@devaio:/home# time rbd snap create --pool volumes --image
|
||||||
volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 --snap image_test
|
volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8 --snap image_test
|
||||||
@ -125,7 +126,7 @@ the time-consuming between solution A and solution B.
|
|||||||
* [Solution-B-step-2]Do clone operation on
|
* [Solution-B-step-2]Do clone operation on
|
||||||
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.
|
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.
|
||||||
|
|
||||||
::
|
.. code-block:: bash
|
||||||
|
|
||||||
root@devaio:/home# time rbd clone
|
root@devaio:/home# time rbd clone
|
||||||
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
|
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
|
||||||
@ -137,7 +138,7 @@ the time-consuming between solution A and solution B.
|
|||||||
|
|
||||||
* [Solution-B-step-3]Flatten the clone image images/snapshot_clone_image_test.
|
* [Solution-B-step-3]Flatten the clone image images/snapshot_clone_image_test.
|
||||||
|
|
||||||
::
|
.. code-block:: bash
|
||||||
|
|
||||||
root@devaio:/home# time rbd flatten images/snapshot_clone_image_test
|
root@devaio:/home# time rbd flatten images/snapshot_clone_image_test
|
||||||
|
|
||||||
@ -150,7 +151,7 @@ the time-consuming between solution A and solution B.
|
|||||||
* [Solution-B-step-4]Unprotect the snap
|
* [Solution-B-step-4]Unprotect the snap
|
||||||
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.
|
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.
|
||||||
|
|
||||||
::
|
.. code-block:: bash
|
||||||
|
|
||||||
root@devaio:/home# time rbd snap unprotect
|
root@devaio:/home# time rbd snap unprotect
|
||||||
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
|
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
|
||||||
@ -162,7 +163,7 @@ the time-consuming between solution A and solution B.
|
|||||||
* [Solution-B-step-5]Delete the no dependency snap
|
* [Solution-B-step-5]Delete the no dependency snap
|
||||||
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.
|
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test.
|
||||||
|
|
||||||
::
|
.. code-block:: bash
|
||||||
|
|
||||||
root@devaio:/home# time rbd snap rm
|
root@devaio:/home# time rbd snap rm
|
||||||
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
|
volumes/volume-3d6e5781-e3ac-4106-bfed-0aa0dd3af1f8@image_test
|
||||||
|
@ -59,6 +59,8 @@ could easily be extended to be included in future work. Backend devices
(drivers) would be listed in the cinder.conf file and we add entries
to indicate pairing. This could look something like this in the conf file:

.. code-block:: ini

    [driver-foo]
    volume_driver=xxxx
    valid_replication_devices='backend=backend_name-a',
@ -67,12 +69,16 @@ to indicate pairing. This could look something like this in the conf file:
Alternatively the replication target can potentially be a device unknown
to Cinder.

.. code-block:: ini

    [driver-foo]
    volume_driver=xxxx
    valid_replication_devices='remote_device={'some unique access meta}',...

Or even a combination of the two:

.. code-block:: ini

    [driver-foo]
    volume_driver=xxxx
    valid_replication_devices='remote_device={'some unique access meta}',
@ -93,17 +99,19 @@ from the create call. The flow would be something like this:
  scheduler.

* Add the following API calls

  - enable_replication(volume)
  - disable_replication(volume)
  - failover_replicated_volume(volume)
  - update_replication_targets()

    + [mechanism to add tgts external to the conf file * optional]

  - get_replication_targets()

    + [mechanism for an admin to query what a backend has configured]


Special considerations
----------------------
* volume-types
  There should not be a requirement of an exact match of volume-types between
  the primary and secondary volumes in the replication set. If a backend "can"
@ -154,10 +162,12 @@ Special considerations
Workflow diagram
-----------------
Create call on the left:

* No change to workflow

Replication calls on the right:

* Direct to manager then driver via host entry

.. code-block:: console

    +-----------+
    +--< +Volume API + >---------+ Enable routing directly to
@ -218,12 +228,12 @@ REST API impact
---------------

We would need to add the API calls mentioned above:

* enable_replication(volume)
* disable_replication(volume)
* failover_replicated_volume(volume)
* update_replication_targets()
  [mechanism to add tgts external to the conf file * optional]
* get_replication_targets()
  [mechanism for an admin to query what a backend has configured]

I think augmenting the existing calls is better than reusing them, but we can
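To make the shape of these calls a little more concrete, the following is a
minimal, hypothetical sketch of the proposed volume-level replication API
surface. Only the method names come from the list above; the class name,
signatures, and behavior notes are illustrative assumptions, not the actual
Cinder implementation.

.. code-block:: python

    # Hypothetical sketch only: method names follow the list above; the
    # class, signatures, and return values are illustrative assumptions.
    class VolumeReplicationAPI(object):
        """Admin-facing replication calls proposed in this spec."""

        def enable_replication(self, context, volume):
            # Mark the volume as replicated and ask the driver to start
            # mirroring it to its configured target(s).
            raise NotImplementedError()

        def disable_replication(self, context, volume):
            # Stop mirroring; the volume stays usable on the primary backend.
            raise NotImplementedError()

        def failover_replicated_volume(self, context, volume):
            # Promote the secondary copy when the primary is lost.
            raise NotImplementedError()

        def update_replication_targets(self, context, backend, targets):
            # Optional: add replication targets without editing cinder.conf.
            raise NotImplementedError()

        def get_replication_targets(self, context, backend):
            # Let an admin query which targets a backend has configured.
            raise NotImplementedError()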
@ -90,6 +90,7 @@ in Neutron shows over 10x speedup comparing to usual ``sudo rootwrap`` call.
Total speedup for Cinder shows impressive results too [#rw_perf]:
test scenario CinderVolumes.create_and_delete_volume
Current performance :

+----------------------+-----------+-----------+-----------+-------+
| action               | min (sec) | avg (sec) | max (sec) | count |
+----------------------+-----------+-----------+-----------+-------+
@ -97,6 +98,7 @@ Current performance :
| cinder.delete_volume | 13.535    | 24.958    | 32.959    | 8     |
| total                | 16.314    | 30.718    | 35.96     | 8     |
+----------------------+-----------+-----------+-----------+-------+

Load duration: 131.423681974
Full duration: 135.794852018

@ -109,6 +111,7 @@ With use_rootwrap_daemon enabled:
| cinder.delete_volume | 2.183     | 2.226     | 2.353     | 8     |
| total                | 4.673     | 4.845     | 5.3       | 8     |
+----------------------+-----------+-----------+-----------+-------+

Load duration: 19.7548749447
Full duration: 22.2729279995

@ -115,23 +115,25 @@ Data model impact
A new table, called service_versions, would be created to track the RPC and
object versions. The schema would be as follows:

.. code-block:: python

    service_versions = Table(
        'service_versions', meta,
        Column('created_at', DateTime(timezone=False)),
        Column('updated_at', DateTime(timezone=False)),
        Column('deleted_at', DateTime(timezone=False)),
        Column('deleted', Boolean(create_constraint=True, name=None)),
        Column('id', Integer, primary_key=True, nullable=False),
        Column('service_id', String(length=255)),
        Column('rpc_current_version', String(length=36)),
        Column('rpc_available_version', String(length=36)),
        Column('object_current_version', String(length=36)),
        Column('object_available_version', String(length=36)),
        mysql_engine='InnoDB'
    )

The service_id is service name + host. The \*_current_version is the version
a service is currently running and pinned at. The \*_available_version is the
version a service knows about and (if newer) could upgrade to.

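As an illustration of how this table could be consumed, here is a hedged
sketch of a helper that picks the RPC version a restarting service should pin
to: the lowest version its peers currently report. The helper name, session
handling, and version-string format are assumptions made for the sketch, not
part of the proposal itself.

.. code-block:: python

    # Hypothetical helper: pin to the lowest rpc_current_version reported by
    # peer services so messages stay understandable by everyone.
    import sqlalchemy as sa


    def pinned_rpc_version(engine, service_name):
        meta = sa.MetaData()
        service_versions = sa.Table('service_versions', meta,
                                    autoload_with=engine)
        with engine.connect() as conn:
            rows = conn.execute(
                sa.select(service_versions.c.rpc_current_version).where(
                    service_versions.c.service_id.like(service_name + '%')
                )
            ).fetchall()
        versions = [row[0] for row in rows if row[0]]
        if not versions:
            return None
        # Compare versions numerically ("1.10" is newer than "1.9").
        return min(versions, key=lambda v: [int(p) for p in v.split('.')])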

REST API impact
@ -205,17 +207,17 @@ Work Items
* Register each cinder service's RPC and object versions on startup.

* Create RPC compatibility layer in each cinder component's rpcapi.py to
  massage the object and RPC interface before it is sent over RPC.

* Create a "cinder-manage version upgrade" CLI to switch each cinder service
  to use the latest versions.


Dependencies
============

* oslo_versionobjects for volumes, backups, service, consistency_group,
  quota need to be merged so that objects can be made backwards compatible.


Testing
@ -63,12 +63,17 @@ None
REST API impact
---------------

Add a new REST API to delete backup in v2:

.. code-block:: console

    POST /v2/{tenant_id}/backups/{id}/action

.. code-block:: python

    {
        "os-force_delete": {}
    }

Normal http response code:
202
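For illustration, a hypothetical client-side call that issues the action
above with python-requests; the endpoint layout, token handling, and helper
name are assumptions made for the example, not part of the API change itself.

.. code-block:: python

    # Hypothetical usage sketch of the force-delete action described above.
    import requests


    def force_delete_backup(endpoint, token, tenant_id, backup_id):
        url = "%s/v2/%s/backups/%s/action" % (endpoint, tenant_id, backup_id)
        resp = requests.post(url,
                             json={"os-force_delete": {}},
                             headers={"X-Auth-Token": token})
        resp.raise_for_status()  # a 202 is expected on success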
@ -34,8 +34,7 @@ Benefits:
|
|||||||
* Import snapshots function could provide an effective means to manage the
|
* Import snapshots function could provide an effective means to manage the
|
||||||
import volumes.
|
import volumes.
|
||||||
|
|
||||||
::
|
::
|
||||||
|
|
||||||
For those volume drivers which could not delete volume with snapshots,
|
For those volume drivers which could not delete volume with snapshots,
|
||||||
we could not delete the import volume that has snapshots.
|
we could not delete the import volume that has snapshots.
|
||||||
By using import snapshots function to import snapshots, we could first
|
By using import snapshots function to import snapshots, we could first
|
||||||
@ -97,8 +96,13 @@ REST API impact
|
|||||||
|
|
||||||
1. Add one API "manage_snapshot"
|
1. Add one API "manage_snapshot"
|
||||||
|
|
||||||
The rest API look like this in v2::
|
The rest API look like this in v2:
|
||||||
/v2/{project_id}/os-snapshot-manage
|
|
||||||
|
.. code-block: console
|
||||||
|
|
||||||
|
POST /v2/{project_id}/os-snapshot-manage
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
{
|
{
|
||||||
'snapshot':{
|
'snapshot':{
|
||||||
@ -110,13 +114,17 @@ The rest API look like this in v2::
|
|||||||
|
|
||||||
* The <string> 'host' means cinder volume host name.
|
* The <string> 'host' means cinder volume host name.
|
||||||
No default value.
|
No default value.
|
||||||
|
|
||||||
* The <string> 'volume_id' means the import snapshot's volume id.
|
* The <string> 'volume_id' means the import snapshot's volume id.
|
||||||
No default value.
|
No default value.
|
||||||
|
|
||||||
* The <string> 'ref' means the import snapshot name
|
* The <string> 'ref' means the import snapshot name
|
||||||
exist in storage back-end.
|
exist in storage back-end.
|
||||||
No default value.
|
No default value.
|
||||||
|
|
||||||
The response body of it is like::
|
The response body of it is like:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
{
|
{
|
||||||
"snapshot": {
|
"snapshot": {
|
||||||
@ -142,12 +150,17 @@ The response body of it is like::
|
|||||||
|
|
||||||
2. Add one API "unmanage_snapshot".
|
2. Add one API "unmanage_snapshot".
|
||||||
|
|
||||||
The rest API look like this in v2::
|
The rest API look like this in v2:
|
||||||
/v2/{project_id}/os-snapshot-manage/{id}/action
|
|
||||||
|
|
||||||
{
|
.. code-block:: console
|
||||||
'os-unmanage':{}
|
|
||||||
}
|
POST /v2/{project_id}/os-snapshot-manage/{id}/action
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
|
{
|
||||||
|
'os-unmanage':{}
|
||||||
|
}
|
||||||
|
|
||||||
The status code will be HTTP 202 when the request has succeeded.
|
The status code will be HTTP 202 when the request has succeeded.
|
||||||
|
|
||||||
|
@ -129,27 +129,42 @@ the operations on image metadata of volume.
**Common http response code(s)**

* Modify Success: `200 OK`

* Failure: `400 Bad Request` with details.

* Forbidden: `403 Forbidden`

  e.g. no permission to update the specific metadata

* Not implemented: `501 Not Implemented`

  e.g. The server doesn't recognize the request method

* Method not allowed: `405 Method Not Allowed`

  e.g. HEAD is not allowed on the resource


**Update volume image metadata**

* Method type
  PUT

* API version

  .. code-block:: console

      PUT /v2/{project_id}/volumes/{volume_id}/image_metadata

* JSON schema definition

  .. code-block:: python

      {
          "image_metadata": {
              "key": "v2"
          }
      }

To unset an image metadata key value, specify only the key name.
To set an image metadata key value, specify the key and value pair.
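As a usage illustration of the update call above, a hypothetical client
snippet built on python-requests is shown below; the endpoint, token handling,
and helper name are assumptions for the example rather than the actual
python-cinderclient code.

.. code-block:: python

    # Hypothetical illustration of the PUT described above.
    import requests


    def update_volume_image_metadata(endpoint, token, project_id, volume_id,
                                     metadata):
        url = "%s/v2/%s/volumes/%s/image_metadata" % (endpoint, project_id,
                                                      volume_id)
        resp = requests.put(url,
                            json={"image_metadata": metadata},
                            headers={"X-Auth-Token": token})
        resp.raise_for_status()  # 200 OK per the response codes listed above
        return resp.json()


    # Example: set the image property "key" to "v2" on a volume.
    # update_volume_image_metadata(URL, TOKEN, PROJECT, VOLUME, {"key": "v2"})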
@ -174,8 +189,10 @@ Other end user impact
* Provide Cinder API to allow a user to update an image property.
  CLI-python API that triggers the update.

  .. code-block:: console

      # Sets or deletes volume image metadata
      cinder image-metadata <volume-id> set <property-name = value>

Performance Impact
------------------
@ -221,23 +238,30 @@ Changes to Cinder:
|
|||||||
|
|
||||||
#. Define property protections config files in Cinder
|
#. Define property protections config files in Cinder
|
||||||
(Deployer need to keep the files in sync with Glance's)
|
(Deployer need to keep the files in sync with Glance's)
|
||||||
|
|
||||||
#. Sync the properties protection code from Glance into Cinder
|
#. Sync the properties protection code from Glance into Cinder
|
||||||
(The common protection code will be shared in Cinder)
|
(The common protection code will be shared in Cinder)
|
||||||
|
|
||||||
#. Extend existing volume_image_metadatas(VolumeImageMetadataController)
|
#. Extend existing volume_image_metadatas(VolumeImageMetadataController)
|
||||||
controller extension to add update capability.
|
controller extension to add update capability.
|
||||||
|
|
||||||
#. Reuse update_volume_metadata method in volume API for updating image
|
#. Reuse update_volume_metadata method in volume API for updating image
|
||||||
metadata and differentiate user/image metadata by introducing a new
|
metadata and differentiate user/image metadata by introducing a new
|
||||||
constant "meta_type"
|
constant "meta_type"
|
||||||
|
|
||||||
#. Add update_volume_image_metadata method to volume API.
|
#. Add update_volume_image_metadata method to volume API.
|
||||||
|
|
||||||
#. Check against property protections config files
|
#. Check against property protections config files
|
||||||
(property-protections-policies.conf or property-protections-roles.conf)
|
(property-protections-policies.conf or property-protections-roles.conf)
|
||||||
if the property has update protection.
|
if the property has update protection.
|
||||||
|
|
||||||
#. Update DB API and driver to allow image metadata updates.
|
#. Update DB API and driver to allow image metadata updates.
|
||||||
|
|
||||||
Changes to Cinder python client:
|
Changes to Cinder python client:
|
||||||
|
|
||||||
#. Provide Cinder API to allow a user to update an image property.
|
#. Provide Cinder API to allow a user to update an image property.
|
||||||
CLI-python API that triggers the update.
|
CLI-python API that triggers the update.
|
||||||
|
|
||||||
# Sets or deletes volume image metadata
|
# Sets or deletes volume image metadata
|
||||||
cinder image-metadata <volume-id> set <property-name = value>
|
cinder image-metadata <volume-id> set <property-name = value>
|
||||||
|
|
||||||
@ -269,39 +293,40 @@ We propose to define two samples config file in favor of Property Protections,
|
|||||||
that is property-protections-roles.conf and property-protections-policies.conf.
|
that is property-protections-roles.conf and property-protections-policies.conf.
|
||||||
|
|
||||||
* property-protections-policies.conf
|
* property-protections-policies.conf
|
||||||
This is a template file when using policy rule for property protections.
|
This is a template file when using policy rule for property protections.
|
||||||
|
|
||||||
Example: Limit all property interactions to admin only using policy
|
Example: Limit all property interactions to admin only using policy
|
||||||
rule context_is_admin defined in policy.json.
|
rule context_is_admin defined in policy.json.
|
||||||
|
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
| [.*] |
|
| [.*] |
|
||||||
+===================================================================+
|
+===================================================================+
|
||||||
| create = context_is_admin |
|
| create = context_is_admin |
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
| read = context_is_admin |
|
| read = context_is_admin |
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
| update = context_is_admin |
|
| update = context_is_admin |
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
| delete = context_is_admin |
|
| delete = context_is_admin |
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
|
|
||||||
* property-protections-roles.conf
|
* property-protections-roles.conf
|
||||||
This is a template file when property protections is based on user's role.
|
|
||||||
Example: Allow both admins and users with the billing role to read and modify
|
|
||||||
properties prefixed with x_billing_code_.
|
|
||||||
|
|
||||||
+-------------------------------------------------------------------+
|
This is a template file when property protections is based on user's role.
|
||||||
| [^x_billing_code_.*] |
|
Example: Allow both admins and users with the billing role to read and modify
|
||||||
+===================================================================+
|
properties prefixed with ``x_billing_code_``.
|
||||||
| create = admin,billing |
|
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
| read = admin, billing |
|
| [^x_billing_code_.*] |
|
||||||
+-------------------------------------------------------------------+
|
+===================================================================+
|
||||||
| update = admin,billing |
|
| create = admin,billing |
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
| delete = admin,billing |
|
| read = admin, billing |
|
||||||
+-------------------------------------------------------------------+
|
+-------------------------------------------------------------------+
|
||||||
|
| update = admin,billing |
|
||||||
|
+-------------------------------------------------------------------+
|
||||||
|
| delete = admin,billing |
|
||||||
|
+-------------------------------------------------------------------+
|
||||||
|
|
||||||
Please refer to here, http://docs.openstack.org/developer/glance/property-protections.html
|
Please refer to here, http://docs.openstack.org/developer/glance/property-protections.html
|
||||||
for the details explanation of the format.
|
for the details explanation of the format.
|
||||||
|
@ -52,7 +52,9 @@ it's reasonable to figure that a user shouldn't have to handle this).
|
|||||||
|
|
||||||
There are two losses of performance in requiring back and forth between
|
There are two losses of performance in requiring back and forth between
|
||||||
cinder-volume and the backend to delete a volume and snapshots:
|
cinder-volume and the backend to delete a volume and snapshots:
|
||||||
|
|
||||||
A. Extra time spent checking the status of X requests.
|
A. Extra time spent checking the status of X requests.
|
||||||
|
|
||||||
B. Time spent merging snapshot data into another snapshot or volume
|
B. Time spent merging snapshot data into another snapshot or volume
|
||||||
which is going to immediately be deleted.
|
which is going to immediately be deleted.
|
||||||
|
|
||||||
@ -95,20 +97,27 @@ This case is for volume drivers that wish to handle mass volume/snapshot
deletion in an optimized fashion.

When a volume delete request is received:

Starting in the volume manager...

1. Check for a driver capability of 'volume_with_snapshots_delete'.
   (Name TBD.) This will be a new abc driver feature.

2. If the driver supports this, call driver.delete_volume_and_snapshots().
   This will be passed the volume, and a list of all relevant
   snapshots.

3. No exception thrown by the driver will indicate that everything
   was successfully deleted. The driver may return information indicating
   that the volume itself is intact, but snapshot operations failed.

4. Volume manager now moves all snapshots and the volume from 'deleting'
   to deleted. (volume_destroy/snapshot_destroy)

5. If an exception occurred, set the volume and all snapshots to
   'error_deleting'. We don't have enough information to do anything
   else safely.

6. The driver returns a list of dicts indicating the new statuses of
   the volume and each snapshot. This allows handling cases where
   deletion of some things succeeded but the process did not complete.
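To make the control flow in steps 1-6 easier to follow, here is a minimal,
hypothetical sketch of the manager-side branch. The capability flag and
driver method use the names given above, while the db helper objects, the
fallback helper, and the overall function shape are illustrative assumptions,
not the eventual Cinder implementation.

.. code-block:: python

    # Rough sketch of the flow above; error handling is simplified and the
    # helpers passed in (db, driver, fallback) are assumed interfaces.
    def delete_volume_with_snapshots(db, driver, fallback, context,
                                     volume, snapshots):
        if not driver.capabilities.get('volume_with_snapshots_delete'):
            # Step 1: driver cannot do it in one shot, use the existing
            # one-by-one deletion path instead.
            return fallback(context, volume, snapshots)
        try:
            # Steps 2 and 6: the driver deletes everything and reports
            # per-object statuses so partial failures can be reflected.
            statuses = driver.delete_volume_and_snapshots(volume, snapshots)
        except Exception:
            # Step 5: we cannot tell what survived, so flag everything.
            for snapshot in snapshots:
                db.snapshot_update(context, snapshot['id'],
                                   {'status': 'error_deleting'})
            db.volume_update(context, volume['id'],
                             {'status': 'error_deleting'})
            raise
        # Step 4: everything reported deleted; remove the records.
        for snapshot in snapshots:
            db.snapshot_destroy(context, snapshot['id'])
        db.volume_destroy(context, volume['id'])
        return statuses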
@ -36,7 +36,7 @@ On API nodes given current code we are open to races in the code that affect
resources on the database, and this will be exacerbated when working with
Active/Active configurations.

Local Manager Locks
-------------------

We have multiple local locks in the manager code of the volume nodes to prevent
@ -84,7 +84,7 @@ Cinder deployments use these drivers. Also some non-RemoteFS-based drivers are
using local locks too.

We could also replace current locks with some DB-based locking. This was
proposed by gegulieo in specs to remove local locks from the manager_ and from
drivers_, but increased the complexity of the solution and potentially required
more testing than relying on a broadly used DLM software.

@ -20,8 +20,10 @@ is totally complete.
|
|||||||
|
|
||||||
Both the dd way or driver-specific way will fail if we attach a volume when
|
Both the dd way or driver-specific way will fail if we attach a volume when
|
||||||
it is migrating:
|
it is migrating:
|
||||||
|
|
||||||
* If we migrate the volume through dd way, the source or target volume is not
|
* If we migrate the volume through dd way, the source or target volume is not
|
||||||
usable to server.
|
usable to server.
|
||||||
|
|
||||||
* If we migrate the volume through driver-specific way, the migration task is
|
* If we migrate the volume through driver-specific way, the migration task is
|
||||||
issued to the source backend by scheduler and the volume is owned by the
|
issued to the source backend by scheduler and the volume is owned by the
|
||||||
source backend during the whole migration. If we attach this volume to a
|
source backend during the whole migration. If we attach this volume to a
|
||||||
@ -61,11 +63,12 @@ just add a new way for developers or users to choose.
|
|||||||
These features are:
|
These features are:
|
||||||
|
|
||||||
* One array can take over other array's LUN as a remote LUN if these two
|
* One array can take over other array's LUN as a remote LUN if these two
|
||||||
arrays are connected with FC or iSCSI fabric.
|
arrays are connected with FC or iSCSI fabric.
|
||||||
|
|
||||||
* We can migrate a remote LUN to a local LUN and meanwhile the remote(source)
|
* We can migrate a remote LUN to a local LUN and meanwhile the remote(source)
|
||||||
LUN is writable and readable with the migration task is undergoing, after the
|
LUN is writable and readable with the migration task is undergoing, after the
|
||||||
migration is completed, the local(target) LUN is exactly the same as the
|
migration is completed, the local(target) LUN is exactly the same as the
|
||||||
remote(source) LUN and no data will be lost.
|
remote(source) LUN and no data will be lost.
|
||||||
|
|
||||||
To enable one array to take over other array's volume, we should allow one
|
To enable one array to take over other array's volume, we should allow one
|
||||||
driver to call other drivers' interfaces directly or indirectly. These two
|
driver to call other drivers' interfaces directly or indirectly. These two
|
||||||
|
@ -146,6 +146,7 @@ References
|
|||||||
|
|
||||||
A patch is proposed in Manila to solve a similar problem:
|
A patch is proposed in Manila to solve a similar problem:
|
||||||
https://review.openstack.org/#/c/315266/
|
https://review.openstack.org/#/c/315266/
|
||||||
|
|
||||||
Note that capabilities reporting for thin and thick provisioning
|
Note that capabilities reporting for thin and thick provisioning
|
||||||
in Manila is different from that in Cinder. In Manila, a driver reports
|
in Manila is different from that in Cinder. In Manila, a driver reports
|
||||||
`thin_provisioning = [True, False]` if it supports both thin and thick;
|
`thin_provisioning = [True, False]` if it supports both thin and thick;
|
||||||
|
@ -380,13 +380,17 @@ New Group APIs
* V3/<tenant id>/groups/<group uuid>/action

* Method: POST (We need to use "POST" not "DELETE" here because the request
  body has a flag and therefore is not empty.)

* JSON schema definition for V3:

  .. code-block:: python

      {
          "delete":
          {
              "delete-volumes": False
          }
      }

* Set delete-volumes flag to True to delete a group with volumes in it.
  This will delete the group and all the volumes. Deleting an empty
  group does not need the flag to be True.
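To illustrate the semantics of the flag, a small hypothetical sketch of the
server-side check is shown below; the function name and exception type are
assumptions used only for illustration.

.. code-block:: python

    # Sketch of the delete-volumes semantics described above: deleting a
    # non-empty group requires the flag to be True.
    def validate_group_delete(group_volumes, delete_volumes):
        if group_volumes and not delete_volumes:
            raise ValueError(
                "Group is not empty; pass 'delete-volumes': True to delete "
                "the group together with its volumes.")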
@ -50,11 +50,17 @@ corresponding to some existing CG APIs.
|
|||||||
|
|
||||||
The missing pieces to support the existing CG feature using the generic
|
The missing pieces to support the existing CG feature using the generic
|
||||||
volume contructs are as follows.
|
volume contructs are as follows.
|
||||||
|
|
||||||
* A group_snapshots table (for cgsnapshots table)
|
* A group_snapshots table (for cgsnapshots table)
|
||||||
|
|
||||||
* Create group snapshot API (for create cgsnapshot)
|
* Create group snapshot API (for create cgsnapshot)
|
||||||
|
|
||||||
* Delete group snapshot API (for delete cgsnapshot)
|
* Delete group snapshot API (for delete cgsnapshot)
|
||||||
|
|
||||||
* List group snapshots API (for list cgsnapshots)
|
* List group snapshots API (for list cgsnapshots)
|
||||||
|
|
||||||
* Show group snapshot API (for show cgsnapshot)
|
* Show group snapshot API (for show cgsnapshot)
|
||||||
|
|
||||||
* Create group from source group or source group snapshot API (for
|
* Create group from source group or source group snapshot API (for
|
||||||
create CG from cgsnapshot or source CG)
|
create CG from cgsnapshot or source CG)
|
||||||
|
|
||||||
@ -158,7 +164,11 @@ Proposed change
|
|||||||
type spec, the manager will call create_group in the driver first and will
|
type spec, the manager will call create_group in the driver first and will
|
||||||
call create_consistencygroup in the driver if create_group is not
|
call create_consistencygroup in the driver if create_group is not
|
||||||
implemented.
|
implemented.
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
{'consistent_group_snapshot_enabled': <is> True}
|
{'consistent_group_snapshot_enabled': <is> True}
|
||||||
|
|
||||||
Same applies to delete_group, update_group, create_group_snapshot,
|
Same applies to delete_group, update_group, create_group_snapshot,
|
||||||
delete_group_snapshot, and create_group_from_src. This way the new APIs
|
delete_group_snapshot, and create_group_from_src. This way the new APIs
|
||||||
will work with existing driver implementation of CG functions.
|
will work with existing driver implementation of CG functions.
|
||||||
|
@ -73,7 +73,7 @@ in Active-Active environments.
|
|||||||
Proposed change
|
Proposed change
|
||||||
===============
|
===============
|
||||||
|
|
||||||
This spec suggests modifying behavior introduced by `Tooz locks for A/A spec`_
|
This spec suggests modifying behavior introduced by `Tooz locks for A/A`_
|
||||||
for the case where the drivers don't need distributed locks. So we would use
|
for the case where the drivers don't need distributed locks. So we would use
|
||||||
local file locks in the drivers (if they use any) and for the locks in the
|
local file locks in the drivers (if they use any) and for the locks in the
|
||||||
manager we would use a locking mechanism based on the ``workers`` table that
|
manager we would use a locking mechanism based on the ``workers`` table that
|
||||||
@ -137,13 +137,16 @@ On a closer look at these 4 locks mentioned before we can classify them in 2
|
|||||||
categories:
|
categories:
|
||||||
|
|
||||||
- Locks for the resource of the operation.
|
- Locks for the resource of the operation.
|
||||||
|
|
||||||
- *${VOL_UUID}-detach_volume* - Used in ``detach_volume`` to prevent
|
- *${VOL_UUID}-detach_volume* - Used in ``detach_volume`` to prevent
|
||||||
multiple simultaneous detaches
|
multiple simultaneous detaches
|
||||||
|
|
||||||
- *${VOL_UUID}* - Used in ``attach_volume`` to prevent multiple
|
- *${VOL_UUID}* - Used in ``attach_volume`` to prevent multiple
|
||||||
simultaneous attaches
|
simultaneous attaches
|
||||||
|
|
||||||
- Locks that prevent deletion of the source of a volume creation (they are
|
- Locks that prevent deletion of the source of a volume creation (they are
|
||||||
created by ``create_volume`` method):
|
created by ``create_volume`` method):
|
||||||
|
|
||||||
- *${VOL_UUID}-delete_volume* - Used in ``delete_volume``
|
- *${VOL_UUID}-delete_volume* - Used in ``delete_volume``
|
||||||
- *${SNAPSHOT_UUID}-delete_snapshot* - Used in ``delete_snapshot``
|
- *${SNAPSHOT_UUID}-delete_snapshot* - Used in ``delete_snapshot``
|
||||||
|
|
||||||
|
@ -59,22 +59,31 @@ REST API impact
|
|||||||
---------------
|
---------------
|
||||||
|
|
||||||
* Add volume id list into response body of querying a CG detail if specifying
|
* Add volume id list into response body of querying a CG detail if specifying
|
||||||
the argument 'list_volume=True', this will be dependent on the Generic Group
|
the argument 'list_volume=True', this will be dependent on the Generic Group
|
||||||
spec now, so just leave a example here::
|
spec now, so just leave a example here:
|
||||||
|
|
||||||
GET /v3/{project_id}/consistencygroups/{consistency_group_id}?list_volume=True
|
.. code-block:: console
|
||||||
RESP BODY: {"consistencygroup": {"status": "XXX",
|
|
||||||
"description": "XXX",
|
|
||||||
...,
|
|
||||||
"volume_list":['volume_id1',
|
|
||||||
...,
|
|
||||||
'volume_idn']
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
* Add a filter "group_id=xxx" in URL of querying volume list/detail::
|
GET /v3/{project_id}/consistencygroups/{consistency_group_id}?list_volume=True
|
||||||
|
|
||||||
GET /v3/{project_id}/volumes?group_id=XXX
|
RESP BODY:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
|
{"consistencygroup": {"status": "XXX",
|
||||||
|
"description": "XXX",
|
||||||
|
...,
|
||||||
|
"volume_list":['volume_id1',
|
||||||
|
...,
|
||||||
|
'volume_idn']
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
* Add a filter "group_id=xxx" in URL of querying volume list/detail:
|
||||||
|
|
||||||
|
.. code-block:: console
|
||||||
|
|
||||||
|
GET /v3/{project_id}/volumes?group_id=XXX
|
||||||
|
|
||||||
Security impact
|
Security impact
|
||||||
---------------
|
---------------
|
||||||
|
@ -188,8 +188,8 @@ In ``os-brick/initiator/linuxfc.py``:

* new class LinuxFibreChannelECKD

  .. code-block:: python

      def configure_eckd_device(self, device_number):
          """Add the eckd volume to the Linux configuration. """

@ -91,12 +91,13 @@ Modify REST API to restore backup:
* Add volume_type and availability_zone in request body
* JSON request schema definition:

  .. code-block:: python

      'backup-restore': {
          'volume_id': volume_id,
          'volume_type': volume_type,
          'availability_zone': availability_zone,
          'name': name}

Security impact
---------------
@ -181,4 +181,5 @@ New admin docs to explain how to use the API.
|
|||||||
References
|
References
|
||||||
==========
|
==========
|
||||||
|
|
||||||
None
|
None
|
||||||
|
|
||||||
|
@ -168,4 +168,5 @@ References
_`[1]`: https://review.openstack.org/#/c/45026/
_`[2]`: https://review.openstack.org/#/c/266688/
_`[3]`: https://en.wikipedia.org/wiki/ReDoS
_`[4]`: https://review.openstack.org/#/c/441516/

@ -19,7 +19,7 @@ we are proposing in Ocata `[1]`_.

The revert process will overwrite the current state and data of the volume.
If the volume was extended after the snapshot, the request would be rejected
(reason is described in proposed change section). It's assumed that the user
doing a revert is discarding the actual state and data of the volume.

The purpose of this feature is to give users the possibility of recovering
@ -23,13 +23,15 @@ made provides a much more realistic view of how users interact with services.
Use Cases
=========
* Allows developers testing a new configuration option to change what the
  option is set to and test without having to restart all the Cinder services.

* Allows developers to test new functionality implemented in a driver by
  enabling and disabling configuration options in cinder.conf without
  restarting Cinder services each time the option value is changed from
  'disabled' to 'enabled'.

* Allows admins to manage ip addresses for various backends in cinder.conf
  and have the connections dynamically update.

Proposed change
===============
@ -50,25 +52,27 @@ this is a sizable issue that needs to be investigated.
|
|||||||
Alternatives
|
Alternatives
|
||||||
------------
|
------------
|
||||||
* Manual Approach: Continue to manually restart Cinder services when changes
|
* Manual Approach: Continue to manually restart Cinder services when changes
|
||||||
are made to settings in the cinder.conf file. This is the current approach and
|
are made to settings in the cinder.conf file. This is the current approach
|
||||||
is what we are trying to get around.
|
and is what we are trying to get around.
|
||||||
|
|
||||||
* Reload & Restart Approach: First each process would need to finish the
|
* Reload & Restart Approach: First each process would need to finish the
|
||||||
actions that were ongoing. Next the db and RPC connections would need to
|
actions that were ongoing. Next the db and RPC connections would need to
|
||||||
be dropped and then all caches and all driver caches would need to be
|
be dropped and then all caches and all driver caches would need to be
|
||||||
flushed before reloading. This approach adds a lot of complexity and lots of
|
flushed before reloading. This approach adds a lot of complexity and lots of
|
||||||
possibilities for failure since each cache has to be flushed- things could get
|
possibilities for failure since each cache has to be flushed- things could
|
||||||
missed or not flushed properly.
|
get missed or not flushed properly.
|
||||||
|
|
||||||
* File Watcher: Code will be added to the cinder processes to watch for
|
* File Watcher: Code will be added to the cinder processes to watch for
|
||||||
changes to the cinder.conf file. Once the processes see that changes have been
|
changes to the cinder.conf file. Once the processes see that changes have
|
||||||
made, the process will drain and take the necessary actions to reconfigure
|
been made, the process will drain and take the necessary actions to
|
||||||
itself and then auto-restart. This capability could also be controlled by a
|
reconfigure itself and then auto-restart. This capability could also be
|
||||||
configuration option so that if the user didn't want dynamic reconfiguration
|
controlled by a configuration option so that if the user didn't want dynamic
|
||||||
enabled, they could just disable it in the cinder.conf file. This approach is
|
reconfiguration enabled, they could just disable it in the cinder.conf file.
|
||||||
dangerous because it wouldn't account for configuration options that are saved
|
This approach is dangerous because it wouldn't account for configuration
|
||||||
into variables. To fix these cases, there would be a sizeable impact for
|
options that are saved into variables. To fix these cases, there would be a
|
||||||
developers finding and replacing all instances of configuration variables and
|
sizeable impact for developers finding and replacing all instances of
|
||||||
in doing so a number of assumptions of deployment, configuration, tools, and
|
configuration variables and in doing so a number of assumptions of
|
||||||
packaging would be broken.
|
deployment, configuration, tools, and packaging would be broken.
|
||||||
|
|
||||||
Data model impact
|
Data model impact
|
||||||
-----------------
|
-----------------
|
||||||
@ -118,13 +122,14 @@ Secondary assignee:

Work Items
----------

* Implement handling of SIGHUP signal

  - Ensure caches are cleared
  - Dependent vars are updated
  - Connections are cleanly dropped

* Write Unit tests
* Testing on a variety of drivers
* Update devref to describe SIGHUP/reload process


Dependencies
============
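To make the first work item above concrete, here is a minimal, hypothetical
sketch of installing a SIGHUP handler that triggers a reload; the
``reload_configuration`` callback is a placeholder standing in for the real
oslo.service/oslo.config integration and the cache/connection cleanup it
would have to perform.

.. code-block:: python

    # Minimal sketch only: the reload_configuration() body is a placeholder
    # for re-reading cinder.conf, clearing cached option values, updating
    # dependent variables and cleanly dropping connections.
    import signal


    def reload_configuration():
        print("SIGHUP received: reloading configuration")


    def install_sighup_handler():
        def _handler(signum, frame):
            reload_configuration()
        signal.signal(signal.SIGHUP, _handler)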
@ -122,9 +122,12 @@ The proposed changes are:
VOLUME_BACKUP_001_0002 : ({PROJECT}_{RESOURCE}_{ACTION_ID}_{MESSAGE_ID})

We could have these benefits:

1. We don't need to define as many events (only a smaller set of messages
   needs to be defined).

2. It's also unique across all of OpenStack.

3. It's reader friendly and easy to classify and analyze.

Alternatives
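As a small illustration of the identifier format above, the helper below
assembles an id from its four components; the function name and the
zero-padding widths are assumptions inferred from the
``VOLUME_BACKUP_001_0002`` example.

.. code-block:: python

    # Illustrative only: builds an event id in the format shown above.
    def build_message_id(project, resource, action_id, message_id):
        return "%s_%s_%03d_%04d" % (project.upper(), resource.upper(),
                                    action_id, message_id)


    # build_message_id("volume", "backup", 1, 2) -> "VOLUME_BACKUP_001_0002"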
@ -198,40 +198,44 @@ URL for the group action API from the Generic Volume Group design::
|
|||||||
|
|
||||||
* Enable replication
|
* Enable replication
|
||||||
|
|
||||||
** Method: POST
|
- Method: POST
|
||||||
|
|
||||||
** JSON schema definition for V3::
|
- JSON schema definition for V3:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
{
|
{
|
||||||
'enable_replication':
|
'enable_replication': {}
|
||||||
{
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
** Normal response codes: 202
|
- Normal response codes: 202
|
||||||
|
|
||||||
** Error response codes: 400, 403, 404
|
- Error response codes: 400, 403, 404
|
||||||
|
|
||||||
* Disable replication
|
* Disable replication
|
||||||
|
|
||||||
** Method: POST
|
- Method: POST
|
||||||
|
|
||||||
|
- JSON schema definition for V3:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
** JSON schema definition for V3::
|
|
||||||
{
|
{
|
||||||
'disable_replication':
|
'disable_replication': {}
|
||||||
{
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
** Normal response codes: 202
|
- Normal response codes: 202
|
||||||
|
|
||||||
** Error response codes: 400, 403, 404
|
- Error response codes: 400, 403, 404
|
||||||
|
|
||||||
* Failover replication
|
* Failover replication
|
||||||
|
|
||||||
** Method: POST
|
- Method: POST
|
||||||
|
|
||||||
|
- JSON schema definition for V3:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
** JSON schema definition for V3::
|
|
||||||
{
|
{
|
||||||
'failover_replication':
|
'failover_replication':
|
||||||
{
|
{
|
||||||
@ -240,22 +244,26 @@ URL for the group action API from the Generic Volume Group design::
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
** Normal response codes: 202
|
- Normal response codes: 202
|
||||||
|
|
||||||
** Error response codes: 400, 403, 404
|
- Error response codes: 400, 403, 404
|
||||||
|
|
||||||
* List replication targets
|
* List replication targets
|
||||||
|
|
||||||
** Method: POST
|
- Method: POST
|
||||||
|
|
||||||
|
- JSON schema definition for V3:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
** JSON schema definition for V3::
|
|
||||||
{
|
{
|
||||||
'list_replication_targets':
|
'list_replication_targets': {}
|
||||||
{
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
** Response example::
|
- Response example:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
{
|
{
|
||||||
'replication_targets': [
|
'replication_targets': [
|
||||||
{
|
{
|
||||||
@ -271,7 +279,10 @@ URL for the group action API from the Generic Volume Group design::
|
|||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|
||||||
** Response example for non-admin::
|
- Response example for non-admin:
|
||||||
|
|
||||||
|
.. code-block:: python
|
||||||
|
|
||||||
{
|
{
|
||||||
'replication_targets': [
|
'replication_targets': [
|
||||||
{
|
{
|
||||||
|
@ -208,7 +208,7 @@ Work Items
|
|||||||
|
|
||||||
* Add a warning when initializing c-vol without "enabled_backends"
|
* Add a warning when initializing c-vol without "enabled_backends"
|
||||||
* Modify the driver config utility to look for this section first, and override
|
* Modify the driver config utility to look for this section first, and override
|
||||||
values with backend specific ones if they are defined.
|
values with backend specific ones if they are defined.
|
||||||
* Add some unit tests
|
* Add some unit tests
|
||||||
* Incorporate into multi-backend tempest test configuration
|
* Incorporate into multi-backend tempest test configuration
|
||||||
|
|
||||||
|
@ -24,21 +24,21 @@ Current Workflow
|
|||||||
2. Failure Occurs
|
2. Failure Occurs
|
||||||
3. Fail over
|
3. Fail over
|
||||||
4. Promote Secondary Backend
|
4. Promote Secondary Backend
|
||||||
a. Freeze backend to prevent manage operations
|
a. Freeze backend to prevent manage operations
|
||||||
b. Stop cinder volume service
|
b. Stop cinder volume service
|
||||||
c. Update cinder.conf to have backend A replaced with B and B with C
|
c. Update cinder.conf to have backend A replaced with B and B with C
|
||||||
d. *Hack db to set backend to no longer be in 'failed-over' state*
|
d. *Hack db to set backend to no longer be in 'failed-over' state*
|
||||||
* This is the step this spec is concerned with
|
This is the step this spec is concerned with. Example:
|
||||||
* Example:
|
::
|
||||||
::
|
|
||||||
|
|
||||||
update services set disabled=0,
|
update services set disabled=0,
|
||||||
disabled_reason=NULL,
|
disabled_reason=NULL,
|
||||||
replication_status='enabled',
|
replication_status='enabled',
|
||||||
active_backend_id=NULL
|
active_backend_id=NULL
|
||||||
where id=3;
|
where id=3;
|
||||||
e. Start cinder volume service
|
|
||||||
f. Unfreeze backend
|
e. Start cinder volume service
|
||||||
|
f. Unfreeze backend
|
||||||
|
|
||||||
Use Cases
|
Use Cases
|
||||||
=========
|
=========
|
||||||
|
@ -65,17 +65,23 @@ REST API impact
Add backend_state: up/down into response body of service list; this also
needs a microversion:

.. code-block:: console

    GET /v3/{project_id}/os-services

RESP BODY:

.. code-block:: python

    {"services": [{"host": "host@backend1",
                   ...,
                   "backend_status": "up",
                   },
                  {"host": "host@backend2",
                   ...,
                   "backend_status": "down",
                   }]
    }

Security impact
---------------
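As a quick usage illustration of the field above, a monitoring script could
pull out the backends that report ``down``; the variable names here are
hypothetical.

.. code-block:: python

    # Hypothetical consumer of the response above: list backends reported down.
    resp_body = {"services": [
        {"host": "host@backend1", "backend_status": "up"},
        {"host": "host@backend2", "backend_status": "down"},
    ]}

    down_backends = [svc["host"] for svc in resp_body["services"]
                     if svc.get("backend_status") == "down"]
    print(down_backends)  # ['host@backend2']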
@ -301,10 +301,12 @@ remove the existing validation of parameters which is there inside of
controller methods which will again break the v2 apis.

Solution:

1. Do the schema validation for v3 apis using the @validation.schema decorator
   similar to Nova and also keep the validation code which is there inside of
   method to keep v2 working.

2. Once the decision is made to remove the support to v2 we should remove the
   validation code from inside of method.

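For illustration, here is a minimal decorator in the spirit of the
``@validation.schema`` approach referenced above, written directly against
the ``jsonschema`` library. The decorator name, the way the request body is
located, and the error handling are assumptions for the sketch; Cinder's real
``validation`` module is not being quoted here.

.. code-block:: python

    # Minimal sketch of a request-body schema decorator; names and error
    # handling are illustrative assumptions.
    import functools

    import jsonschema


    def schema(request_body_schema):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, req, body, *args, **kwargs):
                try:
                    jsonschema.validate(body, request_body_schema)
                except jsonschema.ValidationError as e:
                    # In the API layer this would map to a 400 Bad Request.
                    raise ValueError("Invalid request body: %s" % e.message)
                return func(self, req, body, *args, **kwargs)
            return wrapper
        return decorator


    # Example schema for a hypothetical v3 action body.
    force_delete_schema = {
        "type": "object",
        "properties": {"os-force_delete": {"type": "object"}},
        "required": ["os-force_delete"],
        "additionalProperties": False,
    }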
@ -44,7 +44,7 @@ Some notes about using this template:
|
|||||||
as changing parameters which can be returned or accepted, or even
|
as changing parameters which can be returned or accepted, or even
|
||||||
the semantics of what happens when a client calls into the API, then
|
the semantics of what happens when a client calls into the API, then
|
||||||
you should add the APIImpact flag to the commit message. Specifications with
|
you should add the APIImpact flag to the commit message. Specifications with
|
||||||
the APIImpact flag can be found with the following query::
|
the APIImpact flag can be found with the following query:
|
||||||
|
|
||||||
https://review.openstack.org/#/q/status:open+project:openstack/cinder-specs+message:apiimpact,n,z
|
https://review.openstack.org/#/q/status:open+project:openstack/cinder-specs+message:apiimpact,n,z
|
||||||
|
|
||||||
|
@ -1,129 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import glob
-import os
-import re
-
-import docutils.core
-import testtools
-
-
-class TestTitles(testtools.TestCase):
-    def _get_title(self, section_tree):
-        section = {
-            'subtitles': [],
-        }
-        for node in section_tree:
-            if node.tagname == 'title':
-                section['name'] = node.rawsource
-            elif node.tagname == 'section':
-                subsection = self._get_title(node)
-                section['subtitles'].append(subsection['name'])
-        return section
-
-    def _get_titles(self, spec):
-        titles = {}
-        for node in spec:
-            if node.tagname == 'section':
-                section = self._get_title(node)
-                titles[section['name']] = section['subtitles']
-        return titles
-
-    def _check_titles(self, spec, titles):
-        self.assertEqual(8, len(titles),
-                         "Titles count in '%s' doesn't match expected" % spec)
-        problem = 'Problem description'
-        self.assertIn(problem, titles)
-
-        self.assertIn('Use Cases', titles)
-
-        proposed = 'Proposed change'
-        self.assertIn(proposed, titles)
-        self.assertIn('Alternatives', titles[proposed])
-        self.assertIn('Data model impact', titles[proposed])
-        self.assertIn('REST API impact', titles[proposed])
-        self.assertIn('Security impact', titles[proposed])
-        self.assertIn('Notifications impact', titles[proposed])
-        self.assertIn('Other end user impact', titles[proposed])
-        self.assertIn('Performance Impact', titles[proposed])
-        self.assertIn('Other deployer impact', titles[proposed])
-        self.assertIn('Developer impact', titles[proposed])
-
-        impl = 'Implementation'
-        self.assertIn(impl, titles)
-        self.assertIn('Assignee(s)', titles[impl])
-        self.assertIn('Work Items', titles[impl])
-
-        deps = 'Dependencies'
-        self.assertIn(deps, titles)
-
-        testing = 'Testing'
-        self.assertIn(testing, titles)
-
-        docs = 'Documentation Impact'
-        self.assertIn(docs, titles)
-
-        refs = 'References'
-        self.assertIn(refs, titles)
-
-    def _check_lines_wrapping(self, tpl, raw):
-        for i, line in enumerate(raw.split("\n")):
-            if "http://" in line or "https://" in line:
-                continue
-            self.assertTrue(
-                len(line) < 80,
-                msg="%s:%d: Line limited to a maximum of 79 characters." %
-                (tpl, i + 1))
-
-    def _check_no_cr(self, tpl, raw):
-        matches = re.findall('\r', raw)
-        self.assertEqual(
-            len(matches), 0,
-            "Found %s literal carriage returns in file %s" %
-            (len(matches), tpl))
-
-    def _check_trailing_spaces(self, tpl, raw):
-        for i, line in enumerate(raw.split("\n")):
-            trailing_spaces = re.findall(" +$", line)
-            msg = "Found trailing spaces on line %s of %s" % (i + 1, tpl)
-            self.assertEqual(len(trailing_spaces), 0, msg)
-
-    def test_template(self):
-        # NOTE (e0ne): adding 'template.rst' to ignore dirs to exclude it from
-        # os.listdir output
-        ignored_dirs = {'template.rst', 'api'}
-
-        files = ['specs/template.rst']
-
-        # NOTE (e0ne): We don't check specs in 'api' directory because
-        # they don't match template.rts. Uncomment code below it you want
-        # to test them.
-        # files.extend(glob.glob('specs/api/*/*'))
-
-        releases = set(os.listdir('specs')) - ignored_dirs
-        for release in releases:
-            specs = glob.glob('specs/%s/*' % release)
-            files.extend(specs)
-
-        for filename in files:
-            self.assertTrue(filename.endswith(".rst"),
-                            "spec's file must use 'rst' extension.")
-            with open(filename) as f:
-                data = f.read()
-
-            spec = docutils.core.publish_doctree(data)
-            titles = self._get_titles(spec)
-            self._check_titles(filename, titles)
-            self._check_lines_wrapping(filename, data)
-            self._check_no_cr(filename, data)
-            self._check_trailing_spaces(filename, data)
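The formatting rules this deleted test enforced by hand (line length, literal carriage returns, trailing whitespace) are exactly what doc8 now checks. If an equivalent gate is ever needed outside of tox, a minimal sketch is to shell out to the same doc8 command that the pep8 environment runs in the tox.ini change below; the only assumption is that the doc8 script pulled in via requirements.txt is on PATH::

    import subprocess
    import sys

    # doc8 exits non-zero when it finds problems (trailing whitespace, carriage
    # returns, broken rst, ...), so its exit code can gate a job the same way
    # the removed unit test did. D001 (line length) is ignored to match tox.ini.
    sys.exit(subprocess.call(['doc8', '--ignore', 'D001', 'specs/']))
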
tox.ini (16 lines changed)

@ -1,6 +1,6 @@
 [tox]
 minversion = 1.6
-envlist = docs,py27,py35,pep8
+envlist = docs,pep8
 skipsdist = True

 [testenv]
@ -9,20 +9,22 @@ install_command = pip install -U {opts} {packages}
 setenv =
   VIRTUAL_ENV={envdir}
 deps = -r{toxinidir}/requirements.txt
-commands = python setup.py testr --slowest --testr-args='{posargs}'

 [testenv:docs]
-commands = python setup.py build_sphinx
+whitelist_externals = rm
+commands =
+  rm -fr doc/build
+  python setup.py build_sphinx
+  doc8 --ignore D001 doc/source

 [testenv:pep8]
-commands = flake8
+commands =
+  flake8
+  doc8 --ignore D001 specs/

 [testenv:venv]
 commands = {posargs}

-[testenv:cover]
-commands = python setup.py testr --coverage --testr-args='{posargs}'
-
 [flake8]
 # E123, E125 skipped as they are invalid PEP-8.
