Fix spelling mistakes

While reading these spec files I found a number of spelling mistakes,
so this change fixes them to improve the reading experience.

In the specs, change "acording" to "according", "priviledges" to
"privileges", "Additionaly" to "Additionally", and so on. The affected
files can be found under "File Path" in the web view.

Change-Id: I95228e1a57c7aa588c78d4a3af46b18daadac01d
zhangbailin 2017-08-13 23:39:38 -07:00
parent a17d7135b9
commit 456b86528d
20 changed files with 28 additions and 28 deletions

View File

@@ -128,7 +128,7 @@ Work Items
 * Add the QoS enablement check and set the I/O throttling in create volume.
 * Add the QoS enablement check and set the I/O throttling in create volume
 from snapshot.
-* Set the I/O throttling to the new volume acording to the original volume
+* Set the I/O throttling to the new volume according to the original volume
 in create clone volume.
 * Copy the QoS attributes to the snapshot in create snapshot.
 * Change the I/O throttling of the volume if the volume type is changed.
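
For the clone work item above, a minimal self-contained sketch (the helper
and field names are hypothetical, not from the spec):

    def copy_qos_to_clone(src_volume, new_volume):
        """Carry I/O throttling from the original volume over to its clone."""
        qos = src_volume.get('qos_specs')   # e.g. {'total_iops_sec': '500'}
        if qos:                             # only when QoS is enabled on the source
            new_volume['qos_specs'] = dict(qos)
        return new_volume

    # Example: the clone inherits the original volume's throttling limits.
    src = {'name': 'vol-1', 'qos_specs': {'total_iops_sec': '500'}}
    clone = copy_qos_to_clone(src, {'name': 'vol-1-clone'})
    assert clone['qos_specs'] == {'total_iops_sec': '500'}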

View File

@@ -42,9 +42,9 @@ it would be able to perform the following:
 * Attach/Detach Volume
 * Get Volume Stats
-Additionaly a ZFS Storage Appliance workflow (cinder.akwf) is provided
+Additionally a ZFS Storage Appliance workflow (cinder.akwf) is provided
 to help the admin to setup a user and role in the appliance with enought
-priviledges to do cinder operations.
+privileges to do cinder operations.
 Also, cinder.conf has to be configured properly with zfssa specific
 properties for the driver to work.

View File

@@ -23,7 +23,7 @@ Volume Num Weighter is that scheduler could choose volume backend based on
 volume number in volume backend, which could provide another mean to help
 improve volume-backends' IO balance and volumes' IO performance.
-Explain the benifit from volume number weighter by this use case.
+Explain the benefit from volume number weighter by this use case.
 Assume we have volume-backend-A with 300G and volume-backend-B with 100G.
 Volume-backend-A's IO capabilities is the same volume-backend-B IO
@@ -49,7 +49,7 @@ Proposed change
 ===============
 Implement a volume number weighter:VolumeNumberWeighter.
-1. _weigh_object fucntion return volume-backend's non-deleted volume number by
+1. _weigh_object function return volume-backend's non-deleted volume number by
 using db api volume_get_all_by_host.
 2. Add a new config item volume_num_weight_multiplier and its default value is
 -1, which means to spread volume among volume backend according to
@@ -66,7 +66,7 @@ VolumeNumberWeighter, whichi provides a mean to help improve
 volume-backends' IO balance and volumes' IO performance,
 could not replace CapacityWeigher/AllocatedCapacityWeigher,
 because CapacityWeigher/AllocatedCapacityWeigher could be used to provide
-balance of volume-backends' free storage space when user foucs more on free
+balance of volume-backends' free storage space when user focus more on free
 space balance between volume-bakends.
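
A simplified, runnable sketch of the weigher this spec describes; the db
call is stubbed out, and a real Cinder weigher would subclass a
BaseHostWeigher rather than use a bare function:

    volume_num_weight_multiplier = -1.0  # default: negative => spread volumes

    def volume_get_all_by_host(host):
        """Stub for the db api named in the spec."""
        fake_db = {'backend-A': ['v1', 'v2', 'v3'], 'backend-B': ['v4']}
        return fake_db.get(host, [])

    def weigh_host(host):
        # _weigh_object returns the backend's non-deleted volume count;
        # the negative multiplier turns "fewest volumes" into the best weight.
        return volume_num_weight_multiplier * len(volume_get_all_by_host(host))

    # backend-B (1 volume) wins over backend-A (3 volumes): -1.0 > -3.0
    best = max(['backend-A', 'backend-B'], key=weigh_host)
    assert best == 'backend-B'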

View File

@@ -62,7 +62,7 @@ Proposed change
 ===============
 2 new volume drivers for iSCSI and FC should be developed, bridging Open stack
-commands to XtremIO managment system (XMS) using XMS Rest API.
+commands to XtremIO management system (XMS) using XMS Rest API.
 The drivers should support the following Open stack actions:
 * Volume Create/Delete

View File

@@ -133,7 +133,7 @@ Alternatives
 left up to a driver backend to store data, or with another external database
 somewhere. Some drawbacks to this approach include having different
 databases to maintain, update, and keep in sync which would be a less than
-desireable user experience for a cloud admin, having multiple ways of doing
+desirable user experience for a cloud admin, having multiple ways of doing
 synchronization/locking in drivers can and will introduce bugs and
 maintenance problems, and makes it more difficult for new driver developers
 to make a good choice on where to store data.

View File

@@ -125,7 +125,7 @@ backup.
 Data model impact
 -----------------
-No percieved data model changes
+No perceived data model changes
 REST API impact
 ---------------
@@ -163,7 +163,7 @@ python-cinderclient will be modified to accept "--incr" option. It may
 include some validation code to validate if the full backup container path
 is valid
-Currenly backup functionality is not integrated with OpenStack dashboard. When
+Currently backup functionality is not integrated with OpenStack dashboard. When
 it happens, the dashboard will provide an option for user to choose incremental
 backup
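
As proposed here, creating an incremental backup from the CLI might look
like the following; only the "--incr" flag comes from the spec, and the
volume and container names are placeholders:

    cinder backup-create --incr --container <full-backup-container> <volume>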

View File

@@ -42,7 +42,7 @@ Example: Assume backend A has a total physical capacity of 100G.
 There are 10G thick luns and 20G thin luns (10G out of the 20G thin luns
 are written). In this case, free_capacity = 100 - 10 -10 = 80G.
-free: This is calculated in the scheduler by substracting reserved space
+free: This is calculated in the scheduler by subtracting reserved space
 from free_capacity.
 volume_size: This is an existing parameter. It is the size of the volume to
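
The capacity arithmetic above, spelled out as runnable Python (the reserved
value is illustrative; the spec does not fix one):

    total = 100          # G, total physical capacity of backend A
    thick_used = 10      # G of thick luns
    thin_written = 10    # G actually written out of the 20G thin luns
    free_capacity = total - thick_used - thin_written   # 100 - 10 - 10 = 80G

    reserved = 5         # G, illustrative reserved space
    free = free_capacity - reserved   # what the scheduler compares against
    assert free_capacity == 80 and free == 75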

View File

@@ -58,7 +58,7 @@ drivers operate.
 Alternatives
 ------------
-Leave things as they are today. (Not desireable.)
+Leave things as they are today. (Not desirable.)
 Data model impact
 -----------------

View File

@@ -161,7 +161,7 @@ api/contrib/volume_actions.py: force_detach(...)
 except: #catch and add debug message
 raise #something
-self._reset_status(..) #fix DB if backend cleanup is sucessful
+self._reset_status(..) #fix DB if backend cleanup is successful
 volume/manager.py: force_detach(...)
 self.driver.force_detach(..)
@@ -170,7 +170,7 @@ Individual drivers will implement force_detach as needed by the driver, most
 likely calling terminate_connection(..) and possibly other cleanup. The
 force_detach(..) api should be idempotent: It should succeed if the volume is
 not attached, and succeed if the volume starts off connected and can be
-sucessfuly detached.
+successfully detached.
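
A minimal sketch of the idempotency contract described above (the volume
representation and helpers are stand-ins, not the real driver interface):

    def force_detach(volume):
        if not volume.get('attached'):
            return                    # already detached: succeed, do nothing
        terminate_connection(volume)  # backend cleanup, may raise on real errors
        volume['attached'] = False

    def terminate_connection(volume):
        """Stand-in for the driver call named above."""
        pass

    vol = {'name': 'vol-1', 'attached': True}
    force_detach(vol)   # detaches the volume
    force_detach(vol)   # second call is a no-op, as required
    assert vol['attached'] is False
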
 Alternatives
 ------------

View File

@@ -120,7 +120,7 @@ This change introduces new config variable ``use_rootwrap_daemon`` that
 switches on new behavior. Note that by default ``use_rootwrap_daemon`` will be
 turned off so to get the speedup one will have to turn it on. With it
 turned on ``cinder-rootwrap-daemon`` is used to run commands that require root
-priviledges.
+privileges.
 This change also introduces new binary ``cinder-rootwrap-daemon`` that should
 be deployed beside ``cinder-rootwrap``.
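
For reference, turning the daemon on would look like this in cinder.conf
(the option name is from the text above; placing it in [DEFAULT] is an
assumption):

    [DEFAULT]
    use_rootwrap_daemon = True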

View File

@@ -106,7 +106,7 @@ When a volume delete request is received:
 that the volume itself is intact, but snapshot operations failed.
 4. Volume manager now moves all snapshots and the volume from 'deleting'
 to deleted. (volume_destroy/snapshot_destroy)
-5. If an exception occured, set the volume and all snapshots to
+5. If an exception occurred, set the volume and all snapshots to
 'error_deleting'. We don't have enough information to do anything
 else safely.
 6. The driver returns a list of dicts indicating the new statuses of
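
A sketch of the return shape item 6 describes; the spec only says "a list
of dicts", so the field names here are assumptions:

    statuses = [
        {'id': 'vol-uuid', 'type': 'volume', 'status': 'deleted'},
        {'id': 'snap-uuid-1', 'type': 'snapshot', 'status': 'deleted'},
        {'id': 'snap-uuid-2', 'type': 'snapshot', 'status': 'error_deleting'},
    ]
    # The volume manager would walk this list and persist each new status.
    for entry in statuses:
        print(entry['type'], entry['id'], '->', entry['status'])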

View File

@@ -103,7 +103,7 @@ will be null if the backup is from a volume::
 snapshot_id = Column(String(36))
-Add the folowing new column to the backups table to record the timestamp of
+Add the following new column to the backups table to record the timestamp of
 the data::
 data_timestamp = Column(DateTime)
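
A hedged sketch of the matching schema migration, in the sqlalchemy-migrate
style Cinder used at the time; only the column definition itself comes from
the spec:

    from sqlalchemy import Column, DateTime, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        backups = Table('backups', meta, autoload=True)
        # Records when the backed-up data was taken, not when the row was created.
        backups.create_column(Column('data_timestamp', DateTime))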

View File

@@ -124,7 +124,7 @@ Dependencies
 * There will also need to be some Nova work done to initiate the call into
 os-brick's new API to do the work, and then notify the VM after it's
-successfull.
+successful.
 Testing
 =======

View File

@@ -202,7 +202,7 @@ volumes that were being backed up or restored.
 This assumption is not safe if multiple backup processes can run
 concurrently, and on separate nodes. At startup, a backup service
-needs to distinguish betwen in-flight operations that are owned by
+needs to distinguish between in-flight operations that are owned by
 another backup-service instance and orphaned operations.
 Eventually, it will make sense for a backup service process to
@@ -257,14 +257,14 @@ these with methods that apparently have a bit more "special sauce"
 than just preparing their volume for presentation as a block device.
 We will need to analyze the codebase to root out any of these and
-determine how to accomodate any special needs.
+determine how to accommodate any special needs.
 An example is the vmware volume driver [4], where a "backing" and
 temporary vmdk file are created for the cinder volume and the
 temporary vmdk file is used as the backup source. We will have to
 determine whether all this can be done in the volume driver's
 ``initialize_connection`` method during ``attach``, or whether we will
-require an additonal rpc hook to a *prepare_backup_volume* method or
+require an additional rpc hook to a *prepare_backup_volume* method or
 some such for volume drivers of this sort.
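
A self-contained sketch of the startup check from the first hunk above:
split in-flight operations into ones this backup service owns and ones
owned elsewhere (statuses and field names are illustrative):

    def split_inflight(backups, my_host):
        mine, others = [], []
        for b in backups:
            if b['status'] in ('creating', 'restoring'):   # in-flight
                (mine if b['host'] == my_host else others).append(b)
        return mine, others

    inflight = [
        {'id': 'b1', 'status': 'creating', 'host': 'backup-node-1'},
        {'id': 'b2', 'status': 'restoring', 'host': 'backup-node-2'},
        {'id': 'b3', 'status': 'available', 'host': 'backup-node-1'},
    ]
    mine, others = split_inflight(inflight, 'backup-node-1')
    assert [b['id'] for b in mine] == ['b1']     # safe to handle locally
    assert [b['id'] for b in others] == ['b2']   # leave for its owner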

View File

@@ -174,7 +174,7 @@ for the moment before the undergoing migration is done.
 Performance Impact
 ------------------
-As we know, driver assisted migration is mostly more efficent than host
+As we know, driver assisted migration is mostly more efficient than host
 copy, assuming that there is no read-through along with the migration.
 If we attach the migrating volume and do read-through operations on the

View File

@@ -221,7 +221,7 @@ including the service itself.
 Normal http response code(s): 200
-Reponse is a list of capabilities. Each capability is a simple noun or
+Response is a list of capabilities. Each capability is a simple noun or
 hyphenated compound noun. E.g:
 .. code-block:: rest
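
The example itself is cut off by the hunk; a hypothetical response matching
the description (simple or hyphenated compound nouns) could be:

    capabilities = ["volume", "snapshot", "thin-provisioning", "consistency-groups"]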

View File

@@ -73,7 +73,7 @@ Use Cases
 Group snapshot supports two capabilities, that is
 consistent_group_snapshot_enabled and group_snapshot_enabled. A group snapshot
 with consistent_group_snapshot_enabled spec set to True is equivalent to
-cgsnapshot that is existing today in Cinder and it can gurantee point-in-time
+cgsnapshot that is existing today in Cinder and it can guarantee point-in-time
 consistency at the storage level. A group snapshot with group_snapshot_enabled
 spec set to True is a group of snapshots that does not guarantee consistency
 at the storage level.
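
As a sketch, the two capabilities would appear as group type specs roughly
like this (the "<is> True" form follows Cinder's boolean-spec convention;
the dict framing is illustrative):

    consistent_group_type_specs = {'consistent_group_snapshot_enabled': '<is> True'}
    best_effort_group_type_specs = {'group_snapshot_enabled': '<is> True'}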

View File

@@ -380,7 +380,7 @@ Phase 4. Horizon and CLI implementations to view notifications in more
 formatted manner.
 Phase 5. Handling of some special cases where generation of notifications
-requires seperate handling like rabbitMQ related implementations for showing
+requires separate handling like rabbitMQ related implementations for showing
 notifications in case rabbitMQ is in failed state or rabbitMQ recipient is
 in inactive state.

View File

@@ -114,7 +114,7 @@ Other contributors:
 Work Items
 ----------
-* Implement 'do_setup' method in a base backup driver wich won't do anything
+* Implement 'do_setup' method in a base backup driver which won't do anything
 * Implement 'do_setup' in each backup driver.
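
A minimal sketch of these work items: a no-op do_setup on the base backup
driver that concrete drivers override (the subclass body is illustrative):

    class BackupDriver(object):
        def do_setup(self, context):
            """Base implementation intentionally does nothing."""
            pass

    class SwiftBackupDriver(BackupDriver):
        def do_setup(self, context):
            # e.g. establish the client connection this driver needs
            self.client = object()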

View File

@@ -66,7 +66,7 @@ actually do want the options set in DEFAULT.
 Use Cases
 =========
-The main use-case here is for shared configs relevent to multiple backends
+The main use-case here is for shared configs relevant to multiple backends
 in cinder.conf. Also to help lessen some of the confusion about shared config
 options in DEFAULT.