From 456b86528dbf79c79618f586935364730fed9ec1 Mon Sep 17 00:00:00 2001
From: zhangbailin
Date: Sun, 13 Aug 2017 23:39:38 -0700
Subject: [PATCH] Fix spelling mistakes

While reading these spec files I found a number of spelling mistakes,
and this change fixes them to give readers a better experience. In the
specs, "acording" is corrected to "according", "priviledges" to
"privileges", "Additionaly" to "Additionally", and so on; the affected
files are listed in the paths below.

Change-Id: I95228e1a57c7aa588c78d4a3af46b18daadac01d
---
 specs/juno/cinder-storwize-driver-qos.rst                 | 2 +-
 specs/juno/oracle-zfssa-cinder-driver.rst                 | 4 ++--
 specs/juno/support-volume-num-weighter.rst                | 6 +++---
 specs/juno/xtremio_cinder_volume_driver.rst               | 2 +-
 specs/kilo/driver-private-data.rst                        | 2 +-
 specs/kilo/incremental-backup.rst                         | 4 ++--
 specs/kilo/over-subscription-in-thin-provisioning.rst     | 2 +-
 specs/kilo/remotefs-cfg-improvements.rst                  | 2 +-
 specs/liberty/implement-force-detach-for-safe-cleanup.rst | 4 ++--
 specs/liberty/rootwrap-daemon-mode.rst                    | 4 ++--
 specs/liberty/volume-and-snap-delete.rst                  | 2 +-
 specs/mitaka/backup-snapshots.rst                         | 2 +-
 specs/mitaka/brick-extend-attached-volume.rst             | 2 +-
 specs/mitaka/scalable-backup-service.rst                  | 6 +++---
 specs/newton/async-volume-migration.rst                   | 2 +-
 specs/newton/discovering-system-capabilities.rst          | 2 +-
 specs/newton/group-snapshots.rst                          | 2 +-
 specs/newton/summarymessage.rst                           | 2 +-
 specs/pike/backup-init.rst                                | 2 +-
 specs/pike/shared-backend-config.rst                      | 2 +-
 20 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/specs/juno/cinder-storwize-driver-qos.rst b/specs/juno/cinder-storwize-driver-qos.rst
index 47176870..109c2261 100644
--- a/specs/juno/cinder-storwize-driver-qos.rst
+++ b/specs/juno/cinder-storwize-driver-qos.rst
@@ -128,7 +128,7 @@ Work Items
 * Add the QoS enablement check and set the I/O throttling in create volume.
 * Add the QoS enablement check and set the I/O throttling in create volume
   from snapshot.
-* Set the I/O throttling to the new volume acording to the original volume
+* Set the I/O throttling to the new volume according to the original volume
   in create clone volume.
 * Copy the QoS attributes to the snapshot in create snapshot.
 * Change the I/O throttling of the volume if the volume type is changed.
diff --git a/specs/juno/oracle-zfssa-cinder-driver.rst b/specs/juno/oracle-zfssa-cinder-driver.rst
index 86d0e3b2..5fbd8031 100644
--- a/specs/juno/oracle-zfssa-cinder-driver.rst
+++ b/specs/juno/oracle-zfssa-cinder-driver.rst
@@ -42,9 +42,9 @@ it would be able to perform the following:
 * Attach/Detach Volume
 * Get Volume Stats
 
-Additionaly a ZFS Storage Appliance workflow (cinder.akwf) is provided
+Additionally a ZFS Storage Appliance workflow (cinder.akwf) is provided
 to help the admin to setup a user and role in the appliance with enought
-priviledges to do cinder operations.
+privileges to do cinder operations.
 
 Also, cinder.conf has to be configured properly with zfssa specific
 properties for the driver to work.
diff --git a/specs/juno/support-volume-num-weighter.rst b/specs/juno/support-volume-num-weighter.rst
index ecbdfe41..50dc304f 100644
--- a/specs/juno/support-volume-num-weighter.rst
+++ b/specs/juno/support-volume-num-weighter.rst
@@ -23,7 +23,7 @@ Volume Num Weighter is that scheduler could choose volume backend based on
 volume number in volume backend, which could provide another mean to help
 improve volume-backends' IO balance and volumes' IO performance.
 
-Explain the benifit from volume number weighter by this use case.
+Explain the benefit from volume number weighter by this use case.
 
 Assume we have volume-backend-A with 300G and volume-backend-B with 100G.
 Volume-backend-A's IO capabilities is the same volume-backend-B IO
@@ -49,7 +49,7 @@ Proposed change
 ===============
 
 Implement a volume number weighter:VolumeNumberWeighter.
- 1. _weigh_object fucntion return volume-backend's non-deleted volume number by
+ 1. _weigh_object function return volume-backend's non-deleted volume number by
     using db api volume_get_all_by_host.
  2. Add a new config item volume_num_weight_multiplier and its default
     value is -1, which means to spread volume among volume backend according to
@@ -66,7 +66,7 @@ VolumeNumberWeighter, whichi provides a mean to help improve
 volume-backends' IO balance and volumes' IO performance, could not
 replace CapacityWeigher/AllocatedCapacityWeigher, because
 CapacityWeigher/AllocatedCapacityWeigher could be used to provide
-balance of volume-backends' free storage space when user foucs more on free
+balance of volume-backends' free storage space when user focus more on free
 space balance between volume-bakends.
diff --git a/specs/juno/xtremio_cinder_volume_driver.rst b/specs/juno/xtremio_cinder_volume_driver.rst
index 9d61652e..6cc2d767 100644
--- a/specs/juno/xtremio_cinder_volume_driver.rst
+++ b/specs/juno/xtremio_cinder_volume_driver.rst
@@ -62,7 +62,7 @@ Proposed change
 ===============
 
 2 new volume drivers for iSCSI and FC should be developed, bridging Open stack
-commands to XtremIO managment system (XMS) using XMS Rest API.
+commands to XtremIO management system (XMS) using XMS Rest API.
 
 The drivers should support the following Open stack actions:
 * Volume Create/Delete
diff --git a/specs/kilo/driver-private-data.rst b/specs/kilo/driver-private-data.rst
index 498f4d4c..90d84932 100644
--- a/specs/kilo/driver-private-data.rst
+++ b/specs/kilo/driver-private-data.rst
@@ -133,7 +133,7 @@ Alternatives
 left up to a driver backend to store data, or with another external
 database somewhere.
 Some drawbacks to this approach include having different databases to
 maintain, update, and keep in sync which would be a less than
-desireable user experience for a cloud admin, having multiple ways of doing
+desirable user experience for a cloud admin, having multiple ways of doing
 synchronization/locking in drivers can and will introduce bugs and
 maintenance problems, and makes it more difficult for new driver
 developers to make a good choice on where to store data.
diff --git a/specs/kilo/incremental-backup.rst b/specs/kilo/incremental-backup.rst
index b199f340..9c4186b9 100644
--- a/specs/kilo/incremental-backup.rst
+++ b/specs/kilo/incremental-backup.rst
@@ -125,7 +125,7 @@ backup.
 
 Data model impact
 -----------------
-No percieved data model changes
+No perceived data model changes
 
 REST API impact
 ---------------
@@ -163,7 +163,7 @@ python-cinderclient will be modified to accept "--incr" option.
 It may include some validation code to validate if the full backup container
 path is valid
 
-Currenly backup functionality is not integrated with OpenStack dashboard. When
+Currently backup functionality is not integrated with OpenStack dashboard. When
 it happens, the dashboard will provide an option for user to choose
 incremental backup
 
diff --git a/specs/kilo/over-subscription-in-thin-provisioning.rst b/specs/kilo/over-subscription-in-thin-provisioning.rst
index b6b1285c..04e5b01e 100644
--- a/specs/kilo/over-subscription-in-thin-provisioning.rst
+++ b/specs/kilo/over-subscription-in-thin-provisioning.rst
@@ -42,7 +42,7 @@ Example:
 Assume backend A has a total physical capacity of 100G. There are 10G thick
 luns and 20G thin luns (10G out of the 20G thin luns are written). In this
 case, free_capacity = 100 - 10 -10 = 80G.
-free: This is calculated in the scheduler by substracting reserved space
+free: This is calculated in the scheduler by subtracting reserved space
 from free_capacity.
 
 volume_size: This is an existing parameter. It is the size of the volume to
diff --git a/specs/kilo/remotefs-cfg-improvements.rst b/specs/kilo/remotefs-cfg-improvements.rst
index 5c27f2df..9484ce02 100644
--- a/specs/kilo/remotefs-cfg-improvements.rst
+++ b/specs/kilo/remotefs-cfg-improvements.rst
@@ -58,7 +58,7 @@ drivers operate.
 
 Alternatives
 ------------
-Leave things as they are today. (Not desireable.)
+Leave things as they are today. (Not desirable.)
 
 Data model impact
 -----------------
diff --git a/specs/liberty/implement-force-detach-for-safe-cleanup.rst b/specs/liberty/implement-force-detach-for-safe-cleanup.rst
index 7c22090e..68ea65d5 100644
--- a/specs/liberty/implement-force-detach-for-safe-cleanup.rst
+++ b/specs/liberty/implement-force-detach-for-safe-cleanup.rst
@@ -161,7 +161,7 @@ api/contrib/volume_actions.py:
   force_detach(...)
     except: #catch and add debug message
       raise #something
-    self._reset_status(..) #fix DB if backend cleanup is sucessful
+    self._reset_status(..) #fix DB if backend cleanup is successful
 volume/manager.py:
   force_detach(...)
     self.driver.force_detach(..)
@@ -170,7 +170,7 @@ Individual drivers will implement force_detach as needed by the driver, most
 likely calling terminate_connection(..) and possibly other cleanup.
 The force_detach(..) api should be idempotent: It should succeed if the volume
 is not attached, and succeed if the volume starts off connected and can be
-sucessfuly detached.
+successfully detached.
 
 Alternatives
 ------------
diff --git a/specs/liberty/rootwrap-daemon-mode.rst b/specs/liberty/rootwrap-daemon-mode.rst
index 44074bf3..1c09e73c 100644
--- a/specs/liberty/rootwrap-daemon-mode.rst
+++ b/specs/liberty/rootwrap-daemon-mode.rst
@@ -120,7 +120,7 @@ This change introduces new config variable ``use_rootwrap_daemon`` that
 switches on new behavior. Note that by default ``use_rootwrap_daemon`` will
 be turned off so to get the speedup one will have to turn it on.
 With it turned on ``cinder-rootwrap-daemon`` is used to run commands that require root
-priviledges.
+privileges.
 
 This change also introduces new binary ``cinder-rootwrap-daemon`` that should
 be deployed beside ``cinder-rootwrap``.
@@ -184,4 +184,4 @@ References
    http://paste.openstack.org/show/160890/
 
 .. [#rw_eth] Alternative approaches
-   https://etherpad.openstack.org/p/neutron-agent-exec-performance
\ No newline at end of file
+   https://etherpad.openstack.org/p/neutron-agent-exec-performance
diff --git a/specs/liberty/volume-and-snap-delete.rst b/specs/liberty/volume-and-snap-delete.rst
index c44eab5e..358f722d 100644
--- a/specs/liberty/volume-and-snap-delete.rst
+++ b/specs/liberty/volume-and-snap-delete.rst
@@ -106,7 +106,7 @@ When a volume delete request is received:
     that the volume itself is intact, but snapshot operations failed.
  4. Volume manager now moves all snapshots and the volume from 'deleting' to
     deleted. (volume_destroy/snapshot_destroy)
- 5. If an exception occured, set the volume and all snapshots to
+ 5. If an exception occurred, set the volume and all snapshots to
    'error_deleting'. We don't have enough information to do anything else
    safely.
  6. The driver returns a list of dicts indicating the new statuses of
diff --git a/specs/mitaka/backup-snapshots.rst b/specs/mitaka/backup-snapshots.rst
index 862261cd..a9589e80 100644
--- a/specs/mitaka/backup-snapshots.rst
+++ b/specs/mitaka/backup-snapshots.rst
@@ -103,7 +103,7 @@ will be null if the backup is from a volume::
 
     snapshot_id = Column(String(36))
 
-Add the folowing new column to the backups table to record the timestamp of
+Add the following new column to the backups table to record the timestamp of
 the data::
 
     data_timestamp = Column(DateTime)
diff --git a/specs/mitaka/brick-extend-attached-volume.rst b/specs/mitaka/brick-extend-attached-volume.rst
index b8216762..37e3a6d4 100644
--- a/specs/mitaka/brick-extend-attached-volume.rst
+++ b/specs/mitaka/brick-extend-attached-volume.rst
@@ -124,7 +124,7 @@ Dependencies
 
 * There will also need to be some Nova work done to initiate the call into
   os-brick's new API to do the work, and then notify the VM after it's
-  successfull.
+  successful.
 
 Testing
 =======
diff --git a/specs/mitaka/scalable-backup-service.rst b/specs/mitaka/scalable-backup-service.rst
index b17776d1..5379befa 100644
--- a/specs/mitaka/scalable-backup-service.rst
+++ b/specs/mitaka/scalable-backup-service.rst
@@ -202,7 +202,7 @@ volumes that were being backed up or restored.
 
 This assumption is not safe if multiple backup processes can run
 concurrently, and on separate nodes. At startup, a backup service
-needs to distinguish betwen in-flight operations that are owned by
+needs to distinguish between in-flight operations that are owned by
 another backup-service instance and orphaned operations.
 
 Eventually, it will make sense for a backup service process to
@@ -257,14 +257,14 @@ these with methods that apparently have a bit more "special sauce"
 than just preparing their volume for presentation as a block device.
 
 We will need to analyze the codebase to root out any of these and
-determine how to accomodate any special needs.
+determine how to accommodate any special needs.
 
 An example is the vmware volume driver [4], where a "backing" and
 temporary vmdk file are created for the cinder volume and the temporary
 vmdk file is used as the backup source. We will have to determine
 whether all this can be done in the volume driver's
 ``initialize_connection`` method during ``attach``, or whether we will
-require an additonal rpc hook to a *prepare_backup_volume* method or
+require an additional rpc hook to a *prepare_backup_volume* method or
 some such for volume drivers of this sort.
diff --git a/specs/newton/async-volume-migration.rst b/specs/newton/async-volume-migration.rst
index 399dd680..347157ed 100644
--- a/specs/newton/async-volume-migration.rst
+++ b/specs/newton/async-volume-migration.rst
@@ -174,7 +174,7 @@ for the moment before the undergoing migration is done.
 
 Performance Impact
 ------------------
-As we know, driver assisted migration is mostly more efficent than host
+As we know, driver assisted migration is mostly more efficient than host
 copy, assuming that there is no read-through along with the migration.
 
 If we attach the migrating volume and do read-through operations on the
diff --git a/specs/newton/discovering-system-capabilities.rst b/specs/newton/discovering-system-capabilities.rst
index 8e0153d2..8b245eb0 100644
--- a/specs/newton/discovering-system-capabilities.rst
+++ b/specs/newton/discovering-system-capabilities.rst
@@ -221,7 +221,7 @@ including the service itself.
 
 Normal http response code(s): 200
 
-  Reponse is a list of capabilities. Each capability is a simple noun or
+  Response is a list of capabilities. Each capability is a simple noun or
   hyphenated compound noun. E.g:
 
 .. code-block:: rest
diff --git a/specs/newton/group-snapshots.rst b/specs/newton/group-snapshots.rst
index 595d8e5c..bb4fb2cb 100644
--- a/specs/newton/group-snapshots.rst
+++ b/specs/newton/group-snapshots.rst
@@ -73,7 +73,7 @@ Use Cases
 Group snapshot supports two capabilities, that is
 consistent_group_snapshot_enabled and group_snapshot_enabled. A group snapshot
 with consistent_group_snapshot_enabled spec set to True is equivalent to
-cgsnapshot that is existing today in Cinder and it can gurantee point-in-time
+cgsnapshot that is existing today in Cinder and it can guarantee point-in-time
 consistency at the storage level. A group snapshot with group_snapshot_enabled
 spec set to True is a group of snapshots that does not guarantee consistency
 at the storage level.
diff --git a/specs/newton/summarymessage.rst b/specs/newton/summarymessage.rst
index 334f3167..1be87f00 100644
--- a/specs/newton/summarymessage.rst
+++ b/specs/newton/summarymessage.rst
@@ -380,7 +380,7 @@ Phase 4. Horizon and CLI implementations to view notifications in more
 formatted manner.
 
 Phase 5. Handling of some special cases where generation of notifications
-requires seperate handling like rabbitMQ related implementations for showing
+requires separate handling like rabbitMQ related implementations for showing
 notifications in case rabbitMQ is in failed state or rabbitMQ recipient is
 in inactive state.
 
diff --git a/specs/pike/backup-init.rst b/specs/pike/backup-init.rst
index bb4150e7..d538562d 100644
--- a/specs/pike/backup-init.rst
+++ b/specs/pike/backup-init.rst
@@ -114,7 +114,7 @@ Other contributors:
 
 Work Items
 ----------
 
-* Implement 'do_setup' method in a base backup driver wich won't do anything
+* Implement 'do_setup' method in a base backup driver which won't do anything
 * Implement 'do_setup' in each backup driver.
 
diff --git a/specs/pike/shared-backend-config.rst b/specs/pike/shared-backend-config.rst
index 596a65f6..32b83772 100644
--- a/specs/pike/shared-backend-config.rst
+++ b/specs/pike/shared-backend-config.rst
@@ -66,7 +66,7 @@ actually do want the options set in DEFAULT.
 
 Use Cases
 =========
 
-The main use-case here is for shared configs relevent to multiple backends
+The main use-case here is for shared configs relevant to multiple backends
 in cinder.conf. Also to help lessen some of the confusion about shared config
 options in DEFAULT.