Consistently use literals for long options

Otherwise the `--` that prefixes the long option name gets rendered as
`–`. This commit changes all instances of long options to literals.
Short options, and more generally commands, should also be changed to
literals in order to have consistent doc layout.

Change-Id: I920e5866387d8bb5b498f2635887dfb9de8a124b
Author: Martin André
Date: 2018-10-01 14:28:32 +02:00
Parent: 777c3bcbd1
Commit: fb690db721
16 changed files with 84 additions and 84 deletions

View File

@ -159,8 +159,8 @@ Configure container settings with ceph-ansible
----------------------------------------------
The group variables `ceph_osd_docker_memory_limit`, which corresponds
to `docker run ... --memory`, and `ceph_osd_docker_cpu_limit`, which
corresponds to `docker run ... --cpu-quota`, may be overridden
to ``docker run ... --memory``, and `ceph_osd_docker_cpu_limit`, which
corresponds to ``docker run ... --cpu-quota``, may be overridden
depending on the hardware configuration and the system needs. Below is
an example of setting custom values to these parameters::
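A minimal sketch of such an environment file, assuming the group variables are
passed through the ``CephAnsibleExtraConfig`` Heat parameter (the limit values
themselves are illustrative only)::

    parameter_defaults:
      CephAnsibleExtraConfig:
        ceph_osd_docker_memory_limit: 5g
        ceph_osd_docker_cpu_limit: 1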

View File

@ -128,7 +128,7 @@ cluster::
.. note::
It is also possible to copy the entire tripleo-heat-templates tree, and modify
the roles_data.yaml file in place, then deploy via `--templates <copy of tht>`
the roles_data.yaml file in place, then deploy via ``--templates <copy of tht>``
.. warning::
Note that in your custom roles you may not use any already predefined name

View File

@ -949,7 +949,7 @@ bridge, then this command would add a provider network on VLAN 201::
--shared provider_network
This command would create a shared network, but it is also possible to
specify a tenant instead of specifying --shared, and then that network will
specify a tenant instead of specifying ``--shared``, and then that network will
only be available to that tenant. If a provider network is marked as external,
then only the operator may create ports on that network. A subnet can be added
to a provider network if Neutron is to provide DHCP services to tenant VMs::
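A sketch of adding such a subnet (the subnet name, CIDR and allocation pool are
illustrative only)::

    openstack subnet create --network provider_network \
        --subnet-range 192.0.2.0/24 \
        --allocation-pool start=192.0.2.10,end=192.0.2.200 \
        --dhcp provider_subnet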

View File

@ -275,8 +275,8 @@ attributes can be specified at upload time as in::
.. note::
Adding --httpboot is optional but suggested if you need to ensure that the
``agent`` images are unique within your environment.
Adding ``--httpboot`` is optional but suggested if you need to ensure that
the ``agent`` images are unique within your environment.
.. admonition:: Prior to Rocky release
:class: stable
@ -524,28 +524,28 @@ Run the deploy command, including any additional parameters as necessary::
Example:
The following will present a behavior where my_roles_data will persist,
due to the location of the custom roles data, which is stored in swift.
due to the location of the custom roles data, which is stored in swift::
* openstack overcloud deploy --templates -r my_roles_data.yaml
* heat stack-delete overcloud
openstack overcloud deploy --templates -r my_roles_data.yaml
heat stack-delete overcloud
Allow the stack to be deleted then continue.
Allow the stack to be deleted then continue::
* openstack overcloud deploy --templates
openstack overcloud deploy --templates
The execution of the above will still reference my_roles_data because the
unified command line client performs a lookup against the swift
storage. The reason for the unexpected behavior is that heatclient is
not aware of the swift storage.
The correct course of action should be as followed:
The correct course of action is as follows::
* openstack overcloud deploy --templates -r my_roles_data.yaml
* openstack overcloud delete <stack name>
openstack overcloud deploy --templates -r my_roles_data.yaml
openstack overcloud delete <stack name>
Allow the stack to be deleted then continue.
Allow the stack to be deleted then continue::
* openstack overcloud deploy --templates
openstack overcloud deploy --templates
To deploy an overcloud with multiple controllers and achieve HA,
follow :doc:`../advanced_deployment/high_availability`.

View File

@ -359,16 +359,16 @@ Things to keep in mind while using discover-tempest-config
* If OpenStack was deployed using TripleO/Director, pass the deployment input
file tempest-deployer-input.conf to the :command:`discover-tempest-config` command with
--deployer-input option. The file contains some version specific values set
``--deployer-input`` option. The file contains some version specific values set
by the installer. More about the argument can be found in
`python-tempestconf's CLI documentation. <https://docs.openstack.org/python-tempestconf/latest/cli/cli_options.html>`_
* --remove option can be used to remove values from tempest.conf.
For example: **--remove network-feature-enabled.api_extensions=dvr**
* ``--remove`` option can be used to remove values from tempest.conf,
for example: ``--remove network-feature-enabled.api_extensions=dvr``.
The feature is useful when some values in tempest.conf are automatically
set by the discovery but you do not want them written to tempest.conf.
More about the feature can be found
`here. <https://docs.openstack.org/python-tempestconf/latest/user/usage.html#prevent-some-key-value-pairs-to-be-set-in-tempest-conf>`_.
`here <https://docs.openstack.org/python-tempestconf/latest/user/usage.html#prevent-some-key-value-pairs-to-be-set-in-tempest-conf>`_.
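Putting those options together, an invocation might look like this (the
``--create`` and ``--out`` flags and the input file path are assumptions for
illustration, not something this guide mandates)::

    discover-tempest-config --create --out etc/tempest.conf \
        --deployer-input ~/tempest-deployer-input.conf \
        --remove network-feature-enabled.api_extensions=dvr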
Always save the state of resources before running tempest tests
@ -426,7 +426,7 @@ defined in tempest.conf to run tests against the targeted host.
$ tempest run --regex '(test_regex1 | test_regex2 | test_regex 3)'
* Use **--black-regex** argument to skip specific tests::
* Use ``--black-regex`` argument to skip specific tests::
$ tempest run -r '(api|scenario)' --black-regex='(keystone_tempest_plugin)'
@ -502,7 +502,7 @@ Running Tempest tests serially as well as in parallel
Generating HTML report of tempest tests
+++++++++++++++++++++++++++++++++++++++
* In order to generate tempest subunit files in v2 format, use **--subunit**
* In order to generate tempest subunit files in v2 format, use ``--subunit``
flag with tempest run::
$ tempest run -r '(test_regex)' --subunit

View File

@ -256,7 +256,7 @@ The following example is based on the single NIC configuration and assumes that
the environment had at least 3 total IP addresses available to it. The IPs are
used for the following:
- 1 IP address for the OpenStack services (this is the --local-ip from the
- 1 IP address for the OpenStack services (this is the ``--local-ip`` from the
deploy command)
- 1 IP used as a Virtual Router to provide connectivity to the Tenant network
is used for the OpenStack services (is automatically assigned in this example)
@ -389,7 +389,7 @@ The following example is based on the single NIC configuration and assumes that
the environment had at least 4 total IP addresses available to it. The IPs are
used for the following:
- 1 IP address for the OpenStack services (this is the --local-ip from the
- 1 IP address for the OpenStack services (this is the ``--local-ip`` from the
deploy command)
- 1 IP used as a Virtual Router to provide connectivity to the Tenant network
is used for the OpenStack services

View File

@ -67,7 +67,7 @@ For example, we can run a full MySQL backup with additional paths as::
--exclude-path /home/stack/
Note that we are excluding the folder `/home/stack/`
from the backup, but this folder is not included using the `--add-path`,
from the backup, but this folder is not included using the ``--add-path``
CLI option. This is because the `/home/stack/` folder is
added by default to any backup, as it contains the files necessary
to restore the Undercloud correctly.
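For reference, a complete command combining both options (assuming the
``openstack undercloud backup`` command this section documents; the added path
is purely illustrative) would be::

    openstack undercloud backup --add-path /etc/ \
        --exclude-path /home/stack/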

View File

@ -17,7 +17,7 @@ MySQL backup
~~~~~~~~~~~~
If using HA the operator can run the database backup in any controller node
using the --single-transaction option when executing the mysqldump.
using the ``--single-transaction`` option when executing the mysqldump.
If the deployment is using containers the hieradata file containing the mysql
root password is located in the folder `/var/lib/config-data/mysql/etc/puppet/hieradata/`.
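A sketch of the dump itself (``MYSQL_ROOT_PASSWORD`` is assumed to hold the
root password taken from the hieradata file mentioned above, and the output
file name is illustrative)::

    mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" --opt --all-databases \
        --single-transaction > openstack-backup-mysql.sql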
@ -101,7 +101,7 @@ We need to backup all files that can be used to recover
from a possible failure in the Overcloud controllers when
executing a minor update or a major upgrade.
The option `--ignore-failed-read` is added to the `tar`
The option ``--ignore-failed-read`` is added to the `tar`
command because the list of files to back up might be
different on each environment, and we keep the list of
paths to back up as general as possible.
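A sketch of such a tar invocation (the archive name and the path list are
illustrative only; extend the list to match your environment)::

    sudo tar --ignore-failed-read -czf /tmp/controller-backup.tar.gz \
        /etc /var/lib/config-data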

View File

@ -71,7 +71,7 @@ the configuration passed into the prepare command (all the -e env.yaml files),
as well as updating_the_swift_stored_plan_ with the overcloud configuration.
As a result the UpgradePrepare class inherits all the Deploy_parser_arguments_,
including --stack and -e for the additional environment files. We explicitly
including ``--stack`` and ``-e`` for the additional environment files. We explicitly
set the update_plan_only_ argument so that the Heat stack update does not get
executed by the parent class and returns after completing all the template
processing.
@ -143,33 +143,33 @@ above. The upgrade run operation thus will simply execute those ansible playbook
generated by the upgrade prepare command, against the nodes specified in the
parameters.
The `--nodes` and `--roles` parameters are used to limit the ansible playbook
The ``--nodes`` and ``--roles`` parameters are used to limit the ansible playbook
execution to specific nodes. These are defined in a mutually
exclusive nodes_or_roles_ group and it is declared as required so that the
operator must pass either one of those and never both. Both `--roles` and
`--nodes` are used by ansible with the tripleo-ansible-inventory_. This creates
operator must pass either one of those and never both. Both ``--roles`` and
``--nodes`` are used by ansible with the tripleo-ansible-inventory_. This creates
the ansible inventory based on the Heat stack outputs, so that for example
`Controller` and `overcloud-controller-0` are both valid values for the
ansible-playbook `--limit`_ parameter.
ansible-playbook |--limit|_ parameter.
As documented in the run_user_docs_ and the nodes_or_roles_helptext_, the operator
*must* use `--roles` for the controllers. Upgrading the controlplane, one node
*must* use ``--roles`` for the controllers. Upgrading the controlplane, one node
at a time is not supported, mainly due to limitations in the pacemaker cluster
upgrade which needs to occur across all nodes in the same operation. The
operator may use `--roles` for non controlplane nodes or may prefer to specify
one or more specific nodes by name with `--nodes`. In either case the value
operator may use ``--roles`` for non controlplane nodes or may prefer to specify
one or more specific nodes by name with ``--nodes``. In either case the value
specified by the operator is simply passed through to ansible as the
limit_hosts_ parameter.
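As a simplified sketch of what that mapping amounts to (in practice
tripleoclient generates the inventory and invokes ansible for you; the
inventory file name here is an assumption)::

    # 'Controller' (a role) or 'overcloud-compute-0' (a node name) are
    # both valid --limit values against the TripleO inventory.
    ansible-playbook -i tripleo-ansible-inventory.yaml \
        --limit Controller upgrade_steps_playbook.yaml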
The `--ssh-user` and all other parameters are similarly
The ``--ssh-user`` and all other parameters are similarly
collected and passed to the ansible invocation which starts on the client side
in the run_update_ansible_action_ method call. Before diving into more detail
about the ansible playbook run it is also worth highlighting the `--skip-tags`_
about the ansible playbook run it is also worth highlighting the |--skip-tags|_
parameter that is used to skip certain ansible tasks with the ansible-skip-tags_
ansible-playbook parameter. For the Queens upgrade workflow we only support
skipping the step0 validation tasks that check services are running and this is
enforced by checking the value passed by the operator against the
MAJOR_UPGRADE_SKIP_TAGS_. Finally, the `--playbook`_ parameter as the name
MAJOR_UPGRADE_SKIP_TAGS_. Finally, the |--playbook|_ parameter as the name
suggests is used to specify the ansible playbook(s) to run. By default and
as you can see in the definition, this defaults to a special value 'all'
which causes all-upgrade-playbooks-to-run_. The value of all_playbooks
@ -195,15 +195,18 @@ before declaring the upgrade-run-success_!
.. _UpgradeRun: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L94
.. _nodes_or_roles: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L110
.. _tripleo-ansible-inventory: https://github.com/openstack/tripleo-common/blob/cef9c406514fd0b01b7984b89334d8e8abd7a244/tripleo_common/inventory.py#L1
.. _`--limit`: https://docs.ansible.com/ansible/2.4/ansible-playbook.html#cmdoption-ansible-playbook-l
.. |--limit| replace:: ``--limit``
.. _--limit: https://docs.ansible.com/ansible/2.4/ansible-playbook.html#cmdoption-ansible-playbook-l
.. _run_user_docs: https://docs.openstack.org/tripleo-docs/latest/install/post_deployment/upgrade.html#openstack-overcloud-upgrade-run
.. _nodes_or_roles_helptext: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L111-L131
.. _limit_hosts: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L207-L212
.. _run_update_ansible_action: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L212-L217
.. _`--skip-tags`: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L211
.. |--skip-tags| replace:: ``--skip-tags``
.. _--skip-tags: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L211
.. _ansible-skip-tags: https://docs.ansible.com/ansible/2.4/ansible-playbook.html#cmdoption-ansible-playbook-skip-tags
.. _MAJOR_UPGRADE_SKIP_TAGS: https://github.com/openstack/python-tripleoclient/blob/3931606423a17c40a4458eb4df3c47cc6a829dbb/tripleoclient/constants.py#L56
.. _`--playbook`: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L133-L150
.. |--playbook| replace:: ``--playbook``
.. _--playbook: https://github.com/openstack/python-tripleoclient/blob/c7b7b4e3dcd34f9e51686065e328e73556967bab/tripleoclient/v1/overcloud_upgrade.py#L133-L150
.. _all-upgrade-playbooks-to-run: https://github.com/openstack/python-tripleoclient/blob/3931606423a17c40a4458eb4df3c47cc6a829dbb/tripleoclient/utils.py#L946
.. _MAJOR_UPGRADE_PLAYBOOKS: https://github.com/openstack/python-tripleoclient/blob/3931606423a17c40a4458eb4df3c47cc6a829dbb/tripleoclient/constants.py#L53
.. _update_nodes_workflow_invocation: https://github.com/openstack/python-tripleoclient/blob/3931606423a17c40a4458eb4df3c47cc6a829dbb/tripleoclient/workflows/package_update.py#L85

View File

@ -177,11 +177,11 @@ Installing the Undercloud
The undercloud is containerized by default as of Rocky.
.. note::
It's possible to enable verbose logging with --verbose option.
It's possible to enable verbose logging with ``--verbose`` option.
.. note::
To install a deprecated instack undercloud, you'll need to deploy
with --use-heat=False option.
with ``--use-heat=False`` option.
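Combining the two notes above, a deprecated instack-based install with verbose
logging would be::

    openstack undercloud install --use-heat=False --verbose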
In Rocky, we will run all the OpenStack services in a moby container runtime

View File

@ -111,19 +111,16 @@ Updating Undercloud Components
The undercloud is containerized by default as of Rocky. Therefore,
an undercloud deployed on Queens (non-containerized) will be upgraded
to a containerized undercloud on Rocky, by default.
To upgrade with instack undercloud, you'll need to upgrade
with --use-heat=False option. Note this isn't tested and not supported.
To upgrade with instack undercloud, you'll need to upgrade with the
``--use-heat=False`` option. Note this is neither tested nor supported.
.. note::
It's possible to enable verbose logging with --verbose option.
It's possible to enable verbose logging with ``--verbose`` option.
To clean up an undercloud after its upgrade, you'll need to set
upgrade_cleanup to True in undercloud.conf. It'll remove the rpms
that were deployed by instack-undercloud, after the upgrade to a
containerized undercloud.
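A sketch of that sequence (the ``[DEFAULT]`` section name in undercloud.conf is
an assumption)::

    # In undercloud.conf, assumed under the [DEFAULT] section:
    #   upgrade_cleanup = True
    openstack undercloud upgrade --verbose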
.. note::
It's possible to enable verbose logging with --verbose option.
.. note::
If you added custom OVS ports to the undercloud (e.g. in a virtual

View File

@ -351,11 +351,11 @@ openstack overcloud upgrade run
This will run the ansible playbooks to deliver the upgrade configuration.
By default, 3 playbooks are executed: the upgrade_steps_playbook, then the
deploy_steps_playbook and finally the post_upgrade_steps_playbook. These
playbooks are invoked on those overcloud nodes specified by the --roles or
--nodes parameters, which are mutually exclusive. You are expected to use
--roles for controlplane nodes, since these need to be upgraded in the same
playbooks are invoked on those overcloud nodes specified by the ``--roles`` or
``--nodes`` parameters, which are mutually exclusive. You are expected to use
``--roles`` for controlplane nodes, since these need to be upgraded in the same
step. For non controlplane nodes, such as Compute or Storage, you can use
--nodes to specify a single node or list of nodes to upgrade. The controller
``--nodes`` to specify a single node or list of nodes to upgrade. The controller
nodes need to be the first upgraded, following by the compute and storage ones.
.. code-block:: bash
@ -365,7 +365,7 @@ nodes need to be the first upgraded, following by the compute and storage ones.
.. note::
*Optionally* you can specify `--playbook` to manually step through the upgrade
*Optionally* you can specify ``--playbook`` to manually step through the upgrade
playbooks: You need to run all three in this order and as specified below
(no path) for a full upgrade to Queens.
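For instance, to drive only the first playbook against the controller role
(a sketch; run the remaining two playbooks the same way)::

    openstack overcloud upgrade run --roles Controller \
        --playbook upgrade_steps_playbook.yaml
    # then repeat with deploy_steps_playbook.yaml and
    # post_upgrade_steps_playbook.yaml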
@ -386,7 +386,7 @@ At a minimum an operator should check the health of the pacemaker cluster
The operator may also want to confirm that openstack and related service
containers are all in a good state and using the image references passed
during upgrade prepare with the --container-registry-file parameter.
during upgrade prepare with the ``--container-registry-file`` parameter.
.. code-block:: bash
@ -401,16 +401,16 @@ during upgrade prepare with the --container-registry-file parameter.
completed, or it may drive unexpected results.
For non controlplane nodes, such as Compute or ObjectStorage, you can use
`--nodes overcloud-compute-0` to upgrade particular nodes, or even
``--nodes overcloud-compute-0`` to upgrade particular nodes, or even
"compute0,compute1,compute3" for multiple nodes. Note these are again
upgraded in parallel. Also note that you can still use the `--roles` parameter
upgraded in parallel. Also note that you can still use the ``--roles`` parameter
with non controlplane roles if that is preferred.
.. code-block:: bash
openstack overcloud upgrade run --nodes overcloud-compute-0
Use of `--nodes` allows the operator to upgrade some subset, perhaps just one,
Use of ``--nodes`` allows the operator to upgrade some subset, perhaps just one,
compute or other non controlplane node and verify that the upgrade is
successful. One may even migrate workloads onto the newly upgraded node and
confirm there are no problems, before deciding to proceed with upgrading the
@ -426,7 +426,7 @@ finally post_upgrade_steps_playbook.yaml in that order.
--playbook upgrade_steps_playbook.yaml
# etc for the other 2 as above example for controller
For re-run, you can specify --skip-tags validation to skip those step 0
For re-run, you can specify ``--skip-tags validation`` to skip those step 0
ansible tasks that check if services are running, in case you can't or
don't want to start them all.
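For example (role value as in the earlier run examples)::

    openstack overcloud upgrade run --roles Controller --skip-tags validation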
@ -482,7 +482,7 @@ about why this is the case in the queens-upgrade-dev-docs_.
The Queens container image references that were passed into the
`openstack overcloud ffwd-upgrade prepare`_ with the
`--container-registry-file` parameter **must** be included as an
``--container-registry-file`` parameter **must** be included as an
environment file, with the -e option to the openstack overcloud
ffwd-upgrade run command, together with all other environment files
for your deployment.

View File

@ -71,7 +71,7 @@ the OpenStack release that you currently operate, perform these steps:
#. **Update run**
Run the update procedure on a subset of nodes selected via the
`--nodes` parameter:
``--nodes`` parameter:
.. code-block:: bash
@ -196,7 +196,7 @@ the registry file generated from the first step above::
openstack overcloud update --init-minor-update --container-registry-file latest-images.yaml
3. Invoke the minor update on the nodes specified with the --nodes
3. Invoke the minor update on the nodes specified with the ``--nodes``
parameter::
openstack overcloud update --nodes controller-0
@ -250,7 +250,7 @@ breakpoint on next one.
in the process by re-running same command.
.. note::
The --templates and --environment-file (-e) are now deprecated. They can still
be passed to the command, but they will be silently ignored. This is due to
the plan now used for deployment should only be modified via plan modification
commands.
The ``--templates`` and ``--environment-file`` (``-e``) are now deprecated.
They can still be passed to the command, but they will be silently ignored.
This is because the plan now used for deployment should only be modified via
plan modification commands.

View File

@ -39,7 +39,7 @@ The tripleo cli has been updated accordingly to accommodate this new
workflow. In Queens a new `openstack overcloud upgrade` command is introduced
and it expects one of three subcommands: **prepare**, **run** and
**converge**. Each subcommand has its own set of options which you can explore
with --help:
with ``--help``:
.. code-block:: bash
@ -170,17 +170,17 @@ openstack overcloud upgrade run
This will run the ansible playbooks to deliver the upgrade configuration.
By default, 3 playbooks are executed: the upgrade_steps_playbook, then the
deploy_steps_playbook and finally the post_upgrade_steps_playbook. These
playbooks are invoked on those overcloud nodes specified by the --roles or
--nodes parameters, which are mutually exclusive. You are expected to use
--roles for controlplane nodes, since these need to be upgraded in the same
playbooks are invoked on those overcloud nodes specified by the ``--roles`` or
``--nodes`` parameters, which are mutually exclusive. You are expected to use
``--roles`` for controlplane nodes, since these need to be upgraded in the same
step. For non controlplane nodes, such as Compute or Storage, you can use
--nodes to specify a single node or list of nodes to upgrade.
``--nodes`` to specify a single node or list of nodes to upgrade.
.. code-block:: bash
openstack overcloud upgrade run --roles Controller
**Optionally** specify `--playbook` to manually step through the upgrade
**Optionally** specify ``--playbook`` to manually step through the upgrade
playbooks: You need to run all three in this order and as specified below
(no path) for a full upgrade to Queens.
@ -207,16 +207,16 @@ passed during upgrade prepare.
[root@overcloud-controller-0 ~]# docker ps -a
For non controlplane nodes, such as Compute or ObjectStorage, you can use
`--nodes overcloud-compute-0` to upgrade particular nodes, or even
``--nodes overcloud-compute-0`` to upgrade particular nodes, or even
"compute0,compute1,compute3" for multiple nodes. Note these are again
upgraded in parallel. Also note that you can still use the `--roles` parameter
upgraded in parallel. Also note that you can still use the ``--roles`` parameter
with non controlplane roles if that is preferred.
.. code-block:: bash
openstack overcloud upgrade run --nodes overcloud-compute-0
Use of `--nodes` allows the operator to upgrade some subset, perhaps just one,
Use of ``--nodes`` allows the operator to upgrade some subset, perhaps just one,
compute or other non controlplane node and verify that the upgrade is
successful. One may even migrate workloads onto the newly upgraded node and
confirm there are no problems, before deciding to proceed with upgrading the
@ -232,7 +232,7 @@ finally post_upgrade_steps_playbook.yaml in that order.
--playbook upgrade_steps_playbook.yaml
# etc for the other 2 as above example for controller
For re-run, you can specify --skip-tags validation to skip those step 0
For re-run, you can specify ``--skip-tags`` validation to skip those step 0
ansible tasks that check if services are running, in case you can't or
don't want to start them all.
@ -732,8 +732,8 @@ Upgrading the Overcloud to Newton and earlier
If this step fails, it may leave the pacemaker cluster stopped (together
with all OpenStack services on the controller nodes). The root cause and
restoration procedure may vary, but in simple cases the pacemaker cluster
can be started by logging into one of the controllers and running `sudo
pcs cluster start --all`.
can be started by logging into one of the controllers and running ``sudo
pcs cluster start --all``.
.. note::

View File

@ -102,7 +102,7 @@ For example
Run tripleo-repos to install the appropriate repositories. The option below
will enable the latest master TripleO packages and the latest promoted
packages for all other OpenStack services and dependencies. There are other
repository configurations available in tripleo-repos, see its --help output
repository configurations available in tripleo-repos, see its ``--help`` output
for details.
.. code-block:: bash

View File

@ -39,7 +39,7 @@ Example: Leave logs in a swift container
If you want to perform a sosreport but do not currently wish to download the
logs, you can leave them in a swift container for later retrieval. The
`--collect-only` and `-c` options can be leveraged to store the
``--collect-only`` and ``-c`` options can be leveraged to store the
logs in a swift container. For example::
openstack overcloud support report collect -c logs_20170601 --collect-only controller
@ -51,18 +51,18 @@ the `openstack overcloud support report collect` command by running::
openstack overcloud support report collect -c logs_20170601 --download-only -o /tmp/mylogs controller
.. note:: There is a `--skip-container-delete` option that can be used if you
.. note:: There is a ``--skip-container-delete`` option that can be used if you
want to leave the logs in swift but still download them. This option
is ignored if `--collect-only` or `--download-only` options are
is ignored if ``--collect-only`` or ``--download-only`` options are
provided.
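A sketch combining download with keeping the container (the container name and
output directory reuse the values from the earlier examples)::

    openstack overcloud support report collect -c logs_20170601 \
        --skip-container-delete -o /tmp/mylogs controller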
Additional Options
^^^^^^^^^^^^^^^^^^
The `openstack overcloud support report collect` command has additional
The ``openstack overcloud support report collect`` command has additional
options that can be passed to work with the log bundles. Run the command with
`--help` to see additional options::
``--help`` to see additional options::
openstack overcloud support report collect --help