From 4592ed06f9ca8dc75c6209eb789564b81d57caa2 Mon Sep 17 00:00:00 2001 From: Markus Zoeller Date: Fri, 15 Sep 2017 14:19:35 -0600 Subject: [PATCH] docs: remove ALL the unnecessary blockquotes Due to missing/superfluous spaces, some of the content in the guide is displayed as a blockquote without actually being quoted content. This change adds/removes spaces to remove ALL the generated HTML blockquotes. Change-Id: I25b0d9fa64cd474a844b5f3e6c126395a4e80f2c --- .../advanced_deployment/custom_roles.rst | 20 +-- .../advanced_deployment/deploy_manila.rst | 4 +- .../advanced_deployment/extra_config.rst | 8 +- .../advanced_deployment/node_config.rst | 12 +- .../advanced_deployment/node_placement.rst | 10 +- .../install/advanced_deployment/ops_tools.rst | 108 ++++++++-------- .../advanced_deployment/ovs_dpdk_config.rst | 22 ++-- .../basic_deployment/basic_deployment_cli.rst | 72 ++++++----- .../basic_deployment/basic_deployment_ui.rst | 18 +-- .../containers_deployment/architecture.rst | 80 ++++++------ .../install/mistral-api/mistral-api.rst | 8 +- .../install/post_deployment/upgrade.rst | 120 +++++++++--------- .../install/post_deployment/vm_snapshot.rst | 4 +- 13 files changed, 244 insertions(+), 242 deletions(-) diff --git a/doc/source/install/advanced_deployment/custom_roles.rst b/doc/source/install/advanced_deployment/custom_roles.rst index 65e17822..85be46fe 100644 --- a/doc/source/install/advanced_deployment/custom_roles.rst +++ b/doc/source/install/advanced_deployment/custom_roles.rst @@ -43,7 +43,7 @@ be modified or removed. The steps to define your custom roles configuration are: -1. Copy the default roles provided by `tripleo-heat-templates`: +1. Copy the default roles provided by `tripleo-heat-templates`:: mkdir ~/roles cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles @@ -57,18 +57,18 @@ match the name of the role. For example if adding a new role named `Galera`, the role file name should be `Galera.yaml`.
The file should at least contain the following items: - * name: Name of the role e.g "CustomController", mandatory - * ServicesDefault: List of services, optional, defaults to an empty list - See the default roles_data.yaml or overcloud-resource-registry-puppet.j2.yaml - for the list of supported services. Both files can be found in the top - tripleo-heat-templates folder +* name: Name of the role e.g "CustomController", mandatory +* ServicesDefault: List of services, optional, defaults to an empty list + See the default roles_data.yaml or overcloud-resource-registry-puppet.j2.yaml + for the list of supported services. Both files can be found in the top + tripleo-heat-templates folder Additional items like the ones below should be included as well: - * CountDefault: Default number of nodes, defaults to zero - * HostnameFormatDefault: Format string for hostname, optional - * Description: A few sentences describing the role and information - pertaining to the usage of the role. +* CountDefault: Default number of nodes, defaults to zero +* HostnameFormatDefault: Format string for hostname, optional +* Description: A few sentences describing the role and information + pertaining to the usage of the role. The role file format is a basic yaml structure. The expectation is that there is a single role per file. See the roles `README.rst` for additional details. For diff --git a/doc/source/install/advanced_deployment/deploy_manila.rst b/doc/source/install/advanced_deployment/deploy_manila.rst index 46c97d76..dd2af9b3 100644 --- a/doc/source/install/advanced_deployment/deploy_manila.rst +++ b/doc/source/install/advanced_deployment/deploy_manila.rst @@ -141,11 +141,11 @@ Deploying the Overcloud with an External Backend #. 
Copy the Manila driver-specific configuration file to your home directory: - - Generic driver:: + - Generic driver:: sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-generic-config.yaml ~ - - NetApp driver:: + - NetApp driver:: sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml ~ diff --git a/doc/source/install/advanced_deployment/extra_config.rst b/doc/source/install/advanced_deployment/extra_config.rst index 640ffa71..95b8288e 100644 --- a/doc/source/install/advanced_deployment/extra_config.rst +++ b/doc/source/install/advanced_deployment/extra_config.rst @@ -137,10 +137,10 @@ integration points for additional third-party services, drivers or plugins. The following interfaces are available: - * `OS::TripleO::ControllerExtraConfigPre`: Controller node additional configuration - * `OS::TripleO::ComputeExtraConfigPre`: Compute node additional configuration - * `OS::TripleO::CephStorageExtraConfigPre` : CephStorage node additional configuration - * `OS::TripleO::NodeExtraConfig`: additional configuration applied to all nodes (all roles). +* `OS::TripleO::ControllerExtraConfigPre`: Controller node additional configuration +* `OS::TripleO::ComputeExtraConfigPre`: Compute node additional configuration +* `OS::TripleO::CephStorageExtraConfigPre` : CephStorage node additional configuration +* `OS::TripleO::NodeExtraConfig`: additional configuration applied to all nodes (all roles). 
Below is an example of a per-node configuration template that shows additional node configuration via standard heat SoftwareConfig_ resources:: diff --git a/doc/source/install/advanced_deployment/node_config.rst b/doc/source/install/advanced_deployment/node_config.rst index 8d08b7b0..27481462 100644 --- a/doc/source/install/advanced_deployment/node_config.rst +++ b/doc/source/install/advanced_deployment/node_config.rst @@ -31,12 +31,12 @@ value for compute nodes:: The parameters available are: - * `ExtraConfig`: Apply the data to all nodes, e.g all roles - * `ComputeExtraConfig`: Apply the data only to Compute nodes - * `ControllerExtraConfig`: Apply the data only to Controller nodes - * `BlockStorageExtraConfig`: Apply the data only to BlockStorage nodes - * `ObjectStorageExtraConfig`: Apply the data only to ObjectStorage nodes - * `CephStorageExtraConfig`: Apply the data only to CephStorage nodes +* `ExtraConfig`: Apply the data to all nodes, e.g all roles +* `ComputeExtraConfig`: Apply the data only to Compute nodes +* `ControllerExtraConfig`: Apply the data only to Controller nodes +* `BlockStorageExtraConfig`: Apply the data only to BlockStorage nodes +* `ObjectStorageExtraConfig`: Apply the data only to ObjectStorage nodes +* `CephStorageExtraConfig`: Apply the data only to CephStorage nodes For any custom roles (defined via roles_data.yaml) the parameter name will be RoleNameExtraConfig where RoleName is the name specified in roles_data.yaml. diff --git a/doc/source/install/advanced_deployment/node_placement.rst b/doc/source/install/advanced_deployment/node_placement.rst index c462fb36..b674e2ba 100644 --- a/doc/source/install/advanced_deployment/node_placement.rst +++ b/doc/source/install/advanced_deployment/node_placement.rst @@ -51,11 +51,11 @@ deploy command. 
The same approach is possible for each role via these parameters: - * ControllerSchedulerHints - * ComputeSchedulerHints - * BlockStorageSchedulerHints - * ObjectStorageSchedulerHints - * CephStorageSchedulerHints +* ControllerSchedulerHints +* ComputeSchedulerHints +* BlockStorageSchedulerHints +* ObjectStorageSchedulerHints +* CephStorageSchedulerHints For custom roles (defined via roles_data.yaml) the parameter will be named RoleNameSchedulerHints, where RoleName is the name specified in roles_data.yaml. diff --git a/doc/source/install/advanced_deployment/ops_tools.rst b/doc/source/install/advanced_deployment/ops_tools.rst index 8e5b6a26..c910c475 100644 --- a/doc/source/install/advanced_deployment/ops_tools.rst +++ b/doc/source/install/advanced_deployment/ops_tools.rst @@ -71,7 +71,7 @@ Before deploying the Overcloud 1. Install client packages on overcloud-full image: - - Prepare installation script:: + - Prepare installation script:: cat >install.sh< --dns-nameserver diff --git a/doc/source/install/basic_deployment/basic_deployment_ui.rst b/doc/source/install/basic_deployment/basic_deployment_ui.rst index 77f672e2..7abae176 100644 --- a/doc/source/install/basic_deployment/basic_deployment_ui.rst +++ b/doc/source/install/basic_deployment/basic_deployment_ui.rst @@ -68,9 +68,9 @@ the following command on the undercloud:: ssh -Nf user@virthost -L 0.0.0.0:443:192.168.24.2:443 # If SSL ssh -Nf user@virthost -L 0.0.0.0:3000:192.168.24.1:3000 # If no SSL - .. note:: Quickstart started creating the tunnel automatically - during Pike. If using an older version you will have to create - the tunnel manually, for example:: + .. note:: Quickstart started creating the tunnel automatically + during Pike. If using an older version you will have to create + the tunnel manually, for example:: ssh -F /root/.quickstart/ssh.config.ansible undercloud -L 0.0.0.0:443:192.168.24.2:443 @@ -189,10 +189,10 @@ deployment in general, as well as for each individual environment. .. 
admonition:: Newton :class: newton - In Newton it was not possible to configure individual - environments. The environment templates should be updated - directly with the required parameters before uploading a new - plan. + In Newton it was not possible to configure individual + environments. The environment templates should be updated + directly with the required parameters before uploading a new + plan. Individual roles can also be configured by clicking on the Pencil icon beside the role name on each card. @@ -203,8 +203,8 @@ beside the role name on each card. .. admonition:: Newton :class: newton - In Newton, you may need to assign at least one node to the role - before the related configuration options are loaded. + In Newton, you may need to assign at least one node to the role + before the related configuration options are loaded. Assign Nodes diff --git a/doc/source/install/containers_deployment/architecture.rst b/doc/source/install/containers_deployment/architecture.rst index 76d20be3..05f7085e 100644 --- a/doc/source/install/containers_deployment/architecture.rst +++ b/doc/source/install/containers_deployment/architecture.rst @@ -131,55 +131,55 @@ Each service may define output variable(s) which control config file generation, initialization, and stepwise deployment of all the containers for this service. The following sections are available: - * config_settings: This setting is generally inherited from the - puppet/services templates and may be appended to if required - to support the docker specific config settings. +* config_settings: This setting is generally inherited from the + puppet/services templates and may be appended to if required + to support the docker specific config settings. - * step_config: This setting controls the manifest that is used to - create docker config files via puppet. The puppet tags below are - used along with this manifest to generate a config directory for - this container. 
+* step_config: This setting controls the manifest that is used to + create docker config files via puppet. The puppet tags below are + used along with this manifest to generate a config directory for + this container. - * kolla_config: Contains YAML that represents how to map config files - into the kolla container. This config file is typically mapped into - the container itself at the /var/lib/kolla/config_files/config.json - location and drives how kolla's external config mechanisms work. +* kolla_config: Contains YAML that represents how to map config files + into the kolla container. This config file is typically mapped into + the container itself at the /var/lib/kolla/config_files/config.json + location and drives how kolla's external config mechanisms work. - * docker_config: Data that is passed to the docker-cmd hook to configure - a container, or step of containers at each step. See the available steps - below and the related docker-cmd hook documentation in the heat-agents - project. +* docker_config: Data that is passed to the docker-cmd hook to configure + a container, or step of containers at each step. See the available steps + below and the related docker-cmd hook documentation in the heat-agents + project. - * puppet_config: This section is a nested set of key value pairs - that drive the creation of config files using puppet. - Required parameters include: +* puppet_config: This section is a nested set of key value pairs + that drive the creation of config files using puppet. + Required parameters include: - * puppet_tags: Puppet resource tag names that are used to generate config - files with puppet. Only the named config resources are used to generate - a config file. Any service that specifies tags will have the default - tags of 'file,concat,file_line,augeas,cron' appended to the setting. - Example: keystone_config + * puppet_tags: Puppet resource tag names that are used to generate config + files with puppet. 
Only the named config resources are used to generate + a config file. Any service that specifies tags will have the default + tags of 'file,concat,file_line,augeas,cron' appended to the setting. + Example: keystone_config - * config_volume: The name of the volume (directory) where config files - will be generated for this service. Use this as the location to - bind mount into the running Kolla container for configuration. + * config_volume: The name of the volume (directory) where config files + will be generated for this service. Use this as the location to + bind mount into the running Kolla container for configuration. - * config_image: The name of the docker image that will be used for - generating configuration files. This is often the same container - that the runtime service uses. Some services share a common set of - config files which are generated in a common base container. + * config_image: The name of the docker image that will be used for + generating configuration files. This is often the same container + that the runtime service uses. Some services share a common set of + config files which are generated in a common base container. - * step_config: This setting controls the manifest that is used to - create docker config files via puppet. The puppet tags below are - used along with this manifest to generate a config directory for - this container. + * step_config: This setting controls the manifest that is used to + create docker config files via puppet. The puppet tags below are + used along with this manifest to generate a config directory for + this container. - * docker_puppet_tasks: This section provides data to drive the - docker-puppet.py tool directly. The task is executed only once - within the cluster (not on each node) and is useful for several - puppet snippets we require for initialization of things like - keystone endpoints, database users, etc. See docker-puppet.py - for formatting. 
+* docker_puppet_tasks: This section provides data to drive the + docker-puppet.py tool directly. The task is executed only once + within the cluster (not on each node) and is useful for several + puppet snippets we require for initialization of things like + keystone endpoints, database users, etc. See docker-puppet.py + for formatting. Docker steps diff --git a/doc/source/install/mistral-api/mistral-api.rst b/doc/source/install/mistral-api/mistral-api.rst index 4e195114..23adca02 100644 --- a/doc/source/install/mistral-api/mistral-api.rst +++ b/doc/source/install/mistral-api/mistral-api.rst @@ -142,12 +142,12 @@ it can be changed if they are all consistent. This will be the plan name. 1. Create the Swift container. - .. code-block:: bash + .. code-block:: bash openstack action execution run tripleo.plan.create_container \ '{"container":"my_cloud"}' - .. note:: + .. note:: Creating a swift container directly isn't sufficient, as this Mistral action also sets metadata on the container and may include further @@ -155,7 +155,7 @@ it can be changed if they are all consistent. This will be the plan name. 2. Upload the files to Swift. - .. code-block:: bash + .. code-block:: bash swift upload my_cloud path/to/tripleo/templates @@ -163,7 +163,7 @@ it can be changed if they are all consistent. This will be the plan name. for the uploaded templates, do some initial template processing and generate the passwords. - .. code-block:: bash + .. code-block:: bash openstack workflow execution create tripleo.plan_management.v1.create_deployment_plan \ '{"container":"my_cloud"}' diff --git a/doc/source/install/post_deployment/upgrade.rst b/doc/source/install/post_deployment/upgrade.rst index 0216c76f..f8af140f 100644 --- a/doc/source/install/post_deployment/upgrade.rst +++ b/doc/source/install/post_deployment/upgrade.rst @@ -30,31 +30,31 @@ Upgrading the Undercloud 1. Disable the old OpenStack release repositories and enable new release repositories on the undercloud: - .. 
admonition:: Mitaka to Newton - :class: mton + .. admonition:: Mitaka to Newton + :class: mton :: export CURRENT_VERSION=mitaka export NEW_VERSION=newton - .. admonition:: Newton to Ocata - :class: ntoo + .. admonition:: Newton to Ocata + :class: ntoo :: export CURRENT_VERSION=newton export NEW_VERSION=ocata - Backup and disable current repos. Note that the repository files might be - named differently depending on your installation:: + Backup and disable current repos. Note that the repository files might be + named differently depending on your installation:: mkdir /home/stack/REPOBACKUP sudo mv /etc/yum.repos.d/delorean* /home/stack/REPOBACKUP/ - Get and enable new repos for `NEW_VERSION`: + Get and enable new repos for `NEW_VERSION`: - .. include:: ../repositories.txt + .. include:: ../repositories.txt 2. Run undercloud upgrade: @@ -71,37 +71,37 @@ Upgrading the Undercloud .. admonition:: Mitaka to Newton :class: mton - In the first release of instack-undercloud newton(5.0.0), the undercloud - telemetry services are **disabled** by default. In order to maintain the - telemetry services during the mitaka to newton upgrade the operator must - explicitly enable them **before** running the undercloud upgrade. This - is done by adding:: + In the first release of instack-undercloud newton(5.0.0), the undercloud + telemetry services are **disabled** by default. In order to maintain the + telemetry services during the mitaka to newton upgrade the operator must + explicitly enable them **before** running the undercloud upgrade. This + is done by adding:: enable_telemetry = true - in the [DEFAULT] section of the undercloud.conf configuration file. + in the [DEFAULT] section of the undercloud.conf configuration file. - If you are using any newer newton release, this option is switched back - to **enabled** by default to make upgrade experience better. Hence, if - you are using a later newton release you don't need to explicitly enable - this option. 
+ If you are using any newer newton release, this option is switched back + to **enabled** by default to make upgrade experience better. Hence, if + you are using a later newton release you don't need to explicitly enable + this option. .. admonition:: Ocata to Pike :class: mton - Prior to Pike, TripleO deployed Ceph with puppet-ceph. With the - Pike release it is possible to use TripleO to deploy Ceph with - either ceph-ansible or puppet-ceph, though puppet-ceph is - deprecated. To use ceph-ansible, the CentOS Storage SIG Ceph - repository must be enabled on the undercloud and the - ceph-ansible package must then be installed:: + Prior to Pike, TripleO deployed Ceph with puppet-ceph. With the + Pike release it is possible to use TripleO to deploy Ceph with + either ceph-ansible or puppet-ceph, though puppet-ceph is + deprecated. To use ceph-ansible, the CentOS Storage SIG Ceph + repository must be enabled on the undercloud and the + ceph-ansible package must then be installed:: sudo yum -y install --enablerepo=extras centos-release-ceph-jewel sudo yum -y install ceph-ansible - It is not yet possible to migrate an existing puppet-ceph - deployment to a ceph-ansible deployment. Only new deployments - are currently possible with ceph-ansible. + It is not yet possible to migrate an existing puppet-ceph + deployment to a ceph-ansible deployment. Only new deployments + are currently possible with ceph-ansible. The following commands will upgrade the undercloud: @@ -298,12 +298,12 @@ Upgrading the Overcloud to Newton and earlier :class: mton - **Deliver the migration for ceilometer to run under httpd.** + **Deliver the migration for ceilometer to run under httpd.** - This is to deliver the migration for ceilometer to be run under httpd (apache) - rather than eventlet as was the case before. 
To execute this step run - `overcloud deploy`, passing in the full set of environment files plus - `major-upgrade-ceilometer-wsgi-mitaka-newton.yaml`:: + This is to deliver the migration for ceilometer to be run under httpd (apache) + rather than eventlet as was the case before. To execute this step run + `overcloud deploy`, passing in the full set of environment files plus + `major-upgrade-ceilometer-wsgi-mitaka-newton.yaml`:: openstack overcloud deploy --templates \ -e \ @@ -354,19 +354,19 @@ Upgrading the Overcloud to Newton and earlier .. admonition:: Mitaka to Newton :class: mton - **Explicitly disable sahara services if so desired:** - As discussed at bug1630247_ sahara services are disabled by default - in the Newton overcloud deployment. This special case is handled for - the duration of the upgrade by defaulting to 'keep sahara-\*'. + **Explicitly disable sahara services if so desired:** + As discussed at bug1630247_ sahara services are disabled by default + in the Newton overcloud deployment. This special case is handled for + the duration of the upgrade by defaulting to 'keep sahara-\*'. - That is by default sahara services are restarted after the mitaka to - newton upgrade of controller nodes and sahara config is re-applied - during the final upgrade converge step. + That is by default sahara services are restarted after the mitaka to + newton upgrade of controller nodes and sahara config is re-applied + during the final upgrade converge step. 
- If an operator wishes to **disable** sahara services as part of the mitaka - to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_ - environment file during the controller upgrade step as well as during - the converge step later:: + If an operator wishes to **disable** sahara services as part of the mitaka + to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_ + environment file during the controller upgrade step as well as during + the converge step later:: openstack overcloud deploy --templates \ -e \ @@ -419,19 +419,19 @@ Upgrading the Overcloud to Newton and earlier .. admonition:: Mitaka to Newton :class: mton - **Explicitly disable sahara services if so desired:** - As discussed at bug1630247_ sahara services are disabled by default - in the Newton overcloud deployment. This special case is handled for - the duration of the upgrade by defaulting to 'keep sahara-\*'. + **Explicitly disable sahara services if so desired:** + As discussed at bug1630247_ sahara services are disabled by default + in the Newton overcloud deployment. This special case is handled for + the duration of the upgrade by defaulting to 'keep sahara-\*'. - That is by default sahara services are restarted after the mitaka to - newton upgrade of controller nodes and sahara config is re-applied - during the final upgrade converge step. + That is by default sahara services are restarted after the mitaka to + newton upgrade of controller nodes and sahara config is re-applied + during the final upgrade converge step. 
- If an operator wishes to **disable** sahara services as part of the mitaka - to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_ - environment file during the controller upgrade earlier and converge - step here:: + If an operator wishes to **disable** sahara services as part of the mitaka + to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_ + environment file during the controller upgrade earlier and converge + step here:: openstack overcloud deploy --templates \ -e \ @@ -461,13 +461,13 @@ Upgrading the Overcloud to Newton and earlier :class: mton - **Deliver the data migration for aodh.** + **Deliver the data migration for aodh.** - This is to deliver the data migration for aodh. In Newton, aodh uses its - own mysql backend. This step migrates all the existing alarm data from - mongodb to the new mysql backend. To execute this step run - `overcloud deploy`, passing in the full set of environment files plus - `major-upgrade-aodh-migration.yaml`:: + This is to deliver the data migration for aodh. In Newton, aodh uses its + own mysql backend. This step migrates all the existing alarm data from + mongodb to the new mysql backend. To execute this step run + `overcloud deploy`, passing in the full set of environment files plus + `major-upgrade-aodh-migration.yaml`:: openstack overcloud deploy --templates \ -e \ diff --git a/doc/source/install/post_deployment/vm_snapshot.rst b/doc/source/install/post_deployment/vm_snapshot.rst index 25355f27..c9a54c73 100644 --- a/doc/source/install/post_deployment/vm_snapshot.rst +++ b/doc/source/install/post_deployment/vm_snapshot.rst @@ -6,7 +6,7 @@ Create a snapshot of a running server Create a new image by taking a snapshot of a running server and download the image. 
- :: +:: nova image-create instance_name image_name glance image-download image_name --file exported_vm.qcow2 @@ -15,7 +15,7 @@ Import an image into Overcloud and launch an instance ----------------------------------------------------- Upload the exported image into glance in Overcloud and launch a new instance. - :: +:: glance image-create --name imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare nova boot --poll --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id imported
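The mechanism behind every hunk in this patch is the same reStructuredText rule: a block that is indented relative to the preceding paragraph, with no list marker or `::` literal-block introduction to account for the indent, is parsed by docutils as a block quote, which Sphinx then renders inside an HTML `<blockquote>`. A minimal sketch of the before/after (using one bullet item from the patched `node_config.rst` text as the illustration):

```rst
The parameters available are:

  * ``ExtraConfig``: Apply the data to all nodes

.. The two-space indent above makes docutils parse the bullet list as the
   content of a block quote, so it renders wrapped in <blockquote>.
   Written flush with the left margin, the same list renders as a plain
   bullet list:

The parameters available are:

* ``ExtraConfig``: Apply the data to all nodes
```

Conversely, the hunks that change a trailing `:` to `::` go the other direction: they mark the following indented block as a deliberate literal block, so the indentation is consumed as code rather than as an accidental quote.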