docs: remove ALL the unnecessary blockquotes

Due to missing/superfluous spaces, some of the content in the
guide is displayed as a blockquote without really being quoted
content. This change adds/removes spaces to remove ALL the
generated HTML blockquotes.

Change-Id: I25b0d9fa64cd474a844b5f3e6c126395a4e80f2c
Markus Zoeller 2017-09-15 14:19:35 -06:00
parent 2aa2da1d77
commit 4592ed06f9
13 changed files with 244 additions and 242 deletions


@@ -43,7 +43,7 @@ be modified or removed.
The steps to define your custom roles configuration are:
-1. Copy the default roles provided by `tripleo-heat-templates`:
+1. Copy the default roles provided by `tripleo-heat-templates`::
mkdir ~/roles
cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
@@ -57,17 +57,17 @@ match the name of the role. For example if adding a new role named `Galera`,
the role file name should be `Galera.yaml`. The file should at least contain
the following items:
-* name: Name of the role e.g "CustomController", mandatory
-* ServicesDefault: List of services, optional, defaults to an empty list
+* name: Name of the role e.g "CustomController", mandatory
+* ServicesDefault: List of services, optional, defaults to an empty list
See the default roles_data.yaml or overcloud-resource-registry-puppet.j2.yaml
for the list of supported services. Both files can be found in the top
tripleo-heat-templates folder
Additional items like the ones below should be included as well:
-* CountDefault: Default number of nodes, defaults to zero
-* HostnameFormatDefault: Format string for hostname, optional
-* Description: A few sentences describing the role and information
+* CountDefault: Default number of nodes, defaults to zero
+* HostnameFormatDefault: Format string for hostname, optional
+* Description: A few sentences describing the role and information
pertaining to the usage of the role.
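As an illustration only, a role entry in this format for the hypothetical `Galera` role mentioned above might look like the following (the service list is a placeholder, not a tested configuration)::

    # hypothetical example entry, values are placeholders
    - name: Galera
      CountDefault: 1
      HostnameFormatDefault: '%stackname%-galera-%index%'
      Description: >
        Standalone role intended to host the Galera database cluster.
      ServicesDefault:
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::MySQL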
The role file format is a basic yaml structure. The expectation is that there


@@ -137,10 +137,10 @@ integration points for additional third-party services, drivers or plugins.
The following interfaces are available:
-* `OS::TripleO::ControllerExtraConfigPre`: Controller node additional configuration
-* `OS::TripleO::ComputeExtraConfigPre`: Compute node additional configuration
-* `OS::TripleO::CephStorageExtraConfigPre` : CephStorage node additional configuration
-* `OS::TripleO::NodeExtraConfig`: additional configuration applied to all nodes (all roles).
+* `OS::TripleO::ControllerExtraConfigPre`: Controller node additional configuration
+* `OS::TripleO::ComputeExtraConfigPre`: Compute node additional configuration
+* `OS::TripleO::CephStorageExtraConfigPre` : CephStorage node additional configuration
+* `OS::TripleO::NodeExtraConfig`: additional configuration applied to all nodes (all roles).
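As a brief sketch of how such an interface is typically consumed (the template path below is a placeholder), a heat environment file maps it to a custom template and is passed to the overcloud deploy command::

    resource_registry:
      # placeholder path to a user-provided template
      OS::TripleO::NodeExtraConfig: /home/stack/templates/node-extra-config.yaml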
Below is an example of a per-node configuration template that shows additional node configuration
via standard heat SoftwareConfig_ resources::


@@ -31,12 +31,12 @@ value for compute nodes::
The parameters available are:
-* `ExtraConfig`: Apply the data to all nodes, e.g all roles
-* `ComputeExtraConfig`: Apply the data only to Compute nodes
-* `ControllerExtraConfig`: Apply the data only to Controller nodes
-* `BlockStorageExtraConfig`: Apply the data only to BlockStorage nodes
-* `ObjectStorageExtraConfig`: Apply the data only to ObjectStorage nodes
-* `CephStorageExtraConfig`: Apply the data only to CephStorage nodes
+* `ExtraConfig`: Apply the data to all nodes, e.g all roles
+* `ComputeExtraConfig`: Apply the data only to Compute nodes
+* `ControllerExtraConfig`: Apply the data only to Controller nodes
+* `BlockStorageExtraConfig`: Apply the data only to BlockStorage nodes
+* `ObjectStorageExtraConfig`: Apply the data only to ObjectStorage nodes
+* `CephStorageExtraConfig`: Apply the data only to CephStorage nodes
For any custom roles (defined via roles_data.yaml) the parameter name will
be RoleNameExtraConfig where RoleName is the name specified in roles_data.yaml.
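For instance, a minimal sketch of an environment file using one of these parameters (the hieradata key and value are illustrative placeholders)::

    parameter_defaults:
      ComputeExtraConfig:
        # placeholder hieradata, not a recommended value
        nova::compute::reserved_host_memory: 2048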


@@ -51,11 +51,11 @@ deploy command.
The same approach is possible for each role via these parameters:
-* ControllerSchedulerHints
-* ComputeSchedulerHints
-* BlockStorageSchedulerHints
-* ObjectStorageSchedulerHints
-* CephStorageSchedulerHints
+* ControllerSchedulerHints
+* ComputeSchedulerHints
+* BlockStorageSchedulerHints
+* ObjectStorageSchedulerHints
+* CephStorageSchedulerHints
For custom roles (defined via roles_data.yaml) the parameter will be named
RoleNameSchedulerHints, where RoleName is the name specified in roles_data.yaml.
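As a sketch of typical usage (the capability value is illustrative), the hints are passed through an environment file, for example to place Controller nodes on Ironic nodes tagged with a matching capability::

    parameter_defaults:
      ControllerSchedulerHints:
        # placeholder capability matching pre-tagged Ironic nodes
        'capabilities:node': 'controller-%index%'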


@@ -11,9 +11,9 @@ Execute below command to create the ``roles_data.yaml``::
Once a roles file is created, the following changes are required:
-- Deploy Command
-- Parameters
-- Network Config
+- Deploy Command
+- Parameters
+- Network Config
Deploy Command
----------------
@@ -45,11 +45,11 @@ Parameters
Following are the list of parameters which need to be provided for deploying
with OVS DPDK support.
-* OvsPmdCoreList: List of Logical CPUs to be allocated for Poll Mode Driver
-* OvsDpdkCoreList: List of Logical CPUs to be allocated for the openvswitch
+* OvsPmdCoreList: List of Logical CPUs to be allocated for Poll Mode Driver
+* OvsDpdkCoreList: List of Logical CPUs to be allocated for the openvswitch
host process (lcore list)
-* OvsDpdkMemoryChannels: Number of memory channels
-* OvsDpdkSocketMemory: Socket memory list per NUMA node
+* OvsDpdkMemoryChannels: Number of memory channels
+* OvsDpdkSocketMemory: Socket memory list per NUMA node
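A rough sketch of how these parameters might be grouped in an environment file follows; the CPU lists and sizes are placeholders and need to be derived from the actual NUMA topology of the compute nodes::

    parameter_defaults:
      # placeholder values, adjust to the hardware in use
      OvsPmdCoreList: "2,3,22,23"
      OvsDpdkCoreList: "0,1,20,21"
      OvsDpdkMemoryChannels: "4"
      OvsDpdkSocketMemory: "1024,1024"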
Example::
@@ -76,9 +76,9 @@ DPDK supported network interfaces should be specified in the network config
templates to configure OVS DPDK on the node. The following new network config
types have been added to support DPDK.
-- ovs_user_bridge
-- ovs_dpdk_port
-- ovs_dpdk_bond
+- ovs_user_bridge
+- ovs_dpdk_port
+- ovs_dpdk_bond
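A condensed sketch of how these types nest in a network config template (the bridge and NIC names are placeholders)::

    - type: ovs_user_bridge
      name: br-link          # placeholder bridge name
      members:
        - type: ovs_dpdk_port
          name: dpdk0
          members:
            - type: interface
              name: nic3     # placeholder NIC reference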
Example::


@@ -210,6 +210,8 @@ created on the undercloud, one should use a non-root user.
openstack overcloud image build
..
.. admonition:: RHEL
:class: rhel


@@ -131,26 +131,26 @@ Each service may define output variable(s) which control config file generation,
initialization, and stepwise deployment of all the containers for this service.
The following sections are available:
-* config_settings: This setting is generally inherited from the
+* config_settings: This setting is generally inherited from the
puppet/services templates and may be appended to if required
to support the docker specific config settings.
-* step_config: This setting controls the manifest that is used to
+* step_config: This setting controls the manifest that is used to
create docker config files via puppet. The puppet tags below are
used along with this manifest to generate a config directory for
this container.
-* kolla_config: Contains YAML that represents how to map config files
+* kolla_config: Contains YAML that represents how to map config files
into the kolla container. This config file is typically mapped into
the container itself at the /var/lib/kolla/config_files/config.json
location and drives how kolla's external config mechanisms work.
-* docker_config: Data that is passed to the docker-cmd hook to configure
+* docker_config: Data that is passed to the docker-cmd hook to configure
a container, or step of containers at each step. See the available steps
below and the related docker-cmd hook documentation in the heat-agents
project.
-* puppet_config: This section is a nested set of key value pairs
+* puppet_config: This section is a nested set of key value pairs
that drive the creation of config files using puppet.
Required parameters include:
@@ -174,7 +174,7 @@ The following sections are available:
used along with this manifest to generate a config directory for
this container.
-* docker_puppet_tasks: This section provides data to drive the
+* docker_puppet_tasks: This section provides data to drive the
docker-puppet.py tool directly. The task is executed only once
within the cluster (not on each node) and is useful for several
puppet snippets we require for initialization of things like
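To show roughly how these sections fit together, below is a heavily condensed, hypothetical outputs stanza; every name, image reference and value in it is a placeholder rather than a working service definition::

    outputs:
      role_data:
        description: Illustrative containerized service (placeholder)
        value:
          service_name: example
          config_settings:
            example::bind_host: {get_param: [ServiceNetMap, ExampleNetwork]}
          step_config: |
            include ::tripleo::profile::base::example
          kolla_config:
            /var/lib/kolla/config_files/example.json:
              command: /usr/bin/example-server
          docker_config:
            step_4:
              example_server:
                image: {get_param: DockerExampleImage}
                restart: always
          puppet_config:
            config_volume: example
            puppet_tags: example_config
            step_config: |
              include ::tripleo::profile::base::example
            config_image: {get_param: DockerExampleConfigImage}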


@@ -6,7 +6,7 @@ Create a snapshot of a running server
Create a new image by taking a snapshot of a running server and download the
image.
-::
+::
nova image-create instance_name image_name
glance image-download image_name --file exported_vm.qcow2
@@ -15,7 +15,7 @@ Import an image into Overcloud and launch an instance
-----------------------------------------------------
Upload the exported image into glance in Overcloud and launch a new instance.
-::
+::
glance image-create --name imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare
nova boot --poll --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id imported