From 265d96bed16045e4bf6fd6d8655b8eee6fc7469b Mon Sep 17 00:00:00 2001 From: Juanita-Balaraj Date: Tue, 26 Oct 2021 17:31:52 -0400 Subject: [PATCH] Fixed \_ as the output was not rendering correctly (pick r5 updates only) Fixed Patchset 4 comments Fixed Patchset 3 comments and added additional updates Signed-off-by: Juanita-Balaraj Change-Id: I7482afc3a90bbdc94b6ecd8b6ac39d831b8a45db Signed-off-by: Juanita-Balaraj --- ...pplication-commands-and-helm-overrides.rst | 40 +++++++++---------- ...rials-authentication-and-authorization.rst | 2 +- ...ring-starlingx-system-data-and-storage.rst | 20 +++++----- ...kup-playbook-locally-on-the-controller.rst | 28 ++++++------- ...ning-ansible-restore-playbook-remotely.rst | 34 ++++++++-------- ...rapping-from-a-private-docker-registry.rst | 6 +-- ...rapping-from-a-private-docker-registry.rst | 6 +-- ...ate-orchestration-on-distributed-cloud.rst | 4 +- ...distributed-cloud-update-orchestration.rst | 10 ++--- .../device-image-update-orchestration.rst | 10 ++--- .../distributed-cloud-architecture.rst | 2 +- ...de-orchestration-process-using-the-cli.rst | 10 ++--- ...ng-redfish-platform-management-service.rst | 12 +++--- .../configuration-using-metadata.rst | 2 +- ...-ip-address-provisioning-using-the-cli.rst | 2 +- .../kubernetes/operator-command-logging.rst | 2 +- ...tl-and-helm-clients-directly-on-a-host.rst | 4 +- ...tapp-deployment-as-the-storage-backend.rst | 20 +++++----- .../kubernetes/disk-naming-conventions.rst | 2 +- .../install-additional-rbd-provisioners.rst | 2 +- ...iguration-storage-related-cli-commands.rst | 8 ++-- 21 files changed, 113 insertions(+), 113 deletions(-) diff --git a/doc/source/admintasks/kubernetes/admin-application-commands-and-helm-overrides.rst b/doc/source/admintasks/kubernetes/admin-application-commands-and-helm-overrides.rst index 34310317f..b54e5cde9 100644 --- a/doc/source/admintasks/kubernetes/admin-application-commands-and-helm-overrides.rst +++ b/doc/source/admintasks/kubernetes/admin-application-commands-and-helm-overrides.rst @@ -51,7 +51,7 @@ commands to manage containerized applications provided as part of |prod|. where: - **** + **** The name of the application to show details. For example: @@ -83,7 +83,7 @@ commands to manage containerized applications provided as part of |prod|. where the following are optional arguments: - **** + **** Assigns a custom name for application. You can use this name to interact with the application in the future. @@ -92,7 +92,7 @@ commands to manage containerized applications provided as part of |prod|. and the following is a positional argument: - **** + **** The path to the tar file containing the application to be uploaded. For example: @@ -126,7 +126,7 @@ commands to manage containerized applications provided as part of |prod|. where the following is a positional argument: - **** + **** The name of the application. and the following is an optional argument: @@ -175,7 +175,7 @@ commands to manage containerized applications provided as part of |prod|. +---------------------+--------------------------------+---------------+ - To show the overrides for a particular chart, use the following command. - System overrides are displayed in the **system\_overrides** section of + System overrides are displayed in the **system_overrides** section of the **Property** column. .. code-block:: none @@ -185,10 +185,10 @@ commands to manage containerized applications provided as part of |prod|. where the following are positional arguments: - **** + **** The name of the application. 
- **< chart\_name>** + **< chart_name>** The name of the chart. **** @@ -212,10 +212,10 @@ commands to manage containerized applications provided as part of |prod|. where the following are positional arguments: - **** + **** The name of the application. - **** + **** The name of the chart. **** @@ -257,7 +257,7 @@ commands to manage containerized applications provided as part of |prod|. | | DEBUG: true | +----------------+-------------------+ - The user overrides are shown in the **user\_overrides** section of the + The user overrides are shown in the **user_overrides** section of the **Property** column. .. note:: @@ -280,10 +280,10 @@ commands to manage containerized applications provided as part of |prod|. and the following are positional arguments: - **** + **** The name of the application. - **** + **** The name of the chart. **** @@ -302,10 +302,10 @@ commands to manage containerized applications provided as part of |prod|. where the following are positional arguments: - **** + **** The name of the application. - **** + **** The name of the chart. **** @@ -334,7 +334,7 @@ commands to manage containerized applications provided as part of |prod|. and the following is a positional argument: - **** + **** The name of the application to apply. For example: @@ -366,7 +366,7 @@ commands to manage containerized applications provided as part of |prod|. where: - **** + **** The name of the application to abort. For example: @@ -389,7 +389,7 @@ commands to manage containerized applications provided as part of |prod|. where the following are optional arguments: - **** + **** The name of the application to update. You can look up the name of an application using the @@ -417,7 +417,7 @@ commands to manage containerized applications provided as part of |prod|. and the following is a positional argument which must come last: - **** + **** The tar file containing the application manifest, Helm charts and configuration file. @@ -431,7 +431,7 @@ commands to manage containerized applications provided as part of |prod|. where: - **** + **** The name of the application to remove. For example: @@ -466,7 +466,7 @@ commands to manage containerized applications provided as part of |prod|. where: - **** + **** The name of the application to delete. You must run :command:`application-remove` before deleting an application. diff --git a/doc/source/admintasks/kubernetes/kubernetes-admin-tutorials-authentication-and-authorization.rst b/doc/source/admintasks/kubernetes/kubernetes-admin-tutorials-authentication-and-authorization.rst index 6645bcb92..885762d44 100644 --- a/doc/source/admintasks/kubernetes/kubernetes-admin-tutorials-authentication-and-authorization.rst +++ b/doc/source/admintasks/kubernetes/kubernetes-admin-tutorials-authentication-and-authorization.rst @@ -52,7 +52,7 @@ example, the user **testuser** is correct in the following URL, while cloud/systemController, access to the local registry can only be done using registry.local:9001. registry.central:9001 will be inaccessible. Installing a |CA|-signed certificate for the registry and the certificate of the |CA| as - an 'ssl\_ca' certificate will remove this restriction. + an 'ssl_ca' certificate will remove this restriction. For more information about Docker commands, see `https://docs.docker.com/engine/reference/commandline/docker/ `__. 
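As a quick illustration of the local registry access described above, the following is a minimal sketch of logging in as the **testuser** account and pulling an image through registry.local:9001; the project and image path ``testuser/busybox:latest`` is a hypothetical placeholder, not a value taken from this guide.

.. code-block:: none

   # authenticate against the local registry with platform credentials
   $ docker login registry.local:9001 -u testuser

   # pull a previously pushed image back from the local registry
   $ docker pull registry.local:9001/testuser/busybox:latest

The same image reference form (registry.local:9001/<project>/<image>:<tag>) should apply to docker push and to pod image specifications that pull from the local registry.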
diff --git a/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst b/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst index 836f89db6..7a080a92e 100644 --- a/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst +++ b/doc/source/backup/kubernetes/restoring-starlingx-system-data-and-storage.rst @@ -128,7 +128,7 @@ conditions are in place: #. Install network connectivity required for the subcloud. #. Ensure that the backup file are available on the controller. Run both - Ansible Restore playbooks, restore\_platform.yml and restore\_user\_images.yml. + Ansible Restore playbooks, restore_platform.yml and restore_user_images.yml. For more information on restoring the back up file, see :ref:`Run Restore Playbook Locally on the Controller `, and :ref:`Run @@ -139,7 +139,7 @@ conditions are in place: The backup files contain the system data and updates. #. If the backup file contains patches, Ansible Restore playbook - restore\_platform.yml will apply the patches and prompt you to reboot the + restore_platform.yml will apply the patches and prompt you to reboot the system, you will need to re-run Ansible Restore playbook. The current software version on the controller is compared against the @@ -162,7 +162,7 @@ conditions are in place: Rerun the Ansible Playbook if there were patches applied and you were prompted to reboot the system. -#. Restore the local registry using the file restore\_user\_images.yml. +#. Restore the local registry using the file restore_user_images.yml. This must be done before unlocking controller-0. @@ -176,7 +176,7 @@ conditions are in place: becomes operational. #. If the system is a Distributed Cloud system controller, restore the **dc-vault** - using the restore\_dc\_vault.yml playbook. Perform this step after unlocking + using the restore_dc_vault.yml playbook. Perform this step after unlocking controller-0: .. code-block:: none @@ -261,7 +261,7 @@ conditions are in place: | 6 | compute-1 | worker | locked |disabled |offline | +----+-------------+------------+---------------+-----------+------------+ -#. Restore storage configuration. If :command:`wipe\_ceph\_osds` is set to +#. Restore storage configuration. If :command:`wipe_ceph_osds` is set to **True**, follow the same procedure used to restore **controller-1**, beginning with host **storage-0** and proceeding in sequence. @@ -275,12 +275,12 @@ conditions are in place: the restore procedure without interruption. Standard with Controller Storage install or reinstall depends on the - :command:`wipe\_ceph\_osds` configuration: + :command:`wipe_ceph_osds` configuration: - #. If :command:`wipe\_ceph\_osds` is set to **true**, reinstall the + #. If :command:`wipe_ceph_osds` is set to **true**, reinstall the storage hosts. - #. If :command:`wipe\_ceph\_osds` is set to **false** \(default + #. If :command:`wipe_ceph_osds` is set to **false** \(default option\), do not reinstall the storage hosts. .. caution:: @@ -315,9 +315,9 @@ conditions are in place: .. caution:: Do not proceed until the Ceph cluster is healthy and the message - HEALTH\_OK appears. + HEALTH_OK appears. - If the message HEALTH\_WARN appears, wait a few minutes and then try + If the message HEALTH_WARN appears, wait a few minutes and then try again. 
If the warning condition persists, consult the public documentation for troubleshooting Ceph monitors \(for example, `http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshootin diff --git a/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst b/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst index 9a6118173..84ac8f256 100644 --- a/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst +++ b/doc/source/backup/kubernetes/running-ansible-backup-playbook-locally-on-the-controller.rst @@ -15,7 +15,7 @@ Use the following command to run the Ansible Backup playbook and back up the ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass= admin_password=" -e "backup_user_local_registry=true" -The and need to be set correctly +The and need to be set correctly using the ``-e`` option on the command line, or an override file, or in the Ansible secret file. @@ -23,44 +23,44 @@ The output files will be named: .. _running-ansible-backup-playbook-locally-on-the-controller-ul-wj1-vxh-pmb: -- inventory\_hostname\_platform\_backup\_timestamp.tgz +- inventory_hostname_platform_backup_timestamp.tgz -- inventory\_hostname\_openstack\_backup\_timestamp.tgz +- inventory_hostname_openstack_backup_timestamp.tgz -- inventory\_hostname\_docker\_local\_registry\_backup\_timestamp.tgz +- inventory_hostname_docker_local_registry_backup_timestamp.tgz -- inventory\_hostname\_dc\_vault\_backup\_timestamp.tgz +- inventory_hostname_dc_vault_backup_timestamp.tgz The variables prefix can be overridden using the ``-e`` option on the command line or by using an override file. .. _running-ansible-backup-playbook-locally-on-the-controller-ul-rdp-gyh-pmb: -- platform\_backup\_filename\_prefix +- platform_backup_filename_prefix -- openstack\_backup\_filename\_prefix +- openstack_backup_filename_prefix -- docker\_local\_registry\_backup\_filename\_prefix +- docker_local_registry_backup_filename_prefix -- dc\_vault\_backup\_filename\_prefix +- dc_vault_backup_filename_prefix The generated backup tar files will be displayed in the following format, for example: .. _running-ansible-backup-playbook-locally-on-the-controller-ul-p3b-f13-pmb: -- localhost\_docker\_local\_registry\_backup\_2020\_07\_15\_21\_24\_22.tgz +- localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz -- localhost\_platform\_backup\_2020\_07\_15\_21\_24\_22.tgz +- localhost_platform_backup_2020_07_15_21_24_22.tgz -- localhost\_openstack\_backup\_2020\_07\_15\_21\_24\_22.tgz +- localhost_openstack_backup_2020_07_15_21_24_22.tgz -- localhost\_dc\_vault\_backup\_2020\_07\_15\_21\_24\_22.tgz +- localhost_dc_vault_backup_2020_07_15_21_24_22.tgz These files are located by default in the /opt/backups directory on controller-0, and contains the complete system backup. -If the default location needs to be modified, the variable backup\_dir can +If the default location needs to be modified, the variable backup_dir can be overridden using the ``-e`` option on the command line or by using an override file. 
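To make the override mechanism above concrete, the following sketch runs the same backup playbook while overriding one filename prefix and the backup directory with the ``-e`` option; the prefix ``site1_platform_backup`` and the directory ``/opt/backups/archive`` are hypothetical values chosen only for this example.

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml \
      -e "ansible_become_pass=<sysadmin password> admin_password=<admin password>" \
      -e "platform_backup_filename_prefix=site1_platform_backup" \
      -e "backup_dir=/opt/backups/archive"

With these overrides the platform backup archive should be written under /opt/backups/archive, with the overridden prefix appearing in the generated tar file name instead of the default.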
diff --git a/doc/source/backup/kubernetes/system-backup-running-ansible-restore-playbook-remotely.rst b/doc/source/backup/kubernetes/system-backup-running-ansible-restore-playbook-remotely.rst index 466c1582e..099dbc1e0 100644 --- a/doc/source/backup/kubernetes/system-backup-running-ansible-restore-playbook-remotely.rst +++ b/doc/source/backup/kubernetes/system-backup-running-ansible-restore-playbook-remotely.rst @@ -73,24 +73,24 @@ In this method you can run Ansible Restore playbook and point to controller-0. expects both the **initial_backup_dir** and **backup_filename** to be specified. - - The backup\_filename is the platform backup tar file. It must be + - The backup_filename is the platform backup tar file. It must be provided using the ``-e`` option on the command line, for example: .. code-block:: none -e backup\_filename= localhost_platform_backup_2019_07_15_14_46_37.tgz - - The initial\_backup\_dir is the location on the Ansible control + - The initial_backup_dir is the location on the Ansible control machine where the platform backup tar file is placed to restore the platform. It must be provided using ``-e`` option on the command line. - - The :command:`admin\_password`, :command:`ansible\_become\_pass`, - and :command:`ansible\_ssh\_pass` need to be set correctly using + - The :command:`admin_password`, :command:`ansible_become_pass`, + and :command:`ansible_ssh_pass` need to be set correctly using the ``-e`` option on the command line or in the Ansible secret file. - :command:`ansible\_ssh\_pass` is the password to the sysadmin user + :command:`ansible_ssh_pass` is the password to the sysadmin user on controller-0. - - The :command:`ansible\_remote\_tmp` should be set to a new + - The :command:`ansible_remote_tmp` should be set to a new directory \(not required to create it ahead of time\) under /home/sysadmin on controller-0 using the ``-e`` option on the command line. @@ -106,7 +106,7 @@ In this method you can run Ansible Restore playbook and point to controller-0. the patches and prompt you to reboot the system. Then you will need to re-run Ansible Restore playbook. -#. After running the restore\_platform.yml playbook, you can restore the local +#. After running the restore_platform.yml playbook, you can restore the local registry images. .. note:: @@ -119,33 +119,33 @@ In this method you can run Ansible Restore playbook and point to controller-0. where optional-extra-vars can be: - - The backup\_filename is the local registry backup tar file. It + - The backup_filename is the local registry backup tar file. It must be provided using the ``-e`` option on the command line, for example: .. code-block:: none - -e backup\_filename= localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz + -e backup_filename= localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz - - The initial\_backup\_dir is the location on the Ansible control + - The initial_backup_dir is the location on the Ansible control machine where the platform backup tar file is located. It must be provided using ``-e`` option on the command line. - - The :command:`ansible\_become\_pass`, and - :command:`ansible\_ssh\_pass` need to be set correctly using the + - The :command:`ansible_become_pass`, and + :command:`ansible_ssh_pass` need to be set correctly using the ``-e`` option on the command line or in the Ansible secret file. - :command:`ansible\_ssh\_pass` is the password to the sysadmin user + :command:`ansible_ssh_pass` is the password to the sysadmin user on controller-0. 
- - The backup\_dir should be set to a directory on controller-0. + - The backup_dir should be set to a directory on controller-0. The directory must have sufficient space for local registry backup - to be copied. The backup\_dir is set using the ``-e`` option on the + to be copied. The backup_dir is set using the ``-e`` option on the command line. - - The :command:`ansible\_remote\_tmp` should be set to a new + - The :command:`ansible_remote_tmp` should be set to a new directory on controller-0. Ansible will use this directory to copy files, and the directory must have sufficient space for local - registry backup to be copied. The :command:`ansible\_remote\_tmp` + registry backup to be copied. The :command:`ansible_remote_tmp` is set using the ``-e`` option on the command line. For example, run the local registry restore playbook, where diff --git a/doc/source/deploy_install_guides/r5_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst b/doc/source/deploy_install_guides/r5_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst index 4672dbec5..970b9f0ce 100644 --- a/doc/source/deploy_install_guides/r5_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst +++ b/doc/source/deploy_install_guides/r5_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst @@ -33,13 +33,13 @@ your server is isolated from the public Internet. username: password: - Where ```` and - ```` are your login credentials for the + Where ```` and + ```` are your login credentials for the ```` private Docker registry. .. note:: ```` must be a DNS name resolvable by the dns servers - configured in the ``dns\_servers:`` structure of the ansible bootstrap + configured in the ``dns_servers:`` structure of the ansible bootstrap override file /home/sysadmin/localhost.yml. #. For any additional local registry images required, use the full image name diff --git a/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst b/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst index 83bc743bb..6577f8658 100644 --- a/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst +++ b/doc/source/deploy_install_guides/r6_release/bare_metal/bootstrapping-from-a-private-docker-registry.rst @@ -33,13 +33,13 @@ your server is isolated from the public Internet. username: password: - Where ```` and - ```` are your login credentials for the + Where ```` and + ```` are your login credentials for the ```` private Docker registry. .. note:: ```` must be a DNS name resolvable by the dns servers - configured in the ``dns\_servers:`` structure of the ansible bootstrap + configured in the ``dns_servers:`` structure of the ansible bootstrap override file /home/sysadmin/localhost.yml. #. For any additional local registry images required, use the full image name diff --git a/doc/source/dist_cloud/kubernetes/configuring-kubernetes-update-orchestration-on-distributed-cloud.rst b/doc/source/dist_cloud/kubernetes/configuring-kubernetes-update-orchestration-on-distributed-cloud.rst index 397ab4462..503d123e2 100644 --- a/doc/source/dist_cloud/kubernetes/configuring-kubernetes-update-orchestration-on-distributed-cloud.rst +++ b/doc/source/dist_cloud/kubernetes/configuring-kubernetes-update-orchestration-on-distributed-cloud.rst @@ -138,13 +138,13 @@ controller for access by subclouds. 
For example: **--subcloud-apply-type** Determines whether the subclouds are upgraded in parallel, or serially. If - this is not specified using the CLI, the values for subcloud\_update\_type + this is not specified using the CLI, the values for subcloud_update_type defined for each subcloud group will be used by default. **--max-parallel-subclouds** Sets the maximum number of subclouds that can be upgraded in parallel \(default 20\). If this is not specified using the CLI, the values for - max\_parallel\_subclouds defined for each subcloud group will be used by + max_parallel_subclouds defined for each subcloud group will be used by default. **--stop-on-failure** diff --git a/doc/source/dist_cloud/kubernetes/creating-an-update-strategy-for-distributed-cloud-update-orchestration.rst b/doc/source/dist_cloud/kubernetes/creating-an-update-strategy-for-distributed-cloud-update-orchestration.rst index 816900a5e..9195ad057 100644 --- a/doc/source/dist_cloud/kubernetes/creating-an-update-strategy-for-distributed-cloud-update-orchestration.rst +++ b/doc/source/dist_cloud/kubernetes/creating-an-update-strategy-for-distributed-cloud-update-orchestration.rst @@ -35,8 +35,8 @@ You must be in **SystemController** mode. To change the mode, see tab. .. image:: figures/vhy1525122582274.png - - + + #. On the Cloud Patching Orchestration tab, click **Create Strategy**. @@ -49,7 +49,7 @@ You must be in **SystemController** mode. To change the mode, see parallel or serially. If this is not specified using the |CLI|, the values for - :command:`subcloud\_update\_type` defined for each subcloud group will + :command:`subcloud_update_type` defined for each subcloud group will be used by default. **max-parallel-subclouds** @@ -57,7 +57,7 @@ You must be in **SystemController** mode. To change the mode, see \(default 20\). If this is not specified using the |CLI|, the values for - :command:`max\_parallel\_subclouds` defined for each subcloud group + :command:`max_parallel_subclouds` defined for each subcloud group will be used by default. **stop-on-failure** @@ -82,7 +82,7 @@ You must be in **SystemController** mode. To change the mode, see To change the update strategy settings, you must delete the update strategy and create a new one. -.. seealso:: +.. seealso:: :ref:`Customizing the Update Configuration for Distributed Cloud Update Orchestration ` diff --git a/doc/source/dist_cloud/kubernetes/device-image-update-orchestration.rst b/doc/source/dist_cloud/kubernetes/device-image-update-orchestration.rst index 5970d8cdd..9999892e5 100644 --- a/doc/source/dist_cloud/kubernetes/device-image-update-orchestration.rst +++ b/doc/source/dist_cloud/kubernetes/device-image-update-orchestration.rst @@ -54,7 +54,7 @@ device image updates, including |FPGA| updates. .. code-block:: none - ~(keystone_admin)]$ system host-device-list + ~(keystone_admin)]$ system host-device-list To list devices from the central cloud, run: @@ -143,7 +143,7 @@ device image updates, including |FPGA| updates. parallel, or serially. If this is not specified using the |CLI|, the values for - :command:`subcloud\_update\_type` defined for each subcloud group will + :command:`subcloud_update_type` defined for each subcloud group will be used by default. **max-parallel-subclouds** @@ -151,7 +151,7 @@ device image updates, including |FPGA| updates. \(default 20\). 
If this is not specified using the |CLI|, the values for - :command:`max\_parallel\_subclouds` defined for each subcloud group + :command:`max_parallel_subclouds` defined for each subcloud group will be used by default. **stop-on-failure** @@ -198,7 +198,7 @@ device image updates, including |FPGA| updates. | created_at | 2020-08-11T18:13:40.576659 | | updated_at | 2020-08-11T18:13:56.525459 | +------------------------+----------------------------+ - + #. Monitor progress as the strategy is applied. @@ -217,7 +217,7 @@ device image updates, including |FPGA| updates. +-----------+-------+----------+------------------------------+----------------------------+----------------------------+ | subcloud3 | 2 | applying | apply phase is 18% complete | 2020-08-13 14:12:13.457588 | None | +-----------+-------+----------+------------------------------+----------------------------+----------------------------+ - + - To monitor the step currently being performed on a specific subcloud, do the following: diff --git a/doc/source/dist_cloud/kubernetes/distributed-cloud-architecture.rst b/doc/source/dist_cloud/kubernetes/distributed-cloud-architecture.rst index 93b81251e..b7bbdc3e6 100644 --- a/doc/source/dist_cloud/kubernetes/distributed-cloud-architecture.rst +++ b/doc/source/dist_cloud/kubernetes/distributed-cloud-architecture.rst @@ -12,7 +12,7 @@ connected to the Central Cloud over L3 networks. The Central Cloud has two regions: RegionOne, used to manage the nodes in the Central Cloud, and System Controller, used to manage the subclouds in the |prod-dc| system. You can select RegionOne or System Controller regions from the -Horizon Web interface or by setting the environment variable +Horizon Web interface or by setting the environment variable if using the CLI. **Central Cloud** diff --git a/doc/source/dist_cloud/kubernetes/distributed-upgrade-orchestration-process-using-the-cli.rst b/doc/source/dist_cloud/kubernetes/distributed-upgrade-orchestration-process-using-the-cli.rst index 2a2408a93..f8da0fed4 100644 --- a/doc/source/dist_cloud/kubernetes/distributed-upgrade-orchestration-process-using-the-cli.rst +++ b/doc/source/dist_cloud/kubernetes/distributed-upgrade-orchestration-process-using-the-cli.rst @@ -42,7 +42,7 @@ following conditions: require remote install using Redfish. - Redfish |BMC| is required for orchestrated subcloud upgrades. The install - values, and :command:`bmc\_password` for each |AIO-SX| subcloud controller + values, and :command:`bmc_password` for each |AIO-SX| subcloud controller must be provided using the following |CLI| command on the System Controller. .. note:: @@ -84,7 +84,7 @@ following conditions: each subcloud, use the following command to remove the previous upgrade data: - :command:`sudo rm /opt/platform-backup/upgrade\_data\*` + :command:`sudo rm /opt/platform-backup/upgrade_data\*` .. rubric:: |proc| @@ -93,7 +93,7 @@ following conditions: #. Review the upgrade status for the subclouds. After the System Controller upgrade is completed, wait for 10 minutes for - the **load\_sync\_status** of all subclouds to be updated. + the **load_sync_status** of all subclouds to be updated. To identify which subclouds are upgrade-current \(in-sync\), use the :command:`subcloud list` command. For example: @@ -169,7 +169,7 @@ following conditions: upgraded in parallel, or serially. 
If this is not specified using the CLI, the values for - :command:`subcloud\_update\_type` defined for each subcloud group will + :command:`subcloud_update_type` defined for each subcloud group will be used by default. **max-parallel-subclouds** @@ -177,7 +177,7 @@ following conditions: \(default 20\). If this is not specified using the CLI, the values for - :command:`max\_parallel\_subclouds` defined for each subcloud group + :command:`max_parallel_subclouds` defined for each subcloud group will be used by default. **stop-on-failure** diff --git a/doc/source/dist_cloud/kubernetes/installing-a-subcloud-using-redfish-platform-management-service.rst b/doc/source/dist_cloud/kubernetes/installing-a-subcloud-using-redfish-platform-management-service.rst index b73a7e0bd..96b03e9bd 100644 --- a/doc/source/dist_cloud/kubernetes/installing-a-subcloud-using-redfish-platform-management-service.rst +++ b/doc/source/dist_cloud/kubernetes/installing-a-subcloud-using-redfish-platform-management-service.rst @@ -31,8 +31,8 @@ subcloud, the subcloud installation has these phases: After a successful remote installation of a subcloud in a Distributed Cloud system, a subsequent remote reinstallation fails because of an existing ssh - key entry in the /root/.ssh/known\_hosts on the System Controller. In this - case, delete the host key entry, if present, from /root/.ssh/known\_hosts + key entry in the /root/.ssh/known_hosts on the System Controller. In this + case, delete the host key entry, if present, from /root/.ssh/known_hosts on the System Controller before doing reinstallations. .. rubric:: |prereq| @@ -203,7 +203,7 @@ subcloud, the subcloud installation has these phases: password: type: docker - Where can be found by running the following command as + Where can be found by running the following command as 'sysadmin' on the Central Cloud: .. code-block:: none @@ -230,7 +230,7 @@ subcloud, the subcloud installation has these phases: If you prefer to install container images from the default WRS |AWS| ECR external registries, make the following substitutions for the - **docker\_registries** sections of the file. + **docker_registries** sections of the file. .. code-block:: none @@ -332,9 +332,9 @@ subcloud, the subcloud installation has these phases: bootstrapping and deployment by monitoring the following log files on the active controller in the Central Cloud. - /var/log/dcmanager/\_install\_.log. + /var/log/dcmanager/_install_.log. - /var/log/dcmanager/\_bootstrap\_.log. + /var/log/dcmanager/_bootstrap_.log. For example: diff --git a/doc/source/guest_integration/openstack/configuration-using-metadata.rst b/doc/source/guest_integration/openstack/configuration-using-metadata.rst index 3ebf130d7..eff6b5e63 100644 --- a/doc/source/guest_integration/openstack/configuration-using-metadata.rst +++ b/doc/source/guest_integration/openstack/configuration-using-metadata.rst @@ -29,7 +29,7 @@ configuration file. To send user data when calling nova boot, use the ``--user-data /path/to/filename`` option, or use the Heat service and set the -``user\_data`` property and ``user\_data\_format`` to RAW. +``user_data`` property and ``user_data_format`` to RAW. On initialization, the |VM| queries the metadata service through either the EC2 compatibility API. 
For example: diff --git a/doc/source/node_management/kubernetes/node_interfaces/interface-ip-address-provisioning-using-the-cli.rst b/doc/source/node_management/kubernetes/node_interfaces/interface-ip-address-provisioning-using-the-cli.rst index dc09bb750..31b614ef1 100644 --- a/doc/source/node_management/kubernetes/node_interfaces/interface-ip-address-provisioning-using-the-cli.rst +++ b/doc/source/node_management/kubernetes/node_interfaces/interface-ip-address-provisioning-using-the-cli.rst @@ -82,7 +82,7 @@ They can only be modified to use different physical ports when required. ``ifname`` The name of the interface. - ``ip\_address`` + ``ip_address`` An IPv4 or IPv6 address. ``prefix`` diff --git a/doc/source/security/kubernetes/operator-command-logging.rst b/doc/source/security/kubernetes/operator-command-logging.rst index e108f123c..d10ba9439 100644 --- a/doc/source/security/kubernetes/operator-command-logging.rst +++ b/doc/source/security/kubernetes/operator-command-logging.rst @@ -49,7 +49,7 @@ Remarks :command:`system modify --description="A TEST"` is logged to sysinv-api.log because it issues a REST POST call - :command:`system snmp-comm-delete "TEST\_COMMUNITY1"` - is logged to sysinv-api.log because it issues a REST DELETE call + :command:`system snmp-comm-delete "TEST_COMMUNITY1"` - is logged to sysinv-api.log because it issues a REST DELETE call - If the :command:`sysinv` command only issues a REST GET call, it is not logged. diff --git a/doc/source/security/kubernetes/security-install-kubectl-and-helm-clients-directly-on-a-host.rst b/doc/source/security/kubernetes/security-install-kubectl-and-helm-clients-directly-on-a-host.rst index dac823ed2..b527e6936 100644 --- a/doc/source/security/kubernetes/security-install-kubectl-and-helm-clients-directly-on-a-host.rst +++ b/doc/source/security/kubernetes/security-install-kubectl-and-helm-clients-directly-on-a-host.rst @@ -86,12 +86,12 @@ configuration is required in order to use :command:`helm`. .. note:: In order for your remote host to trust the certificate used by the |prod-long| K8S API, you must ensure that the - ``k8s\_root\_ca\_cert`` specified at install time is a trusted + ``k8s_root_ca_cert`` specified at install time is a trusted |CA| certificate by your host. Follow the instructions for adding a trusted |CA| certificate for the operating system distribution of your particular host. - If you did not specify a ``k8s\_root\_ca\_cert`` at install + If you did not specify a ``k8s_root_ca_cert`` at install time, then specify ``--insecure-skip-tls-verify``, as shown below. The following example configures the default ~/.kube/config. See the diff --git a/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst b/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst index d1ae460dc..fb1ac55b7 100644 --- a/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst +++ b/doc/source/storage/kubernetes/configure-an-external-netapp-deployment-as-the-storage-backend.rst @@ -85,7 +85,7 @@ procedure. .. _configuring-an-external-netapp-deployment-as-the-storage-backend-mod-localhost: #. Configure Netapps configurable parameters and run the provided - install\_netapp\_backend.yml ansible playbook to enable connectivity to + install_netapp_backend.yml ansible playbook to enable connectivity to Netapp as a storage backend for |prod|. #. 
Provide Netapp backend configurable parameters in an overrides yaml @@ -98,11 +98,11 @@ procedure. The following parameters are mandatory: - ``ansible\_become\_pass`` + ``ansible_become_pass`` Provide the admin password. - ``netapp\_backends`` - ``name`` + ``netapp_backends`` + **name** A name for the storage class. ``provisioner`` @@ -136,18 +136,18 @@ procedure. The following parameters are optional: - ``trident\_setup\_dir`` + ``trident_setup_dir`` Set a staging directory for generated configuration files. The default is /tmp/trident. - ``trident\_namespace`` + ``trident_namespace`` Set this option to use an alternate Kubernetes namespace. - ``trident\_rest\_api\_port`` + ``trident_rest_api_port`` Use an alternate port for the Trident REST API. The default is 8000. - ``trident\_install\_extra\_params`` + ``trident_install_extra_params`` Add extra space-separated parameters when installing trident. For complete listings of available parameters, see @@ -190,8 +190,8 @@ procedure. username: "admin" password: "secret" - This file is sectioned into ``netapp\_k8s\_storageclass``, - ``netapp\_k8s\_snapshotstorageclasses``, and ``netapp\_backends``. + This file is sectioned into ``netapp_k8s_storageclass``, + ``netapp_k8s_snapshotstorageclasses``, and ``netapp_backends``. You can add multiple backends and/or storage classes. .. note:: diff --git a/doc/source/storage/kubernetes/disk-naming-conventions.rst b/doc/source/storage/kubernetes/disk-naming-conventions.rst index 04e6ee4ec..cc08a0d09 100644 --- a/doc/source/storage/kubernetes/disk-naming-conventions.rst +++ b/doc/source/storage/kubernetes/disk-naming-conventions.rst @@ -18,7 +18,7 @@ complement on a host. In the Horizon Web interface and in |CLI| output, both identifications are shown. For example, the output of the :command:`system host-disk-show` -command includes both the **device\_node** and the **device\_path**. +command includes both the **device_node** and the **device_path**. .. code-block:: none diff --git a/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst b/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst index 986361e74..fbbf33a2a 100644 --- a/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst +++ b/doc/source/storage/kubernetes/install-additional-rbd-provisioners.rst @@ -24,7 +24,7 @@ This procedure uses standard Helm mechanisms to install a second #. Capture a list of monitors. - This will be stored in the environment variable ```` and + This will be stored in the environment variable ```` and used in the following step. .. code-block:: none diff --git a/doc/source/storage/kubernetes/storage-configuration-storage-related-cli-commands.rst b/doc/source/storage/kubernetes/storage-configuration-storage-related-cli-commands.rst index ede1f8976..901572d01 100644 --- a/doc/source/storage/kubernetes/storage-configuration-storage-related-cli-commands.rst +++ b/doc/source/storage/kubernetes/storage-configuration-storage-related-cli-commands.rst @@ -24,7 +24,7 @@ You can change the space allotted for the Ceph monitor, if required. ~(keystone_admin)]$ system ceph-mon-modify ceph_mon_gib= -where ```` is the size in GiB to use for the Ceph monitor. +where ```` is the size in GiB to use for the Ceph monitor. The value must be between 21 and 40 GiB. .. code-block:: none @@ -140,10 +140,10 @@ The following are optional arguments: For a Ceph backend, this is a user-assigned name for the backend. The default is **ceph-store** for a Ceph backend. 
-**-t,** ``--tier\_uuid`` +**-t,** ``--tier_uuid`` For a Ceph backend, is the UUID of a storage tier to back. -**-c,** ``--ceph\_conf`` +**-c,** ``--ceph_conf`` Location of the Ceph configuration file used for provisioning an external backend. @@ -152,7 +152,7 @@ The following are optional arguments: reversible. ``--ceph-mon-gib`` - For a Ceph backend, this is the space in gibibytes allotted for the + For a Ceph backend, this is the space in GB allotted for the Ceph monitor. .. note::
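As a minimal sketch of resizing the Ceph monitor allotment described above, the command below uses 30 GiB purely as an illustrative value inside the permitted 21 to 40 GiB range; the controller-0 host argument is an assumption for this example and is not taken from this document.

.. code-block:: none

   ~(keystone_admin)]$ system ceph-mon-modify controller-0 ceph_mon_gib=30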