Fixed \_ as the output was not rendering correctly (pick r5 updates only)

Fixed Patchset 4 comments
Fixed Patchset 3 comments and added additional updates

Signed-off-by: Juanita-Balaraj <juanita.balaraj@windriver.com>
Change-Id: I7482afc3a90bbdc94b6ecd8b6ac39d831b8a45db

parent 11bebe59c3
commit 265d96bed1
@@ -51,7 +51,7 @@ commands to manage containerized applications provided as part of |prod|.

 where:

-**<app\_name>**
+**<app_name>**

    The name of the application to show details.

 For example:
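A representative invocation might look like this (the application name ``platform-integ-apps`` is an assumed example, not part of this change):

.. code-block:: none

   ~(keystone_admin)]$ system application-show platform-integ-apps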
@@ -83,7 +83,7 @@ commands to manage containerized applications provided as part of |prod|.

 where the following are optional arguments:

-**<app\_name>**
+**<app_name>**

    Assigns a custom name for the application. You can use this name to
    interact with the application in the future.
@@ -92,7 +92,7 @@ commands to manage containerized applications provided as part of |prod|.

 and the following is a positional argument:

-**<tar\_file>**
+**<tar_file>**

    The path to the tar file containing the application to be uploaded.

 For example:
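As a hedged sketch, an upload with a custom name could look like the following (the file path and the ``--app-name`` flag spelling are assumptions to be verified against the CLI help):

.. code-block:: none

   ~(keystone_admin)]$ system application-upload --app-name myapp /home/sysadmin/myapp-1.0.tgz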
@@ -126,7 +126,7 @@ commands to manage containerized applications provided as part of |prod|.

 where the following is a positional argument:

-**<app\_name>**
+**<app_name>**

    The name of the application.

 and the following is an optional argument:
@@ -175,7 +175,7 @@ commands to manage containerized applications provided as part of |prod|.

 +---------------------+--------------------------------+---------------+

 - To show the overrides for a particular chart, use the following command.
-  System overrides are displayed in the **system\_overrides** section of
+  System overrides are displayed in the **system_overrides** section of
   the **Property** column.

   .. code-block:: none
@@ -185,10 +185,10 @@ commands to manage containerized applications provided as part of |prod|.

 where the following are positional arguments:

-**<app\_name>**
+**<app_name>**

    The name of the application.

-**< chart\_name>**
+**<chart_name>**

    The name of the chart.

 **<namespace>**
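A hedged example with assumed application, chart, and namespace names:

.. code-block:: none

   ~(keystone_admin)]$ system helm-override-show stx-openstack glance openstack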
@@ -212,10 +212,10 @@ commands to manage containerized applications provided as part of |prod|.

 where the following are positional arguments:

-**<app\_name>**
+**<app_name>**

    The name of the application.

-**<chart\_name>**
+**<chart_name>**

    The name of the chart.

 **<namespace>**
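If this passage documents :command:`system helm-override-update` (an assumption based on the override output shown in the next hunk), a sketch invocation would be:

.. code-block:: none

   ~(keystone_admin)]$ system helm-override-update --set conf.glance.DEBUG=true stx-openstack glance openstack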
@@ -257,7 +257,7 @@ commands to manage containerized applications provided as part of |prod|.

 |                | DEBUG: true       |
 +----------------+-------------------+

-The user overrides are shown in the **user\_overrides** section of the
+The user overrides are shown in the **user_overrides** section of the
 **Property** column.

 .. note::
@@ -280,10 +280,10 @@ commands to manage containerized applications provided as part of |prod|.

 and the following are positional arguments:

-**<app\_name>**
+**<app_name>**

    The name of the application.

-**<chart\_name>**
+**<chart_name>**

    The name of the chart.

 **<namespace>**
@@ -302,10 +302,10 @@ commands to manage containerized applications provided as part of |prod|.

 where the following are positional arguments:

-**<app\_name>**
+**<app_name>**

    The name of the application.

-**<chart\_name>**
+**<chart_name>**

    The name of the chart.

 **<namespace>**
@@ -334,7 +334,7 @@ commands to manage containerized applications provided as part of |prod|.

 and the following is a positional argument:

-**<app\_name>**
+**<app_name>**

    The name of the application to apply.

 For example:
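For instance (application name assumed):

.. code-block:: none

   ~(keystone_admin)]$ system application-apply stx-openstack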
@@ -366,7 +366,7 @@ commands to manage containerized applications provided as part of |prod|.

 where:

-**<app\_name>**
+**<app_name>**

    The name of the application to abort.

 For example:
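For instance (application name assumed):

.. code-block:: none

   ~(keystone_admin)]$ system application-abort stx-openstack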
@@ -389,7 +389,7 @@ commands to manage containerized applications provided as part of |prod|.

 where the following are optional arguments:

-**<app\_name>**
+**<app_name>**

    The name of the application to update.

    You can look up the name of an application using the
@@ -417,7 +417,7 @@ commands to manage containerized applications provided as part of |prod|.

 and the following is a positional argument which must come last:

-**<tar\_file>**
+**<tar_file>**

    The tar file containing the application manifest, Helm charts and
    configuration file.
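A hedged sketch (the tar file name is a placeholder):

.. code-block:: none

   ~(keystone_admin)]$ system application-update /home/sysadmin/myapp-2.0.tgz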
@@ -431,7 +431,7 @@ commands to manage containerized applications provided as part of |prod|.

 where:

-**<app\_name>**
+**<app_name>**

    The name of the application to remove.

 For example:
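For instance (application name assumed):

.. code-block:: none

   ~(keystone_admin)]$ system application-remove stx-openstack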
@@ -466,7 +466,7 @@ commands to manage containerized applications provided as part of |prod|.

 where:

-**<app\_name>**
+**<app_name>**

    The name of the application to delete.

 You must run :command:`application-remove` before deleting an application.
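For instance (application name assumed):

.. code-block:: none

   ~(keystone_admin)]$ system application-delete stx-openstack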
@@ -52,7 +52,7 @@ example, the user **testuser** is correct in the following URL, while

 cloud/systemController, access to the local registry can only be done using
 registry.local:9001. registry.central:9001 will be inaccessible. Installing
 a |CA|-signed certificate for the registry and the certificate of the |CA| as
-an 'ssl\_ca' certificate will remove this restriction.
+an 'ssl_ca' certificate will remove this restriction.

 For more information about Docker commands, see
 `https://docs.docker.com/engine/reference/commandline/docker/ <https://docs.docker.com/engine/reference/commandline/docker/>`__.
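As a hedged illustration of accessing the local registry with standard Docker commands (credentials are placeholders):

.. code-block:: none

   $ docker login registry.local:9001 -u <username>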
@@ -128,7 +128,7 @@ conditions are in place:

 #. Install network connectivity required for the subcloud.

 #. Ensure that the backup files are available on the controller. Run both
-   Ansible Restore playbooks, restore\_platform.yml and restore\_user\_images.yml.
+   Ansible Restore playbooks, restore_platform.yml and restore_user_images.yml.
    For more information on restoring the backup file, see :ref:`Run Restore
    Playbook Locally on the Controller
    <running-restore-playbook-locally-on-the-controller>`, and :ref:`Run
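A hedged sketch of invoking the platform restore playbook (the playbook path is assumed to match the backup playbook's location shown later in this guide; all values are placeholders):

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin backup_filename=localhost_platform_backup_2020_07_15_21_24_22.tgz admin_password=<password> ansible_become_pass=<password>"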
@@ -139,7 +139,7 @@ conditions are in place:

 The backup files contain the system data and updates.

 #. If the backup file contains patches, the Ansible Restore playbook
-   restore\_platform.yml will apply the patches and prompt you to reboot the
+   restore_platform.yml will apply the patches and prompt you to reboot the
    system. You will then need to re-run the Ansible Restore playbook.

 The current software version on the controller is compared against the
@@ -162,7 +162,7 @@ conditions are in place:

 Rerun the Ansible Playbook if there were patches applied and you were
 prompted to reboot the system.

-#. Restore the local registry using the file restore\_user\_images.yml.
+#. Restore the local registry using the file restore_user_images.yml.

    This must be done before unlocking controller-0.
@@ -176,7 +176,7 @@ conditions are in place:

 becomes operational.

 #. If the system is a Distributed Cloud system controller, restore the **dc-vault**
-   using the restore\_dc\_vault.yml playbook. Perform this step after unlocking
+   using the restore_dc_vault.yml playbook. Perform this step after unlocking
    controller-0:

    .. code-block:: none
@@ -261,7 +261,7 @@ conditions are in place:

 | 6  | compute-1   | worker     | locked        | disabled  | offline    |
 +----+-------------+------------+---------------+-----------+------------+

-#. Restore storage configuration. If :command:`wipe\_ceph\_osds` is set to
+#. Restore storage configuration. If :command:`wipe_ceph_osds` is set to
    **True**, follow the same procedure used to restore **controller-1**,
    beginning with host **storage-0** and proceeding in sequence.
@@ -275,12 +275,12 @@ conditions are in place:

 the restore procedure without interruption.

 Standard with Controller Storage install or reinstall depends on the
-:command:`wipe\_ceph\_osds` configuration:
+:command:`wipe_ceph_osds` configuration:

-#. If :command:`wipe\_ceph\_osds` is set to **true**, reinstall the
+#. If :command:`wipe_ceph_osds` is set to **true**, reinstall the
    storage hosts.

-#. If :command:`wipe\_ceph\_osds` is set to **false** \(default
+#. If :command:`wipe_ceph_osds` is set to **false** \(default
    option\), do not reinstall the storage hosts.

 .. caution::
@@ -315,9 +315,9 @@ conditions are in place:

 .. caution::
    Do not proceed until the Ceph cluster is healthy and the message
-   HEALTH\_OK appears.
+   HEALTH_OK appears.

-   If the message HEALTH\_WARN appears, wait a few minutes and then try
+   If the message HEALTH_WARN appears, wait a few minutes and then try
    again. If the warning condition persists, consult the public
    documentation for troubleshooting Ceph monitors \(for example,
    `http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshootin
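As a hedged aside, cluster health can be checked with the standard Ceph client:

.. code-block:: none

   $ ceph -s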
@@ -15,7 +15,7 @@ Use the following command to run the Ansible Backup playbook and back up the

    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<sysadmin password> admin_password=<sysadmin password>" -e "backup_user_local_registry=true"

-The <admin\_password> and <ansible\_become\_pass\> need to be set correctly
+The <admin_password> and <ansible_become_pass> need to be set correctly
 using the ``-e`` option on the command line, in an override file, or in the
 Ansible secret file.
@@ -23,44 +23,44 @@ The output files will be named:

 .. _running-ansible-backup-playbook-locally-on-the-controller-ul-wj1-vxh-pmb:

-- inventory\_hostname\_platform\_backup\_timestamp.tgz
+- inventory_hostname_platform_backup_timestamp.tgz

-- inventory\_hostname\_openstack\_backup\_timestamp.tgz
+- inventory_hostname_openstack_backup_timestamp.tgz

-- inventory\_hostname\_docker\_local\_registry\_backup\_timestamp.tgz
+- inventory_hostname_docker_local_registry_backup_timestamp.tgz

-- inventory\_hostname\_dc\_vault\_backup\_timestamp.tgz
+- inventory_hostname_dc_vault_backup_timestamp.tgz

 The variable prefixes can be overridden using the ``-e`` option on the command
 line or by using an override file.

 .. _running-ansible-backup-playbook-locally-on-the-controller-ul-rdp-gyh-pmb:

-- platform\_backup\_filename\_prefix
+- platform_backup_filename_prefix

-- openstack\_backup\_filename\_prefix
+- openstack_backup_filename_prefix

-- docker\_local\_registry\_backup\_filename\_prefix
+- docker_local_registry_backup_filename_prefix

-- dc\_vault\_backup\_filename\_prefix
+- dc_vault_backup_filename_prefix

 The generated backup tar files will be displayed in the following format,
 for example:

 .. _running-ansible-backup-playbook-locally-on-the-controller-ul-p3b-f13-pmb:

-- localhost\_docker\_local\_registry\_backup\_2020\_07\_15\_21\_24\_22.tgz
+- localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz

-- localhost\_platform\_backup\_2020\_07\_15\_21\_24\_22.tgz
+- localhost_platform_backup_2020_07_15_21_24_22.tgz

-- localhost\_openstack\_backup\_2020\_07\_15\_21\_24\_22.tgz
+- localhost_openstack_backup_2020_07_15_21_24_22.tgz

-- localhost\_dc\_vault\_backup\_2020\_07\_15\_21\_24\_22.tgz
+- localhost_dc_vault_backup_2020_07_15_21_24_22.tgz

 These files are located by default in the /opt/backups directory on
 controller-0, and contain the complete system backup.

-If the default location needs to be modified, the variable backup\_dir can
+If the default location needs to be modified, the variable backup_dir can
 be overridden using the ``-e`` option on the command line or by using an
 override file.
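A hedged illustration of overriding the backup directory and a filename prefix on the command line (values are placeholders):

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<password> admin_password=<password> backup_dir=/home/sysadmin/backups platform_backup_filename_prefix=site1_platform"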
@@ -73,24 +73,24 @@ In this method you can run Ansible Restore playbook and point to controller-0.

 expects both the **initial_backup_dir** and **backup_filename**
 to be specified.

-- The backup\_filename is the platform backup tar file. It must be
+- The backup_filename is the platform backup tar file. It must be
   provided using the ``-e`` option on the command line, for example:

   .. code-block:: none

      -e backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz

-- The initial\_backup\_dir is the location on the Ansible control
+- The initial_backup_dir is the location on the Ansible control
   machine where the platform backup tar file is placed to restore the
   platform. It must be provided using the ``-e`` option on the command line.

-- The :command:`admin\_password`, :command:`ansible\_become\_pass`,
+- The :command:`admin_password`, :command:`ansible_become_pass`,
-  and :command:`ansible\_ssh\_pass` need to be set correctly using
+  and :command:`ansible_ssh_pass` need to be set correctly using
   the ``-e`` option on the command line or in the Ansible secret file.
-  :command:`ansible\_ssh\_pass` is the password to the sysadmin user
+  :command:`ansible_ssh_pass` is the password to the sysadmin user
   on controller-0.

-- The :command:`ansible\_remote\_tmp` should be set to a new
+- The :command:`ansible_remote_tmp` should be set to a new
   directory \(not required to create it ahead of time\) under
   /home/sysadmin on controller-0 using the ``-e`` option on the command
   line.
@@ -106,7 +106,7 @@ In this method you can run Ansible Restore playbook and point to controller-0.

 the patches and prompt you to reboot the system. Then you will need to
 re-run the Ansible Restore playbook.

-#. After running the restore\_platform.yml playbook, you can restore the local
+#. After running the restore_platform.yml playbook, you can restore the local
    registry images.

 .. note::
@@ -119,33 +119,33 @@ In this method you can run Ansible Restore playbook and point to controller-0.

 where optional-extra-vars can be:

-- The backup\_filename is the local registry backup tar file. It
+- The backup_filename is the local registry backup tar file. It
   must be provided using the ``-e`` option on the command line, for
   example:

   .. code-block:: none

-     -e backup\_filename= localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz
+     -e backup_filename=localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz

-- The initial\_backup\_dir is the location on the Ansible control
+- The initial_backup_dir is the location on the Ansible control
   machine where the platform backup tar file is located. It must be
   provided using the ``-e`` option on the command line.

-- The :command:`ansible\_become\_pass` and
+- The :command:`ansible_become_pass` and
-  :command:`ansible\_ssh\_pass` need to be set correctly using the
+  :command:`ansible_ssh_pass` need to be set correctly using the
   ``-e`` option on the command line or in the Ansible secret file.
-  :command:`ansible\_ssh\_pass` is the password to the sysadmin user
+  :command:`ansible_ssh_pass` is the password to the sysadmin user
   on controller-0.

-- The backup\_dir should be set to a directory on controller-0.
+- The backup_dir should be set to a directory on controller-0.
   The directory must have sufficient space for the local registry backup
-  to be copied. The backup\_dir is set using the ``-e`` option on the
+  to be copied. The backup_dir is set using the ``-e`` option on the
   command line.

-- The :command:`ansible\_remote\_tmp` should be set to a new
+- The :command:`ansible_remote_tmp` should be set to a new
   directory on controller-0. Ansible will use this directory to copy
   files, and the directory must have sufficient space for the local
-  registry backup to be copied. The :command:`ansible\_remote\_tmp`
+  registry backup to be copied. The :command:`ansible_remote_tmp`
   is set using the ``-e`` option on the command line.

 For example, run the local registry restore playbook, where
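A hedged sketch of such an invocation (the playbook path is assumed to match the other stx-ansible playbooks; values are placeholders):

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_user_images.yml -e "initial_backup_dir=/home/sysadmin backup_filename=localhost_docker_local_registry_backup_2020_07_15_21_24_22.tgz ansible_become_pass=<password>"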
@@ -33,13 +33,13 @@ your server is isolated from the public Internet.

    username: <your_my-registry.io_username>
    password: <your_my-registry.io_password>

-Where ``<your\_my-registry.io\_username>`` and
+Where ``<your_my-registry.io_username>`` and
-``<your\_my-registry.io\_password>`` are your login credentials for the
+``<your_my-registry.io_password>`` are your login credentials for the
 ``<my-registry.io>`` private Docker registry.

 .. note::
    ``<my-registry.io>`` must be a DNS name resolvable by the DNS servers
-   configured in the ``dns\_servers:`` structure of the Ansible bootstrap
+   configured in the ``dns_servers:`` structure of the Ansible bootstrap
    override file /home/sysadmin/localhost.yml.

 #. For any additional local registry images required, use the full image name
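As a hedged illustration, the enclosing entry in the bootstrap override file typically has this shape (the ``url`` key and registry name are assumptions, not confirmed by this change):

.. code-block:: none

   docker_registries:
     docker.io:
       url: my-registry.io/docker.io
       username: <your_my-registry.io_username>
       password: <your_my-registry.io_password>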
@@ -33,13 +33,13 @@ your server is isolated from the public Internet.

    username: <your_my-registry.io_username>
    password: <your_my-registry.io_password>

-Where ``<your\_my-registry.io\_username>`` and
+Where ``<your_my-registry.io_username>`` and
-``<your\_my-registry.io\_password>`` are your login credentials for the
+``<your_my-registry.io_password>`` are your login credentials for the
 ``<my-registry.io>`` private Docker registry.

 .. note::
    ``<my-registry.io>`` must be a DNS name resolvable by the DNS servers
-   configured in the ``dns\_servers:`` structure of the Ansible bootstrap
+   configured in the ``dns_servers:`` structure of the Ansible bootstrap
    override file /home/sysadmin/localhost.yml.

 #. For any additional local registry images required, use the full image name
@@ -138,13 +138,13 @@ controller for access by subclouds. For example:

 **--subcloud-apply-type**
   Determines whether the subclouds are upgraded in parallel or serially. If
-  this is not specified using the CLI, the values for subcloud\_update\_type
+  this is not specified using the CLI, the values for subcloud_update_type
   defined for each subcloud group will be used by default.

 **--max-parallel-subclouds**
   Sets the maximum number of subclouds that can be upgraded in parallel
   \(default 20\). If this is not specified using the CLI, the values for
-  max\_parallel\_subclouds defined for each subcloud group will be used by
+  max_parallel_subclouds defined for each subcloud group will be used by
   default.

 **--stop-on-failure**
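A hedged sketch of creating an upgrade strategy with these options (the exact subcommand name should be verified against the :command:`dcmanager` help):

.. code-block:: none

   ~(keystone_admin)]$ dcmanager upgrade-strategy create --subcloud-apply-type parallel --max-parallel-subclouds 10 --stop-on-failure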
@@ -35,8 +35,8 @@ You must be in **SystemController** mode. To change the mode, see

 tab.

 .. image:: figures/vhy1525122582274.png

 #. On the Cloud Patching Orchestration tab, click **Create Strategy**.
@@ -49,7 +49,7 @@ You must be in **SystemController** mode. To change the mode, see

 parallel or serially.

 If this is not specified using the |CLI|, the values for
-:command:`subcloud\_update\_type` defined for each subcloud group will
+:command:`subcloud_update_type` defined for each subcloud group will
 be used by default.

 **max-parallel-subclouds**
@@ -57,7 +57,7 @@ You must be in **SystemController** mode. To change the mode, see

 \(default 20\).

 If this is not specified using the |CLI|, the values for
-:command:`max\_parallel\_subclouds` defined for each subcloud group
+:command:`max_parallel_subclouds` defined for each subcloud group
 will be used by default.

 **stop-on-failure**
@@ -82,7 +82,7 @@ You must be in **SystemController** mode. To change the mode, see

 To change the update strategy settings, you must delete the update
 strategy and create a new one.

 .. seealso::

    :ref:`Customizing the Update Configuration for Distributed Cloud Update
    Orchestration <customizing-the-update-configuration-for-distributed-cloud-update-orchestration>`
@@ -54,7 +54,7 @@ device image updates, including |FPGA| updates.

 .. code-block:: none

    ~(keystone_admin)]$ system host-device-list <hostname_or_id>

 To list devices from the central cloud, run:
@@ -143,7 +143,7 @@ device image updates, including |FPGA| updates.

 parallel or serially.

 If this is not specified using the |CLI|, the values for
-:command:`subcloud\_update\_type` defined for each subcloud group will
+:command:`subcloud_update_type` defined for each subcloud group will
 be used by default.

 **max-parallel-subclouds**
@@ -151,7 +151,7 @@ device image updates, including |FPGA| updates.

 \(default 20\).

 If this is not specified using the |CLI|, the values for
-:command:`max\_parallel\_subclouds` defined for each subcloud group
+:command:`max_parallel_subclouds` defined for each subcloud group
 will be used by default.

 **stop-on-failure**
@@ -198,7 +198,7 @@ device image updates, including |FPGA| updates.

 | created_at             | 2020-08-11T18:13:40.576659 |
 | updated_at             | 2020-08-11T18:13:56.525459 |
 +------------------------+----------------------------+

 #. Monitor progress as the strategy is applied.
@@ -217,7 +217,7 @@ device image updates, including |FPGA| updates.

 +-----------+-------+----------+------------------------------+----------------------------+----------------------------+
 | subcloud3 | 2     | applying | apply phase is 18% complete  | 2020-08-13 14:12:13.457588 | None                       |
 +-----------+-------+----------+------------------------------+----------------------------+----------------------------+

 - To monitor the step currently being performed on a specific subcloud, do the following:
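A hedged guess at the monitoring command for a single subcloud (verify the subcommand on your release):

.. code-block:: none

   ~(keystone_admin)]$ dcmanager strategy-step show subcloud3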
@@ -12,7 +12,7 @@ connected to the Central Cloud over L3 networks.

 The Central Cloud has two regions: RegionOne, used to manage the nodes in the
 Central Cloud, and System Controller, used to manage the subclouds in the
 |prod-dc| system. You can select RegionOne or System Controller regions from the
-Horizon Web interface or by setting the <OS\_REGION\_NAME> environment variable
+Horizon Web interface or by setting the <OS_REGION_NAME> environment variable
 if using the CLI.
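For instance, a hedged sketch of selecting the System Controller region in a CLI session (the value is assumed to match the region name above):

.. code-block:: none

   $ export OS_REGION_NAME=SystemController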

 **Central Cloud**
@@ -42,7 +42,7 @@ following conditions:

 require remote install using Redfish.

 - Redfish |BMC| is required for orchestrated subcloud upgrades. The install
-  values, and :command:`bmc\_password` for each |AIO-SX| subcloud controller
+  values and :command:`bmc_password` for each |AIO-SX| subcloud controller
   must be provided using the following |CLI| command on the System Controller.

 .. note::
@@ -84,7 +84,7 @@ following conditions:

 each subcloud, use the following command to remove the previous upgrade
 data:

-:command:`sudo rm /opt/platform-backup/upgrade\_data\*`
+:command:`sudo rm /opt/platform-backup/upgrade_data\*`

 .. rubric:: |proc|
@@ -93,7 +93,7 @@ following conditions:

 #. Review the upgrade status for the subclouds.

    After the System Controller upgrade is completed, wait for 10 minutes for
-   the **load\_sync\_status** of all subclouds to be updated.
+   the **load_sync_status** of all subclouds to be updated.

    To identify which subclouds are upgrade-current \(in-sync\), use the
    :command:`subcloud list` command. For example:
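For instance:

.. code-block:: none

   ~(keystone_admin)]$ dcmanager subcloud list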
@@ -169,7 +169,7 @@ following conditions:

 upgraded in parallel or serially.

 If this is not specified using the CLI, the values for
-:command:`subcloud\_update\_type` defined for each subcloud group will
+:command:`subcloud_update_type` defined for each subcloud group will
 be used by default.

 **max-parallel-subclouds**
@@ -177,7 +177,7 @@ following conditions:

 \(default 20\).

 If this is not specified using the CLI, the values for
-:command:`max\_parallel\_subclouds` defined for each subcloud group
+:command:`max_parallel_subclouds` defined for each subcloud group
 will be used by default.

 **stop-on-failure**
@@ -31,8 +31,8 @@ subcloud, the subcloud installation has these phases:

 After a successful remote installation of a subcloud in a Distributed Cloud
 system, a subsequent remote reinstallation fails because of an existing SSH
-key entry in the /root/.ssh/known\_hosts on the System Controller. In this
+key entry in /root/.ssh/known_hosts on the System Controller. In this
-case, delete the host key entry, if present, from /root/.ssh/known\_hosts
+case, delete the host key entry, if present, from /root/.ssh/known_hosts
 on the System Controller before doing reinstallations.

 .. rubric:: |prereq|
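As a hedged aside, the stale entry can be removed with the standard OpenSSH helper (the subcloud address is a placeholder):

.. code-block:: none

   # ssh-keygen -f /root/.ssh/known_hosts -R <subcloud_oam_ip>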
@@ -203,7 +203,7 @@ subcloud, the subcloud installation has these phases:

      password: <sysinv_password>
      type: docker

-Where <sysinv\_password> can be found by running the following command as
+Where <sysinv_password> can be found by running the following command as
 'sysadmin' on the Central Cloud:

 .. code-block:: none
@@ -230,7 +230,7 @@ subcloud, the subcloud installation has these phases:

 If you prefer to install container images from the default WRS |AWS| ECR
 external registries, make the following substitutions for the
-**docker\_registries** sections of the file.
+**docker_registries** sections of the file.

 .. code-block:: none
@@ -332,9 +332,9 @@ subcloud, the subcloud installation has these phases:

 bootstrapping and deployment by monitoring the following log files on the
 active controller in the Central Cloud.

-/var/log/dcmanager/<subcloud\_name>\_install\_<date\_stamp>.log
+/var/log/dcmanager/<subcloud_name>_install_<date_stamp>.log

-/var/log/dcmanager/<subcloud\_name>\_bootstrap\_<date\_stamp>.log
+/var/log/dcmanager/<subcloud_name>_bootstrap_<date_stamp>.log

 For example:
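A hedged sketch of following one of these logs (the subcloud name and date stamp are placeholders):

.. code-block:: none

   $ tail -f /var/log/dcmanager/<subcloud_name>_install_<date_stamp>.log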
@@ -29,7 +29,7 @@ configuration file.

 To send user data when calling nova boot, use the ``--user-data
 /path/to/filename`` option, or use the Heat service and set the
-``user\_data`` property and ``user\_data\_format`` to RAW.
+``user_data`` property and ``user_data_format`` to RAW.

 On initialization, the |VM| queries the metadata service through
 the EC2 compatibility API. For example:
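A hedged illustration of such a query from inside the |VM|, using the well-known EC2 metadata endpoint:

.. code-block:: none

   $ curl http://169.254.169.254/latest/meta-data/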
@@ -82,7 +82,7 @@ They can only be modified to use different physical ports when required.

 ``ifname``
     The name of the interface.

-``ip\_address``
+``ip_address``
     An IPv4 or IPv6 address.

 ``prefix``
@@ -49,7 +49,7 @@ Remarks

 :command:`system modify --description="A TEST"` is logged to sysinv-api.log because it issues a REST POST call

-:command:`system snmp-comm-delete "TEST\_COMMUNITY1"` is logged to sysinv-api.log because it issues a REST DELETE call
+:command:`system snmp-comm-delete "TEST_COMMUNITY1"` is logged to sysinv-api.log because it issues a REST DELETE call

 - If the :command:`sysinv` command only issues a REST GET call, it is not logged.
@@ -86,12 +86,12 @@ configuration is required in order to use :command:`helm`.

 .. note::
    In order for your remote host to trust the certificate used by
    the |prod-long| K8S API, you must ensure that the
-   ``k8s\_root\_ca\_cert`` specified at install time is a trusted
+   ``k8s_root_ca_cert`` specified at install time is a trusted
    |CA| certificate by your host. Follow the instructions for adding
    a trusted |CA| certificate for the operating system distribution
    of your particular host.

-   If you did not specify a ``k8s\_root\_ca\_cert`` at install
+   If you did not specify a ``k8s_root_ca_cert`` at install
    time, then specify ``--insecure-skip-tls-verify``, as shown below.

 The following example configures the default ~/.kube/config. See the
@@ -85,7 +85,7 @@ procedure.

 .. _configuring-an-external-netapp-deployment-as-the-storage-backend-mod-localhost:

 #. Configure Netapp's configurable parameters and run the provided
-   install\_netapp\_backend.yml ansible playbook to enable connectivity to
+   install_netapp_backend.yml Ansible playbook to enable connectivity to
    Netapp as a storage backend for |prod|.

 #. Provide Netapp backend configurable parameters in an overrides yaml
@@ -98,11 +98,11 @@ procedure.

 The following parameters are mandatory:

-``ansible\_become\_pass``
+``ansible_become_pass``
    Provide the admin password.

-``netapp\_backends``
+``netapp_backends``
-   ``name``
+   **name**
       A name for the storage class.

    ``provisioner``
@@ -136,18 +136,18 @@ procedure.

 The following parameters are optional:

-``trident\_setup\_dir``
+``trident_setup_dir``
    Set a staging directory for generated configuration files. The
    default is /tmp/trident.

-``trident\_namespace``
+``trident_namespace``
    Set this option to use an alternate Kubernetes namespace.

-``trident\_rest\_api\_port``
+``trident_rest_api_port``
    Use an alternate port for the Trident REST API. The default is
    8000.

-``trident\_install\_extra\_params``
+``trident_install_extra_params``
    Add extra space-separated parameters when installing trident.

 For complete listings of available parameters, see
@@ -190,8 +190,8 @@ procedure.

      username: "admin"
      password: "secret"

-This file is sectioned into ``netapp\_k8s\_storageclass``,
+This file is sectioned into ``netapp_k8s_storageclass``,
-``netapp\_k8s\_snapshotstorageclasses``, and ``netapp\_backends``.
+``netapp_k8s_snapshotstorageclasses``, and ``netapp_backends``.
 You can add multiple backends and/or storage classes.

 .. note::
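A hedged sketch of applying these overrides with the playbook named earlier in this procedure (paths are placeholders):

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/install_netapp_backend.yml -e "@/home/sysadmin/netapp-overrides.yml"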
@@ -18,7 +18,7 @@ complement on a host.

 In the Horizon Web interface and in |CLI| output, both identifications are
 shown. For example, the output of the :command:`system host-disk-show`
-command includes both the **device\_node** and the **device\_path**.
+command includes both the **device_node** and the **device_path**.

 .. code-block:: none
@@ -24,7 +24,7 @@ This procedure uses standard Helm mechanisms to install a second

 #. Capture a list of monitors.

-   This will be stored in the environment variable ``<MON\_LIST>`` and
+   This will be stored in the environment variable ``<MON_LIST>`` and
    used in the following step.

    .. code-block:: none
@@ -24,7 +24,7 @@ You can change the space allotted for the Ceph monitor, if required.

    ~(keystone_admin)]$ system ceph-mon-modify <controller> ceph_mon_gib=<size>

-where ``<partition\_size>`` is the size in GiB to use for the Ceph monitor.
+where ``<size>`` is the size in GiB to use for the Ceph monitor.
 The value must be between 21 and 40 GiB.

 .. code-block:: none
@@ -140,10 +140,10 @@ The following are optional arguments:

    For a Ceph backend, this is a user-assigned name for the backend. The
    default is **ceph-store** for a Ceph backend.

-**-t,** ``--tier\_uuid``
+**-t,** ``--tier_uuid``
    For a Ceph backend, this is the UUID of the storage tier to back.

-**-c,** ``--ceph\_conf``
+**-c,** ``--ceph_conf``
    Location of the Ceph configuration file used for provisioning an
    external backend.
@@ -152,7 +152,7 @@ The following are optional arguments:

 reversible.

 ``--ceph-mon-gib``
-   For a Ceph backend, this is the space in gibibytes allotted for the
+   For a Ceph backend, this is the space in GiB allotted for the
    Ceph monitor.

 .. note::