Update B&R docs

Add note and remove unnecessary text.
Fix optional modes items.
Move optional item to correct place.
Fix indentation and rewrite item about on_box_data.
Remove duplicated items.
Move parameters to be under optional-extra-vars item.
Add list of restored items.
Fix typo.
Reword sentences to improve understanding.

Closes-bug: 1994052

Signed-off-by: Elisamara Aoki Goncalves <elisamaraaoki.goncalves@windriver.com>
Change-Id: I72af1baa22dafa1f69488d0caad6241e5686bd0b
This commit is contained in:
Elisamara Aoki Goncalves 2022-08-23 14:54:38 -03:00
parent 3ac3d95ff0
commit f8badacec0
3 changed files with 137 additions and 76 deletions

Restore Platform System Data and Storage
========================================
You can perform a system restore (controllers, workers, including or excluding
storage nodes) of a |prod| cluster from a previous system backup and bring it
back to the operational state it was in when the backup procedure took place.

.. rubric:: |context|
Kubernetes configuration will be restored and pods that are started from
repositories accessible from the internet or from external repositories will
start immediately. |prod|-specific applications must be re-applied once a
storage cluster is configured.

Everything is restored as it was when the backup was created, except for
optional data if it is not defined.

See :ref:`Back Up System Data <backing-up-starlingx-system-data>` for more
details on the backup.
.. warning::
   The system backup file can only be used to restore the system from which
   the backup was made. You cannot use this backup file to restore the system
   to different hardware.

   To restore the backup, use the same version of the boot image (ISO) that
   was used at the time of the original installation.

The |prod| restore supports the following optional modes:
wipe_ceph_osds=false
- To wipe the Ceph cluster entirely (true), where the Ceph cluster will
  need to be recreated, or if the Ceph partition was wiped before or
  during reinstall, use the following parameter:
  .. code-block:: none

     wipe_ceph_osds=true
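Assembled on a command line, the two wipe modes differ only in the value of ``wipe_ceph_osds``. A minimal shell sketch of building the extra-vars string (the variable names, directory, and backup filename below are illustrative placeholders, not values from this guide):

```shell
# Sketch: build the extra-vars string for restore_platform.yml.
# WIPE_CEPH, BACKUP_DIR, and BACKUP_FILE are illustrative shell variables,
# not playbook options; only wipe_ceph_osds is a documented parameter.
WIPE_CEPH=false   # set to true only if the Ceph cluster must be recreated
BACKUP_DIR=/home/sysadmin
BACKUP_FILE=localhost_platform_backup.tgz

EXTRA_VARS="initial_backup_dir=${BACKUP_DIR} backup_filename=${BACKUP_FILE} wipe_ceph_osds=${WIPE_CEPH}"

# The resulting invocation (echoed here rather than executed):
echo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "${EXTRA_VARS}"
```

Switching ``WIPE_CEPH`` to ``true`` yields the wipe variant of the same command.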
Restoring a |prod| cluster from a backup file is done by re-installing the
ISO on controller-0, running the Ansible Restore Playbook, applying updates
#. Install network connectivity required for the subcloud.
#. Ensure that the backup files are available on the controller. Run both
   Ansible Restore playbooks, ``restore_platform.yml`` and
   ``restore_user_images.yml``. For more information on restoring the backup
   file, see :ref:`Run Restore Playbook Locally on the Controller
<system-backup-running-ansible-restore-playbook-remotely>`.
   .. note::

      The backup files contain the system data and updates.

      The restore operation will pull images from the upstream registry;
      they are not part of the backup.

#. If the backup file contains patches, the Ansible Restore playbook
   ``restore_platform.yml`` will apply the patches and prompt you to reboot
   the system. You will then need to re-run the Ansible Restore playbook.
Rerun the Ansible Playbook if there were patches applied and you were
prompted to reboot the system.
   .. note::

      After the restore is complete, it is not possible to restart (or
      rerun) the restore playbook.

#. Restore the local registry using the file ``restore_user_images.yml``.
   This must be done before unlocking controller-0.

To run restore on the controller, you need to download the backup to the
active controller.
.. rubric:: |context|
You can use an external storage device, for example, a USB drive. Use the
following command to run the Ansible Restore playbook:
.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=<location_of_tarball> ansible_become_pass=<admin_password> admin_password=<admin_password> backup_filename=<backup_filename> <optional-restore-mode>"

The |prod| restore supports the following optional modes: keeping the Ceph
cluster data intact or wiping the Ceph cluster.

.. _running-restore-playbook-locally-on-the-controller-steps-usl-2c3-pmb:
- **Optional**: You can select one of the following restore modes:

  - To keep the Ceph cluster data intact (false - default option), use the
    following parameter:

    :command:`wipe_ceph_osds=false`

  - To wipe the Ceph cluster entirely (true), where the Ceph cluster will
    need to be recreated, use the following parameter:

    :command:`wipe_ceph_osds=true`

  - To define a convenient place to store the backup files, defined by
    ``initial_backup_dir``, on the system (such as the home folder for
    sysadmin, or /tmp, or even a mounted USB device), use the following
    parameter:

    :command:`on_box_data=true/false`

    If this parameter is set to true, the Ansible Restore playbook will
    look for the backup file on the target server. The parameter
    ``initial_backup_dir`` can be omitted from the command line; in this
    case, the backup file must be under the ``/opt/platform-backup``
    directory.

    If this parameter is set to false, the Ansible Restore playbook will
    look for the backup file on the machine where the Ansible controller is
    running. In this case, both ``initial_backup_dir`` and
    ``backup_filename`` must be specified in the command.

    .. note::

       If the backup contains patches, the Ansible Restore playbook will
       apply the patches and prompt you to reboot the system. You will then
       need to re-run the Ansible Restore playbook.

Example of a backup file in /home/sysadmin:

.. code-block:: none

   ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=St0rlingX* admin_password=St0rlingX* backup_filename=localhost_platform_backup_2020_07_27_07_48_48.tgz wipe_ceph_osds=true"

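The two ``on_box_data`` cases therefore require different variables on the command line. A minimal sketch of the difference (the shell variable names and the backup filename are illustrative only, not playbook options):

```shell
# Sketch: choose extra-vars depending on where the backup file lives.
# ON_BOX and BACKUP are illustrative shell variables, not playbook options.
ON_BOX=true
BACKUP=localhost_platform_backup_2020_07_27_07_48_48.tgz

if [ "${ON_BOX}" = "true" ]; then
    # Backup is already on the target under /opt/platform-backup,
    # so initial_backup_dir may be omitted.
    EXTRA_VARS="on_box_data=true backup_filename=${BACKUP}"
else
    # Backup sits where the Ansible controller runs, so both
    # initial_backup_dir and backup_filename are required.
    EXTRA_VARS="on_box_data=false initial_backup_dir=${HOME}/br_test backup_filename=${BACKUP}"
fi

echo "${EXTRA_VARS}"
```

The resulting string is what would be passed to ``ansible-playbook`` with ``-e``.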
- The ``ssl_ca_certificate_file`` defines the ssl_ca certificate that will be
  installed during the restore. It will replace the ``ssl_ca`` certificate
  from the backup tar file.

  .. code-block:: none

     ssl_ca_certificate_file=<complete path>/<ssl_ca certificate file>

  This parameter depends on the ``on_box_data`` value.

  When ``on_box_data=true`` or not defined, ``ssl_ca_certificate_file`` is
  the location of the ``ssl_ca`` certificate file on the target host. This
  is the default case.

  When ``on_box_data=false``, ``ssl_ca_certificate_file`` is the location of
  the ``ssl_ca`` certificate file on the machine where the Ansible
  controller is running. This is useful for remote play.

  .. note::

     To use this option in local restore mode, you need to download the
     ``ssl_ca`` certificate file to the active controller.

.. note::

   After the restore is complete, it is not possible to restart (or rerun)
   the restore playbook.

.. rubric:: |postreq|

~(keystone_admin)]$ ansible-playbook path-to-restore-platform-playbook-entry-file --limit host-name -i inventory-file -e optional-extra-vars
where ``optional-extra-vars`` can be:
- **Optional**: You can select one of the following restore modes:
  - To keep Ceph data intact (false - default option), use the
    following parameter:

    :command:`wipe_ceph_osds=false`

  - To start with an empty Ceph cluster (true), where the Ceph
    cluster will need to be recreated, use the following parameter:

    :command:`wipe_ceph_osds=true`

  - To define a convenient place to store the backup files, defined by
    ``initial_backup_dir``, on the system (such as the home folder for
    sysadmin, or /tmp, or even a mounted USB device), use the following
    parameter:

    :command:`on_box_data=true/false`

    If this parameter is set to true, the Ansible Restore playbook will
    look for the backup file on the target server. The parameter
    ``initial_backup_dir`` can be omitted from the command line; in this
    case, the backup file must be under the ``/opt/platform-backup``
    directory.

    If this parameter is set to false, the Ansible Restore playbook will
    look for the backup file on the machine where the Ansible controller is
    running. In this case, both ``initial_backup_dir`` and
    ``backup_filename`` must be specified in the command.

- The ``backup_filename`` is the platform backup tar file. It must be
  provided using the ``-e`` option on the command line, for example:
  .. code-block:: none

     -e backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz
- The ``initial_backup_dir`` is the location where the platform backup
  tar file is placed to restore the platform. It must be provided using
  the ``-e`` option on the command line.
  .. note::

     When ``on_box_data=false``, ``initial_backup_dir`` must be defined.
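Since ``on_box_data=false`` without ``initial_backup_dir`` is invalid, a caller can check the combination before invoking the playbook. A sketch with a hypothetical helper (``check_restore_vars`` is not part of the playbooks):

```shell
# Sketch: reject the invalid on_box_data=false / missing-dir combination.
# check_restore_vars is a hypothetical helper, not part of the playbooks.
check_restore_vars() {
    on_box="$1"
    dir="$2"
    if [ "${on_box}" = "false" ] && [ -z "${dir}" ]; then
        echo "error: on_box_data=false requires initial_backup_dir"
        return 1
    fi
    echo "ok"
}

check_restore_vars false /home/sysadmin   # prints: ok
check_restore_vars true ""                # prints: ok (dir may be omitted)
```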
- The :command:`admin_password`, :command:`ansible_become_pass`,
and :command:`ansible_ssh_pass` need to be set correctly using
/home/sysadmin on controller-0 using the ``-e`` option on the command
line.
For example:

.. parsed-literal::

   ~(keystone_admin)]$ ansible-playbook /localdisk/designer/jenkins/tis-stx-dev/cgcs-root/stx/ansible-playbooks/playbookconfig/src/playbooks/restore_platform.yml --limit |prefix|\_Cluster -i $HOME/br_test/hosts -e "ansible_become_pass=St0rlingX* admin_password=St0rlingX* ansible_ssh_pass=St0rlingX* initial_backup_dir=$HOME/br_test backup_filename= |prefix|\_Cluster_system_backup_2019_08_08_15_25_36.tgz ansible_remote_tmp=/home/sysadmin/ansible-restore"

- The :command:`ssl_ca_certificate_file` specifies an ``ssl_ca``
  certificate that will replace the ``ssl_ca`` certificate from the
  platform backup tar file.

  .. code-block:: none

     -e "ssl_ca_certificate_file=<complete path>/<ssl_ca certificate file>"
.. note::

   If the backup contains patches, the Ansible Restore playbook will apply
   the patches and prompt you to reboot the system. You will then need to
   re-run the Ansible Restore playbook.

.. note::

   After the restore is complete, it is not possible to restart (or rerun)
   the restore playbook.

#. After running the ``restore_platform.yml`` playbook, you can restore the
   local registry images.
.. note::
The backup file of the local registry may be large. Restore the
~(keystone_admin)]$ ansible-playbook path-to-restore-user-images-playbook-entry-file --limit host-name -i inventory-file -e optional-extra-vars
where ``optional-extra-vars`` can be:
- The ``backup_filename`` is the local registry backup tar file. It
  must be provided using the ``-e`` option on the command line, for
  example: