From cde21698bde876023232ac6aedc54ed7c93ce867 Mon Sep 17 00:00:00 2001
From: Dmitry Tantsur
Date: Tue, 10 Jan 2017 13:17:26 +0100
Subject: [PATCH] Document using manual_cleaning workflow to wipe hard drives

Related to blueprint re-enable-cleaning

Change-Id: Idfe1c7bce3910b9a0f7d84418d1bba259b10ed82
---
 .../basic_deployment/basic_deployment_cli.rst |  4 ++
 doc/source/mistral-api/mistral-api.rst        | 58 +++++++++++++++++--
 2 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/doc/source/basic_deployment/basic_deployment_cli.rst b/doc/source/basic_deployment/basic_deployment_cli.rst
index ea51d34c..209258cd 100644
--- a/doc/source/basic_deployment/basic_deployment_cli.rst
+++ b/doc/source/basic_deployment/basic_deployment_cli.rst
@@ -607,6 +607,10 @@ The overcloud can be redeployed when desired.
     # This command should show no stack once the Delete has completed
     heat stack-list
 
+#. It is recommended that you delete existing partitions from all nodes before
+   redeploying. Starting with TripleO Ocata, you can use existing workflows;
+   see :ref:`cleaning` for details.
+
 #. Deploy the Overcloud again::
 
     openstack overcloud deploy --templates
diff --git a/doc/source/mistral-api/mistral-api.rst b/doc/source/mistral-api/mistral-api.rst
index b3b5e789..dbff336e 100644
--- a/doc/source/mistral-api/mistral-api.rst
+++ b/doc/source/mistral-api/mistral-api.rst
@@ -169,17 +169,18 @@ it can be changed if they are all consistent. This will be the plan name.
         '{"container":"my_cloud"}'
 
 
-Register and Introspect Baremetal Nodes
----------------------------------------
+Working with Bare Metal Nodes
+-----------------------------
 
-Baremetal nodes can be registered with Ironic via Mistral. This functionality
-is provided by the ``tripleo.baremetal`` workflows.
+Some functionality for dealing with bare metal nodes is provided by the
+``tripleo.baremetal`` workflows.
 
 Register Nodes
 ^^^^^^^^^^^^^^
 
-The input for this workflow is a bit larger, so this time we will store it in
-a file and pass it in, rather than working inline.
+Bare metal nodes can be registered with Ironic via Mistral. The input for this
+workflow is a bit larger, so this time we will store it in a file and pass it
+in, rather than working inline.
 
 .. code-block:: bash
 
@@ -259,6 +260,51 @@ nodes to be in the "manageable" state.
     $ openstack workflow execution create tripleo.baremetal.v1.introspect \
         '{"nodes_uuids": ["UUID1", "UUID2"]}'
 
+.. _cleaning:
+
+Cleaning Nodes
+^^^^^^^^^^^^^^
+
+It is recommended to clean previous information from all disks on the bare
+metal nodes before new deployments. As TripleO disables automated cleaning, it
+has to be done manually via the ``manual_cleaning`` workflow. A node has to be
+in the ``manageable`` state for this workflow to work.
+
+.. note::
+    See `Ironic cleaning documentation
+    `_ for
+    more details.
+
+To remove partitions from all disks on a given node, use the following
+command:
+
+.. code-block:: bash
+
+    $ openstack workflow execution create tripleo.baremetal.v1.manual_cleaning \
+        '{"node_uuid": "UUID", "clean_steps": [{"step": "erase_devices_metadata", "interface": "deploy"}]}'
+
+To remove all data from all disks (either by ATA secure erase or by shredding
+them), use the following command:
+
+.. code-block:: bash
+
+    $ openstack workflow execution create tripleo.baremetal.v1.manual_cleaning \
+        '{"node_uuid": "UUID", "clean_steps": [{"step": "erase_devices", "interface": "deploy"}]}'
+
+The node state is set back to ``manageable`` after successful cleaning and to
+``clean failed`` after a failure. Inspect the node's ``last_error`` field for
+the cause of the failure.
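+
+For example, assuming python-ironicclient is installed (as it is on a typical
+undercloud), you can check both fields with:
+
+.. code-block:: bash
+
+    $ openstack baremetal node show UUID --fields provision_state last_error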
+
+.. warning::
+    Shredding disks can take a really long time, up to several hours.
+
 
 Provide Nodes
 ^^^^^^^^^^^^^