Support RAID configuration for BM via iRMC driver

This is an OOB solution that creates/deletes RAID configurations via the
Fujitsu iRMC driver. In addition, this commit enables the raid interface
for the iRMC driver. Tested successfully on a TX2540 M1 with an eLCM
license, an SD card, and SP (Service Platform) available.

Change-Id: Iacaf213f76abf130d5570fc13704b1d1bfcf49d7
Story: #1699695
Task: #10597
This commit is contained in:
parent 08e70c4528
commit 84ae0c323c
@@ -18,7 +18,7 @@ Prerequisites

 * Install `python-scciclient <https://pypi.org/project/python-scciclient>`_
   and `pysnmp <https://pypi.org/project/pysnmp>`_ packages::

-      $ pip install "python-scciclient>=0.6.0" pysnmp
+      $ pip install "python-scciclient>=0.7.0" pysnmp

 Hardware Type
 =============
@@ -65,6 +65,10 @@ hardware interfaces:
   Supports ``irmc``, which enables power control via ServerView Common
   Command Interface (SCCI), by default. Also supports ``ipmitool``.

+* raid
+    Supports ``irmc``, ``no-raid`` and ``agent``.
+    The default is ``no-raid``.
+
 For other hardware interfaces, ``irmc`` hardware type supports the
 Bare Metal reference interfaces. For more details about the hardware
 interfaces and how to enable the desired ones, see
@@ -84,7 +88,7 @@ interfaces enabled for ``irmc`` hardware type.
     enabled_management_interfaces = irmc
     enabled_network_interfaces = flat,neutron
     enabled_power_interfaces = irmc
-    enabled_raid_interfaces = no-raid
+    enabled_raid_interfaces = no-raid,irmc
     enabled_storage_interfaces = noop,cinder
     enabled_vendor_interfaces = no-vendor,ipmitool
@@ -93,10 +97,10 @@ Here is a command example to enroll a node with ``irmc`` hardware type.

 .. code-block:: console

    openstack baremetal node create --os-baremetal-api-version=1.31 \
      --driver irmc \
      --boot-interface irmc-pxe \
      --deploy-interface direct \
-     --inspect-interface irmc
+     --inspect-interface irmc \
+     --raid-interface irmc

 Node configuration
 ^^^^^^^^^^^^^^^^^^
@@ -384,6 +388,99 @@ example::

See :ref:`capabilities-discovery` for more details and examples.

RAID configuration Support
^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``irmc`` hardware type supports iRMC RAID configuration through the
``irmc`` raid interface.

.. note::

   * The RAID implementation for the ``irmc`` hardware type requires an eLCM
     license and an SD card; otherwise, SP (Service Platform) in lifecycle
     management must be available.
   * RAID configuration is only supported on RAIDAdapter 0 of Fujitsu
     servers.

Configuration
~~~~~~~~~~~~~

RAID configuration support in the iRMC driver requires the following
configuration:

* Set the target RAID configuration on the node from a JSON file::

    $ openstack baremetal node set <node-uuid-or-name> \
      --target-raid-config <JSON file containing target RAID configuration>

Here are some sample values for the JSON file::

    {
      "logical_disks": [
        {
          "size_gb": 1000,
          "raid_level": "1"
        }
      ]
    }

or::

    {
      "logical_disks": [
        {
          "size_gb": 1000,
          "raid_level": "1",
          "controller": "FTS RAID Ctrl SAS 6G 1GB (D3116C) (0)",
          "physical_disks": [
            "0",
            "1"
          ]
        }
      ]
    }

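To verify what was stored, the target RAID configuration can be read back
from the node (a sketch; the node name is a placeholder)::

    $ openstack baremetal node show <node-uuid-or-name> \
      --fields target_raid_config
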
.. note::

   The iRMC driver does not yet support the ``physical_disks`` property in
   ``target_raid_config`` for RAID levels 1+0 and 5+0 when creating a RAID
   configuration. See the following example::

    {
      "logical_disks":
      [
        {
          "size_gb": "Max",
          "raid_level": "1+0"
        }
      ]
    }

See :ref:`raid` for more details and examples.

Supported properties
~~~~~~~~~~~~~~~~~~~~

RAID configuration via the iRMC driver supports the following parameters in
the JSON file:

* ``size_gb``: a mandatory property in Ironic.
* ``raid_level``: a mandatory property in Ironic. Currently, iRMC servers
  support the following RAID levels: 0, 1, 5, 6, 1+0 and 5+0.
* ``controller``: the name of the controller as read by the RAID interface.
* ``physical_disks``: the specific physical disks of each RAID array in a
  LogicalDrive which the operator wants to use along with ``raid_level``.

The RAID configuration is supported as a manual cleaning step; a sample
invocation is shown after the following note.

.. note::

   The iRMC server powers on after a create/delete RAID configuration is
   applied, and FGI (Foreground Initialization) then processes the RAID
   configuration on the iRMC server. The operation therefore completes
   through a power-on and power-off cycle when RAID is created on the
   iRMC server.

See :ref:`raid` for more details and examples.
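
For illustration, a manual cleaning request for this step could look like
the following sketch (the step names correspond to the
``create_configuration`` and ``delete_configuration`` clean steps of the
raid interface; the node name is a placeholder)::

    $ openstack baremetal node clean <node-uuid-or-name> \
      --clean-steps '[{"interface": "raid", "step": "create_configuration"}]'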

Supported platforms
===================
This driver supports FUJITSU PRIMERGY BX S4 or RX S8 servers and above.
@@ -397,3 +494,8 @@ Power Off (Graceful Power Off) are only available if
 `ServerView agents <http://manuals.ts.fujitsu.com/index.php?id=5406-5873-5925-5945-16159>`_
 are installed. See `iRMC S4 Manual <http://manuals.ts.fujitsu.com/index.php?id=5406-5873-5925-5988>`_
 for more details.
+
+The RAID configuration feature supports FUJITSU PRIMERGY servers with a
+RAID-Ctrl-SAS-6G-1GB(D3116C) controller and above.
+For details of the controllers supported with OOB RAID configuration, see
+`the whitepaper for iRMC RAID configuration <https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-SVS-ooB-RAID-HDD-en.pdf>`_.
@@ -8,7 +8,7 @@ proliantutils>=2.5.0
 pysnmp
 python-ironic-inspector-client>=1.5.0
 python-oneviewclient<3.0.0,>=2.5.2
-python-scciclient>=0.6.0
+python-scciclient>=0.7.0
 python-ilorest-library>=2.1.0
 hpOneView>=4.4.0
 UcsSdk==0.8.2.2
@@ -89,6 +89,14 @@ opts = [
                      'this option is not defined, then leave out '
                      'pci_gpu_devices in capabilities property. '
                      'Sample gpu_ids value: 0x1000/0x0079,0x2100/0x0080')),
+    cfg.IntOpt('query_raid_config_fgi_status_interval',
+               min=1,
+               default=300,
+               help=_('Interval (in seconds) between periodic RAID status '
+                      'checks to determine whether the asynchronous RAID '
+                      'configuration was successfully finished or not. '
+                      'Foreground Initialization (FGI) will start 5 minutes '
+                      'after creating virtual drives.')),
 ]
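For example, the polling cadence can be tuned in ``ironic.conf``; this is a
sketch with an arbitrary illustrative value::

    [irmc]
    # Check FGI status every two minutes instead of the default 300 seconds.
    query_raid_config_fgi_status_interval = 120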
@@ -17,12 +17,14 @@ of FUJITSU PRIMERGY servers, and above servers.
 """

 from ironic.drivers import generic
+from ironic.drivers.modules import agent
 from ironic.drivers.modules import inspector
 from ironic.drivers.modules import ipmitool
 from ironic.drivers.modules.irmc import boot
 from ironic.drivers.modules.irmc import inspect
 from ironic.drivers.modules.irmc import management
 from ironic.drivers.modules.irmc import power
+from ironic.drivers.modules.irmc import raid
 from ironic.drivers.modules import noop
 from ironic.drivers.modules import pxe

@@ -63,3 +65,8 @@ class IRMCHardware(generic.GenericHardware):
     def supported_power_interfaces(self):
         """List of supported power interfaces."""
         return [power.IRMCPower, ipmitool.IPMIPower]
+
+    @property
+    def supported_raid_interfaces(self):
+        """List of supported raid interfaces."""
+        return [noop.NoRAID, raid.IRMCRAID, agent.AgentRAID]
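With the raid interface registered above, an existing node could also be
switched to it (a sketch; ``irmc`` must already be listed in
``enabled_raid_interfaces``)::

    $ openstack baremetal node set <node-uuid-or-name> --raid-interface irmc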
@@ -21,6 +21,8 @@ import six

 from ironic.common import exception
 from ironic.common.i18n import _
+from ironic.common import raid as raid_common
+from ironic.conductor import utils as manager_utils
 from ironic.conf import CONF

 scci = importutils.try_import('scciclient.irmc.scci')

@@ -219,3 +221,9 @@ def set_secure_boot_mode(node, enable):
         raise exception.IRMCOperationError(
             operation=_("setting secure boot mode"),
             error=irmc_exception)
+
+
+def resume_cleaning(task):
+    raid_common.update_raid_info(
+        task.node, task.node.raid_config)
+    manager_utils.notify_conductor_resume_clean(task)
ironic/drivers/modules/irmc/raid.py (new file, 502 lines)
@@ -0,0 +1,502 @@
# Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
iRMC RAID specific methods
"""
from futurist import periodics
from ironic_lib import metrics_utils
from oslo_log import log as logging
from oslo_utils import importutils
import six

from ironic.common import exception
from ironic.common import raid as raid_common
from ironic.common import states
from ironic.conductor import task_manager
from ironic import conf
from ironic.drivers import base
from ironic.drivers.modules.irmc import common as irmc_common

client = importutils.try_import('scciclient.irmc')

LOG = logging.getLogger(__name__)
CONF = conf.CONF

METRICS = metrics_utils.get_metrics_logger(__name__)

RAID_LEVELS = {
    '0': {
        'min_disks': 1,
        'max_disks': 1000,
        'factor': 0,
    },
    '1': {
        'min_disks': 2,
        'max_disks': 2,
        'factor': 1,
    },
    '5': {
        'min_disks': 3,
        'max_disks': 1000,
        'factor': 1,
    },
    '6': {
        'min_disks': 4,
        'max_disks': 1000,
        'factor': 2,
    },
    '10': {
        'min_disks': 4,
        'max_disks': 1000,
        'factor': 2,
    },
    '50': {
        'min_disks': 6,
        'max_disks': 1000,
        'factor': 2,
    }
}
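
# NOTE: 'factor' above is the per-array redundancy overhead in disks;
# _validate_logical_drive_capacity() below computes the usable capacity as
# roughly min(disk_size) * (number_of_disks - factor), except for RAID 10,
# which uses min(disk_size) * (number_of_disks / 2).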

RAID_COMPLETING = 'completing'
RAID_COMPLETED = 'completed'
RAID_FAILED = 'failed'

def _get_raid_adapter(node):
    """Get the RAID adapter info on a RAID controller.

    :param node: an ironic node object.
    :returns: RAID adapter dictionary, None otherwise.
    :raises: IRMCOperationError on an error from python-scciclient.
    """
    irmc_info = node.driver_info
    LOG.info('iRMC driver is gathering RAID adapter info for node %s',
             node.uuid)
    try:
        return client.elcm.get_raid_adapter(irmc_info)
    except client.elcm.ELCMProfileNotFound:
        reason = ('Cannot find any RAID profile in "%s"' % node.uuid)
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=reason)


def _get_fgi_status(report, node_uuid):
    """Get a dict of FGI (Foreground Initialization) statuses.

    :param report: SCCI report information.
    :param node_uuid: the UUID of the ironic node.
    :returns: the FGI status dict on success, None on
        SCCIInvalidInputError, and the 'completing' status on
        SCCIRAIDNotReady.
    """
    try:
        return client.scci.get_raid_fgi_status(report)
    except client.scci.SCCIInvalidInputError:
        LOG.warning('ServerViewRAID not available in %(node)s',
                    {'node': node_uuid})
    except client.scci.SCCIRAIDNotReady:
        return RAID_COMPLETING
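
# The FGI status dict maps logical drive numbers to states, e.g.
# {'0': 'Idle', '1': 'Idle'} once initialization has finished; the periodic
# task below resumes cleaning when every value is 'Idle'.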

def _get_physical_disk(node):
    """Get physical disk info from a RAID controller.

    This method only supports creating the RAID configuration
    on RAIDAdapter 0.

    :param node: an ironic node object.
    :returns: dict of physical disks on the RAID controller.
    """
    physical_disk_dict = {}
    raid_adapter = _get_raid_adapter(node)
    physical_disks = raid_adapter['Server']['HWConfigurationIrmc'][
        'Adapters']['RAIDAdapter'][0]['PhysicalDisks']

    if physical_disks:
        for disks in physical_disks['PhysicalDisk']:
            physical_disk_dict.update({disks['Slot']: disks['Type']})

    return physical_disk_dict


def _create_raid_adapter(node):
    """Create RAID adapter info on a RAID controller.

    :param node: an ironic node object.
    :raises: IRMCOperationError on an error from python-scciclient.
    """
    irmc_info = node.driver_info
    target_raid_config = node.target_raid_config

    try:
        return client.elcm.create_raid_configuration(irmc_info,
                                                     target_raid_config)
    except client.elcm.ELCMProfileNotFound as exc:
        LOG.error('iRMC driver failed with profile not found for node '
                  '%(node_uuid)s. Reason: %(error)s.',
                  {'node_uuid': node.uuid, 'error': exc})
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=exc)
    except client.scci.SCCIClientError as exc:
        LOG.error('iRMC driver failed to create RAID adapter info for node '
                  '%(node_uuid)s. Reason: %(error)s.',
                  {'node_uuid': node.uuid, 'error': exc})
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=exc)


def _delete_raid_adapter(node):
    """Delete the RAID adapter info on a RAID controller.

    :param node: an ironic node object.
    :raises: IRMCOperationError if SCCI fails in python-scciclient.
    """
    irmc_info = node.driver_info

    try:
        client.elcm.delete_raid_configuration(irmc_info)
    except client.scci.SCCIClientError as exc:
        LOG.error('iRMC driver failed to delete RAID configuration '
                  'for node %(node_uuid)s. Reason: %(error)s.',
                  {'node_uuid': node.uuid, 'error': exc})
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=exc)

def _commit_raid_config(task):
    """Commit the RAID configuration to the node."""
    node = task.node
    node_uuid = task.node.uuid
    raid_config = {'logical_disks': []}

    raid_adapter = _get_raid_adapter(node)

    raid_adapter_info = raid_adapter['Server']['HWConfigurationIrmc'][
        'Adapters']['RAIDAdapter'][0]
    controller = raid_adapter_info['@AdapterId']
    raid_config['logical_disks'].append({'controller': controller})

    logical_drives = raid_adapter_info['LogicalDrives']['LogicalDrive']
    for logical_drive in logical_drives:
        raid_config['logical_disks'].append(
            {'irmc_raid_info': {
                'logical_drive_number': logical_drive['@Number'],
                'raid_level': logical_drive['RaidLevel'],
                'name': logical_drive['Name'],
                ' size': logical_drive['Size']}})
    for physical_drive in \
            raid_adapter_info['PhysicalDisks']['PhysicalDisk']:
        raid_config['logical_disks'].append({'physical_drives': {
            'physical_drive': physical_drive}})
    node.raid_config = raid_config

    raid_common.update_raid_info(node, node.raid_config)
    LOG.info('RAID config is created successfully on node %s',
             node_uuid)

    return states.CLEANWAIT

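# The committed raid_config thus holds one 'controller' entry, followed by
# an 'irmc_raid_info' entry per logical drive and a 'physical_drives' entry
# per physical disk (see the expected_raid_config fixture in test_raid.py).
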
def _validate_logical_drive_capacity(disk, valid_disk_slots):
    physical_disks = valid_disk_slots['PhysicalDisk']
    size_gb = {}
    all_volume_list = []
    physical_disk_list = []

    for size in physical_disks:
        size_gb.update({size['@Number']: size['Size']['#text']})
        all_volume_list.append(size['Size']['#text'])

    factor = RAID_LEVELS[disk['raid_level']]['factor']

    if disk.get('physical_disks'):
        selected_disks = \
            [physical_disk for physical_disk in disk['physical_disks']]
        for volume in selected_disks:
            physical_disk_list.append(size_gb[volume])
        if disk['raid_level'] == '10':
            valid_capacity = \
                min(physical_disk_list) * (len(physical_disk_list) / 2)
        else:
            valid_capacity = \
                min(physical_disk_list) * (len(physical_disk_list) - factor)
    else:
        valid_capacity = \
            min(all_volume_list) * \
            ((RAID_LEVELS[disk['raid_level']]['min_disks']) - factor)

    if disk['size_gb'] > valid_capacity:
        raise exception.InvalidParameterValue(
            'Insufficient disk capacity with %s GB' % disk['size_gb'])

    if disk['size_gb'] == valid_capacity:
        disk['size_gb'] = 'MAX'

def _validate_physical_disks(node, logical_disks):
    """Validate physical disks in a RAID configuration.

    :param node: an ironic node object.
    :param logical_disks: RAID info used to set the RAID configuration.
    :raises: IRMCOperationError on an error.
    """
    raid_adapter = _get_raid_adapter(node)
    physical_disk_dict = _get_physical_disk(node)
    if raid_adapter is None:
        reason = ('Cannot find any RAID profile in "%s"' % node.uuid)
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=reason)
    if physical_disk_dict is None:
        reason = ('Cannot find any physical disks in "%s"' % node.uuid)
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=reason)
    valid_disks = raid_adapter['Server']['HWConfigurationIrmc'][
        'Adapters']['RAIDAdapter'][0]['PhysicalDisks']
    if valid_disks is None:
        reason = ('Cannot find any HDD in the node "%s"' % node.uuid)
        raise exception.IRMCOperationError(operation='RAID config',
                                           error=reason)
    valid_disk_slots = [slot['Slot'] for slot in valid_disks['PhysicalDisk']]
    remain_valid_disk_slots = list(valid_disk_slots)
    number_of_valid_disks = len(valid_disk_slots)
    used_valid_disk_slots = []

    for disk in logical_disks:
        # Check the raid_level value in the target_raid_config of the node
        if disk.get('raid_level') not in RAID_LEVELS:
            reason = ('RAID level is not supported: "%s"'
                      % disk.get('raid_level'))
            raise exception.IRMCOperationError(operation='RAID config',
                                               error=reason)

        min_disk_value = RAID_LEVELS[disk['raid_level']]['min_disks']
        max_disk_value = RAID_LEVELS[disk['raid_level']]['max_disks']
        number_of_valid_disks = number_of_valid_disks - min_disk_value

        if number_of_valid_disks < 0:
            reason = ('Not enough physical disk slots for RAID "%s"'
                      % disk['raid_level'])
            raise exception.IRMCOperationError(operation='RAID config',
                                               error=reason)

        if 'physical_disks' in disk:
            type_of_disks = []
            number_of_physical_disks = len(disk['physical_disks'])
            # Check the number of physical disks against the raid level
            if number_of_physical_disks > max_disk_value:
                reason = ("Too many disks requested for RAID level "
                          "%(level)s, maximum is %(max)s"
                          % {'level': disk['raid_level'],
                             'max': max_disk_value})
                raise exception.InvalidParameterValue(err=reason)
            if number_of_physical_disks < min_disk_value:
                reason = ("Not enough disks requested for RAID level "
                          "%(level)s, minimum is %(min)s"
                          % {'level': disk['raid_level'],
                             'min': min_disk_value})
                raise exception.IRMCOperationError(operation='RAID config',
                                                   error=reason)
            # Check that the physical disks are in valid disk slots
            for phys_disk in disk['physical_disks']:
                if int(phys_disk) not in valid_disk_slots:
                    reason = ("Incorrect physical disk %(disk)s, correct "
                              "are %(valid)s"
                              % {'disk': phys_disk,
                                 'valid': valid_disk_slots})
                    raise exception.IRMCOperationError(
                        operation='RAID config', error=reason)
                type_of_disks.append(physical_disk_dict[int(phys_disk)])
                if physical_disk_dict[int(phys_disk)] != type_of_disks[0]:
                    reason = ('Cannot create RAID configuration with '
                              'different hard drive types %s'
                              % physical_disk_dict[int(phys_disk)])
                    raise exception.IRMCOperationError(
                        operation='RAID config', error=reason)
                # Check the physical disk against already used disk slots
                if int(phys_disk) in used_valid_disk_slots:
                    reason = ("Disk %s is already used in a RAID "
                              "configuration" % phys_disk)
                    raise exception.IRMCOperationError(
                        operation='RAID config', error=reason)

                used_valid_disk_slots.append(int(phys_disk))
                remain_valid_disk_slots.remove(int(phys_disk))

        if disk['size_gb'] != 'MAX':
            # Validate the size_gb input value
            _validate_logical_drive_capacity(disk, valid_disks)

class IRMCRAID(base.RAIDInterface):

    def get_properties(self):
        """Return the properties of the interface."""
        return irmc_common.COMMON_PROPERTIES

    @METRICS.timer('IRMCRAID.create_configuration')
    @base.clean_step(priority=0, argsinfo={
        'create_root_volume': {
            'description': ('This specifies whether to create the root '
                            'volume. Defaults to `True`.'),
            'required': False
        },
        'create_nonroot_volumes': {
            'description': ('This specifies whether to create the non-root '
                            'volumes. Defaults to `True`.'),
            'required': False
        }
    })
    def create_configuration(self, task,
                             create_root_volume=True,
                             create_nonroot_volumes=True):
        """Create the RAID configuration.

        This method creates the RAID configuration on the given node.

        :param task: a TaskManager instance containing the node to act on.
        :param create_root_volume: If True, a root volume is created
            during RAID configuration. Otherwise, no root volume is
            created. Default is True.
        :param create_nonroot_volumes: If True, non-root volumes are
            created. If False, no non-root volumes are created. Default
            is True.
        :returns: states.CLEANWAIT if RAID configuration is in progress
            asynchronously.
        :raises: MissingParameterValue, if node.target_raid_config is missing
            or empty.
        :raises: IRMCOperationError on an error from scciclient.
        """
        node = task.node

        if not node.target_raid_config:
            raise exception.MissingParameterValue(
                'Missing the target_raid_config in node %s' % node.uuid)

        target_raid_config = node.target_raid_config.copy()

        logical_disks = target_raid_config['logical_disks']
        for log_disk in logical_disks:
            if log_disk.get('raid_level'):
                log_disk['raid_level'] = six.text_type(
                    log_disk['raid_level']).replace('+', '')

        # Validate physical disks on the Fujitsu BM server
        _validate_physical_disks(node, logical_disks)

        # Execute the RAID configuration on the Fujitsu BM server
        _create_raid_adapter(node)

        return _commit_raid_config(task)

    @METRICS.timer('IRMCRAID.delete_configuration')
    @base.clean_step(priority=0)
    def delete_configuration(self, task):
        """Delete the RAID configuration.

        :param task: a TaskManager instance containing the node to act on.
        :returns: states.CLEANWAIT if deletion is in progress
            asynchronously or None if it is complete.
        """
        node = task.node
        node_uuid = task.node.uuid

        # By default, delete the entire RAID configuration on the BM server
        _delete_raid_adapter(node)
        node.raid_config = {}
        node.save()
        LOG.info('RAID config is deleted successfully on node %(node_id)s. '
                 'RAID config is cleared and reset to %(cfg)s.',
                 {'node_id': node_uuid, 'cfg': node.raid_config})

    @METRICS.timer('IRMCRAID._query_raid_config_fgi_status')
    @periodics.periodic(
        spacing=CONF.irmc.query_raid_config_fgi_status_interval)
    def _query_raid_config_fgi_status(self, manager, context):
        """Periodic task to check the progress of a running RAID config."""
        filters = {'reserved': False, 'provision_state': states.CLEANWAIT,
                   'maintenance': False}
        fields = ['raid_config']
        node_list = manager.iter_nodes(fields=fields, filters=filters)
        for (node_uuid, driver, raid_config) in node_list:
            try:
                lock_purpose = 'checking async RAID configuration tasks'
                with task_manager.acquire(context, node_uuid,
                                          purpose=lock_purpose,
                                          shared=True) as task:
                    node = task.node
                    node_uuid = task.node.uuid
                    if not isinstance(task.driver.raid, IRMCRAID):
                        continue
                    if task.node.target_raid_config is None:
                        continue
                    if not raid_config or raid_config.get('fgi_status'):
                        continue
                    task.upgrade_lock()
                    if node.provision_state != states.CLEANWAIT:
                        continue
                    # Avoid hitting the clean_callback_timeout expiration
                    node.touch_provisioning()

                    try:
                        report = irmc_common.get_irmc_report(node)
                    except client.scci.SCCIInvalidInputError:
                        raid_config.update({'fgi_status': RAID_FAILED})
                        raid_common.update_raid_info(node, raid_config)
                        self._set_clean_failed(task, RAID_FAILED)
                        continue
                    except client.scci.SCCIClientError:
                        raid_config.update({'fgi_status': RAID_FAILED})
                        raid_common.update_raid_info(node, raid_config)
                        self._set_clean_failed(task, RAID_FAILED)
                        continue

                    fgi_status_dict = _get_fgi_status(report, node_uuid)
                    # NOTE(trungnv): Keep checking until the RAID mechanism
                    # has completed with RAID information in the report.
                    if fgi_status_dict == 'completing':
                        continue
                    if not fgi_status_dict:
                        raid_config.update({'fgi_status': RAID_FAILED})
                        raid_common.update_raid_info(node, raid_config)
                        self._set_clean_failed(task, fgi_status_dict)
                        continue
                    if all(fgi_status == 'Idle' for fgi_status in
                           fgi_status_dict.values()):
                        raid_config.update({'fgi_status': RAID_COMPLETED})
                        raid_common.update_raid_info(node, raid_config)
                        LOG.info('RAID configuration has completed on '
                                 'node %(node)s with fgi_status %(fgi)s',
                                 {'node': node_uuid, 'fgi': RAID_COMPLETED})
                        irmc_common.resume_cleaning(task)

            except exception.NodeNotFound:
                LOG.info('During query_raid_config_job_status, node '
                         '%(node)s was not found and is presumed deleted '
                         'by another process.', {'node': node_uuid})
            except exception.NodeLocked:
                LOG.info('During query_raid_config_job_status, node '
                         '%(node)s was already locked by another process. '
                         'Skip.', {'node': node_uuid})

    def _set_clean_failed(self, task, fgi_status_dict):
        LOG.error('RAID configuration task failed for node %(node)s '
                  'with FGI status: %(fgi)s.',
                  {'node': task.node.uuid, 'fgi': fgi_status_dict})
        fgi_message = 'ServerViewRAID not available in Baremetal Server'
        task.node.last_error = fgi_message
        task.process_event('fail')
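
To make the capacity rule concrete, here is a minimal standalone sketch of
the same arithmetic with hypothetical 1000 GB disks (illustrative only, not
part of the change):

    # Sketch of the capacity rule in _validate_logical_drive_capacity.
    RAID_FACTORS = {'0': 0, '1': 1, '5': 1, '6': 2, '10': 2, '50': 2}

    def usable_capacity_gb(raid_level, disk_sizes_gb):
        """Rough usable capacity of one logical drive, in GB."""
        if raid_level == '10':
            return min(disk_sizes_gb) * (len(disk_sizes_gb) // 2)
        return min(disk_sizes_gb) * (
            len(disk_sizes_gb) - RAID_FACTORS[raid_level])

    assert usable_capacity_gb('5', [1000] * 3) == 2000   # one disk of parity
    assert usable_capacity_gb('10', [1000] * 4) == 2000  # mirrored pairs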
ironic/tests/unit/drivers/modules/irmc/test_periodic_task.py (new file, 294 lines)
@@ -0,0 +1,294 @@
# Copyright 2018 FUJITSU LIMITED
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Test class for iRMC periodic tasks
"""

import mock
from oslo_utils import uuidutils

from ironic.conductor import task_manager
from ironic.drivers.modules.irmc import common as irmc_common
from ironic.drivers.modules.irmc import raid as irmc_raid
from ironic.drivers.modules import noop
from ironic.tests.unit.drivers.modules.irmc import test_common
from ironic.tests.unit.objects import utils as obj_utils


class iRMCPeriodicTaskTestCase(test_common.BaseIRMCTest):

    def setUp(self):
        super(iRMCPeriodicTaskTestCase, self).setUp()
        self.node_2 = obj_utils.create_test_node(
            self.context, driver='fake-hardware',
            uuid=uuidutils.generate_uuid())
        self.driver = mock.Mock(raid=irmc_raid.IRMCRAID())
        self.raid_config = {
            'logical_disks': [
                {'controller': 'RAIDAdapter0'},
                {'irmc_raid_info':
                    {' size': {'#text': 465, '@Unit': 'GB'},
                     'logical_drive_number': 0,
                     'name': 'LogicalDrive_0',
                     'raid_level': '1'}}]}
        self.target_raid_config = {
            'logical_disks': [
                {
                    'key': 'value'
                }]}

    @mock.patch.object(irmc_common, 'get_irmc_report')
    def test__query_raid_config_fgi_status_without_node(
            self, report_mock):
        mock_manager = mock.Mock()
        node_list = []
        mock_manager.iter_nodes.return_value = node_list
        raid_object = irmc_raid.IRMCRAID()
        raid_object._query_raid_config_fgi_status(mock_manager, None)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_without_raid_object(
            self, mock_acquire, report_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task.driver.raid = noop.NoRAID()
        raid_object = irmc_raid.IRMCRAID()
        raid_object._query_raid_config_fgi_status(mock_manager,
                                                  self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_without_input(
            self, mock_acquire, report_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set no target_raid_config input
        task.node.target_raid_config = None
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_without_raid_config(
            self, mock_acquire, report_mock):
        mock_manager = mock.Mock()
        raid_config = {}
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_without_fgi_status(
            self, mock_acquire, report_mock):
        mock_manager = mock.Mock()
        raid_config = {
            'logical_disks': [
                {'controller': 'RAIDAdapter0'},
                {'irmc_raid_info':
                    {' size': {'#text': 465, '@Unit': 'GB'},
                     'logical_drive_number': 0,
                     'name': 'LogicalDrive_0',
                     'raid_level': '1'}}]}
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_other_clean_state(
            self, mock_acquire, report_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set the provision state value
        task.node.provision_state = 'cleaning'
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, report_mock.call_count)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_completing_status(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock):
        mock_manager = mock.Mock()
        fgi_mock.return_value = 'completing'
        node_list = [(self.node.uuid, 'irmc', self.raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        # Set the provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.raid_config = self.raid_config
        task.node.save()

        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, clean_fail_mock.call_count)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)

    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_clean_fail(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        fgi_mock.return_value = None
        fgi_status_dict = None
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set the provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.raid_config = self.raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        clean_fail_mock.assert_called_once_with(task, fgi_status_dict)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)

    @mock.patch.object(irmc_common, 'resume_cleaning')
    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_complete_cleaning(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock,
            clean_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        fgi_mock.return_value = {'0': 'Idle', '1': 'Idle'}
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set the provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, clean_fail_mock.call_count)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)
        clean_mock.assert_called_once_with(task)

    @mock.patch.object(irmc_common, 'resume_cleaning')
    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_two_nodes_without_raid_config(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock,
            clean_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        raid_config_2 = {}
        fgi_mock.return_value = {'0': 'Idle', '1': 'Idle'}
        task = mock.Mock(node=self.node, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        node_list = [(self.node_2.uuid, 'irmc', raid_config_2),
                     (self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        # Set the provision state value
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        self.assertEqual(0, clean_fail_mock.call_count)
        report_mock.assert_called_once_with(task.node)
        fgi_mock.assert_called_once_with(report_mock.return_value,
                                         self.node.uuid)
        clean_mock.assert_called_once_with(task)

    @mock.patch.object(irmc_common, 'resume_cleaning')
    @mock.patch('ironic.drivers.modules.irmc.raid.IRMCRAID._set_clean_failed')
    @mock.patch('ironic.drivers.modules.irmc.raid._get_fgi_status')
    @mock.patch.object(irmc_common, 'get_irmc_report')
    @mock.patch.object(task_manager, 'acquire', autospec=True)
    def test__query_raid_config_fgi_status_with_two_nodes_with_fgi_status_none(
            self, mock_acquire, report_mock, fgi_mock, clean_fail_mock,
            clean_mock):
        mock_manager = mock.Mock()
        raid_config = self.raid_config
        raid_config_2 = self.raid_config.copy()
        fgi_status_dict = {}
        fgi_mock.side_effect = [{}, {'0': 'Idle', '1': 'Idle'}]
        node_list = [(self.node_2.uuid, 'fake-hardware', raid_config_2),
                     (self.node.uuid, 'irmc', raid_config)]
        mock_manager.iter_nodes.return_value = node_list
        task = mock.Mock(node=self.node_2, driver=self.driver)
        mock_acquire.return_value = mock.MagicMock(
            __enter__=mock.MagicMock(return_value=task))
        task.node.provision_state = 'clean wait'
        task.node.target_raid_config = self.target_raid_config
        task.node.save()
        task.driver.raid._query_raid_config_fgi_status(mock_manager,
                                                       self.context)
        report_mock.assert_has_calls(
            [mock.call(task.node), mock.call(task.node)])
        fgi_mock.assert_has_calls([mock.call(report_mock.return_value,
                                             self.node_2.uuid),
                                   mock.call(report_mock.return_value,
                                             self.node_2.uuid)])
        clean_fail_mock.assert_called_once_with(task, fgi_status_dict)
        clean_mock.assert_called_once_with(task)
ironic/tests/unit/drivers/modules/irmc/test_raid.py (new file, 809 lines)
@@ -0,0 +1,809 @@
# Copyright 2018 FUJITSU LIMITED
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License"); you may
|
||||
# not use this file except in compliance with the License. You may obtain
|
||||
# a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
|
||||
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
|
||||
# License for the specific language governing permissions and limitations
|
||||
# under the License.
|
||||
|
||||
"""
|
||||
Test class for IRMC RAID configuration
|
||||
"""
|
||||
|
||||
import mock
|
||||
|
||||
from ironic.common import exception
|
||||
from ironic.conductor import task_manager
|
||||
from ironic.drivers.modules.irmc import raid
|
||||
from ironic.tests.unit.drivers.modules.irmc import test_common
|
||||
|
||||
|
||||
class IRMCRaidConfigurationInternalMethodsTestCase(test_common.BaseIRMCTest):
|
||||
|
||||
def setUp(self):
|
||||
super(IRMCRaidConfigurationInternalMethodsTestCase, self).setUp()
|
||||
self.raid_adapter_profile = {
|
||||
"Server": {
|
||||
"HWConfigurationIrmc": {
|
||||
"Adapters": {
|
||||
"RAIDAdapter": [
|
||||
{
|
||||
"@AdapterId": "RAIDAdapter0",
|
||||
"@ConfigurationType": "Addressing",
|
||||
"Arrays": None,
|
||||
"LogicalDrives": None,
|
||||
"PhysicalDisks": {
|
||||
"PhysicalDisk": [
|
||||
{
|
||||
"@Number": "0",
|
||||
"@Action": "None",
|
||||
"Slot": 0,
|
||||
},
|
||||
{
|
||||
"@Number": "1",
|
||||
"@Action": "None",
|
||||
"Slot": 1
|
||||
},
|
||||
{
|
||||
"@Number": "2",
|
||||
"@Action": "None",
|
||||
"Slot": 2
|
||||
},
|
||||
{
|
||||
"@Number": "3",
|
||||
"@Action": "None",
|
||||
"Slot": 3
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
self.valid_disk_slots = {
|
||||
"PhysicalDisk": [
|
||||
{
|
||||
"@Number": "0",
|
||||
"Slot": 0,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "1",
|
||||
"Slot": 1,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "2",
|
||||
"Slot": 2,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "3",
|
||||
"Slot": 3,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "4",
|
||||
"Slot": 4,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "5",
|
||||
"Slot": 5,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "6",
|
||||
"Slot": 6,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
},
|
||||
{
|
||||
"@Number": "7",
|
||||
"Slot": 7,
|
||||
"Size": {
|
||||
"@Unit": "GB",
|
||||
"#text": 1000
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test___fail_validation_with_none_raid_adapter_profile(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = None
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "0"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test___fail_validation_without_raid_level(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test___fail_validation_with_raid_level_is_none(self,
|
||||
get_raid_adapter_mock,
|
||||
get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": ""
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_without_physical_disks(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = {
|
||||
"Server": {
|
||||
"HWConfigurationIrmc": {
|
||||
"Adapters": {
|
||||
"RAIDAdapter": [
|
||||
{
|
||||
"@AdapterId": "RAIDAdapter0",
|
||||
"@ConfigurationType": "Addressing",
|
||||
"Arrays": None,
|
||||
"LogicalDrives": None,
|
||||
"PhysicalDisks": None
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test___fail_validation_with_raid_level_outside_list(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "2"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch(
|
||||
'ironic.drivers.modules.irmc.raid._validate_logical_drive_capacity')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_not_enough_valid_disks(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock,
|
||||
capacity_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "5"
|
||||
},
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1"
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_physical_disk_insufficient(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1",
|
||||
"physical_disks": [
|
||||
"0",
|
||||
"1",
|
||||
"2"
|
||||
]
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.InvalidParameterValue,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_physical_disk_not_enough_disks(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "5",
|
||||
"physical_disks": [
|
||||
"0",
|
||||
"1"
|
||||
]
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_physical_disk_incorrect_valid_disks(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "10",
|
||||
"physical_disks": [
|
||||
"0",
|
||||
"1",
|
||||
"2",
|
||||
"3",
|
||||
"4"
|
||||
]
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_physical_disk_outside_valid_disks_1(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1",
|
||||
"physical_disks": [
|
||||
"4",
|
||||
"5"
|
||||
]
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch(
|
||||
'ironic.drivers.modules.irmc.raid._validate_logical_drive_capacity')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_physical_disk_outside_valid_slots_2(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock,
|
||||
capacity_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "5",
|
||||
"physical_disks": [
|
||||
"0",
|
||||
"1",
|
||||
"2"
|
||||
]
|
||||
},
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "0",
|
||||
"physical_disks": [
|
||||
"4"
|
||||
]
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch(
|
||||
'ironic.drivers.modules.irmc.raid._validate_logical_drive_capacity')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_physical_disk')
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_duplicated_physical_disks(
|
||||
self, get_raid_adapter_mock, get_physical_disk_mock,
|
||||
capacity_mock):
|
||||
get_raid_adapter_mock.return_value = self.raid_adapter_profile
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1",
|
||||
"physical_disks": [
|
||||
"0",
|
||||
"1"
|
||||
]
|
||||
},
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1",
|
||||
"physical_disks": [
|
||||
"1",
|
||||
"2"
|
||||
]
|
||||
},
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
@mock.patch('ironic.drivers.modules.irmc.raid._get_raid_adapter')
|
||||
def test__fail_validation_with_difference_physical_disks_type(
|
||||
self, get_raid_adapter_mock):
|
||||
get_raid_adapter_mock.return_value = {
|
||||
"Server": {
|
||||
"HWConfigurationIrmc": {
|
||||
"Adapters": {
|
||||
"RAIDAdapter": [
|
||||
{
|
||||
"@AdapterId": "RAIDAdapter0",
|
||||
"@ConfigurationType": "Addressing",
|
||||
"Arrays": None,
|
||||
"LogicalDrives": None,
|
||||
"PhysicalDisks": {
|
||||
"PhysicalDisk": [
|
||||
{
|
||||
"@Number": "0",
|
||||
"Slot": 0,
|
||||
"Type": "HDD",
|
||||
},
|
||||
{
|
||||
"@Number": "1",
|
||||
"Slot": 1,
|
||||
"Type": "SSD",
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
target_raid_config = {
|
||||
"logical_disks": [
|
||||
{
|
||||
"size_gb": "50",
|
||||
"raid_level": "1",
|
||||
"physical_disks": [
|
||||
"0",
|
||||
"1"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
with task_manager.acquire(self.context, self.node.uuid,
|
||||
shared=True) as task:
|
||||
self.assertRaises(exception.IRMCOperationError,
|
||||
raid._validate_physical_disks,
|
||||
task.node, target_raid_config['logical_disks'])
|
||||
|
||||
    def test__fail_validate_capacity_raid_0(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "0"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_1(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "1"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_5(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "5"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_6(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "6"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_10(self):
        disk = {
            "size_gb": 3000,
            "raid_level": "10"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_raid_50(self):
        disk = {
            "size_gb": 5000,
            "raid_level": "50"
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    def test__fail_validate_capacity_with_physical_disk(self):
        disk = {
            "size_gb": 4000,
            "raid_level": "5",
            "physical_disks": [
                "0",
                "1",
                "3",
                "4"
            ]
        }
        self.assertRaises(exception.InvalidParameterValue,
                          raid._validate_logical_drive_capacity,
                          disk, self.valid_disk_slots)

    @mock.patch('ironic.common.raid.update_raid_info')
    @mock.patch('ironic.drivers.modules.irmc.raid.client')
    def test__commit_raid_config_with_logical_drives(self, client_mock,
                                                     update_raid_info_mock):
        client_mock.elcm.get_raid_adapter.return_value = {
            "Server": {
                "HWConfigurationIrmc": {
                    "Adapters": {
                        "RAIDAdapter": [
                            {
                                "@AdapterId": "RAIDAdapter0",
                                "@ConfigurationType": "Addressing",
                                "Arrays": {
                                    "Array": [
                                        {
                                            "@Number": 0,
                                            "@ConfigurationType": "Addressing",
                                            "PhysicalDiskRefs": {
                                                "PhysicalDiskRef": [
                                                    {
                                                        "@Number": "0"
                                                    },
                                                    {
                                                        "@Number": "1"
                                                    }
                                                ]
                                            }
                                        }
                                    ]
                                },
                                "LogicalDrives": {
                                    "LogicalDrive": [
                                        {
                                            "@Number": 0,
                                            "@Action": "None",
                                            "RaidLevel": "1",
                                            "Name": "LogicalDrive_0",
                                            "Size": {
                                                "@Unit": "GB",
                                                "#text": 465
                                            },
                                            "ArrayRefs": {
                                                "ArrayRef": [
                                                    {
                                                        "@Number": 0
                                                    }
                                                ]
                                            }
                                        }
                                    ]
                                },
                                "PhysicalDisks": {
                                    "PhysicalDisk": [
                                        {
                                            "@Number": "0",
                                            "@Action": "None",
                                            "Slot": 0,
                                            "PDStatus": "Operational"
                                        },
                                        {
                                            "@Number": "1",
                                            "@Action": "None",
                                            "Slot": 1,
                                            "PDStatus": "Operational"
                                        }
                                    ]
                                }
                            }
                        ]
                    }
                }
            }
        }

        # Note: the driver stores the logical drive size under a key with a
        # leading space (' size'), so the expected data mirrors that.
        expected_raid_config = [
            {'controller': 'RAIDAdapter0'},
            {'irmc_raid_info': {' size': {'#text': 465, '@Unit': 'GB'},
                                'logical_drive_number': 0,
                                'name': 'LogicalDrive_0',
                                'raid_level': '1'}},
            {'physical_drives': {'physical_drive': {'@Action': 'None',
                                                    '@Number': '0',
                                                    'PDStatus': 'Operational',
                                                    'Slot': 0}}},
            {'physical_drives': {'physical_drive': {'@Action': 'None',
                                                    '@Number': '1',
                                                    'PDStatus': 'Operational',
                                                    'Slot': 1}}}]

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            raid._commit_raid_config(task)
            client_mock.elcm.get_raid_adapter.assert_called_once_with(
                task.node.driver_info)
            update_raid_info_mock.assert_called_once_with(
                task.node, task.node.raid_config)
            self.assertEqual(task.node.raid_config['logical_disks'],
                             expected_raid_config)


class IRMCRaidConfigurationTestCase(test_common.BaseIRMCTest):

    def setUp(self):
        super(IRMCRaidConfigurationTestCase, self).setUp()
        self.config(enabled_raid_interfaces=['irmc'])
        self.raid_adapter_profile = {
            "Server": {
                "HWConfigurationIrmc": {
                    "Adapters": {
                        "RAIDAdapter": [
                            {
                                "@AdapterId": "RAIDAdapter0",
                                "@ConfigurationType": "Addressing",
                                "Arrays": None,
                                "LogicalDrives": None,
                                "PhysicalDisks": {
                                    "PhysicalDisk": [
                                        {
                                            "@Number": "0",
                                            "@Action": "None",
                                            "Slot": 0
                                        },
                                        {
                                            "@Number": "1",
                                            "@Action": "None",
                                            "Slot": 1
                                        },
                                        {
                                            "@Number": "2",
                                            "@Action": "None",
                                            "Slot": 2
                                        },
                                        {
                                            "@Number": "3",
                                            "@Action": "None",
                                            "Slot": 3
                                        }
                                    ]
                                }
                            }
                        ]
                    }
                }
            }
        }

    def test_fail_create_raid_without_target_raid_config(self):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.target_raid_config = {}
            raid_configuration = raid.IRMCRAID()

            self.assertRaises(exception.MissingParameterValue,
                              raid_configuration.create_configuration, task)

    @mock.patch('ironic.drivers.modules.irmc.raid._validate_physical_disks')
    @mock.patch('ironic.drivers.modules.irmc.raid._create_raid_adapter')
    @mock.patch('ironic.drivers.modules.irmc.raid._commit_raid_config')
    def test_create_raid_with_raid_1_and_0(self, commit_mock,
                                           create_raid_mock, validation_mock):
        # A "1+0" level in the target config is expected to reach the
        # validation step normalized to "10".
        expected_input = {
            "logical_disks": [
                {
                    "raid_level": "10"
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.target_raid_config = {
                "logical_disks": [
                    {
                        "raid_level": "1+0"
                    },
                ]
            }

            task.driver.raid.create_configuration(task)
            create_raid_mock.assert_called_once_with(task.node)
            validation_mock.assert_called_once_with(
                task.node, expected_input['logical_disks'])
            commit_mock.assert_called_once_with(task)

    @mock.patch('ironic.drivers.modules.irmc.raid._validate_physical_disks')
    @mock.patch('ironic.drivers.modules.irmc.raid._create_raid_adapter')
    @mock.patch('ironic.drivers.modules.irmc.raid._commit_raid_config')
    def test_create_raid_with_raid_5_and_0(self, commit_mock,
                                           create_raid_mock, validation_mock):
        expected_input = {
            "logical_disks": [
                {
                    "raid_level": "50"
                },
            ]
        }

        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.node.target_raid_config = {
                "logical_disks": [
                    {
                        "raid_level": "5+0"
                    },
                ]
            }

            task.driver.raid.create_configuration(task)
            create_raid_mock.assert_called_once_with(task.node)
            validation_mock.assert_called_once_with(
                task.node, expected_input['logical_disks'])
            commit_mock.assert_called_once_with(task)

    @mock.patch('ironic.drivers.modules.irmc.raid._delete_raid_adapter')
    def test_delete_raid_configuration(self, delete_raid_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            task.driver.raid.delete_configuration(task)
            delete_raid_mock.assert_called_once_with(task.node)

    @mock.patch('ironic.drivers.modules.irmc.raid._delete_raid_adapter')
    def test_delete_raid_configuration_return_cleared_raid_config(
            self, delete_raid_mock):
        with task_manager.acquire(self.context, self.node.uuid,
                                  shared=True) as task:
            expected_raid_config = {}

            task.driver.raid.delete_configuration(task)
            self.assertEqual(expected_raid_config, task.node.raid_config)
            delete_raid_mock.assert_called_once_with(task.node)

@ -21,6 +21,7 @@ from ironic.drivers import irmc
from ironic.drivers.modules import agent
from ironic.drivers.modules import inspector
from ironic.drivers.modules import ipmitool
from ironic.drivers.modules.irmc import raid
from ironic.drivers.modules import iscsi_deploy
from ironic.drivers.modules import noop
from ironic.tests.unit.db import base as db_base
@ -40,7 +41,7 @@ class IRMCHardwareTestCase(db_base.DbTestCase):
            enabled_inspect_interfaces=['irmc'],
            enabled_management_interfaces=['irmc'],
            enabled_power_interfaces=['irmc', 'ipmitool'],
            enabled_raid_interfaces=['no-raid', 'agent'],
            enabled_raid_interfaces=['no-raid', 'agent', 'irmc'],
            enabled_rescue_interfaces=['no-rescue', 'agent'])

    def test_default_interfaces(self):
@ -132,3 +133,27 @@ class IRMCHardwareTestCase(db_base.DbTestCase):
                                  noop.NoRAID)
            self.assertIsInstance(task.driver.rescue,
                                  noop.NoRescue)

    def test_override_with_raid_configuration(self):
        node = obj_utils.create_test_node(
            self.context, driver='irmc',
            deploy_interface='direct',
            rescue_interface='agent',
            raid_interface='irmc')
        with task_manager.acquire(self.context, node.id) as task:
            self.assertIsInstance(task.driver.boot,
                                  irmc.boot.IRMCVirtualMediaBoot)
            self.assertIsInstance(task.driver.console,
                                  ipmitool.IPMISocatConsole)
            self.assertIsInstance(task.driver.deploy,
                                  agent.AgentDeploy)
            self.assertIsInstance(task.driver.inspect,
                                  irmc.inspect.IRMCInspect)
            self.assertIsInstance(task.driver.management,
                                  irmc.management.IRMCManagement)
            self.assertIsInstance(task.driver.power,
                                  irmc.power.IRMCPower)
            self.assertIsInstance(task.driver.raid,
                                  raid.IRMCRAID)
            self.assertIsInstance(task.driver.rescue,
                                  agent.AgentRescue)

@ -0,0 +1,6 @@
---
features:
  - Adds an out-of-band RAID configuration solution for the iRMC driver,
    which makes the functionality available via manual cleaning.
    See `iRMC hardware type documentation <https://docs.openstack.org/ironic/latest/admin/drivers/irmc.html>`_
    for more details.

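The release note above refers to manual cleaning. As context for reviewers, exercising the new interface end to end would look roughly like the sketch below. It assumes a node enrolled with the ``irmc`` raid interface and a ``target_raid_config`` already set; the command shape is the generic Bare Metal manual-cleaning flow, not something introduced by this commit::

    $ openstack baremetal node manage <node-uuid-or-name>
    $ openstack baremetal node clean <node-uuid-or-name> \
        --clean-steps '[{"interface": "raid", "step": "create_configuration"}]'
    $ openstack baremetal node provide <node-uuid-or-name>

A matching ``delete_configuration`` clean step removes the RAID configuration again.
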
@ -122,6 +122,7 @@ ironic.hardware.interfaces.raid =
    agent = ironic.drivers.modules.agent:AgentRAID
    fake = ironic.drivers.modules.fake:FakeRAID
    idrac = ironic.drivers.modules.drac.raid:DracRAID
    irmc = ironic.drivers.modules.irmc.raid:IRMCRAID
    no-raid = ironic.drivers.modules.noop:NoRAID

ironic.hardware.interfaces.rescue =