Rewrite completely outdated information on the profile matching

Removes AHC and ready state docs which never worked since instack-undercloud
moved upstream.

Also moves out the not strictly related introspection data section.

Change-Id: Id16cd019db7c46e1d1c8547bfe37adafdba78c36
This commit is contained in:
Dmitry Tantsur 2015-12-15 13:16:53 +01:00
parent d5b8afe5cb
commit 4edf41ab19
8 changed files with 184 additions and 592 deletions


@ -7,8 +7,7 @@ In this chapter you will find advanced deployment of various |project| areas.
.. toctree::
Advanced Profile Matching <profile_matching>
Ready-State (BIOS, RAID) <ready_state>
Automated Health Check <automated_health_check>
Accessing Introspection Data <introspection_data>
Modifying default node configuration <node_config>
Node customization & third-party integration <extra_config>
Deploying with Heat Templates <template_deploy>


@ -1,290 +0,0 @@
Automated Health Check (AHC)
============================
Start with matching the nodes to profiles as described in
:doc:`profile_matching`.
Enable running benchmarks during introspection
----------------------------------------------
By default, the benchmark tests do not run during the introspection process.
You can enable this feature by setting *inspection_runbench = true* in the
**undercloud.conf** file prior to installing the undercloud.
If you want to enable this feature after installing the undercloud, you can set
*inspection_runbench = true* in **undercloud.conf**, and re-run
``openstack undercloud install``.
Analyze the collected benchmark data
------------------------------------
After introspection has completed, we can analyze the collected benchmark data.
* Run the ``ahc-report`` script to see a general overview of the hardware
::
$ source stackrc
$ ahc-report --categories
##### HPA Controller #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[]
########################
##### Megaraid Controller #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[]
#############################
##### AHCI Controller #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[]
#########################
##### IPMI SDR #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[]
##################
##### Firmware #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[(u'firmware', u'bios', u'date', u'01/01/2011'),
(u'firmware', u'bios', u'vendor', u'Seabios'),
(u'firmware', u'bios', u'version', u'0.5.1')]
##################
##### Memory Timing(RAM) #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[]
############################
##### Network Interfaces #####
3 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[(u'network', u'eth0', u'businfo', u'pci@0000:00:04.0'),
(u'network', u'eth0', u'busy-poll', u'off [fixed]'),
(u'network', u'eth0', u'driver', u'virtio_net'),
(u'network', u'eth0', u'fcoe-mtu', u'off [fixed]'),
(u'network', u'eth0', u'generic-receive-offload', u'on'),
(u'network', u'eth0', u'generic-segmentation-offload', u'on'),
(u'network', u'eth0', u'highdma', u'on [fixed]'),
(u'network', u'eth0', u'large-receive-offload', u'off [fixed]'),
(u'network', u'eth0', u'latency', u'0'),
(u'network', u'eth0', u'link', u'yes'),
(u'network', u'eth0', u'loopback', u'off [fixed]'),
(u'network', u'eth0', u'netns-local', u'off [fixed]'),
(u'network', u'eth0', u'ntuple-filters', u'off [fixed]'),
(u'network', u'eth0', u'receive-hashing', u'off [fixed]'),
(u'network', u'eth0', u'rx-all', u'off [fixed]'),
(u'network', u'eth0', u'rx-checksumming', u'on [fixed]'),
(u'network', u'eth0', u'rx-fcs', u'off [fixed]'),
(u'network', u'eth0', u'rx-vlan-filter', u'on [fixed]'),
(u'network', u'eth0', u'rx-vlan-offload', u'off [fixed]'),
(u'network', u'eth0', u'rx-vlan-stag-filter', u'off [fixed]'),
(u'network', u'eth0', u'rx-vlan-stag-hw-parse', u'off [fixed]'),
(u'network', u'eth0', u'scatter-gather', u'on'),
(u'network', u'eth0', u'scatter-gather/tx-scatter-gather', u'on'),
(u'network', u'eth0', u'scatter-gather/tx-scatter-gather-fraglist', u'on'),
(u'network', u'eth0', u'tcp-segmentation-offload', u'on'),
(u'network',
u'eth0',
u'tcp-segmentation-offload/tx-tcp-ecn-segmentation',
u'on'),
(u'network', u'eth0', u'tcp-segmentation-offload/tx-tcp-segmentation', u'on'),
(u'network',
u'eth0',
u'tcp-segmentation-offload/tx-tcp6-segmentation',
u'on'),
(u'network', u'eth0', u'tx-checksumming', u'on'),
(u'network',
u'eth0',
u'tx-checksumming/tx-checksum-fcoe-crc',
u'off [fixed]'),
(u'network', u'eth0', u'tx-checksumming/tx-checksum-ip-generic', u'on'),
(u'network', u'eth0', u'tx-checksumming/tx-checksum-ipv6', u'off [fixed]'),
(u'network', u'eth0', u'tx-checksumming/tx-checksum-sctp', u'off [fixed]'),
(u'network', u'eth0', u'tx-fcoe-segmentation', u'off [fixed]'),
(u'network', u'eth0', u'tx-gre-segmentation', u'off [fixed]'),
(u'network', u'eth0', u'tx-gso-robust', u'off [fixed]'),
(u'network', u'eth0', u'tx-ipip-segmentation', u'off [fixed]'),
(u'network', u'eth0', u'tx-lockless', u'off [fixed]'),
(u'network', u'eth0', u'tx-mpls-segmentation', u'off [fixed]'),
(u'network', u'eth0', u'tx-nocache-copy', u'on'),
(u'network', u'eth0', u'tx-sit-segmentation', u'off [fixed]'),
(u'network', u'eth0', u'tx-udp_tnl-segmentation', u'off [fixed]'),
(u'network', u'eth0', u'tx-vlan-offload', u'off [fixed]'),
(u'network', u'eth0', u'tx-vlan-stag-hw-insert', u'off [fixed]'),
(u'network', u'eth0', u'udp-fragmentation-offload', u'on'),
(u'network', u'eth0', u'vlan-challenged', u'off [fixed]')]
############################
##### Processors #####
1 identical systems :
[u'B9FE637A-5B97-4A52-BFDA-9244CEA65E23']
[(u'cpu', u'logical', u'number', u'2'),
(u'cpu', u'physical', u'number', u'2'),
(u'cpu',
u'physical_0',
u'flags',
u'fpu fpu_exception wp de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx x86-64 rep_good nopl pni cx16 hypervisor lahf_lm'),
(u'cpu', u'physical_0', u'frequency', u'2000000000'),
(u'cpu', u'physical_0', u'physid', u'401'),
(u'cpu', u'physical_0', u'product', u'QEMU Virtual CPU version 1.5.3'),
(u'cpu', u'physical_0', u'vendor', u'Intel Corp.'),
(u'cpu',
u'physical_1',
u'flags',
u'fpu fpu_exception wp de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx x86-64 rep_good nopl pni cx16 hypervisor lahf_lm'),
(u'cpu', u'physical_1', u'frequency', u'2000000000'),
(u'cpu', u'physical_1', u'physid', u'402'),
(u'cpu', u'physical_1', u'product', u'QEMU Virtual CPU version 1.5.3'),
(u'cpu', u'physical_1', u'vendor', u'Intel Corp.')]
2 identical systems :
[u'7F8831F1-0D81-464E-A767-7577DF49AAA5', u'7884BC95-6EF8-4447-BDE5-D19561718B29']
[(u'cpu', u'logical', u'number', u'1'),
(u'cpu', u'physical', u'number', u'1'),
(u'cpu',
u'physical_0',
u'flags',
u'fpu fpu_exception wp de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx x86-64 rep_good nopl pni cx16 hypervisor lahf_lm'),
(u'cpu', u'physical_0', u'frequency', u'2000000000'),
(u'cpu', u'physical_0', u'physid', u'401'),
(u'cpu', u'physical_0', u'product', u'QEMU Virtual CPU version 1.5.3'),
(u'cpu', u'physical_0', u'vendor', u'Intel Corp.')]
In the example above, we have two nodes with a single CPU and one with two CPUs.
* We can also look for performance outliers
::
$ ahc-report --outliers
Group 0 : Checking logical disks perf
standalone_randread_4k_KBps : INFO : sda : Group performance : min=45296.00, mean=53604.67, max=67923.00, stddev=12453.21
standalone_randread_4k_KBps : ERROR : sda : Group's variance is too important : 23.23% of 53604.67 whereas limit is set to 15.00%
standalone_randread_4k_KBps : ERROR : sda : Group performance : UNSTABLE
standalone_read_1M_IOps : INFO : sda : Group performance : min= 1199.00, mean= 1259.00, max= 1357.00, stddev= 85.58
standalone_read_1M_IOps : INFO : sda : Group performance = 1259.00 : CONSISTENT
standalone_randread_4k_IOps : INFO : sda : Group performance : min=11320.00, mean=13397.33, max=16977.00, stddev= 3113.39
standalone_randread_4k_IOps : ERROR : sda : Group's variance is too important : 23.24% of 13397.33 whereas limit is set to 15.00%
standalone_randread_4k_IOps : ERROR : sda : Group performance : UNSTABLE
standalone_read_1M_KBps : INFO : sda : Group performance : min=1231155.00, mean=1292799.67, max=1393152.00, stddev=87661.11
standalone_read_1M_KBps : INFO : sda : Group performance = 1292799.67 : CONSISTENT
Group 0 : Checking CPU perf
bogomips : INFO : logical_0 : Group performance : min= 4199.99, mean= 4199.99, max= 4199.99, stddev= 0.00
bogomips : INFO : logical_0 : Group performance = 4199.99 : CONSISTENT
bogomips : INFO : logical_1 : Group performance : min= 4199.99, mean= 4199.99, max= 4199.99, stddev= nan
bogomips : INFO : logical_1 : Group performance = 4199.99 : CONSISTENT
loops_per_sec : INFO : logical_0 : Group performance : min= 379.00, mean= 398.67, max= 418.00, stddev= 19.50
loops_per_sec : INFO : logical_0 : Group performance = 398.67 : CONSISTENT
loops_per_sec : INFO : logical_1 : Group performance : min= 423.00, mean= 423.00, max= 423.00, stddev= nan
loops_per_sec : INFO : logical_1 : Group performance = 423.00 : CONSISTENT
loops_per_sec : INFO : CPU Effi. : Group performance : min= 99.28, mean= inf, max= inf, stddev= nan
loops_per_sec : INFO : CPU Effi. : Group performance = inf % : CONSISTENT
Group 0 : Checking Memory perf
Memory benchmark 1K : INFO : logical_0 : Group performance : min= 1677.00, mean= 1698.33, max= 1739.00, stddev= 35.23
Memory benchmark 1K : INFO : logical_0 : Group performance = 1698.33 : CONSISTENT
Memory benchmark 1K : INFO : logical_1 : Group performance : min= 1666.00, mean= 1666.00, max= 1666.00, stddev= nan
Memory benchmark 1K : INFO : logical_1 : Group performance = 1666.00 : CONSISTENT
Memory benchmark 1K : INFO : Thread effi. : Group performance : min= 71.54, mean= 71.54, max= 71.54, stddev= nan
Memory benchmark 1K : INFO : Thread effi. : Group performance = 71.54 : CONSISTENT
Memory benchmark 1K : INFO : Forked Effi. : Group performance : min= 101.97, mean= 101.97, max= 101.97, stddev= nan
Memory benchmark 1K : INFO : Forked Effi. : Group performance = 101.97 % : CONSISTENT
Memory benchmark 4K : INFO : logical_0 : Group performance : min= 4262.00, mean= 4318.00, max= 4384.00, stddev= 61.61
Memory benchmark 4K : INFO : logical_0 : Group performance = 4318.00 : CONSISTENT
Memory benchmark 4K : INFO : logical_1 : Group performance : min= 4363.00, mean= 4363.00, max= 4363.00, stddev= nan
Memory benchmark 4K : INFO : logical_1 : Group performance = 4363.00 : CONSISTENT
Memory benchmark 4K : INFO : Thread effi. : Group performance : min= 77.75, mean= 77.75, max= 77.75, stddev= nan
Memory benchmark 4K : INFO : Thread effi. : Group performance = 77.75 : CONSISTENT
Memory benchmark 4K : INFO : Forked Effi. : Group performance : min= 95.98, mean= 95.98, max= 95.98, stddev= nan
Memory benchmark 4K : INFO : Forked Effi. : Group performance = 95.98 % : CONSISTENT
Memory benchmark 1M : INFO : logical_0 : Group performance : min= 7734.00, mean= 7779.00, max= 7833.00, stddev= 50.11
Memory benchmark 1M : INFO : logical_0 : Group performance = 7779.00 : CONSISTENT
Memory benchmark 1M : INFO : logical_1 : Group performance : min= 7811.00, mean= 7811.00, max= 7811.00, stddev= nan
Memory benchmark 1M : INFO : logical_1 : Group performance = 7811.00 : CONSISTENT
Memory benchmark 1M : INFO : Thread effi. : Group performance : min= 101.20, mean= 101.20, max= 101.20, stddev= nan
Memory benchmark 1M : INFO : Thread effi. : Group performance = 101.20 : CONSISTENT
Memory benchmark 1M : INFO : Forked Effi. : Group performance : min= 99.26, mean= 99.26, max= 99.26, stddev= nan
Memory benchmark 1M : INFO : Forked Effi. : Group performance = 99.26 % : CONSISTENT
Memory benchmark 16M : INFO : logical_0 : Group performance : min= 5986.00, mean= 6702.33, max= 7569.00, stddev= 802.14
Memory benchmark 16M : ERROR : logical_0 : Group's variance is too important : 11.97% of 6702.33 whereas limit is set to 7.00%
Memory benchmark 16M : ERROR : logical_0 : Group performance : UNSTABLE
Memory benchmark 16M : INFO : logical_1 : Group performance : min= 7030.00, mean= 7030.00, max= 7030.00, stddev= nan
Memory benchmark 16M : INFO : logical_1 : Group performance = 7030.00 : CONSISTENT
Memory benchmark 16M : INFO : Thread effi. : Group performance : min= 109.94, mean= 109.94, max= 109.94, stddev= nan
Memory benchmark 16M : INFO : Thread effi. : Group performance = 109.94 : CONSISTENT
Memory benchmark 16M : INFO : Forked Effi. : Group performance : min= 93.14, mean= 93.14, max= 93.14, stddev= nan
Memory benchmark 16M : INFO : Forked Effi. : Group performance = 93.14 % : CONSISTENT
Memory benchmark 128M : INFO : logical_0 : Group performance : min= 6021.00, mean= 6387.00, max= 7084.00, stddev= 603.87
Memory benchmark 128M : ERROR : logical_0 : Group's variance is too important : 9.45% of 6387.00 whereas limit is set to 7.00%
Memory benchmark 128M : ERROR : logical_0 : Group performance : UNSTABLE
Memory benchmark 128M : INFO : logical_1 : Group performance : min= 7089.00, mean= 7089.00, max= 7089.00, stddev= nan
Memory benchmark 128M : INFO : logical_1 : Group performance = 7089.00 : CONSISTENT
Memory benchmark 128M : INFO : Thread effi. : Group performance : min= 107.11, mean= 107.11, max= 107.11, stddev= nan
Memory benchmark 128M : INFO : Thread effi. : Group performance = 107.11 : CONSISTENT
Memory benchmark 128M : INFO : Forked Effi. : Group performance : min= 95.55, mean= 95.55, max= 95.55, stddev= nan
Memory benchmark 128M : INFO : Forked Effi. : Group performance = 95.55 % : CONSISTENT
Memory benchmark 256M : WARNING : Thread effi. : Benchmark not run on this group
Memory benchmark 256M : WARNING : Forked Effi. : Benchmark not run on this group
Memory benchmark 1G : INFO : logical_0 : Group performance : min= 6115.00, mean= 6519.67, max= 7155.00, stddev= 557.05
Memory benchmark 1G : ERROR : logical_0 : Group's variance is too important : 8.54% of 6519.67 whereas limit is set to 7.00%
Memory benchmark 1G : ERROR : logical_0 : Group performance : UNSTABLE
Memory benchmark 1G : INFO : logical_1 : Group performance : min= 7136.00, mean= 7136.00, max= 7136.00, stddev= nan
Memory benchmark 1G : INFO : logical_1 : Group performance = 7136.00 : CONSISTENT
Memory benchmark 1G : INFO : Thread effi. : Group performance : min= 104.29, mean= 104.29, max= 104.29, stddev= nan
Memory benchmark 1G : INFO : Thread effi. : Group performance = 104.29 : CONSISTENT
Memory benchmark 1G : INFO : Forked Effi. : Group performance : min= 98.98, mean= 98.98, max= 98.98, stddev= nan
Memory benchmark 1G : INFO : Forked Effi. : Group performance = 98.98 % : CONSISTENT
Memory benchmark 2G : INFO : logical_0 : Group performance : min= 6402.00, mean= 6724.33, max= 7021.00, stddev= 310.30
Memory benchmark 2G : INFO : logical_0 : Group performance = 6724.33 : CONSISTENT
Memory benchmark 2G : INFO : logical_1 : Group performance : min= 7167.00, mean= 7167.00, max= 7167.00, stddev= nan
Memory benchmark 2G : INFO : logical_1 : Group performance = 7167.00 : CONSISTENT
Memory benchmark 2G : WARNING : Thread effi. : Benchmark not run on this group
Memory benchmark 2G : WARNING : Forked Effi. : Benchmark not run on this group
The output above is from a virtual setup, so the benchmarks are not accurate.
However, we can see that the variance of the ``standalone_randread_4k_KBps``
metric was above the threshold, so the group is marked as unstable.
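The UNSTABLE verdict is a simple relative-standard-deviation check. A minimal Python sketch reproduces it for the ``standalone_randread_4k_IOps`` group above; the middle sample (11895 IOps) is reconstructed from the reported mean, so treat the data as illustrative:

```python
import statistics

def variance_check(samples, limit_pct=15.0):
    """Flag a benchmark group UNSTABLE when the relative stddev is too high."""
    mean = statistics.mean(samples)
    stddev = statistics.stdev(samples)  # sample stddev matches the reported 3113.39
    pct = stddev / mean * 100
    return ("UNSTABLE" if pct > limit_pct else "CONSISTENT"), pct

# min/max come from the report; the middle value is reconstructed from the mean
verdict, pct = variance_check([11320, 11895, 16977])
print(verdict, round(pct, 2))  # UNSTABLE 23.24
```

This reproduces the 23.24% variance reported for the group, which exceeds the 15% limit.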
Exclude outliers from deployment
--------------------------------
We will use the sample reports above to construct some matching rules
for our deployment. Refer to :doc:`profile_matching` for details.
* Add a rule to the **control.specs** file to match the system with two CPUs
::
[
('cpu', 'logical', 'number', 'ge(2)'),
('disk', '$disk', 'size', 'gt(4)'),
('network', '$eth', 'ipv4', 'network(192.0.2.0/24)'),
('memory', 'total', 'size', 'ge(4294967296)'),
]
* Add a rule to the **control.specs** file to exclude systems with below
average disk performance from the control role
::
[
('disk', '$disk', 'standalone_randread_4k_IOps', 'gt(13397)'),
('cpu', 'logical', 'number', 'ge(2)'),
('disk', '$disk', 'size', 'gt(4)'),
('network', '$eth', 'ipv4', 'network(192.0.2.0/24)'),
('memory', 'total', 'size', 'ge(4294967296)'),
]
* Now rerun the matching and proceed with remaining steps from
:doc:`profile_matching`.


@ -0,0 +1,28 @@
.. _introspection_data:
Accessing additional introspection data
---------------------------------------
Every introspection run (as described in
:doc:`../basic_deployment/basic_deployment_cli`) collects a lot of additional
facts about the hardware and stores them as JSON in Swift. The Swift container
name is ``ironic-inspector`` and can be modified in
**/etc/ironic-inspector/inspector.conf**. The Swift object name is stored under
the ``hardware_swift_object`` key in the Ironic node's extra field.
As an example, to download the swift data for all nodes to a local directory
and use that to collect a list of node MAC addresses::
# You will need the ironic-inspector user password
# from /etc/ironic-inspector/inspector.conf:
export IRONIC_INSPECTOR_PASSWORD=
# Download the extra introspection data from swift:
for node in $(ironic node-list | grep -v UUID| awk '{print $2}');
do swift -U service:ironic -K $IRONIC_INSPECTOR_PASSWORD download ironic-inspector extra_hardware-$node;
done
# Use jq to access the local data - for example gather macs:
for f in extra_hardware-*;
do cat $f | jq -r 'map(select(.[0]=="network" and .[2]=="serial"))';
done


@ -1,222 +1,187 @@
Advanced Profile Matching
=========================
Profile matching allows a user to specify precisely which nodes will receive
which flavor. Here are additional setup steps to take advantage of profile
matching. In this document, a "profile" is a capability assigned to both an
ironic node and a nova flavor to create a link between them.
Enable advanced profile matching
--------------------------------
After a profile is assigned to a flavor, nova will only deploy it on ironic
nodes with the same profile. The deployment will fail if there are not enough
ironic nodes tagged with a profile.
* Install the ahc-tools package::
sudo yum install -y ahc-tools
* Add the credentials for Ironic and Swift to the
**/etc/ahc-tools/ahc-tools.conf** file.
These will be the same credentials that ironic-inspector uses,
and can be copied from **/etc/ironic-inspector/inspector.conf**::
$ sudo -i
# mkdir -p /etc/ahc-tools
# sed 's/\[discoverd/\[ironic/' /etc/ironic-inspector/inspector.conf > /etc/ahc-tools/ahc-tools.conf
# chmod 0600 /etc/ahc-tools/ahc-tools.conf
# exit
Example::
[ironic]
os_auth_url = http://192.0.2.1:5000/v2.0
os_username = ironic
os_password = <PASSWORD>
os_tenant_name = service
[swift]
os_auth_url = http://192.0.2.1:5000/v2.0
os_username = ironic
os_password = <PASSWORD>
os_tenant_name = service
State file
----------
The configuration file **/etc/ahc-tools/edeploy/state** defines how many nodes
of each profile we want to match. This file contains a list of tuples, each
with a profile name and the number of nodes for that profile. The ``*`` symbol
can be used to match any number of nodes, but make sure such a tuple goes last.
For example to start with 1 control node and any number of compute ones,
populate this file with the following contents::
[('control', '1'), ('compute', '*')]
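The way these tuples are consumed can be pictured as a first-match allocation over the available nodes; the following is a rough sketch of the idea, not the actual ahc-tools implementation:

```python
def allocate(state, node_count):
    """Assign profiles to node_count nodes in state-file order.

    state is a list of (profile, count) tuples; a count of '*'
    consumes all remaining nodes, so such a tuple must come last.
    """
    assigned, remaining = [], node_count
    for profile, count in state:
        n = remaining if count == "*" else min(int(count), remaining)
        assigned.extend([profile] * n)
        remaining -= n
    return assigned

# 1 control node, any number of compute nodes, 4 nodes available:
print(allocate([("control", "1"), ("compute", "*")], 4))
# ['control', 'compute', 'compute', 'compute']
```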
Matching rules
--------------
These matching rules will determine what profile gets assigned to each node
and are stored in files named **/etc/ahc-tools/edeploy/PROFILE.specs** for
each profile defined in **/etc/ahc-tools/edeploy/state**.
Open the **/etc/ahc-tools/edeploy/control.specs** file.
This is a JSON-like file that might look like this::
[
('disk', '$disk', 'size', 'gt(4)'),
('network', '$eth', 'ipv4', 'network(192.0.2.0/24)'),
('memory', 'total', 'size', 'ge(4294967296)'),
]
These rules match on the data collected during introspection.
Note that disk size is in GiB, while memory size is in KiB.
There is a set of helper functions to make matching more flexible.
* network() : the network interface shall be in the specified network
* gt(), ge(), lt(), le() : greater than (or equal), lower than (or equal)
* in() : the item to match shall be in a specified set
* regexp() : match a regular expression
* or(), and(), not(): boolean functions. or() and and() take 2 parameters
and not() one parameter.
There are also placeholders, *$disk* and *$eth* in the above example.
These will store the value in that place for later use.
* For example if we had a "fact" from introspection::
('disk', 'sda', 'size', '40')
This would match the first rule in the above control.specs file,
and we would store ``"disk": "sda"``.
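To make the rule semantics concrete, here is a stripped-down sketch of evaluating a single spec rule, implementing only the ``gt()`` helper and placeholder capture (the real matching logic lives in the hardware-detection library used by ahc-tools):

```python
import re

def match_spec(spec, facts):
    """Match one ('category', 'item', 'attribute', 'test') rule.

    facts is a list of 4-tuples as collected during introspection.
    An item starting with '$' is a placeholder: it matches any item
    and records which one matched.  Only the gt() helper is shown.
    """
    category, item, attribute, test = spec
    for f_cat, f_item, f_attr, f_value in facts:
        if f_cat != category or f_attr != attribute:
            continue
        if not item.startswith("$") and f_item != item:
            continue
        m = re.match(r"gt\((\d+)\)$", test)
        ok = float(f_value) > float(m.group(1)) if m else f_value == test
        if ok:
            captured = {item[1:]: f_item} if item.startswith("$") else {}
            return True, captured
    return False, {}

# The "fact" from the text matches the first control.specs rule:
ok, captured = match_spec(("disk", "$disk", "size", "gt(4)"),
                          [("disk", "sda", "size", "40")])
print(ok, captured)  # True {'disk': 'sda'}
```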
Running advanced profile matching
---------------------------------
* After adjusting the matching rules, we are ready to do the matching::
sudo ahc-match
* This will attempt to match all of the available nodes to the roles
we have defined in the **/etc/ahc-tools/edeploy/state** file.
When a node matches a role, the role is added to the node in Ironic in
the form of a capability. We can check this with ``ironic node-show``::
[stack@instack ~]# ironic node-show b73fb5fa-1a2c-49c6-b38e-8de41e3c0532 | grep properties -A2
| properties | {u'memory_mb': u'4096', u'cpu_arch': u'x86_64', u'local_gb': u'40', |
| | u'cpus': u'1', u'capabilities': u'profile:control,boot_option:local'} |
| instance_uuid | None
* In the above output, we can see that the control profile is added
as a capability to the node. Next we will need to create flavors in Nova
that actually map to these profiles.
[Optional] Manually add the profiles to the nodes
-------------------------------------------------
To use the matching functionality without the AHC tools, we can instead add
the profile "tags" manually. The example below will add the "control" profile
to a node::
ironic node-update <UUID> replace properties/capabilities='profile:control,boot_option:local'
There are two ways to assign a profile to a node: you can assign it directly,
or specify one or more suitable profiles for the deployment command to choose
from. Either can be done manually or via introspection rules.
.. note::
Do not miss the "boot_option" part from any commands below,
otherwise your deployment won't work as expected.
We cannot update only a single key of the capabilities dictionary, so we
need to specify both the profile and the boot_option above. Otherwise, the
boot_option key will get removed.
Create flavors to use profile matching
--------------------------------------
Default profile flavors should have been created when the undercloud was
installed, and they will be usable without modification in most environments.
However, if custom profile flavors are needed, they can be created as follows.
In order to use the profiles assigned to the Ironic nodes, Nova needs to have
flavors that have the property ``capabilities:profile`` set to the intended
profile.
For example, with just the compute and control profiles:
* Create the flavors::
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 control
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 compute
.. note::
The values for ram, disk, and vcpus should be set to a minimal lower bound,
as Nova will still check that the Ironic nodes have at least this much.
* Assign the properties::
openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute
openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
Manual profile tagging
----------------------
To assign a profile to a node directly, issue the following command::
ironic node-update <UUID OR NAME> replace properties/capabilities=profile:<PROFILE>,boot_option:local
Alternatively, you can provide a number of profiles as capabilities in the
form of ``<PROFILE>_profile:1``, which can later be automatically converted to
one assigned profile (see `Use the flavors to deploy`_ for details).
For example::
ironic node-update <UUID OR NAME> replace properties/capabilities=compute_profile:1,control_profile:1,boot_option:local
Finally, to clear all profile information from an available node, use::
ironic node-update <UUID OR NAME> replace properties/capabilities=boot_option:local
.. note::
We cannot update only a single key of the capabilities dictionary, so we
need to specify both the profile and the boot_option above. Otherwise, the
boot_option key will get removed.
Also see :ref:`instackenv` for details on how to set a profile in the
``instackenv.json`` file.
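The note above exists because the capabilities are stored as one comma-separated string, so changing one key means rebuilding the whole string. A sketch of that round-trip (``set_capability`` is a hypothetical helper, not part of any ironic client):

```python
def set_capability(caps, key, value):
    """Rebuild a 'k1:v1,k2:v2' capabilities string with one key changed."""
    parsed = dict(item.split(":", 1) for item in caps.split(",") if item)
    parsed[key] = value
    return ",".join(f"{k}:{v}" for k, v in parsed.items())

caps = "profile:compute,boot_option:local"
print(set_capability(caps, "profile", "control"))
# profile:control,boot_option:local
```

Updating only ``profile`` this way keeps ``boot_option:local`` intact, which is exactly what passing a bare ``profile:<PROFILE>`` string to ``node-update`` would lose.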
Automated profile tagging
-------------------------
`Introspection rules`_ can be used to conduct automatic profile assignment
based on data received from the introspection ramdisk. A set of introspection
rules should be created before introspection that set either the ``profile``
or ``<PROFILE>_profile`` capability on a node.
The exact structure of the data received from the ramdisk depends on the
ramdisk implementation, its enabled plugins, and the enabled *ironic-inspector*
processing hooks. The most basic properties are ``cpus``, ``cpu_arch``,
``local_gb`` and ``memory_mb``, which represent CPU number, architecture,
local hard drive size in GiB and RAM size in MiB. See
:ref:`introspection_data` for more details on what our current ramdisk
provides.
For example, imagine we have the following hardware: machines with disks
larger than 1 TiB for object storage, and machines with smaller disks for
compute and controller roles. We also need to make sure that no hardware with
seriously insufficient properties gets into the fleet at all.
Create a JSON file, for example ``rules.json``, with the following contents::
[
{
"description": "Fail introspection for unexpected nodes",
"conditions": [
{"op": "lt", "field": "memory_mb", "value": 4096}
],
"actions": [
{"action": "fail", "message": "Memory too low, expected at least 4 GiB"}
]
},
{
"description": "Assign profile for object storage",
"conditions": [
{"op": "ge", "field": "local_gb", "value": 1024}
],
"actions": [
{"action": "set-capability", "name": "profile", "value": "swift"}
]
},
{
"description": "Assign possible profiles for compute and controller",
"conditions": [
{"op": "lt", "field": "local_gb", "value": 1024},
{"op": "ge", "field": "local_gb", "value": 40}
],
"actions": [
{"action": "set-capability", "name": "compute_profile", "value": "1"},
{"action": "set-capability", "name": "control_profile", "value": "1"}
]
}
]
.. note::
This example may need to be adjusted to work on a virtual environment.
Before introspection load this file into *ironic-inspector*::
openstack baremetal introspection rule import /path/to/rules.json
Then (re)start introspection. Check the assigned or possible profiles using
the following command::
openstack overcloud profiles list
If you've made a mistake in the introspection rules, you can delete them all::
openstack baremetal introspection rule purge
Then re-upload the updated rules file and restart introspection.
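The rule conditions are plain field comparisons; the following sketch shows how the three rules above classify a node (simplified — the real ironic-inspector rule engine supports JSON paths and many more operators and actions):

```python
import operator

OPS = {"lt": operator.lt, "ge": operator.ge}

def apply_rules(rules, node):
    """Return capabilities set on a node, or raise when a 'fail' rule fires."""
    caps = {}
    for rule in rules:
        if all(OPS[c["op"]](node[c["field"]], c["value"])
               for c in rule["conditions"]):
            for action in rule["actions"]:
                if action["action"] == "fail":
                    raise RuntimeError(action["message"])
                caps[action["name"]] = action["value"]  # set-capability
    return caps

rules = [
    {"conditions": [{"op": "lt", "field": "memory_mb", "value": 4096}],
     "actions": [{"action": "fail", "message": "Memory too low"}]},
    {"conditions": [{"op": "ge", "field": "local_gb", "value": 1024}],
     "actions": [{"action": "set-capability", "name": "profile",
                  "value": "swift"}]},
    {"conditions": [{"op": "lt", "field": "local_gb", "value": 1024},
                    {"op": "ge", "field": "local_gb", "value": 40}],
     "actions": [{"action": "set-capability", "name": "compute_profile",
                  "value": "1"},
                 {"action": "set-capability", "name": "control_profile",
                  "value": "1"}]},
]

# A node with 8 GiB of RAM and a 100 GiB disk gets both possible profiles:
print(apply_rules(rules, {"memory_mb": 8192, "local_gb": 100}))
# {'compute_profile': '1', 'control_profile': '1'}
```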
Use the flavors to deploy
-------------------------
By default, all nodes are deployed to the **baremetal** flavor.
The |project| CLI has options to support more advanced role matching.
To use profile matching you have to `Create flavors to use profile matching`_
first, then use specific flavors for deployment. For each node role set
``--ROLE-flavor`` to the name of the flavor and ``--ROLE-scale`` to the number
of nodes you want to end up with for this role.
After profiles and possible profiles are tagged either manually or during
the introspection, we need to turn possible profiles into an appropriate
number of profiles and validate the result. Continuing with the example with
only control and compute profiles::
openstack overcloud profiles match --control-flavor control --control-scale 1 --compute-flavor compute --compute-scale 1
* This command first tries to find enough nodes with the ``profile`` capability.
* If there are not enough such nodes, it then looks at available nodes with
``PROFILE_profile`` capabilities. If enough such nodes are found, their
``profile`` capabilities are updated to make the choice permanent.
This command should exit without errors (and optionally without warnings).
You can see the resulting profiles in the node list::
$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| 581c0aca-64f0-48a8-9881-bba3c2882d6a | | available | control | compute, control |
| ace8ae8d-d18f-4122-b6cf-e8418c7bb04b | | available | compute | compute, control |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
Make sure to provide the same arguments for deployment later on::
openstack overcloud deploy --control-flavor control --control-scale 1 --compute-flavor compute --compute-scale 1 --templates
Use the flavors to scale
-------------------------
The process to scale an overcloud that uses our advanced profiles is the same
as the process used when we only have the **baremetal** flavor.
.. note::
The original overcloud must have been deployed as above in order to scale
using advanced profiles, as the flavor to role mapping happens then.
* Scale the overcloud (the example below adds two more nodes to the compute
  role)::

    openstack overcloud scale stack overcloud overcloud -r Compute-1 -n 2
.. _Introspection rules: http://docs.openstack.org/developer/ironic-inspector/usage.html#introspection-rules
@@ -1,112 +0,0 @@
Ready-State (BIOS, RAID)
========================
.. note:: Ready-state configuration currently works only with Dell DRAC
machines.
Ready-state configuration can be used to prepare bare-metal resources for
deployment. It includes BIOS and RAID configuration based on a predefined
profile.
To define the target BIOS and RAID configuration for a deployment profile, you
need to create a JSON-like ``<profile>.cmdb`` file in
``/etc/ahc-tools/edeploy``. The configuration will be applied only to nodes
that match the ``<profile>.specs`` rules.
Define the target BIOS configuration
------------------------------------
To define a BIOS setting, list the name of the setting and its target
value::
[
{
'bios_settings': {'ProcVirtualization': 'Enabled'}
}
]
Define the target RAID configuration
------------------------------------
The RAID configuration can be defined in two ways: either by listing the IDs
of the physical disks, or by letting Ironic assign physical disks to the
RAID volume.
When providing a list of physical disk IDs, the following attributes are required:
``controller``, ``size_gb``, ``raid_level`` and the list of ``physical_disks``.
``controller`` should be the FQDD of the RAID controller assigned by the DRAC
card. Similarly, the list of ``physical_disks`` should be the FQDDs of physical
disks assigned by the DRAC card. An example::
[
{
'logical_disks': [
{'controller': 'RAID.Integrated.1-1',
'size_gb': 100,
'physical_disks': [
'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1',
'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1'],
'raid_level': '5'},
]
}
]
When letting Ironic assign physical disks to the RAID volume, the following
attributes are required: ``controller``, ``size_gb``, ``raid_level`` and the
``number_of_physical_disks``. ``controller`` should be the FQDD of the RAID
controller assigned by the DRAC card. An example::
[
{
'logical_disks': [
{'controller': 'RAID.Integrated.1-1',
'size_gb': 50,
'raid_level': '1',
'number_of_physical_disks': 2},
]
}
]
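These two styles are mutually exclusive for a given logical disk: an entry
should carry either ``physical_disks`` or ``number_of_physical_disks``, never
both. A small standalone Python sketch of that constraint (illustrative only,
not part of ahc-tools; the function name is hypothetical):

```python
def validate_logical_disk(ld):
    """Validate one entry of a 'logical_disks' list from a .cmdb profile."""
    required = {"controller", "size_gb", "raid_level"}
    missing = required - set(ld)
    if missing:
        raise ValueError("missing attributes: %s" % ", ".join(sorted(missing)))
    # Exactly one way of choosing the disks must be present.
    if ("physical_disks" in ld) == ("number_of_physical_disks" in ld):
        raise ValueError(
            "specify either physical_disks or number_of_physical_disks")


# Passes silently: required keys present, one disk-selection style used.
validate_logical_disk({"controller": "RAID.Integrated.1-1", "size_gb": 50,
                       "raid_level": "1", "number_of_physical_disks": 2})
```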
Complete example for a ``control.cmdb``
---------------------------------------
::
[
{
'bios_settings': {'ProcVirtualization': 'Enabled'},
'logical_disks': [
{'controller': 'RAID.Integrated.1-1',
'size_gb': 50,
'raid_level': '1',
'number_of_physical_disks': 2,
'disk_type': 'hdd',
'interface_type': 'sas',
'volume_name': 'root_volume',
'is_root_volume': True},
{'controller': 'RAID.Integrated.1-1',
'size_gb': 100,
'physical_disks': [
'Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1',
'Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1',
'Disk.Bay.2:Enclosure.Internal.0-1:RAID.Integrated.1-1'],
'raid_level': '5',
'volume_name': 'data_volume1'}
]
}
]
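As a sanity check on such profiles, usable volume capacity can be estimated
from the RAID level: for instance, the 100 GB RAID 5 volume above spans three
physical disks, so each disk must hold roughly 50 GB of it. The helper below
is a hypothetical rule-of-thumb sketch (it ignores controller and metadata
overhead and is not part of any tool):

```python
def usable_gb(raid_level, disk_gb, n_disks):
    """Rough usable capacity for common RAID levels (no overhead accounted)."""
    if raid_level == "0":
        return disk_gb * n_disks          # striping, no redundancy
    if raid_level == "1":
        return disk_gb * (n_disks // 2)   # mirrored pairs
    if raid_level == "5":
        return disk_gb * (n_disks - 1)    # one disk's worth of parity
    raise ValueError("unsupported RAID level: %s" % raid_level)


print(usable_gb("5", 50, 3))  # three 50 GB disks in RAID 5 yield 100 GB
```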
Trigger the ready-state configuration
-------------------------------------
Continue with matching the nodes to profiles as described in
:doc:`profile_matching`.
Then trigger the BIOS and RAID configuration based on the matched deployment
profile::
instack-ironic-deployment --configure-nodes
@@ -273,8 +273,8 @@ desired. By default, all overcloud instances will be booted with the
memory, disk, and cpu as that flavor.
In addition, there are profile-specific flavors created which can be used with
the profile-matching feature. For more details on deploying with profiles,
see :doc:`../advanced_deployment/profile_matching`.
Configure a nameserver for the Overcloud
----------------------------------------
@@ -124,6 +124,8 @@ Setting Up The Undercloud Machine
Configuration Files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. _instackenv:
instackenv.json
^^^^^^^^^^^^^^^
@@ -257,8 +257,8 @@ the capacity of the node. Nodes without a matching flavor are effectively
unusable.
This second mode allows users to ensure that their different hardware types end
up running their intended role, though requires either manual node tagging or
using introspection rules to tag nodes (see
:doc:`../advanced_deployment/profile_matching`).
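Manual node tagging mentioned above amounts to setting the ``profile``
capability on a node, along these lines (the UUID is a placeholder and the
exact client syntax depends on your release):

```
ironic node-update <UUID> replace properties/capabilities='profile:compute,boot_option:local'
```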