FEC Operator Support for ACC200

Deleted additional command for ACC200
Updated ACC100/200 inputs based on Patchset 2 comments
Updated ACC100/200 inputs based on Patchset 1 comments
Replaced Mount Bryce with ACC100/ACC200

Story: 2010341
Task: 46543

Signed-off-by: Juanita-Balaraj <juanita.balaraj@windriver.com>
Change-Id: I794b4def21eec9d8ba81b9a31aac90534ecf8130
Juanita-Balaraj 2022-12-09 13:39:54 -05:00
parent 1a7cc09e6f
commit a17149f54c
8 changed files with 260 additions and 229 deletions


@@ -214,220 +214,220 @@ following |vRAN| |FEC| accelerators:
#. ACC100 device configuration.
- The maximum number of |VFs| that can be configured for ACC100 is 16.
- There are 8 queue groups available, which can be allocated to any
  available operation (4GUL/4GDL/5GUL/5GDL) based on the
  ``numQueueGroups`` parameter.
- The product of ``numQueueGroups`` x ``numAqsPerGroups`` x
  ``aqDepthLog2`` x ``numVfBundles`` must be less than 32K. For example,
  the first configuration below uses 8 queue groups in total, giving
  8 x 16 x 4 x 1 = 512, well under the limit.
- The following example creates 1 |VF| and configures ACC100's 8 queue
  groups, allocating 4 queue groups for 5G Uplink and another 4 queue
  groups for 5G Downlink.

.. code-block:: none

   apiVersion: sriovfec.intel.com/v2
   kind: SriovFecClusterConfig
   metadata:
     name: config
     namespace: sriov-fec-system
   spec:
     priority: 1
     nodeSelector:
       kubernetes.io/hostname: controller-0
     acceleratorSelector:
       pciAddress: 0000:17:00.0
     physicalFunction:
       pfDriver: "pci-pf-stub"
       vfDriver: "vfio-pci"
       vfAmount: 1
       bbDevConfig:
         acc100:
           # pfMode: false = VF Programming, true = PF Programming
           pfMode: false
           numVfBundles: 1
           maxQueueSize: 1024
           uplink4G:
             numQueueGroups: 0
             numAqsPerGroups: 16
             aqDepthLog2: 4
           downlink4G:
             numQueueGroups: 0
             numAqsPerGroups: 16
             aqDepthLog2: 4
           uplink5G:
             numQueueGroups: 4
             numAqsPerGroups: 16
             aqDepthLog2: 4
           downlink5G:
             numQueueGroups: 4
             numAqsPerGroups: 16
             aqDepthLog2: 4
     drainSkip: true
- The following example creates 2 |VFs| and configures ACC100's 8 queue
  groups, allocating 2 queue groups each for 4G Uplink, 4G Downlink,
  5G Uplink, and 5G Downlink.

.. code-block:: none

   apiVersion: sriovfec.intel.com/v2
   kind: SriovFecClusterConfig
   metadata:
     name: config
     namespace: sriov-fec-system
   spec:
     priority: 1
     nodeSelector:
       kubernetes.io/hostname: controller-0
     acceleratorSelector:
       pciAddress: 0000:17:00.0
     physicalFunction:
       pfDriver: "pci-pf-stub"
       vfDriver: "vfio-pci"
       vfAmount: 2
       bbDevConfig:
         acc100:
           # pfMode: false = VF Programming, true = PF Programming
           pfMode: false
           numVfBundles: 2
           maxQueueSize: 1024
           uplink4G:
             numQueueGroups: 2
             numAqsPerGroups: 16
             aqDepthLog2: 4
           downlink4G:
             numQueueGroups: 2
             numAqsPerGroups: 16
             aqDepthLog2: 4
           uplink5G:
             numQueueGroups: 2
             numAqsPerGroups: 16
             aqDepthLog2: 4
           downlink5G:
             numQueueGroups: 2
             numAqsPerGroups: 16
             aqDepthLog2: 4
     drainSkip: true
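
A ``SriovFecClusterConfig`` manifest such as the examples above is
typically saved to a file and applied with ``kubectl``. This is a
minimal sketch; the file name is an example, and the lowercase
resource name is assumed to follow from the CRD kind:

.. code-block:: none

   $ kubectl apply -f sriov-fec-config.yaml
   $ kubectl get sriovfecclusterconfigs -n sriov-fec-system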
#. N3000 device configuration.
- The maximum number of |VFs| that can be configured for N3000 is 8.
- The maximum number of queues that can be mapped to each |VF| for
  uplink or downlink is 32.
- The following configuration for N3000 creates 1 |VF| with 32
  queues each for 5G uplink and 5G downlink.

.. code-block:: none

   apiVersion: sriovfec.intel.com/v2
   kind: SriovFecClusterConfig
   metadata:
     name: config
     namespace: sriov-fec-system
   spec:
     priority: 1
     nodeSelector:
       kubernetes.io/hostname: controller-0
     acceleratorSelector:
       pciAddress: 0000:1c:00.0
     physicalFunction:
       pfDriver: pci-pf-stub
       vfDriver: vfio-pci
       vfAmount: 1
       bbDevConfig:
         n3000:
           # Network Type: either "FPGA_5GNR" or "FPGA_LTE"
           networkType: "FPGA_5GNR"
           # Pf mode: false = VF Programming, true = PF Programming
           pfMode: false
           flrTimeout: 610
           downlink:
             bandwidth: 3
             loadBalance: 128
             queues:
               vf0: 32
               vf1: 0
               vf2: 0
               vf3: 0
               vf4: 0
               vf5: 0
               vf6: 0
               vf7: 0
           uplink:
             bandwidth: 3
             loadBalance: 128
             queues:
               vf0: 32
               vf1: 0
               vf2: 0
               vf3: 0
               vf4: 0
               vf5: 0
               vf6: 0
               vf7: 0
     drainSkip: true
- The following configuration for N3000 creates 2 |VFs| with 16
  queues each, mapping 32 queues across the 2 |VFs| for 5G uplink and
  another 32 queues across the 2 |VFs| for 5G downlink.

.. code-block:: none

   apiVersion: sriovfec.intel.com/v2
   kind: SriovFecClusterConfig
   metadata:
     name: config
     namespace: sriov-fec-system
   spec:
     priority: 1
     nodeSelector:
       kubernetes.io/hostname: controller-0
     acceleratorSelector:
       pciAddress: 0000:1c:00.0
     physicalFunction:
       pfDriver: pci-pf-stub
       vfDriver: vfio-pci
       vfAmount: 2
       bbDevConfig:
         n3000:
           # Network Type: either "FPGA_5GNR" or "FPGA_LTE"
           networkType: "FPGA_5GNR"
           # Pf mode: false = VF Programming, true = PF Programming
           pfMode: false
           flrTimeout: 610
           downlink:
             bandwidth: 3
             loadBalance: 128
             queues:
               vf0: 16
               vf1: 16
               vf2: 0
               vf3: 0
               vf4: 0
               vf5: 0
               vf6: 0
               vf7: 0
           uplink:
             bandwidth: 3
             loadBalance: 128
             queues:
               vf0: 16
               vf1: 16
               vf2: 0
               vf3: 0
               vf4: 0
               vf5: 0
               vf6: 0
               vf7: 0
     drainSkip: true
#. ACC200 device configuration.
@@ -539,6 +539,25 @@ following |vRAN| |FEC| accelerators:
aqDepthLog2: 4
drainSkip: true
#. Lock the host.
.. code-block:: none
$ system host-lock controller-0
#. Enable the ACC200 device to use the vfio-pci base driver.
.. code-block:: none
$ system host-device-modify controller-0 pci_0000_f7_00_0 --driver vfio-pci
#. Unlock the host.
.. code-block:: none
$ system host-unlock controller-0
#. If you need to run the operator on a |prod-long| (|AIO-SX|), then you
   should provide ``SriovFecClusterConfig`` with ``spec.drainSkip: True``
   to avoid node draining (a single-node system cannot be drained); a
   minimal fragment is sketched below.
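
The fragment shows only the relevant field; all other fields are as in
the examples above:

.. code-block:: none

   spec:
     drainSkip: true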
@@ -558,7 +577,7 @@ following |vRAN| |FEC| accelerators:
.. note::
The ``vfAmount`` and ``numVfBundles`` in ``SriovFecClusterConfig``
must always be equal for ACC100/ACC200.
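
For example, in a two-|VF| ACC100 configuration both fields carry the
same value. The fragment below is illustrative only:

.. code-block:: none

   physicalFunction:
     vfAmount: 2
     bbDevConfig:
       acc100:
         numVfBundles: 2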
#. Verify that the |FEC| configuration is applied.
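
   One way to check is to read back the per-node status published by the
   operator; this is a sketch, and the ``SriovFecNodeConfig`` resource
   name is assumed from the operator's CRD kind:

   .. code-block:: none

      $ kubectl get sriovfecnodeconfigs -n sriov-fec-system controller-0 -o yaml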
@@ -819,12 +838,11 @@ following |vRAN| |FEC| accelerators:
:command:`system application-remove`.
#. Apply the configuration using :command:`system host-device-modify`,
see :ref:`Enable ACC100/ACC200 Hardware Accelerators for Hosted vRAN Containerized Workloads <enabling-mount-bryce-hw-accelerator-for-hosted-vram-containerized-workloads>`.
.. rubric:: |postreq|
- See :ref:`Set Up Pods to Use SRIOV to Access ACC100/ACC200 HW Accelerators
<set-up-pods-to-use-sriov>`.
- The resource name for |FEC| |VFs| configured with |SRIOV| |FEC| operator
@@ -832,7 +850,7 @@ following |vRAN| |FEC| accelerators:
``intel.com/intel_fec_5g`` for N3000 and ``intel.com/intel_fec_acc200``
for ACC200 when requested in a pod spec.
- ACC100
.. code-block:: none
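
   # Illustrative fragment of a pod spec (hypothetical values), showing
   # how the ACC100 VF resource name is requested:
   resources:
     requests:
       intel.com/intel_acc100_fec: '1'
     limits:
       intel.com/intel_acc100_fec: '1'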


@@ -2,21 +2,21 @@
.. zad1611611564761
.. _enabling-mount-bryce-hw-accelerator-for-hosted-vram-containerized-workloads:
==================================================================================
Enable ACC100/ACC200 Hardware Accelerators for Hosted vRAN Containerized Workloads
==================================================================================
You can enable and access an Intel® ACC100/ACC200 eASIC card such that
it can be used as a HW accelerator by hosted vRAN containerized
workloads on |prod-long|.
.. rubric:: |context|
The following procedure shows an example of configuring an |AIO-SX| system such
that it can support hosting a |DPDK| FlexRAN-reference-architecture container
image that uses the ACC100/ACC200 HW accelerator. The procedure enables the
required |SRIOV| drivers, CPU policies and memory of controller-0, and then
enables the ACC100/ACC200 device.
.. rubric:: |prereq|
@@ -68,8 +68,9 @@ enables the Mount Bryce device.
~(keystone_admin)$ system host-memory-modify controller-0 0 -1G 12
#. List all devices on controller-0 and identify the name of the Hardware
   Accelerator (ACC100 device id: 0d5c, ACC200 device id: 57c0). For
   example:
.. code-block:: none
@@ -89,6 +90,7 @@ enables the Mount Bryce device.
| pci_0000_05_00_0 | 0000:05:00.0 | 30200 | 10de |
| pci_0000_0a_00_0 | 0000:0a:00.0 | 30000 | 102b |
| pci_0000_85_00_0 | 0000:85:00.0 | 120001 | 8086 |
| pci_0000_f7_00_0 | 0000:f7:00.0 | 120000 | 8086 |
+------------------+--------------+----------+-----------+..
+-----------+---------------------------------+---------------------+..
| device id | class name | vendor name |
@@ -105,6 +107,7 @@ enables the Mount Bryce device.
| 1eb8 | 3D controller | NVIDIA Corporation |
| 0522 | VGA compatible controller | Matrox Electronics..|
| 0d5c | Processing accelerators | Intel Corporation |
| 57c0 | Processing accelerators | Intel Corporation |
+-----------+---------------------------------+---------------------+..
+----------------------------------------------------------+-----------+---------+
| device name | numa_node | enabled |
@@ -121,15 +124,25 @@ enables the Mount Bryce device.
| Device 1eb8 | 0 | False |
| MGA G200e [Pilot] ServerEngines (SEP1) | 0 | False |
| Device 0d5c | 1 | True |
| Device 57c0 | 1 | True |
+----------------------------------------------------------+-----------+---------+
#. Modify the device to enable it, specify the base driver and vf driver,
   and configure it for 16 |VFs|.

   For ACC100:

   .. code-block:: none

      ~(keystone_admin)$ system host-device-modify controller-0 pci_0000_85_00_0 --driver igb_uio --vf-driver vfio -N 16
#. Modify the ACC200 device to enable it, specify the base driver and vf
   driver, and configure it for 16 |VFs|.

   .. code-block:: none

      ~(keystone_admin)$ system host-device-modify controller-0 pci_0000_f7_00_0 --driver vfio-pci --vf-driver vfio -N 16
#. Unlock the host.
.. code-block:: none
@@ -138,9 +151,10 @@ enables the Mount Bryce device.
.. note::
For an ACC100/ACC200 device, the number of |VF| bundles (``num_vf_bundles``)
field is automatically changed in the
``/usr/share/pf-bb-config/acc100/acc100_config_1vf_4g5g.cfg`` or
``/usr/share/pf-bb-config/acc200/acc200_config_16vf.cfg`` configuration
file by updating the value of the ``-N`` parameter via the
:command:`system host-device-modify` command.
In addition to the automatic mode, if additional configuration is needed in
@@ -152,6 +166,5 @@ enables the Mount Bryce device.
.. rubric:: |result|
To set up pods using |SRIOV|, see :ref:`Set Up Pods to Use SRIOV to Access ACC100/ACC200 HW Accelerators <set-up-pods-to-use-sriov>`.


@@ -2,17 +2,17 @@
.. ggs1611608368857
.. _set-up-pods-to-use-sriov:
================================================================
Set Up Pods to Use SRIOV to Access ACC100/ACC200 HW Accelerators
================================================================
You can configure pods with |SRIOV| access to ACC100/ACC200 devices by adding the
appropriate 'resources' request in the pod specification.
.. rubric:: |context|
The following procedure shows an example of launching a container image with
a 'resources' request for a |VF| on an ACC100/ACC200 device.
.. rubric:: |proc|
@@ -22,7 +22,7 @@ The following procedure shows an example of launching a container image with
$ source /etc/platform/openrc ~(keystone_admin)$
#. Create a pod.yml file that requests 16 ACC100/ACC200 VFs
\(i.e. intel.com/intel_acc100_fec: '16'\)
.. code-block:: none
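
   # Illustrative sketch only: the pod name, container image, and
   # command are placeholders, not values from this guide.
   apiVersion: v1
   kind: Pod
   metadata:
     name: acc100-pod
   spec:
     containers:
     - name: acc100-container
       image: ubuntu:22.04
       command: ["sleep", "infinity"]
       resources:
         requests:
           intel.com/intel_acc100_fec: '16'
         limits:
           intel.com/intel_acc100_fec: '16'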


@@ -327,7 +327,7 @@ Common device management tasks
cli-commands-for-managing-pci-devices
***********************************************
vRAN Accelerator Adapters (ACC100/ACC200/N3000)
***********************************************
.. toctree::


@@ -172,7 +172,7 @@ Verified and approved hardware components for use with |prod| are listed here.
| | |
| | - Intel E810-CQDA2T (Columbiaville - Logan Beach) 100G |
+--------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Hardware Accelerator Devices Verified for PCI SR-IOV Access | - ACC100/ACC200 Adapters - SRIOV only |
| | |
| | - Maclaren Summit Intel® vRAN Accelerator ACC100; see `<https://networkbuilders.intel.com/solutionslibrary/virtual-ran-vran-with-hardware-acceleration?wapkw=acc100>`__ |
| | |


@@ -136,7 +136,7 @@ platform: **Wilson City** (housing ICX-SP).
system host-unlock $NODE
#. After the system has been unlocked and available for the first time,
configure ACC100/ACC200:
.. code:: bash
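
   # Illustrative sketch: the PCI device name, drivers, and VF count are
   # examples; use the values that match your accelerator (see
   # "system host-device-list $NODE").
   system host-device-modify $NODE pci_0000_85_00_0 --driver igb_uio --vf-driver vfio -N 16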


@@ -142,7 +142,7 @@ platform: **Coyote Pass** (housing ICX-SP).
system host-unlock $NODE
#. After the system has been unlocked and available for the first time,
configure ACC100/ACC200:
.. code:: bash


@@ -142,7 +142,7 @@ platform: **Coyote Pass** (housing ICX-SP).
system host-unlock $NODE
#. After the system has been unlocked and available for the first time,
configure ACC100/ACC200:
.. code:: bash