Resource Placement
For VMs requiring maximum determinism and throughput, the VM must be placed in the same NUMA node as all of its resources, including memory, network interfaces, and any other resource such as SR-IOV or PCI-Passthrough devices.
VNF 1 and VNF 2 in the example figure are examples of VMs deployed for maximum throughput with SR-IOV.
A VM such as VNF 6 in NUMA-REF will not have the same performance as VNF 1 and VNF 2. There are multiple ways to maximize performance for VNF 6 in this case:
If accessing devices directly from a VM using PCI-Passthrough or SR-IOV, maximum performance can only be achieved by pinning the VM's cores to the same NUMA node as the device. For example, VNF 1 and VNF 2 will have optimum SR-IOV performance if deployed on NUMA node 0, and VNF 6 will have maximum PCI-Passthrough performance if deployed on NUMA node 1. Options for controlling access to devices are:
- Use the pci_numa_affinity flavor extra spec to force VNF 6 to be scheduled on a NUMA node where the device resides. This is the recommended option because it does not require prior knowledge of which socket a device resides on. The affinity may be strict or prefer:
- Strict affinity guarantees scheduling on the same NUMA node as the device; if no such NUMA node is available, the VM is not scheduled.
- Prefer affinity uses best effort: the VM is scheduled on a NUMA node without the device only if no NUMA node with that device is available. Note that prefer mode does not provide the same performance or determinism guarantees as strict, but may be good enough for some applications.
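As a sketch of the strict and prefer options above, the upstream Nova ``hw:pci_numa_affinity_policy`` flavor extra spec expresses the same policy (``required`` corresponds to strict, ``preferred`` to best effort). The flavor name is hypothetical, and the exact extra spec name may differ by OpenStack release or distribution:

```shell
# Hypothetical flavor "vnf6.large"; verify the extra spec name on your release.
openstack flavor create vnf6.large --vcpus 4 --ram 8192 --disk 20

# Strict affinity: schedule only on a NUMA node with the device.
openstack flavor set vnf6.large --property hw:pci_numa_affinity_policy=required

# Prefer (best effort) affinity would instead use:
#   --property hw:pci_numa_affinity_policy=preferred
```

Because the policy lives in the flavor, no per-host knowledge of device socket placement is needed at launch time.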
- Pin the VM to NUMA node 0 (the node with the device) using flavor extra specs or image properties. This forces the scheduler to place the VM on NUMA node 0. However, this requires knowledge of which socket the applicable devices are attached to, and does not work well unless all hosts have that type of device attached to the same socket.
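The manual pinning option above might look like the following. This is a sketch only: ``hw:numa_nodes`` is a standard Nova extra spec, but pinning a guest NUMA node to a specific host NUMA node (shown here as ``hw:numa_node.0``) is a vendor extension that may not exist on all releases; the flavor name is hypothetical:

```shell
# Sketch, not a verified recipe: constrain the guest to one NUMA node,
# then pin guest NUMA node 0 to host NUMA node 0 (vendor extension).
openstack flavor set vnf6.large \
    --property hw:numa_nodes=1 \
    --property hw:numa_node.0=0
```

Unlike the pci_numa_affinity approach, this flavor only behaves correctly on hosts where the target device is actually attached to socket 0.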