diff --git a/doc/source/admintasks/isolating-cpu-cores-to-enhance-application-performance.rst b/doc/source/admintasks/isolating-cpu-cores-to-enhance-application-performance.rst index 3b7102598..2d8a72bf0 100644 --- a/doc/source/admintasks/isolating-cpu-cores-to-enhance-application-performance.rst +++ b/doc/source/admintasks/isolating-cpu-cores-to-enhance-application-performance.rst @@ -12,16 +12,10 @@ which are completely isolated from the host process scheduler. This allows you to customize Kubernetes CPU management when policy is set to static so that low-latency applications run with optimal efficiency. -The following restrictions apply when using application-isolated cores in the -Horizon Web interface and sysinv: +The following restriction applies when using application-isolated cores: - There must be at least one platform and one application core on each host. - .. warning:: - The presence of an application core on the node and nodes missing this - configuration will fail. - - For example: .. code-block:: none @@ -32,9 +26,10 @@ For example: ~(keystone)admin)$ system host-cpu-modify -f application-isolated -p1 15 worker-1 ~(keystone)admin)$ system host-unlock worker-1 -All SMT siblings on a core will have the same assigned function. On host boot, -any CPUs designated as isolated will be specified as part of the isolcpu kernel -boot argument, which will isolate them from the process scheduler. +All |SMT| siblings (hyperthreads, if enabled) on a core will have the same +assigned function. On host boot, any CPUs designated as isolated will be +specified as part of the isolcpus kernel boot argument, which will isolate them +from the process scheduler. .. 
only:: partner diff --git a/doc/source/admintasks/kubernetes-cpu-manager-policies.rst b/doc/source/admintasks/kubernetes-cpu-manager-policies.rst index f5e4ef9e5..92a783b51 100644 --- a/doc/source/admintasks/kubernetes-cpu-manager-policies.rst +++ b/doc/source/admintasks/kubernetes-cpu-manager-policies.rst @@ -12,6 +12,14 @@ or the CLI to set the Kubernetes CPU Manager policy. The **kube-cpu-mgr-policy** host label supports the values ``none`` and ``static``. +For example: + +.. code-block:: none + + ~(keystone)admin)$ system host-lock worker-1 + ~(keystone)admin)$ system host-label-assign --overwrite worker-1 kube-cpu-mgr-policy=static + ~(keystone)admin)$ system host-unlock worker-1 + Setting either of these values results in kubelet on the host being configured with the policy of the same name as described at `https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies `__, but with the following differences: @@ -26,7 +34,7 @@ Static policy customizations throttling for Guaranteed QoS pods is disabled. - When using the static policy, improved performance can be achieved if - one also uses the Isolated CPU behavior as described at :ref:`Isolating CPU Cores to Enhance Application Performance `. + you also use the Isolated CPU behavior as described at :ref:`Isolating CPU Cores to Enhance Application Performance `. - For Kubernetes pods with a **Guaranteed** QoS \(see `https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ `__ for background information\), CFS quota throttling is disabled as it @@ -45,9 +53,20 @@ Static policy customizations .. xreflink and |node-doc|: :ref:`Configuring Node Labels from the CLI `. ------------ -Limitations ------------ +--------------- +Recommendations +--------------- |org| recommends using the static policy. 
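The active policy can also be confirmed directly on the host: kubelet checkpoints its CPU Manager state, including the policy name, to a small JSON file. A minimal sketch for reading it — the `/var/lib/kubelet/cpu_manager_state` path is the common kubelet default and is an assumption here, so verify it for your deployment:

```python
import json

def kubelet_cpu_policy(state_file="/var/lib/kubelet/cpu_manager_state"):
    """Return the CPU Manager policy name recorded in kubelet's
    cpu_manager_state checkpoint file (e.g. "none" or "static").

    The default path is an assumption; it may differ per deployment.
    """
    with open(state_file) as f:
        return json.load(f).get("policyName")
```

After the host is unlocked with the ``kube-cpu-mgr-policy=static`` label applied, this would be expected to return ``static``.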
+-------- +See also +-------- + +See |usertasks-doc|: :ref:`Use Kubernetes CPU Manager Static Policy’s +Guaranteed QoS class with exclusive CPUs +<using-kubernetes-cpu-manager-static-policy>` for an example of how to +configure a Pod in the ‘Guaranteed QoS’ class with exclusive (or +dedicated/pinned) CPUs. + +See |usertasks-doc|: :ref:`Use Kubernetes CPU Manager Static Policy with application-isolated cores <use-application-isolated-cores>` for an example of how to configure a Pod with cores that are both ‘isolated from the host process scheduler’ and exclusive/dedicated/pinned CPUs. diff --git a/doc/source/shared/abbrevs.txt b/doc/source/shared/abbrevs.txt index 2dc6ec813..408d8fe15 100755 --- a/doc/source/shared/abbrevs.txt +++ b/doc/source/shared/abbrevs.txt @@ -106,6 +106,7 @@ .. |SLA| replace:: :abbr:`SLA (Service Level Agreement)` .. |SLAs| replace:: :abbr:`SLAs (Service Level Agreements)` .. |SM| replace:: :abbr:`SM (Service Manager)` +.. |SMT| replace:: :abbr:`SMT (Simultaneous Multithreading)` .. |SNAT| replace:: :abbr:`SNAT (Source Network Address Translation)` .. |SNMP| replace:: :abbr:`SNMP (Simple Network Management Protocol)` .. |SRIOV| replace:: :abbr:`SR-IOV (Single Root I/O Virtualization)` diff --git a/doc/source/usertasks/kubernetes/index.rst b/doc/source/usertasks/kubernetes/index.rst index 4f4722aa8..c534e40ff 100644 --- a/doc/source/usertasks/kubernetes/index.rst +++ b/doc/source/usertasks/kubernetes/index.rst @@ -140,6 +140,7 @@ Optimize application performance :maxdepth: 1 using-kubernetes-cpu-manager-static-policy + use-application-isolated-cores ---------------------------------------- Adding an SRIOV interface to a container ---------------------------------------- diff --git a/doc/source/usertasks/kubernetes/use-application-isolated-cores.rst b/doc/source/usertasks/kubernetes/use-application-isolated-cores.rst new file mode 100644 index 000000000..b6935ab2a --- /dev/null +++ b/doc/source/usertasks/kubernetes/use-application-isolated-cores.rst @@ -0,0 +1,136 @@ + +.. klf1569260954795 +.. 
_use-application-isolated-cores: + +======================================================================== +Use Kubernetes CPU Manager Static Policy with application-isolated cores +======================================================================== + +|prod| supports running the most critical low-latency applications on host CPUs +which are completely isolated from the host process scheduler and exclusive (or +dedicated/pinned) to the pod. + +.. rubric:: |prereq| + +- You will need to enable the Kubernetes CPU Manager’s Static Policy for the + target worker node(s). + See |admintasks-doc|: :ref:`Kubernetes CPU Manager Policies + <kubernetes-cpu-manager-policies>` for details on how to enable this CPU + management mechanism. + +- You will need to configure application-isolated cores for the target worker + node(s). + See |admintasks-doc|: :ref:`Isolating CPU Cores to Enhance Application + Performance <isolating-cpu-cores-to-enhance-application-performance>` for + details on how to configure application-isolated cores. + +.. rubric:: |proc| + +#. Create your pod with ``resources:requests`` and ``resources:limits`` + according to + `https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies <https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies>`__ + in order to select the **Best Effort** or **Burstable** class; use of + application-isolated cores is NOT supported with the **Guaranteed + QoS** class. Specifically, this requires either: + + - for **Best Effort**, do NOT specify ``resources:limits:cpu/memory`` + + or + + - for **Burstable**, ``resources:requests:cpu/memory`` and + ``resources:limits:cpu/memory`` are specified but NOT equal to each other. + + Then, to add ‘application-isolated’ CPUs to the pod, configure + ``resources:requests:windriver.com/isolcpus`` and + ``resources:limits:windriver.com/isolcpus`` equal to each other in the pod + spec. These cores will be exclusive (dedicated/pinned) to the pod. If there + are multiple processes within the pod/container, they can be individually + affined to separate application-isolated CPUs if the pod/container requests + multiple windriver.com/isolcpus resources. This is highly recommended, as the + Linux kernel does not load balance across application-isolated cores.
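The per-process affinity recommendation above can be sketched as follows. This is an illustrative snippet, not part of the documented procedure (the helper name and index scheme are invented for the example): start-up code inside the container pins each worker process to a distinct CPU from the set the container was granted, using the standard Linux affinity calls, so the kernel never migrates workers between isolated cores.

```python
import os

def pin_to_one_cpu(index=0):
    """Pin the calling process to a single CPU chosen from the CPUs
    this process is currently allowed to run on (e.g. the container's
    isolcpus grant). Each worker process calls this with its own index
    so no two workers share a core.
    """
    allowed = sorted(os.sched_getaffinity(0))   # CPUs currently available to us
    target = allowed[index % len(allowed)]      # pick a distinct CPU per worker
    os.sched_setaffinity(0, {target})           # restrict ourselves to that CPU
    return target
```

With, say, two windriver.com/isolcpus resources requested, worker 0 would call `pin_to_one_cpu(0)` and worker 1 `pin_to_one_cpu(1)`, giving each process its own isolated core.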
+ Start-up code in the container can determine the available CPUs by running + sched_getaffinity(), by looking for files of the form /dev/cpu/<n>, where + <n> is a number, or by parsing /sys/fs/cgroup/cpuset/cpuset.cpus within the + container. + + For example: + + .. code-block:: none + + % cat <<EOF > stress-cpu-pinned.yaml + apiVersion: v1 + kind: Pod + metadata: + name: stress-ng-cpu + spec: + containers: + - name: stress-ng-app + image: alexeiled/stress-ng + imagePullPolicy: IfNotPresent + command: ["/stress-ng"] + args: ["--cpu", "10", "--metrics-brief", "-v"] + resources: + requests: + cpu: 2 + memory: "1Gi" + windriver.com/isolcpus: 2 + limits: + cpu: 2 + memory: "2Gi" + windriver.com/isolcpus: 2 + nodeSelector: + kubernetes.io/hostname: worker-1 + EOF + + .. note:: + + The nodeSelector is optional and can be left out entirely, in which + case the pod can be scheduled on any valid node. + + You will likely need to adjust some values shown above to reflect your + deployment configuration. For example, on an AIO-SX or AIO-DX system, + worker-1 would probably become controller-0 or controller-1. + + The significant addition to this definition in support of + application-isolated CPUs is the **resources** section, which sets the + windriver.com/isolcpus resource request and limit of 2. + + Limitation: If hyperthreading is enabled in the BIOS and application-isolated + CPUs are configured, and these CPUs are allocated to more than one container, + the |SMT| siblings may be allocated to different containers, which could + adversely impact the performance of the application. + + Workaround: The suggested workaround is to allocate all application-isolated + CPUs to a single pod. + + +#. Apply the definition. + + .. code-block:: none + + % kubectl apply -f stress-cpu-pinned.yaml + + You can SSH to the worker node and run :command:`top`, then type '1' to see + CPU details per core. + +#. 
Describe the pod or node to see the CPU Requests, CPU Limits, and confirm that + the pod is in the **Burstable** QoS class. + + For example: + + .. code-block:: none + + % kubectl describe node worker-1 + Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits windriver/isolcpus Requests windriver/isolcpus Limits AGE + --------- ---- ------------ ---------- --------------- ------------- --------------------------- ------------------------- --- + default stress-ng-cpu 1 (7%) 2 (15%) 1Gi (3%) 2Gi (7%) 2 (15%) 2 (15%) 9m31s + + % kubectl describe pod stress-ng-cpu + ... + QoS Class: Burstable + +#. Delete the container. + + .. code-block:: none + + % kubectl delete -f stress-cpu-pinned.yaml diff --git a/doc/source/usertasks/kubernetes/using-kubernetes-cpu-manager-static-policy.rst b/doc/source/usertasks/kubernetes/using-kubernetes-cpu-manager-static-policy.rst index 60555b734..d236fadb3 100644 --- a/doc/source/usertasks/kubernetes/using-kubernetes-cpu-manager-static-policy.rst +++ b/doc/source/usertasks/kubernetes/using-kubernetes-cpu-manager-static-policy.rst @@ -2,36 +2,44 @@ .. klf1569260954792 .. _using-kubernetes-cpu-manager-static-policy: -======================================== -Use Kubernetes CPU Manager Static Policy -======================================== +=================================================================================== +Use Kubernetes CPU Manager Static Policy’s Guaranteed QoS class with exclusive CPUs +=================================================================================== -You can launch a container pinned to a particular set of CPU cores using a -Kubernetes CPU manager static policy. +You can launch a container pinned to a particular set of CPU cores using the +Kubernetes CPU manager static policy's **Guaranteed QoS** class. .. rubric:: |prereq| -You will need to enable this CPU management mechanism before applying a -policy. +You will need to enable the Kubernetes CPU Manager's Static Policy for the +target worker node(s).
-See |admintasks-doc|: :ref:`Optimizing Application Performance ` for details on how to enable this CPU management mechanism. +See |admintasks-doc|: :ref:`Kubernetes CPU Manager Policies +<kubernetes-cpu-manager-policies>` for details on how to enable this CPU +management mechanism. .. rubric:: |proc| -#. Define a container running a CPU stress command. +#. Create your pod with ``resources:requests`` and + ``resources:limits`` according to + `https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies + <https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies>`__, + in order to select the Guaranteed QoS class with exclusive CPUs. + Specifically, this requires either: - .. note:: + - ``resources:requests:cpu`` to be equal to + ``resources:limits:cpu``, and cpu to be an integer value > 1, - - The pod will be pinned to the allocated set of CPUs on the host - and have exclusive use of those CPUs if is - equal to . + or - - Resource memory must also be specified for guaranteed resource - allocation. + - only ``resources:limits:cpu`` to be specified, and cpu to be an + integer value > 1. - - Processes within the pod can float across the set of CPUs allocated - to the pod, unless the application in the pod explicitly pins them - to a subset of the CPUs. + The CPUs allocated to the pod will be exclusive (or dedicated/pinned) to + the pod, and taken from the CPUs configured as ‘application’ function for + the host. Processes within the pod can float across the set of CPUs + allocated to the pod, unless the application in the pod explicitly pins the + process(es) to a subset of the CPUs. For example: