Use Kubernetes CPU Manager Static Policy
=========================================

You can launch a container pinned to a particular set of CPU cores using a Kubernetes CPU manager static policy.

You will need to enable this CPU management mechanism before applying a policy.

See :ref:`Optimizing Application Performance <kubernetes-cpu-manager-policies>` for details on how to enable this CPU management mechanism.
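
As a quick sanity check that the static policy is active, you can read the kubelet's CPU manager state file on the worker node. This sketch assumes shell access to the node and the default Kubernetes location for the state file, which may differ in your deployment; the CPU set shown is illustrative only:

  % cat /var/lib/kubelet/cpu_manager_state
  {"policyName":"static","defaultCpuSet":"0-5", ... }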

  1. Define a container running a CPU stress command.

    Note

    • The pod will be pinned to the allocated set of CPUs on the host and will have exclusive use of those CPUs if the CPU request (``resources.requests.cpu``) is equal to the CPU limit (``resources.limits.cpu``).
    • Resource memory must also be specified for guaranteed resource allocation.
    • Processes within the pod can float across the set of CPUs allocated to the pod, unless the application in the pod explicitly pins them to a subset of the CPUs.

    For example:

    % cat <<EOF > stress-cpu-pinned.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: stress-ng-cpu
    spec:
      containers:
      - name: stress-ng-app
        image: alexeiled/stress-ng
        imagePullPolicy: IfNotPresent
        command: ["/stress-ng"]
        args: ["--cpu", "10", "--metrics-brief", "-v"]
        resources:
          requests:
            cpu: 2
            memory: "2Gi"
          limits:
            cpu: 2
            memory: "2Gi"
      nodeSelector:
        kubernetes.io/hostname: worker-1
    EOF

    You will likely need to adjust some of the values shown above to reflect your deployment configuration. For example, on an AIO-SX or AIO-DX system, worker-1 would typically become controller-0 or controller-1.

    The significant addition to this definition in support of CPU pinning is the ``resources`` section, which sets both a CPU request and a CPU limit of 2. Because the requests equal the limits and memory is also specified, the pod runs in the Guaranteed QoS class, which is required for exclusive CPU allocation under the static policy.
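
    If you are unsure which value to use for the kubernetes.io/hostname nodeSelector, listing the nodes is a quick way to check, since the NAME column normally matches this label. The output below is illustrative only; yours will reflect your deployment:

    % kubectl get nodes
    NAME           STATUS   ROLES    AGE   VERSION
    controller-0   Ready    master   25d   v1.21.8
    worker-1       Ready    <none>   25d   v1.21.8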

  2. Apply the definition.

    % kubectl apply -f stress-cpu-pinned.yaml

    You can SSH to the worker node, run top, and press '1' to see per-core CPU details.
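
    If you prefer to verify the pinning directly, one approach is to read the allowed-CPU list of the stress-ng process from inside the pod. This sketch assumes the image provides grep and that stress-ng runs as PID 1 in the container; the CPU range shown is only an example of what the static policy might allocate:

    % kubectl exec stress-ng-cpu -- grep Cpus_allowed_list /proc/1/status
    Cpus_allowed_list:   3-4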

  3. Describe the pod or node to see the CPU Requests, CPU Limits, and to confirm that the pod is in the Guaranteed QoS class.

    For example:

    % kubectl describe node <node-name>
    Namespace                  Name           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
    ---------                  ----           ------------  ----------  ---------------  -------------  ---
    default                    stress-ng-cpu  2 (15%)       2 (15%)     2Gi (7%)         2Gi (7%)       9m31s
    
    % kubectl describe pod stress-ng-cpu
    ...
    QoS Class: Guaranteed
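
    The QoS class can also be queried directly; status.qosClass is a standard Kubernetes pod status field:

    % kubectl get pod stress-ng-cpu -o jsonpath='{.status.qosClass}'
    Guaranteed
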
  4. Delete the pod.

    % kubectl delete -f stress-cpu-pinned.yaml
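
    After the pod is deleted, the exclusively allocated CPUs are returned to the shared pool. Re-reading the kubelet state file on the worker node (same default-path assumption as above) should show them back in defaultCpuSet:

    % cat /var/lib/kubelet/cpu_manager_state
    {"policyName":"static","defaultCpuSet":"0-5", ... }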