Updates for Kubernetes Memory Manager Policies (r10, dsR10 100/200)

Change-Id: I85d9847e98ccc3382a51d10dadb9fb8535d0b704
Signed-off-by: Suzana Fernandes <Suzana.Fernandes@windriver.com>
This commit is contained in:
Suzana Fernandes 2025-02-13 18:49:32 +00:00
parent d3784712c7
commit 63f77b5264
2 changed files with 55 additions and 0 deletions


@ -47,6 +47,7 @@ Optimize application performance
kubernetes-cpu-manager-policies
isolating-cpu-cores-to-enhance-application-performance
kubernetes-topology-manager-policies
kubernetes-memory-manager-policies-3de9d87855bc
--------------


@ -0,0 +1,54 @@
.. WARNING: Add no lines of text between the label immediately following
.. and the title.
.. _kubernetes-memory-manager-policies-3de9d87855bc:

==================================
Kubernetes Memory Manager Policies
==================================

Kubernetes memory manager policies manage memory allocation for pods with a
focus on |NUMA| topology and performance optimization. You can set the policy
by assigning the **kube-memory-mgr-policy** host label via the |CLI|.

The **kube-memory-mgr-policy** host label supports the values ``none``
(default) and ``static``.

For example:

.. code-block:: none

   ~(keystone_admin)]$ system host-lock worker-1
   ~(keystone_admin)]$ system host-label-assign --overwrite worker-1 kube-memory-mgr-policy=static
   ~(keystone_admin)]$ system host-unlock worker-1
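
After the host unlocks, you can confirm that the label was applied. For
example, assuming the ``system host-label-list`` command is available in your
release:

.. code-block:: none

   ~(keystone_admin)]$ system host-label-list worker-1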

When the policy is set to ``static``, it ensures |NUMA|-aware memory
allocation for ``Guaranteed`` |QoS| pods, reserving memory to meet their
requirements and reduce latency. Memory for system processes can also be
reserved using the ``--reserved-memory`` kubelet flag, enhancing node
stability. For ``BestEffort`` and ``Burstable`` pods, no memory is reserved,
and the default topology hints are used.
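
In standard Kubernetes, the ``--reserved-memory`` kubelet option takes a
per-|NUMA|-node list of reservations. The following illustrative fragment
shows the option's format only; the node index and sizes are hypothetical and
must match the sum of kube-reserved, system-reserved, and eviction-threshold
values on the node:

.. code-block:: none

   --memory-manager-policy=Static --reserved-memory 0:memory=1Gi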

This approach enables better performance for workloads that require predictable
memory usage, but it requires careful configuration to ensure compatibility
with system resources.

For configuration options and detailed examples, consult the Kubernetes
documentation at `https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/ <https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/>`__.
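
The ``static`` policy only pins memory for pods in the ``Guaranteed`` |QoS|
class, that is, pods whose CPU and memory requests equal their limits. A
minimal illustrative manifest (the pod name, image, and sizes are
hypothetical) might look like this:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: guaranteed-demo        # hypothetical name
   spec:
     containers:
     - name: app
       image: busybox             # hypothetical image
       resources:
         requests:
           cpu: "2"
           memory: 2Gi
         limits:
           cpu: "2"               # requests == limits -> Guaranteed QoS
           memory: 2Gi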

-----------
Limitations
-----------

The interaction between the ``kube-memory-mgr-policy=static`` policy and the
``restricted`` topology manager policy can cause pods to fail to be scheduled
or started, even when sufficient memory is available. This is due to the
restrictive design of the |NUMA|-aware memory manager, which prevents the same
|NUMA| node from being used for both single and multi-|NUMA| allocations. It
is important that you understand the implications of these memory management
policies and configure your systems accordingly to avoid unexpected failures.

For detailed configuration options and examples, refer to the Kubernetes
documentation at https://kubernetes.io/docs/tasks/administer-cluster/memory-manager/.