[LSF/MM/BPF TOPIC] Restricting or migrating unmovable kernel allocations from slow tier

Hi,

Byungchul and I would like to suggest a topic about the performance impact of
kernel allocations on CXL memory.

As CXL-enabled servers and memory devices are being developed, CXL-supported
hardware is expected to continue emerging in the coming years.

The Linux kernel supports hot-plugging CXL memory via the dax/kmem driver.
Depending on the hot-plug policy, the hot-plugged memory either allows
unmovable kernel allocations (ZONE_NORMAL) or is restricted to movable
allocations only (ZONE_MOVABLE).

Recently, Byungchul and I observed a measurable performance degradation with
memhp_default_state=online compared to memhp_default_state=online_movable
when running the llama.cpp workload with the default mempolicy on a server
whose DRAM-to-CXL memory capacity ratio is 1:2.
The workload performs LLM inference and pressures the memory subsystem
due to its large working set size.

Obviously, allowing kernel allocations from CXL memory degrades performance,
because kernel memory such as page tables, kernel stacks, and slab allocations
is accessed frequently and may reside in physical memory with significantly
higher access latency.

However, as far as I can tell, there are at least two reasons why we need to
support ZONE_NORMAL for CXL memory (please add if there are more):
  1. When hot-plugging a huge amount of CXL memory, the size of
     the struct page array might not fit into DRAM
     (a rough size estimate follows below this list)
     -> This could be relaxed with memmap_on_memory
  2. To hot-unplug CXL memory, pages in CXL memory should be migrated to DRAM,
     which means that in some cases a portion of CXL memory should be
     ZONE_NORMAL.
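
To put a rough number on reason 1: assuming 4 KiB pages and a 64-byte
struct page (the typical size on x86-64), the memmap overhead is

  1 TiB / 4 KiB = 268,435,456 pages
  268,435,456 pages * 64 B = 16 GiB of struct page

i.e. about 1.56% of the hot-plugged capacity. Keeping that in DRAM is a real
cost for large CXL expansions, which is why memmap_on_memory (placing the
memmap on the hot-plugged range itself) relaxes the problem.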

So, there are certain cases where we want CXL memory to include ZONE_NORMAL,
but this also degrades performance if we allow _all_ kinds of kernel
allocations to be served from CXL memory.

For ideal performance, it would be beneficial to either:
  1) Restrict certain types of kernel memory (e.g. page tables, kernel
     stacks, slabs) from being allocated on the slow tier, or
  2) Allow migrating certain types of kernel memory from slow tier to
     fast tier.

At LSF/MM/BPF, I would like to discuss potential directions for addressing
this problem: enabling CXL memory while minimizing the resulting performance
degradation.

Restricting certain types of kernel allocations from slow tier
==============================================================

We could restrict some kernel allocations to fast tier by passing a
nodemask to __alloc_pages() (with only fast-tier nodes set), or by
introducing a GFP flag like __GFP_FAST_TIER that does the same thing.
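
As a very rough illustration (not an existing interface), the nodemask
variant could look roughly like the sketch below, using node_is_toptier()
from the memory tiering code to pick fast-tier nodes; fast_tier_nodemask()
and alloc_page_fast_tier() are hypothetical helpers:

  #include <linux/gfp.h>
  #include <linux/nodemask.h>
  #include <linux/topology.h>
  #include <linux/memory-tiers.h>

  /* Hypothetical: collect all top-tier (fast) nodes into a nodemask. */
  static nodemask_t fast_tier_nodemask(void)
  {
          nodemask_t mask = NODE_MASK_NONE;
          int nid;

          for_each_online_node(nid)
                  if (node_is_toptier(nid))
                          node_set(nid, &mask);
          return mask;
  }

  /* Hypothetical: allocate only from fast-tier nodes, never from CXL. */
  static struct page *alloc_page_fast_tier(gfp_t gfp, unsigned int order)
  {
          nodemask_t fast = fast_tier_nodemask();

          return __alloc_pages(gfp, order, numa_node_id(), &fast);
  }

A __GFP_FAST_TIER flag would presumably let the page allocator build such a
nodemask internally instead of requiring every call site to do it.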

This prevents kernel allocations from slow tier and thus avoids
performance degradation due to the high access latency of CXL.
However, binding all leaf page tables to fast tier might not be ideal
due to 1) increased latency from premature reclamation
and 2) premature OOM kill [1].

Migrating certain types of kernel allocations from slow to fast tier
====================================================================

Rather than binding kernel allocations to fast tier and risking premature
reclamation & OOM kills, policies for migrating kernel pages may be more
effective, such as:
  - Migrating page tables to fast tier,
    triggered by data-page promotion [1] (see the sketch below)
  - Migrating to fast tier when there is low memory pressure:
    - Migrating slab movable objects [2]
    - Migrating kernel stacks (if that's feasible)

although this sounds more intrusive, and we would need to think about robust
policies that do not regress existing, non-tiered memory systems.
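
To make the page-table case a little more concrete, here is a heavily
simplified, hypothetical sketch of promoting a leaf (PTE-level) page table
of a user process to a fast-tier node, in the spirit of [1]. It assumes
pgtable_t is a struct page pointer (as on x86) and glosses over split PTE
locks, pgtable ctor/dtor handling, mmu notifiers and concurrent hardware
walkers; promote_pte_table() is not an existing kernel interface:

  #include <linux/mm.h>
  #include <linux/pgtable.h>
  #include <asm/pgalloc.h>
  #include <asm/tlbflush.h>

  static int promote_pte_table(struct mm_struct *mm, pmd_t *pmd,
                               int fast_nid)
  {
          struct page *old = pmd_pgtable(*pmd);
          struct page *new = alloc_pages_node(fast_nid, GFP_PGTABLE_USER, 0);
          spinlock_t *ptl;

          if (!new)
                  return -ENOMEM;

          ptl = pmd_lock(mm, pmd);
          /* Copy all PTEs and repoint the PMD at the new table. */
          copy_page(page_address(new), page_address(old));
          pmd_populate(mm, pmd, new);
          spin_unlock(ptl);

          /* Paging-structure caches may still reference the old table. */
          flush_tlb_mm(mm);

          /* Split-ptlock dtor handling omitted for brevity. */
          __free_page(old);
          return 0;
  }

Whether something along these lines can be made safe and cheap enough (and
what the equivalent would be for kernel stacks or slab objects) is exactly
the kind of question I would like to discuss.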

Any opinions would be appreciated.
Thanks!

[1] https://dl.acm.org/doi/10.1145/3459898.3463907
[2] https://lore.kernel.org/linux-mm/20190411013441.5415-1-tobin@xxxxxxxxxx



