On 11/02/25 6:00 am, Nico Pache wrote:
The following series provides khugepaged and madvise_collapse with the capability to collapse regions to mTHPs. To achieve this we generalize the khugepaged functions so that they no longer depend on PMD_ORDER. Then, during the PMD scan, we keep track of which chunks of pages (defined by MTHP_MIN_ORDER) are utilized; this information is tracked in a bitmap. After the PMD scan is done, we do binary recursion on the bitmap to find the optimal mTHP sizes for the PMD range.

The restriction on max_ptes_none is removed during the scan, to make sure we account for the whole PMD range. max_ptes_none is then scaled by the attempted collapse order to determine how full a THP must be to be eligible for collapse. If an mTHP collapse is attempted but the range contains swapped-out or shared pages, we don't perform the collapse.

With the default max_ptes_none=511, the code should keep most of its original behavior. To exercise mTHP collapse, set max_ptes_none <= 255. With max_ptes_none > HPAGE_PMD_NR/2 you will experience collapse "creep", constantly promoting mTHPs to the next available size.

Patch 1: some refactoring to combine madvise_collapse and khugepaged
Patch 2: refactor/rename hpage_collapse
Patches 3-5: generalize khugepaged functions for arbitrary orders
Patches 6-9: the mTHP patches

--------- Testing ---------
- Built for x86_64, aarch64, ppc64le, and s390x
- selftests mm
- I created a test script that I used to push khugepaged to its limits while monitoring a number of stats and tracepoints. The code is available here [1] (run in legacy mode for these changes and set the mTHP sizes to inherit). The summary of my testing was that no significant regression was noticed with this test. In some cases my changes had better collapse latencies and were able to scan more pages in the same amount of time/work, but for the most part the results were consistent.
- redis testing. I tested these changes along with my defer changes (see the follow-up post for more details).
- Some basic testing on 64k page size.
- Lots of general use. These changes have been running in my VM for some time.

Changes since V1 [2]:
- Minor bug fixes discovered during review and testing
- Removed dynamic allocations for bitmaps and made them stack based
- Adjusted bitmap offset from u8 to u16 to support 64k page size
- Updated trace events to include collapse order info
- Scaled max_ptes_none by order rather than scaling to a 0-100 scale
- No longer require a chunk to be fully utilized before setting the bit; the same max_ptes_none scaling principle is used to achieve this
- Skip mTHP collapses that require swap-in or shared-page handling; this helps prevent some of the "creep" discovered in v1

[1] - https://gitlab.com/npache/khugepaged_mthp_test
[2] - https://lore.kernel.org/lkml/20250108233128.14484-1-npache@xxxxxxxxxx/

Nico Pache (9):
  introduce khugepaged_collapse_single_pmd to unify khugepaged and
    madvise_collapse
  khugepaged: rename hpage_collapse_* to khugepaged_*
  khugepaged: generalize hugepage_vma_revalidate for mTHP support
  khugepaged: generalize alloc_charge_folio for mTHP support
  khugepaged: generalize __collapse_huge_page_* for mTHP support
  khugepaged: introduce khugepaged_scan_bitmap for mTHP support
  khugepaged: add mTHP support
  khugepaged: improve tracepoints for mTHP orders
  khugepaged: skip collapsing mTHP to smaller orders

 include/linux/khugepaged.h         |   4 +
 include/trace/events/huge_memory.h |  34 ++-
 mm/khugepaged.c                    | 422 +++++++++++++++++++----------
 3 files changed, 306 insertions(+), 154 deletions(-)
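To make the collapse decision described above concrete, here is a minimal userspace sketch of the order-scaled max_ptes_none check and the binary recursion over the chunk bitmap. This is an illustration only, not code from the series: CHUNK_ORDER, chunk_bitmap, count_utilized_ptes(), and scaled_none() are hypothetical stand-ins for the MTHP_MIN_ORDER bitmap machinery, and the real series additionally restricts itself to enabled mTHP orders and skips ranges that would need swap-in or shared-page handling.

/*
 * Sketch of the bitmap-driven mTHP collapse decision -- illustrative
 * only, not the kernel implementation. All names and constants here
 * are assumptions for demonstration.
 */
#include <stdbool.h>
#include <stdio.h>

#define PMD_ORDER	9			/* 512 PTEs per PMD (x86_64) */
#define HPAGE_PMD_NR	(1 << PMD_ORDER)
#define CHUNK_ORDER	3			/* stand-in for MTHP_MIN_ORDER */
#define NR_CHUNKS	(HPAGE_PMD_NR >> CHUNK_ORDER)

static unsigned int max_ptes_none = 511;	/* tunable, 0..511 */

/* One bit per CHUNK_ORDER-sized chunk the PMD scan found utilized. */
static bool chunk_bitmap[NR_CHUNKS];

/* max_ptes_none scaled down to the attempted collapse order. */
static unsigned int scaled_none(unsigned int order)
{
	return max_ptes_none >> (PMD_ORDER - order);
}

static unsigned int count_utilized_ptes(unsigned int first_chunk,
					unsigned int nr_chunks)
{
	unsigned int i, ptes = 0;

	for (i = first_chunk; i < first_chunk + nr_chunks; i++)
		if (chunk_bitmap[i])
			ptes += 1 << CHUNK_ORDER;
	return ptes;
}

/*
 * Binary recursion over the bitmap: try the largest order first; if
 * the range is not full enough for that order, split it in half and
 * retry both halves at the next lower order.
 */
static void scan_bitmap(unsigned int first_chunk, unsigned int order)
{
	unsigned int nr_ptes = 1 << order;
	unsigned int nr_chunks = nr_ptes >> CHUNK_ORDER;

	if (order < CHUNK_ORDER)	/* bitmap resolution ends here */
		return;

	/* Eligible when at most scaled_none(order) PTEs are empty. */
	if (count_utilized_ptes(first_chunk, nr_chunks) >=
	    nr_ptes - scaled_none(order)) {
		printf("collapse order-%u at chunk %u\n", order, first_chunk);
		return;
	}

	scan_bitmap(first_chunk, order - 1);
	scan_bitmap(first_chunk + nr_chunks / 2, order - 1);
}

int main(void)
{
	unsigned int i;

	max_ptes_none = 255;		/* <= 255 so mTHP collapse can trigger */
	for (i = 0; i < NR_CHUNKS / 2; i++)	/* utilize the first half */
		chunk_bitmap[i] = true;

	scan_bitmap(0, PMD_ORDER);
	return 0;
}

With max_ptes_none = 255 and only the first half of the PMD range utilized, the sketch rejects the order-9 collapse (257 utilized PTEs would be required, only 256 are present) and collapses the populated half at order 8, leaving the empty half alone -- one way to picture why values <= 255 are needed to exercise mTHP collapse.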
Does this patchset suffer from the problem described here? https://lore.kernel.org/all/8abd99d5-329f-4f8d-8680-c2d48d4963b6@xxxxxxx/