On Sun, Jan 12, 2025 at 7:12 AM Dev Jain <dev.jain@xxxxxxx> wrote:
>
>
>
> On 11/01/25 3:31 am, Nico Pache wrote:
> > On Thu, Jan 9, 2025 at 9:56 PM Dev Jain <dev.jain@xxxxxxx> wrote:
> >>
> >>
> >>
> >> On 10/01/25 7:57 am, Nico Pache wrote:
> >>> On Wed, Jan 8, 2025 at 11:22 PM Dev Jain <dev.jain@xxxxxxx> wrote:
> >>>>
> >>>>
> >>>> On 09/01/25 5:01 am, Nico Pache wrote:
> >>>>> The following series provides khugepaged and madvise collapse with the
> >>>>> capability to collapse regions to mTHPs.
> >>>>>
> >>>>> To achieve this we generalize the khugepaged functions to no longer depend
> >>>>> on PMD_ORDER. Then during the PMD scan, we keep track of chunks of pages
> >>>>> (defined by MTHP_MIN_ORDER) that are fully utilized. This info is tracked
> >>>>> using a bitmap. After the PMD scan is done, we do binary recursion on the
> >>>>> bitmap to find the optimal mTHP sizes for the PMD range. The restriction
> >>>>> on max_ptes_none is removed during the scan, to make sure we account for
> >>>>> the whole PMD range. max_ptes_none is mapped to a 0-100 range to
> >>>>> determine how full a mTHP order needs to be before collapsing it.
> >>>>>
> >>>>> Some design choices to note:
> >>>>> - bitmap structures are allocated dynamically because on some arches
> >>>>>   (like PowerPC) the value of MTHP_BITMAP_SIZE cannot be computed at
> >>>>>   compile time, leading to warnings.
> >>>>> - The recursion is masked through a stack structure.
> >>>>> - A MTHP_MIN_ORDER was added to compress the bitmap and ensure it is
> >>>>>   64 bits on x86. This provides some optimization on the bitmap operations.
> >>>>>   If other arches/configs that have more than 512 PTEs per PMD want to
> >>>>>   compress their bitmap further, we can change this value per arch.
> >>>>>
> >>>>> Patch 1-2: Some refactoring to combine madvise_collapse and khugepaged
> >>>>> Patch 3: A minor "fix"/optimization
> >>>>> Patch 4: Refactor/rename hpage_collapse
> >>>>> Patch 5-7: Generalize khugepaged functions for arbitrary orders
> >>>>> Patch 8-11: The mTHP patches
> >>>>>
> >>>>> This series acts as an alternative to Dev Jain's approach [1]. The two
> >>>>> series differ in a few ways:
> >>>>> - My approach uses a bitmap to store the state of the linear scan_pmd to
> >>>>>   then determine potential mTHP batches. Dev incorporates his directly
> >>>>>   into the scan, and will try each available order.
> >>>>> - Dev is attempting to optimize the locking, while my approach keeps the
> >>>>>   locking changes to a minimum. I believe his changes are not safe for
> >>>>>   uffd.
> >>>>> - Dev's changes only work for khugepaged, not madvise_collapse (although
> >>>>>   I think that was by choice and it could easily support madvise).
> >>>>> - Dev scales all khugepaged sysfs tunables by order, while I'm removing
> >>>>>   the restriction of max_ptes_none and converting it to a scale to
> >>>>>   determine a (m)THP threshold.
> >>>>> - Dev turns on khugepaged if any order is available, while mine still
> >>>>>   only runs if PMDs are enabled. I like Dev's approach and will most
> >>>>>   likely do the same in my PATCH posting.
> >>>>> - mTHPs need their ref count updated to 1<<order, which Dev is missing.
> >>>>>
> >>>>> Patch 11 was inspired by one of Dev's changes.
> >>>>>
> >>>>> [1] https://lore.kernel.org/lkml/20241216165105.56185-1-dev.jain@xxxxxxx/
> >>>>>
> >>>>> Nico Pache (11):
> >>>>>   introduce khugepaged_collapse_single_pmd to collapse a single pmd
> >>>>>   khugepaged: refactor madvise_collapse and khugepaged_scan_mm_slot
> >>>>>   khugepaged: Don't allocate khugepaged mm_slot early
> >>>>>   khugepaged: rename hpage_collapse_* to khugepaged_*
> >>>>>   khugepaged: generalize hugepage_vma_revalidate for mTHP support
> >>>>>   khugepaged: generalize alloc_charge_folio for mTHP support
> >>>>>   khugepaged: generalize __collapse_huge_page_* for mTHP support
> >>>>>   khugepaged: introduce khugepaged_scan_bitmap for mTHP support
> >>>>>   khugepaged: add mTHP support
> >>>>>   khugepaged: remove max_ptes_none restriction on the pmd scan
> >>>>>   khugepaged: skip collapsing mTHP to smaller orders
> >>>>>
> >>>>>  include/linux/khugepaged.h |   4 +-
> >>>>>  mm/huge_memory.c           |   3 +-
> >>>>>  mm/khugepaged.c            | 436 +++++++++++++++++++++++++------------
> >>>>>  3 files changed, 306 insertions(+), 137 deletions(-)
> >>>>
> >>>> Before I take a proper look at your series, can you please include any testing
> >>>> you may have done?
> >>>
> >>> I built these changes for the following arches: x86_64, arm64,
> >>> arm64-64k, ppc64le, s390x
> >>>
> >>> x86 testing:
> >>> - Selftests mm
> >>> - some stress-ng tests
> >>> - compile kernel
> >>> - I did some tests with my defer [1] set on top. This pushes all the
> >>>   work to khugepaged, which removes the noise of all the PF allocations.
> >>>
> >>> I recently got an ARM64 machine and did some simple sanity tests (on
> >>> both 4k and 64k) like selftests, stress-ng, and playing around with
> >>> the tunables, etc.
> >>>
> >>> I will also be running all the builds through our CI and perf testing
> >>> environments before posting.
> >>>
> >>> [1] https://lore.kernel.org/lkml/20240729222727.64319-1-npache@xxxxxxxxxx/
> >>>
> >>>>
> >>>
> >> I tested your series with the program I was using and it is not working;
> >> can you please confirm it.
> >
> > Yes, this is expected because you are not fully filling any 32K chunk
> > (MIN_MTHP_ORDER) so no bit is ever set.
> That is weird, because if this is the case, then PMD-collapse should
> have also failed, but that succeeded. Do you have some userspace program
> I can test with?

Not exactly: if max_ptes_none is still 511, the old behavior is kept. I
modified your program to set the first 8 pages (32k chunk) in every 64k
region.
#include <unistd.h>
#include <sys/ioctl.h>
#include <string.h>
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <sys/random.h>
#include <assert.h>

int main(int argc, char *argv[])
{
        char *ptr;
        unsigned long mthp_size = (1UL << 16); // 64 KB chunk size
        size_t chunk_size = (1UL << 25);       // 32 MB total size

        // mmap() to allocate memory at a specific address (1 GB address)
        ptr = mmap((void *)(1UL << 30), chunk_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (((unsigned long)ptr) != (1UL << 30)) {
                printf("mmap did not work on required address\n");
                return 1;
        }

        // Touch the first 8 pages in every 64 KB chunk
        for (size_t i = 0; i < chunk_size; i += mthp_size) {
                // Touch the first 8 pages within the 64 KB chunk (8 * 4 KB = 32 KB)
                for (int j = 0; j < 8; ++j) {
                        ptr[i + j * 4096] = i + j * 4096; // Touch the first byte of each page
                }
        }

        // Use madvise() to advise the kernel to use huge pages for this memory
        if (madvise(ptr, chunk_size, MADV_HUGEPAGE)) {
                perror("madvise");
                return 1;
        }

        sleep(100); // Sleep to allow time for the kernel to process the advice

        return 0;
}

There are some rounding errors in how I compute the threshold_bits... I
think I will adopt how you do the max_ptes_none shifting for better
accuracy. Currently, if you run this with max_ptes_none=255 (or even lower
values like 200...), it will still collapse to a 64k chunk, when in reality
it should only do 32k, because only half the bitmap is set for that order
and 255 < 50% of 512. I'm adding a threshold to the bitmap_set and doing
better scaling like you do. My next version should handle the example code
better.
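[Editor's note: for illustration only, here is a rough userspace model of
the bitmap walk and threshold scaling discussed in this thread. It is not
code from the series; the helper names, the explicit stack, and the
round-up to whole 32K chunks are assumptions made for this sketch, with
max_ptes_none scaled down per order by shifting, as described above.]

/*
 * Rough model only, not code from the series. The bitmap records which
 * 32K (MIN_MTHP_ORDER) chunks of one PMD range are fully populated; an
 * explicit stack stands in for the binary recursion over orders, and
 * max_ptes_none is scaled down per order by shifting. Names and the
 * round-up to whole chunks are assumptions for this sketch.
 */
#include <stdio.h>

#define HPAGE_PMD_ORDER 9       /* 512 PTEs per PMD (x86, 4K pages) */
#define MIN_MTHP_ORDER  3       /* 8 pages = one 32K chunk per bit */
#define NR_CHUNKS       (1 << (HPAGE_PMD_ORDER - MIN_MTHP_ORDER))      /* 64 */

struct range { int offset; int order; };        /* both in 32K-chunk units */

static int chunks_set(const unsigned char *bitmap, int offset, int nr)
{
        int set = 0;

        for (int i = 0; i < nr; i++)
                set += bitmap[offset + i];
        return set;
}

int main(void)
{
        unsigned char bitmap[NR_CHUNKS] = { 0 };
        struct range stack[2 * NR_CHUNKS];
        int top = 0, max_ptes_none = 255;

        /* model the test program above: first 32K of every 64K is populated */
        for (int i = 0; i < NR_CHUNKS; i += 2)
                bitmap[i] = 1;

        /* start with the whole PMD range and work down the orders */
        stack[top++] = (struct range){ 0, HPAGE_PMD_ORDER - MIN_MTHP_ORDER };

        while (top > 0) {
                struct range r = stack[--top];
                int nr = 1 << r.order;
                int order = r.order + MIN_MTHP_ORDER;   /* order in pages */
                int pages = 1 << order;
                /* max_ptes_none is defined per PMD; shift it down to this order */
                int none_allowed = max_ptes_none >> (HPAGE_PMD_ORDER - order);
                /* round up to whole chunks, since the bitmap sees nothing finer */
                int needed = (pages - none_allowed + (1 << MIN_MTHP_ORDER) - 1)
                                >> MIN_MTHP_ORDER;

                if (chunks_set(bitmap, r.offset, nr) >= needed) {
                        printf("collapse %dK at chunk %d\n", pages * 4, r.offset);
                } else if (r.order > 0) {
                        /* not full enough: split in half, try the next lower order */
                        stack[top++] = (struct range){ r.offset, r.order - 1 };
                        stack[top++] = (struct range){ r.offset + nr / 2, r.order - 1 };
                }
        }
        return 0;
}

[With max_ptes_none=255 this model splits every half-populated 64K window
and reports collapsing only the populated 32K chunks, while at 511 a single
set chunk is enough to collapse the whole PMD range, which matches the
"old behavior is kept" and 32K-vs-64K expectations described above.]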