Re: [RFC v2 0/9] khugepaged: mTHP support

On 13/02/25 1:56 pm, Dev Jain wrote:


On 12/02/25 10:19 pm, Nico Pache wrote:
On Tue, Feb 11, 2025 at 5:50 AM Dev Jain <dev.jain@xxxxxxx> wrote:



On 11/02/25 6:00 am, Nico Pache wrote:
The following series provides khugepaged and madvise collapse with the
capability to collapse regions to mTHPs.

To achieve this, we generalize the khugepaged functions to no longer depend
on PMD_ORDER. Then, during the PMD scan, we keep track of chunks of pages
(defined by MTHP_MIN_ORDER) that are utilized. This info is tracked using a
bitmap. After the PMD scan is done, we do binary recursion on the bitmap to
find the optimal mTHP sizes for the PMD range. The restriction on
max_ptes_none is removed during the scan, to make sure we account for the
whole PMD range. max_ptes_none will be scaled by the attempted collapse
order to determine how full a THP must be to be eligible. If an mTHP
collapse is attempted but the range contains swapped-out or shared pages,
we don't perform the collapse.
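
(Roughly, as a sketch with assumed helper names range_is_full_enough() and
collapse_range() rather than the actual patch code, the binary recursion
over the bitmap looks like:)

    /* One bit per MTHP_MIN_ORDER-sized chunk of the PMD range. */
    #define CHUNKS_PER_PMD  (1UL << (HPAGE_PMD_ORDER - MTHP_MIN_ORDER))

    static void scan_bitmap(unsigned long *bitmap, int start, int nr_chunks)
    {
            /* Collapse this sub-range if enough of its chunks are utilized... */
            if (range_is_full_enough(bitmap, start, nr_chunks)) {
                    collapse_range(start, nr_chunks);
                    return;
            }
            if (nr_chunks == 1)
                    return;
            /* ...otherwise recurse into each half at the next lower order. */
            scan_bitmap(bitmap, start, nr_chunks / 2);
            scan_bitmap(bitmap, start + nr_chunks / 2, nr_chunks / 2);
    }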

With the default max_ptes_none=511, the code should keep most of its
original behavior. To exercise mTHP collapse we need to set
max_ptes_none<=255. With max_ptes_none > HPAGE_PMD_NR/2 you will experience
collapse "creep" and constantly promote mTHPs to the next available size.
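
(As a rough sketch of the order scaling, assuming a helper along these lines
rather than the literal patch code:)

    /*
     * Empty PTEs tolerated for a collapse attempt at 'order'. With the
     * default max_ptes_none = 511 and HPAGE_PMD_ORDER = 9, an order-4
     * (64K on 4K pages) attempt tolerates 511 >> 5 = 15 of its 16 PTEs
     * being none, while max_ptes_none = 255 tolerates only 7.
     */
    static unsigned int scaled_max_ptes_none(unsigned int order)
    {
            return khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
    }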

Patch 1:     Some refactoring to combine madvise_collapse and khugepaged
Patch 2:     Refactor/rename hpage_collapse
Patch 3-5:   Generalize khugepaged functions for arbitrary orders
Patch 6-9:   The mTHP patches

---------
   Testing
---------
- Built for x86_64, aarch64, ppc64le, and s390x
- selftests mm
- I created a test script that I used to push khugepaged to its limits while
     monitoring a number of stats and tracepoints. The code is available
     here[1] (run in legacy mode for these changes and set mTHP sizes to
     inherit). The summary from my testing was that there was no significant
     regression noticed through this test. In some cases my changes had
     better collapse latencies and were able to scan more pages in the same
     amount of time/work, but for the most part the results were consistent.
- redis testing. I tested these changes along with my defer changes
    (see followup post for more details).
- some basic testing on 64k page size.
- lots of general use. These changes have been running in my VM for some time.

Changes since V1 [2]:
- Minor bug fixes discovered during review and testing
- Removed dynamic allocations for bitmaps and made them stack-based.
- Adjusted the bitmap offset from u8 to u16 to support 64K page size.
- Updated trace events to include collapsing order info.
- Scaled max_ptes_none by order rather than scaling to a 0-100 scale.
- No longer require a chunk to be fully utilized before setting the bit. Use
     the same max_ptes_none scaling principle to achieve this.
- Skip mTHP collapse that requires swapin or shared handling. This helps prevent
     some of the "creep" that was discovered in v1.

[1] - https://gitlab.com/npache/khugepaged_mthp_test
[2] - https://lore.kernel.org/lkml/20250108233128.14484-1-npache@xxxxxxxxxx/

Nico Pache (9):
    introduce khugepaged_collapse_single_pmd to unify khugepaged and
      madvise_collapse
    khugepaged: rename hpage_collapse_* to khugepaged_*
    khugepaged: generalize hugepage_vma_revalidate for mTHP support
    khugepaged: generalize alloc_charge_folio for mTHP support
    khugepaged: generalize __collapse_huge_page_* for mTHP support
    khugepaged: introduce khugepaged_scan_bitmap for mTHP support
    khugepaged: add mTHP support
    khugepaged: improve tracepoints for mTHP orders
    khugepaged: skip collapsing mTHP to smaller orders

   include/linux/khugepaged.h         |   4 +
   include/trace/events/huge_memory.h |  34 ++-
   mm/khugepaged.c                    | 422 +++++++++++++++++++----------
   3 files changed, 306 insertions(+), 154 deletions(-)


Does this patchset suffer from the problem described here:
https://lore.kernel.org/all/8abd99d5-329f-4f8d-8680-c2d48d4963b6@xxxxxxx/
Hi Dev,

Sorry I meant to get back to you about that.

I understand your concern, but like I've mentioned before, the scan
with the read lock was done so we don't have to take the more expensive
locks, and could still gain insight into the state. You are right
that this info could become stale if the state changes dramatically,
but the collapse_isolate function will verify it and not collapse.

If the state changes dramatically, the _isolate function will verify it and fall back. And this fallback happens after following this costly path: retrieve a large folio from the buddy allocator -> swapin pages from the disk -> mmap_write_lock() -> anon_vma_lock_write() -> TLB flush on all CPUs -> fallback in _isolate(). If you do fail in _isolate(), doesn't it make sense to get the updated state for the next fallback order immediately, because we have prior information that we failed because of the PTE state? What your algorithm will do is *still* follow the costly path described above, and again fail in _isolate(), instead of failing in hpage_collapse_scan_pmd() like mine would.

The verification of the PTE state by the _isolate() function is the "no turning back" point of the algorithm. The verification by hpage_collapse_scan_pmd() is the "let us see if proceeding is even worth it, before we do costly operations" point of the algorithm.

From my testing, I found this to rarely happen.

Unfortunately, I am not very familiar with performance/load testing; I am fairly new to kernel programming, so I am getting there. But it really depends on the type of test you are running, what actually runs on memory-intensive systems, and so on. In fact, on loaded systems I would expect the PTE state to change dramatically. But still, no opinion here.


Also, khugepaged, my changes, and your changes are all victims of
this. Once we drop the read lock (to either allocate the folio, or
right before acquiring the write lock), the state can change. In your
case, yes, you are gathering more up-to-date information, but is it
really that important/worth it to retake locks and rescan for each
instance if we are about to reverify with the write lock taken?

You said "reverify": you are removing the verification, so this step won't be reverification, it will be verification. We do not want to verify *after* we have already done 95% of the latency-heavy work, only to learn that we are going to fail.

Algorithms in the kernel are, in general, of the following form: 1) verify that a condition is true, resulting in taking a control path -> 2) do a lot of work -> 3) the "no turning back" step, wherein before committing (by taking locks, say), we reverify that this is the control path we should be in. You are eliminating step 1).
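
(Sketched with made-up helper names, not the actual khugepaged code, the
shape is roughly:)

    static int collapse_one_range(struct mm_struct *mm, unsigned long addr)
    {
            int result;

            /* 1) Cheap eligibility check under the read lock. */
            mmap_read_lock(mm);
            result = scan_ptes(mm, addr);              /* hypothetical */
            mmap_read_unlock(mm);
            if (result != SCAN_SUCCEED)
                    return result;

            /* 2) The latency-heavy work: folio allocation, swap-in, ... */

            /* 3) "No turning back": reverify under the write lock. */
            mmap_write_lock(mm);
            result = revalidate_and_isolate(mm, addr); /* hypothetical */
            mmap_write_unlock(mm);
            return result;
    }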

Therefore, I will have to say that I disagree with your approach.

On top of this, in the subjective analysis in [1], point number 7 (along with point number 1) still stands, as does point number 4.

[1] https://lore.kernel.org/all/23023f48-95c6-4a24-ac8b-aba4b1a441b4@xxxxxxx/


So in my eyes, this is not a "problem"

Looks like the kernel scheduled us for a high-priority debate; I hope there's no deadlock :)


Cheers,
-- Nico



In any case, as Andrew notes, the first step is to justify that this functionality is beneficial to get in. The second step would be to compare the implementations. It would be great if someone from the community could jump in to test these things out :) but in the meantime I will begin working on the first step.









