Re: [RFC 00/11] khugepaged: mTHP support

On 20/01/2025 18:39, David Hildenbrand wrote:
> On 20.01.25 17:27, Ryan Roberts wrote:
>> On 20/01/2025 13:56, David Hildenbrand wrote:
>>> On 20.01.25 14:37, Ryan Roberts wrote:
>>>> On 20/01/2025 12:54, David Hildenbrand wrote:
>>>>>>> I think the one problem that emerged during review of Dev's series, which
>>>>>>> we don't have a proper solution to yet, is the issue of "creep", where
>>>>>>> regions can be collapsed to progressively higher orders through iterative
>>>>>>> scans. At each collapse, the required thresholds (e.g. max_ptes_none) are
>>>>>>> met, and the collapse effectively adds more non-none ptes so the next scan
>>>>>>> will then collapse to an even higher order. Does your solution suffer from
>>>>>>> this (theoretical/edge case) issue? If not, how did you solve it?
>>>>>>
>>>>>> Yes, sadly it suffers from the same issue. Bringing max_ptes_none much
>>>>>> lower as a default would "help".
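(Aside, to make the "creep" concrete: below is a toy userspace simulation, not
code from either series. The proportional scaling of max_ptes_none to the
collapse order, and the starting values, are assumptions chosen purely to make
the effect visible. A region with just enough populated PTEs to qualify at the
smallest order gets collapsed; the collapse fills that range, so each
subsequent scan finds the next order up eligible, all the way to PMD size,
even though a direct PMD-size check would have rejected the region.)

#include <stdio.h>

#define HPAGE_PMD_ORDER	9	/* 512 PTEs per PMD */

int main(void)
{
	/* Illustrative values only: not the defaults. */
	unsigned int max_ptes_none = 256;
	unsigned int present = 8;	/* populated PTEs, all in one 64K block */
	unsigned int order;

	/* A direct PMD-size check rejects this region outright... */
	printf("PMD check: %u none > %u allowed -> no collapse\n",
	       512 - present, max_ptes_none);

	/* ...but iterative scans starting at order 4 (64K) creep up to PMD. */
	for (order = 4; order <= HPAGE_PMD_ORDER; order++) {
		unsigned int nr_ptes = 1u << order;
		/* Scale the PMD-sized tunable down to this order. */
		unsigned int allowed = max_ptes_none >> (HPAGE_PMD_ORDER - order);

		if (nr_ptes - present > allowed)
			break;
		present = nr_ptes;	/* the collapse fills the whole range */
		printf("order %u: collapsed, %u PTEs now present\n",
		       order, present);
	}
	return 0;
}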
>>>>>
>>>>> Can we just keep it simple and only support max_ptes_none = 511 ("pagefault
>>>>> behavior" -- PMD_NR_PAGES - 1) or max_ptes_none = 0 ("deferred behavior") and
>>>>> document that the other weird configurations will make mTHP skip, because
>>>>> "weird and unexpected"? :)
>>
>> nit: Rather than values of max_ptes_none other than 0 and max making mTHP skip,
>> perhaps it's better to say we round to the closest of 0 and max?
> 
> Maybe. Rounding down always implies doing something not necessarily desired.
> 
> In any case, I assume most setups just have the default values here ... :)
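(For what it's worth, a minimal sketch of what the rounding variant could look
like; this is my own illustration, not from the series, with 512 standing in
for HPAGE_PMD_NR:)

#include <stdio.h>

#define HPAGE_PMD_NR	512	/* PTEs per PMD */

/*
 * Hypothetical helper: for mTHP collapse only 0 ("deferred behaviour") and
 * HPAGE_PMD_NR - 1 ("pagefault behaviour") are honoured; anything else is
 * rounded to whichever of the two is closer.
 */
static unsigned int mthp_max_ptes_none(unsigned int max_ptes_none)
{
	return max_ptes_none >= HPAGE_PMD_NR / 2 ? HPAGE_PMD_NR - 1 : 0;
}

int main(void)
{
	printf("%u -> %u\n", 64u, mthp_max_ptes_none(64));	/* rounds to 0 */
	printf("%u -> %u\n", 400u, mthp_max_ptes_none(400));	/* rounds to 511 */
	return 0;
}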
> 
>>
>>>>>
>>>>
>>>> That sounds like a great simplification in principle!
>>>
>>> And certainly a much easier place to start :)
>>>
>>> If we ever get the request to support something else, maybe that's also where we
>>> can learn *why*, and what we would actually want to do with mTHP.
>>>
>>>> We would need to consider
>>>> the swap and shared tunables too though. Perhaps we can pull a similar trick
>>>> with those?
>>>
>>> Swapped and shared are a bit more challenging, because they are set to "/ 2" or
>>> "/ 8" heuristics.
>>>
>>>
>>> One simple starting point here is of course to say "when collapsing mTHP, all
>>> have to be unshared and all have to be swapped in", so to essentially ignore
>>> both tunables (in a memory friendly way, as if they are set to 0) for mTHP
>>> collapse and worry about that later, when really required.
>>
>> For swap, if we assume we start with the whole VMA swapped out, I think setting
>> max_ptes_swap to 0 could still cause the "creep" problem if faulting pages back
>> in sequentially? I guess that's creep due to faulting pattern though, so at
>> least it's not due to collapse. Doesn't feel ideal though.
>>
>> I'm not sure what the semantic of "shared" is? I'm guessing it's specifically
>> for private COWed pages, and khugepaged will trigger the COW on collapse?
> 
> Yes.
> 
>> So
>> again depending on the pattern of writes we could still end up with creep in a
>> similar way to swap?
> 
> I think in regard to both, "yes": a simple starting point but not necessarily
> what we want long term. The creep is at least "not wasting more memory", because
> we don't collapse where PMD wouldn't have collapsed.
> 
> After all, right now we don't collapse to mTHP at all; with this we would
> collapse to mTHP in many scenarios, so we don't have to be perfect initially.
> 
> Deriving stuff for small THP sizes when configured for PMD THP sizes is not easy
> to do right.
> 
>>
>>>
>>> Two alternatives I discussed with Nico for these (not sure which is implemented
>>> here) is to calculate it proportionally to the folio order we are collapsing:
>>
>> You're only listing one option here... what's the other one you discussed?
>>
> 
> Ah sorry, reshuffled it and then had to rush.
> 
> The other thing I had in mind is to scan the whole PMD range, and skip the
> whole PMD range if it doesn't obey the max_ptes_* stuff. Not perfect, but it
> will mean that we behave just like PMD collapse would, unless I am missing
> something.

Hmm, that's an interesting idea; if I've understood correctly, we would
effectively test the PMD for collapse as if we were collapsing to PMD-size, but
then do the actual collapse to the "highest allowed order" (dictated by what's
enabled + MADV_HUGEPAGE config).

I'm not so sure this is a good way to go; there would be no way to support VMAs
(or parts of VMAs) that don't span a full PMD. And I can imagine we might see
memory bloat; imagine you have 2M=madvise, 64K=always, max_ptes_none=511, and
let's say we have a 2M (aligned portion of a) VMA that does NOT have
MADV_HUGEPAGE set and has a single page populated. It passes the PMD-size test,
but we opt to collapse to 64K (since 2M=madvise). So now we end up with 32x 64K
folios, 31 of which are all zeros. We have spent the same amount of memory as if
2M=always. Perhaps that's a detail that could be solved by ignoring fully none
64K blocks when collapsing to 64K...
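
(To spell out the numbers in that scenario, a toy sketch; the policy encoding
and names are mine, just mirroring the 2M=madvise / 64K=always example. The
range passes the PMD-size test, 2M is ruled out because the VMA lacks
MADV_HUGEPAGE, so the highest allowed order is 4, and the collapse allocates
32 x 64K folios of which 31 are pure zero-fill:)

#include <stdbool.h>
#include <stdio.h>

#define PMD_ORDER	9	/* 2M with 4K pages */

/* Hypothetical per-order policy: 2M=madvise, 64K=always, others never. */
static bool order_allowed(int order, bool vma_has_madv_hugepage)
{
	if (order == 9)
		return vma_has_madv_hugepage;
	return order == 4;
}

/* Collapse target: the highest order the policy allows for this VMA. */
static int highest_allowed_order(bool vma_has_madv_hugepage)
{
	int order;

	for (order = PMD_ORDER; order >= 2; order--)
		if (order_allowed(order, vma_has_madv_hugepage))
			return order;
	return -1;
}

int main(void)
{
	int order = highest_allowed_order(false);	/* no MADV_HUGEPAGE */
	int nr_folios = 1 << (PMD_ORDER - order);

	/* One 4K page was populated; the rest is zero-filled by the collapse. */
	printf("collapse to order %d -> %d folios, %d of them all zeros\n",
	       order, nr_folios, nr_folios - 1);
	return 0;
}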

Personally, I think your "enforce simplification of the tunables for mTHP
collapse" idea is the best we have so far.

But I'll briefly push back against your pushback on the per-VMA cursor idea. It
strikes me that this could be useful for khugepaged regardless of mTHP support.
Today, it starts scanning a VMA, collapses the first PMD it finds that meets the
requirements, then switches to scanning another VMA. When it eventually gets
back to scanning the first VMA, it starts from the beginning again. Wouldn't a
cursor help reduce the amount of scanning it has to do?
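
(Purely to illustrate the shape of the cursor idea; the struct, field and
function names below are hypothetical, not a proposal for the actual data
structures: remember where the last scan of a VMA stopped and resume there
instead of at vm_start.)

#include <stdio.h>

#define PMD_SIZE	0x200000UL

/* Toy stand-in for a VMA with a hypothetical per-VMA scan cursor. */
struct toy_vma {
	unsigned long vm_start;
	unsigned long vm_end;
	unsigned long scan_cursor;	/* hypothetical: where the last scan stopped */
};

/* Resume from the cursor if it still falls inside the VMA, else wrap around. */
static unsigned long scan_resume_address(struct toy_vma *vma)
{
	if (vma->scan_cursor < vma->vm_start || vma->scan_cursor >= vma->vm_end)
		return vma->vm_start;
	return vma->scan_cursor;
}

int main(void)
{
	struct toy_vma vma = { 0x400000UL, 0x400000UL + 32 * PMD_SIZE, 0 };

	/* First visit: no cursor yet, so start at vm_start. */
	unsigned long addr = scan_resume_address(&vma);

	/* Say this pass stopped after dealing with the first three PMDs. */
	vma.scan_cursor = addr + 3 * PMD_SIZE;

	/* The next visit picks up there instead of rescanning from vm_start. */
	printf("resume at %#lx (vm_start %#lx)\n",
	       scan_resume_address(&vma), vma.vm_start);
	return 0;
}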

> 
> 
>>>
>>> Assuming max_ptes_swap = 64 (PMD: 512 PTEs) and we are collapsing a 1 MiB mTHP
>>> (256 PTEs), 32 PTEs would be allowed to be swapped out.
>>
>> Yeah this is exactly what Dev's version is doing at the moment. But that's the
>> behaviour that leads to the "creep" problem.
> 
> Right.
> 
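(For reference, the proportional scaling discussed above boils down to a single
shift; a minimal sketch with a made-up helper name, reproducing the 1 MiB
example:)

#include <stdio.h>

#define HPAGE_PMD_ORDER	9	/* 512 PTEs per PMD */

/* Scale a PMD-sized tunable down to the order being collapsed. */
static unsigned int scaled_tunable(unsigned int pmd_value, unsigned int order)
{
	return pmd_value >> (HPAGE_PMD_ORDER - order);
}

int main(void)
{
	/* max_ptes_swap = 64; collapsing a 1 MiB mTHP (order 8, 256 PTEs). */
	printf("allowed swapped-out PTEs: %u\n", scaled_tunable(64, 8)); /* 32 */
	return 0;
}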




