On 20.01.25 17:27, Ryan Roberts wrote:
On 20/01/2025 13:56, David Hildenbrand wrote:
On 20.01.25 14:37, Ryan Roberts wrote:
On 20/01/2025 12:54, David Hildenbrand wrote:
I think the one problem that emerged during review of Dev's series, which
we don't have a proper solution to yet, is the issue of "creep", where
regions can be collapsed to progressively higher orders through iterative
scans. At each collapse, the required thresholds (e.g. max_ptes_none) are
met, and the collapse effectively adds more non-none PTEs, so the next
scan will then collapse to an even higher order. Does your solution suffer
from this (theoretical/edge case) issue? If not, how did you solve it?
Yes, sadly it suffers from the same issue. Bringing max_ptes_none much
lower as a default would "help".
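For the archives, here is a tiny user-space model of the creep (purely
illustrative, not khugepaged code; the max_ptes_none value of 256, the
starting population, and the minimum order are made-up numbers). With a
proportionally scaled threshold, each collapse fills the collapsed range
with non-none PTEs, which makes the next-higher order pass on a later scan:

/* creep.c: toy model of mTHP collapse "creep". Not kernel code. */
#include <stdio.h>

#define HPAGE_PMD_ORDER 9    /* 512 PTEs per PMD (4 KiB pages assumed) */
#define MAX_PTES_NONE   256  /* hypothetical tunable value */

int main(void)
{
        unsigned int present = 8; /* populated PTEs in an aligned region */

        for (int order = 4; order <= HPAGE_PMD_ORDER; order++) {
                unsigned int nr_ptes = 1u << order;
                /* scale the PMD-level tunable down to this order */
                unsigned int scaled = MAX_PTES_NONE >> (HPAGE_PMD_ORDER - order);

                if (nr_ptes - present > scaled)
                        break; /* too many none PTEs: no collapse */

                printf("order %d collapses (%u present, %u none allowed)\n",
                       order, present, scaled);
                present = nr_ptes; /* collapse populates the whole range */
        }
        return 0;
}

Starting from 8 populated PTEs in an order-4 region, this ratchets all the
way up to a full PMD collapse, even though only 8 of 512 PTEs were ever
touched by the workload.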
Can we just keep it simple and only support max_ptes_none = 511 ("pagefault
behavior" -- PMD_NR_PAGES - 1) or max_ptes_none = 0 ("deferred behavior") and
document that the other weird configurations will make mTHP skip, because "weird
and unexpected"? :)
nit: Rather than values of max_ptes_none other than 0 and max making mTHP skip,
perhaps it's better to say we round to the closest of 0 and max?
Maybe. Rounding down always implies doing something not necessarily desired.
In any case, I assume most setups just have the default values here ... :)
That sounds like a great simplification in principle!
And certainly a much easier place to start :)
If we ever get the request to support something else, maybe that's also where we
can learn *why*, and what we would actually want to do with mTHP.
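As a sketch of what that simplification could look like (hypothetical
helper and names, not actual khugepaged code; HPAGE_PMD_NR is the real
kernel constant for the 512 PTEs per PMD):

/* Honour only max_ptes_none == 511 (collapse as eagerly as the page
 * fault path) or == 0 (only fully populated ranges collapse), and skip
 * mTHP collapse for any other configuration. */
#include <stdbool.h>

#define HPAGE_PMD_NR 512 /* PTEs per PMD with 4 KiB pages */

static bool mthp_none_policy_ok(unsigned int max_ptes_none,
                                unsigned int nr_none)
{
        if (max_ptes_none == HPAGE_PMD_NR - 1)
                return true;            /* "pagefault behavior" */
        if (max_ptes_none == 0)
                return nr_none == 0;    /* "deferred behavior" */
        return false;                   /* weird config: skip mTHP */
}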
We would need to consider
the swap and shared tunables too though. Perhaps we can pull a similar trick
with those?
Swapped and shared are a bit more challenging, because they default to
"/ 2" and "/ 8" heuristics (max_ptes_shared = 256, max_ptes_swap = 64).
One simple starting point here is of course to say "when collapsing mTHP,
all PTEs have to be unshared and all have to be swapped in", essentially
ignoring both tunables (in a memory friendly way, as if they were set to
0) for mTHP collapse, and worrying about that later, when really required.
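A minimal sketch of that starting point (the helper name is made up; this
just encodes "treat both tunables as 0 for mTHP"):

/* For mTHP collapse, ignore max_ptes_swap and max_ptes_shared and
 * require everything to be swapped in and exclusive, i.e. behave as
 * if both tunables were 0. Hypothetical, not kernel code. */
#include <stdbool.h>

static bool mthp_swap_shared_ok(unsigned int nr_swap, unsigned int nr_shared)
{
        return nr_swap == 0 && nr_shared == 0;
}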
For swap, if we assume we start with the whole VMA swapped out, I think setting
max_ptes_swap to 0 could still cause the "creep" problem if faulting pages back
in sequentially? I guess that's creep due to faulting pattern though, so at
least it's not due to collapse. Doesn't feel ideal though.
I'm not sure what the semantics of "shared" are? I'm guessing it's specifically
for private COWed pages, and khugepaged will trigger the COW on collapse?
Yes.
So again, depending on the pattern of writes, we could still end up with
creep in a similar way to swap?
I think in regard to both: "yes". So this is a simple starting point but
not necessarily what we want long term. The creep at least "doesn't waste
more memory", because we don't collapse where PMD collapse wouldn't have.
After all, right now we don't collapse mTHP at all; with this we would
collapse mTHP in many scenarios, so we don't have to be perfect initially.
Deriving behavior for small THP sizes from tunables configured for PMD
THP sizes is not easy to do right.
There are two alternatives I discussed with Nico for these (not sure which
is implemented here); one is to calculate the limit proportionally to the
folio order we are collapsing:
You're only listing one option here... what's the other one you discussed?
Ah sorry, reshuffled it and then had to rush.
The other thing I had in mind is to scan the whole PMD range, and skip
the whole PMD range if it doesn't obey the max_ptes_* limits. Not perfect,
but it will mean that we behave just like PMD collapse would, unless I am
missing something.
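Roughly (hypothetical sketch, just restating the idea above as code):
evaluate the max_ptes_* limits against the full 512-PTE range, and only
look for mTHP collapse candidates inside PMD ranges that would also have
been eligible for PMD collapse:

/* Gate mTHP collapse on PMD-range eligibility, so mTHP collapse can
 * never happen where PMD collapse wouldn't have. Not kernel code. */
#include <stdbool.h>

static bool pmd_range_eligible(unsigned int nr_none, unsigned int nr_swap,
                               unsigned int nr_shared,
                               unsigned int max_ptes_none,
                               unsigned int max_ptes_swap,
                               unsigned int max_ptes_shared)
{
        /* The same thresholds khugepaged applies for a PMD collapse,
         * checked over the whole PMD range before any mTHP attempt. */
        return nr_none <= max_ptes_none &&
               nr_swap <= max_ptes_swap &&
               nr_shared <= max_ptes_shared;
}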
Assuming max_ptes_swap = 64 (PMD: 512 PTEs) and we are collapsing a 1 MiB mTHP
(256 PTEs), 32 PTEs would be allowed to be swapped out.
Yeah this is exactly what Dev's version is doing at the moment. But that's the
behaviour that leads to the "creep" problem.
Right.
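For completeness, the proportional scaling above as a worked example
(assuming 4 KiB pages, so a 1 MiB mTHP is order 8 / 256 PTEs; this is a
sketch of the scaling rule as described, not code from Dev's series):

/* Scale a PMD-level max_ptes_* tunable down to a smaller collapse
 * order by halving it per order, e.g. max_ptes_swap = 64 allows 32
 * swapped-out PTEs for a 1 MiB (order-8) mTHP. */
#include <assert.h>

#define HPAGE_PMD_ORDER 9 /* 512 PTEs per PMD with 4 KiB pages */

static unsigned int scaled_max_ptes(unsigned int max_ptes, int order)
{
        return max_ptes >> (HPAGE_PMD_ORDER - order);
}

int main(void)
{
        assert(scaled_max_ptes(64, 8) == 32); /* the example from the thread */
        assert(scaled_max_ptes(64, 9) == 64); /* full PMD: tunable unchanged */
        return 0;
}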
--
Cheers,
David / dhildenb