On 24.11.23 21:55, Linus Torvalds wrote:
> On Fri, 24 Nov 2023 at 05:26, David Hildenbrand <david@xxxxxxxxxx> wrote:
>> Are you interested in some made-up math, new locking primitives and
>> slightly unpleasant performance numbers on first sight? :)

Hi Linus,
first of all -- wow -- thanks for that blazing fast feedback! You really
had to work through quite some text+code to understand what's happening.
Thanks for prioritizing that over Black Friday shopping ;)

> Ugh. I'm not loving the "I have a proof, but it's too big to fit in
> the margin" model of VM development.
>
> This does seem to be very subtle.

Yes, compared to other kernel subsystems, this level of math in the VM
is really new.

The main reason I excluded the proof from this WIP series is not its
size, though. I wanted to get the implementation out after talking
about it (and optimizing it ...) for way too long, and (a) proofs
involving infinite sequences in pure ASCII are just horrible to read;
(b) I think the proof can be cleaned up / simplified, especially after
I came up with the "intuition" in the patch some days ago and decided
to use that one instead for now.

No question: if this ever gets discussed for actual merging, that will
only happen with a public, reviewed proof available.
[most of the "magic" goes away once one simply uses one rmap value for
each bit in the mm->mm_rmap_id; 22 bit -> 22 rmap values. Of course, 22
values are undesirable.]
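
To illustrate that per-bit variant, here is a self-contained userspace
model (all names are made up for this mail, this is not code from the
series): mapping/unmapping is plain addition/subtraction, and the check
is precise, because matching all 22 per-bit values forces every
mapper's ID to equal the given one bit-for-bit:

  /* Userspace model of "one rmap value per ID bit"; illustrative only. */
  #include <assert.h>
  #include <stdbool.h>

  #define RMAP_ID_BITS	22

  struct folio_rmap {
  	long mapcount;			/* total number of mappings */
  	long val[RMAP_ID_BITS];		/* mappings by MMs with bit i set */
  };

  static void rmap_add(struct folio_rmap *f, unsigned int mm_rmap_id)
  {
  	f->mapcount++;
  	for (int i = 0; i < RMAP_ID_BITS; i++)
  		f->val[i] += (mm_rmap_id >> i) & 1;
  }

  static void rmap_remove(struct folio_rmap *f, unsigned int mm_rmap_id)
  {
  	f->mapcount--;
  	for (int i = 0; i < RMAP_ID_BITS; i++)
  		f->val[i] -= (mm_rmap_id >> i) & 1;
  }

  /* val[i] == mapcount for all set bits and 0 for all clear bits can
   * only hold if every mapping was created with this exact ID. */
  static bool rmap_exclusive(struct folio_rmap *f, unsigned int mm_rmap_id)
  {
  	for (int i = 0; i < RMAP_ID_BITS; i++) {
  		long expected = ((mm_rmap_id >> i) & 1) ? f->mapcount : 0;

  		if (f->val[i] != expected)
  			return false;
  	}
  	return true;
  }

  int main(void)
  {
  	struct folio_rmap f = { 0 };

  	rmap_add(&f, 0x2a5);	/* parent maps a page of the folio */
  	rmap_add(&f, 0x13f);	/* child maps it after fork() */
  	assert(!rmap_exclusive(&f, 0x2a5));
  	rmap_remove(&f, 0x13f);	/* child exits */
  	assert(rmap_exclusive(&f, 0x2a5));	/* owner changed implicitly */
  	return 0;
  }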

> Also, please benchmark what your rmap changes do to just plain regular
> pages - it *looks* like maybe all you did was to add some
> VM_WARN_ON_FOLIO() for those cases, but I have this strong memory of
> that
>
>         if (likely(!compound)) {
>
> case being very critical on all the usual cases (and the cleanups by
> Hugh last year were nice).

Yes, indeed. I separated small vs. large folio handling cleanly, such
that we always have a pattern like:

+	if (likely(!folio_test_large(folio)))
+		return atomic_add_negative(-1, &page->_mapcount);

So, the fast default path is "small folio".
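
Spelled out as a sketch (the function shape and both names here are
made up for illustration; only the fast path is from the actual
patches):

  /* Sketch only: small folios keep the old single-atomic fast path. */
  static bool folio_unmap_one(struct folio *folio, struct page *page)
  {
  	if (likely(!folio_test_large(folio)))
  		return atomic_add_negative(-1, &page->_mapcount);

  	/* Large folio: update the total mapcount and the rmap values. */
  	return __folio_unmap_large_slowpath(folio, page);
  }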

As stated, I want to do much more benchmarking to better understand all
performance impacts, especially on top of Ryan's work on THPs that are
always PTE-mapped, where we don't have to "artificially" force a
PTE-mapped THP.

> I get the feeling that you are trying to optimize a particular case
> that is special enough that some less complicated model might work.
>
> Just by looking at your benchmarks, I *think* the case you actually
> want to optimize is "THP -> fork -> child exit/execve -> parent write
> COW reuse" where the THP page was really never in more than two VM's,
> and the second VM was an almost accidental temporary thing that is
> just about the whole "fork->exec/exit" model.

That's the most obvious/important case regarding COW reuse, agreed. And
also where I originally started, because it looked like the low-hanging
fruit (below).

For the benchmarks I have so far, I focused mostly on the
performance/harm of individual operations. Conceptually, with rmap IDs
it makes no performance difference whether you end up reusing a THP in
the parent or in the child, so I didn't add that case manually to the
micro benchmarks.

> Which makes me really feel like your rmap_id is very over-engineered.
> It seems to be designed to handle all the generic cases, but it seems
> like the main cause for it is a very specific case that I _feel_
> should be something that could be tracked with *way* less information
> (eg just have a "pointer to owner vma, and a simple counter of
> non-owners").

That's precisely where I originally started [1], but quickly wondered
(already in that mail):

(a) How to cleanly and safely stabilize refcount vs. mapcount, without
    playing tricks, such that it's just "obvious" that the COW reuse
    path is correct and race-free.
(b) How to extend it to !anon folios, where we don't have a clean entry
    point like folio_add_new_anon_rmap(); primarily to get a sane
    replacement for folio_estimated_sharers(), which I just dislike at
    this point.
(c) Whether it's possible to easily and cleanly change owners (creators
    in my mail) without involving locks.

So I started thinking about the possibility of a precise, and possibly
more universal/cleaner, way of handling it that doesn't add too much
runtime overhead; a way to get for large folios what we already have
for small folios.

I was surprised to find an approach that gives a precise answer and
simply changes the owner implicitly, primarily just by
adding/subtracting numbers.

[I'll note that having a universal way to stabilize the mapcount vs.
the refcount could be quite valuable. But achieving that also for small
folios would require, e.g., shared, hashed atomic seqcounts, and I'm
not too interested in harming small-folio performance at this point :)]

Now, whether we want all that sooner, later, or maybe never is a
different question. This WIP version primarily tries to show what's
possible, at what price, and what the limitations are.

> I dunno. I was cc'd, I looked at the patches, but I suspect I'm not
> really the target audience. If Hugh is ok with this kind of

Well, I really value your feedback, and you are always on my CC list
when I'm messing with COW and mapcounts.

That said, it's encouraging that you went over the patches (thanks
again!) and nothing immediately jumped out at you (well, besides the
proof, but that will be fixed if this ever gets merged).

> complexity, I bow to a higher authority. This *does* seem to add a lot
> of conceptual complexity to something that is already complicated.

I'll note that while it all sounds complicated, in the end it's "just"
adding/subtracting numbers, and having a clean scheme to detect
concurrent (un)mapping. Further, it's handled without any new rmap hooks.
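
For flavor, the detection can be thought of along the lines of the
stock seqcount retry pattern (just an illustration; the series uses a
new "exclusive atomic seqcount", and the field/helper names below are
made up):

  /* A concurrent (un)map bumps the write seqcount, so a reader that
   * raced sees read_seqcount_retry() fail the check and re-reads. */
  static bool folio_mapped_exclusively(struct folio *folio,
  				     unsigned int mm_rmap_id)
  {
  	unsigned int seq;
  	bool excl;

  	do {
  		seq = read_seqcount_begin(&folio->_rmap_seqcount);
  		excl = __rmap_values_match(folio, mm_rmap_id);
  	} while (read_seqcount_retry(&folio->_rmap_seqcount, seq));

  	return excl;
  }
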
But yes, there sure is added code and complexity, and personally I
dislike having to go from 3 to 6 rmap values to support arm64 with 512
MiB THP. If we could just squeeze it all into a single rmap value, it
would all look much nicer: one total mapcount, one rmap value.

Before this could get merged, a lot more has to happen. Most of the
rmap batching (and possibly also the exclusive atomic seqcount) could
be beneficial even without the rmap ID handling, so it's natural to
start with that independently.

Thanks again!

[1] https://lore.kernel.org/all/6cec6f68-248e-63b4-5615-9e0f3f819a0a@xxxxxxxxxx/T/#u

--
Cheers,
David / dhildenb