Re: [PATCH v3] mm: Avoid unnecessary page fault retires on shared memory types
- To: Ingo Molnar <mingo@xxxxxxxxxx>
- Subject: Re: [PATCH v3] mm: Avoid unnecessary page fault retires on shared memory types
- From: Peter Xu <peterx@xxxxxxxxxx>
- Date: Fri, 27 May 2022 10:53:36 -0400
- Cc: linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, Richard Henderson <rth@xxxxxxxxxxxxxxx>, David Hildenbrand <david@xxxxxxxxxx>, Matt Turner <mattst88@xxxxxxxxx>, Albert Ou <aou@xxxxxxxxxxxxxxxxx>, Michal Simek <monstr@xxxxxxxxx>, Russell King <linux@xxxxxxxxxxxxxxx>, Ivan Kokshaysky <ink@xxxxxxxxxxxxxxxxxxxx>, linux-riscv@xxxxxxxxxxxxxxxxxxx, Alexander Gordeev <agordeev@xxxxxxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>, Jonas Bonn <jonas@xxxxxxxxxxxx>, Will Deacon <will@xxxxxxxxxx>, "James E . J . Bottomley" <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>, "H . Peter Anvin" <hpa@xxxxxxxxx>, Andrea Arcangeli <aarcange@xxxxxxxxxx>, openrisc@xxxxxxxxxxxxxxxxxxxx, linux-s390@xxxxxxxxxxxxxxx, Ingo Molnar <mingo@xxxxxxxxxx>, linux-m68k@xxxxxxxxxxxxxxx, Palmer Dabbelt <palmer@xxxxxxxxxxx>, Heiko Carstens <hca@xxxxxxxxxxxxx>, Chris Zankel <chris@xxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Alistair Popple <apopple@xxxxxxxxxx>, linux-csky@xxxxxxxxxxxxxxx, linux-hexagon@xxxxxxxxxxxxxxx, Vlastimil Babka <vbabka@xxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, sparclinux@xxxxxxxxxxxxxxx, Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>, Stafford Horne <shorne@xxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, x86@xxxxxxxxxx, Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>, Paul Mackerras <paulus@xxxxxxxxx>, linux-arm-kernel@xxxxxxxxxxxxxxxxxxx, Sven Schnelle <svens@xxxxxxxxxxxxx>, Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>, linux-xtensa@xxxxxxxxxxxxxxxx, Nicholas Piggin <npiggin@xxxxxxxxx>, linux-sh@xxxxxxxxxxxxxxx, Vasily Gorbik <gor@xxxxxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, linux-mips@xxxxxxxxxxxxxxx, Max Filippov <jcmvbkbc@xxxxxxxxx>, Helge Deller <deller@xxxxxx>, Vineet Gupta <vgupta@xxxxxxxxxx>, Al Viro <viro@xxxxxxxxxxxxxxxxxx>, Paul Walmsley <paul.walmsley@xxxxxxxxxx>, Johannes Weiner <hannes@xxxxxxxxxxx>, Anton Ivanov <anton.ivanov@xxxxxxxxxxxxxxxxxx>, Catalin Marinas <catalin.marinas@xxxxxxx>, linux-um@xxxxxxxxxxxxxxxxxxx, linux-alpha@xxxxxxxxxxxxxxx, Johannes Berg <johannes@xxxxxxxxxxxxxxxx>, linux-ia64@xxxxxxxxxxxxxxx, Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>, Dinh Nguyen <dinguyen@xxxxxxxxxx>, Guo Ren <guoren@xxxxxxxxxx>, linux-snps-arc@xxxxxxxxxxxxxxxxxxx, Hugh Dickins <hughd@xxxxxxxxxx>, Rich Felker <dalias@xxxxxxxx>, Andy Lutomirski <luto@xxxxxxxxxx>, Richard Weinberger <richard@xxxxxx>, linuxppc-dev@xxxxxxxxxxxxxxxx, Brian Cain <bcain@xxxxxxxxxxx>, Yoshinori Sato <ysato@xxxxxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Stefan Kristiansson <stefan.kristiansson@xxxxxxxxxxxxx>, linux-parisc@xxxxxxxxxxxxxxx, "David S . Miller" <davem@xxxxxxxxxxxxx>
- In-reply-to: <YpCsBwFArieTpvg2@gmail.com>
- References: <20220524234531.1949-1-peterx@redhat.com> <YpCsBwFArieTpvg2@gmail.com>
On Fri, May 27, 2022 at 12:46:31PM +0200, Ingo Molnar wrote:
>
> * Peter Xu <peterx@xxxxxxxxxx> wrote:
>
> > This patch provides a ~12% perf boost on my aarch64 test VM with a simple
> > program sequentially dirtying a 400MB mmap()ed shmem file; these are the
> > timings:
> >
> > Before: 650.980 ms (+-1.94%)
> > After: 569.396 ms (+-1.38%)
>
> Nice!
>
> > arch/x86/mm/fault.c | 4 ++++
>
> Reviewed-by: Ingo Molnar <mingo@xxxxxxxxxx>
>
> Minor comment typo:
>
> > + /*
> > + * We should do the same as VM_FAULT_RETRY, but let's not
> > + * return -EBUSY since that's not reflecting the reality on
> > + * what has happened - we've just fully completed a page
> > + * fault, with the mmap lock released. Use -EAGAIN to show
> > + * that we want to take the mmap lock _again_.
> > + */
>
> s/reflecting the reality on what has happened
> /reflecting the reality of what has happened
Will fix.
>
> > ret = handle_mm_fault(vma, address, fault_flags, NULL);
> > +
> > + if (ret & VM_FAULT_COMPLETED) {
> > + /*
> > + * NOTE: it's a pity that we need to retake the lock here
> > + * to pair with the unlock() in the callers. Ideally we
> > + * could tell the callers so they do not need to unlock.
> > + */
> > + mmap_read_lock(mm);
> > + *unlocked = true;
> > + return 0;
>
> Indeed that's a pity - I guess more performance could be gained here,
> especially in highly parallel threaded workloads?
Yes, I think so.

The patch avoids the page fault retry entirely, including the extra mmap
lock/unlock cycle that comes with it.  If we retake the lock for
fixup_user_fault(), we still save the time spent on the pgtable walks, but
some of the lock overhead is kept, just with smaller critical sections.
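
To illustrate the first point, here is a condensed sketch of the pattern
the patch adds to each arch page fault handler (modeled on the x86 hunk,
not the literal diff):

        fault = handle_mm_fault(vma, address, flags, regs);

        /* The fault fully completed, and the mmap lock is already dropped */
        if (fault & VM_FAULT_COMPLETED)
                return;

        if (fault & VM_FAULT_RETRY) {
                flags |= FAULT_FLAG_TRIED;
                goto retry;
        }

        mmap_read_unlock(mm);

The VM_FAULT_COMPLETED case returns with neither an unlock nor a retry, so
both the relock and the second pgtable walk are gone.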
Some fixup_user_fault() callers won't be affected as long as they pass
unlocked==NULL - e.g. the futex code path (fault_in_user_writeable),
sketched below.  After all, those callers never needed to retake the lock,
either before or after this patch.
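
For reference, the futex path looks roughly like this (paraphrased from
the current fault_in_user_writeable(), not an exact quote):

        int fault_in_user_writeable(u32 __user *uaddr)
        {
                struct mm_struct *mm = current->mm;
                int ret;

                mmap_read_lock(mm);
                /* unlocked==NULL: the mmap lock is never dropped for us */
                ret = fixup_user_fault(mm, (unsigned long)uaddr,
                                       FAULT_FLAG_WRITE, NULL);
                mmap_read_unlock(mm);

                return ret < 0 ? ret : 0;
        }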
It's the other callers that may need some more touch-ups, case by case.
Examples are follow_fault_pfn() in vfio and hva_to_pfn_remapped() in KVM:
both of them return -EAGAIN when *unlocked==true.  We need to teach them
that "*unlocked==true" does not necessarily mean a retry attempt; see the
sketch after this paragraph.
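
The pattern in question looks roughly like this (a condensed, hypothetical
sketch of those two callers, not their literal code):

        bool unlocked = false;
        int ret;

        ret = fixup_user_fault(mm, vaddr, fault_flags, &unlocked);
        /*
         * Old assumption: the lock having been dropped implies the caller
         * must retry.  With VM_FAULT_COMPLETED, fixup_user_fault() can now
         * return 0 with *unlocked==true after a fully completed fault, so
         * unconditionally returning -EAGAIN here triggers a spurious retry.
         */
        if (unlocked)
                return -EAGAIN;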
I think I can look into them as a follow-up if this patch is accepted.
Thanks for taking a look!
--
Peter Xu