Re: [PATCH v3] mm: Avoid unnecessary page fault retires on shared memory types
- To: Peter Xu <peterx@xxxxxxxxxx>
- Subject: Re: [PATCH v3] mm: Avoid unnecessary page fault retires on shared memory types
- From: Ingo Molnar <mingo@xxxxxxxxxx>
- Date: Fri, 27 May 2022 12:46:31 +0200
- Cc: linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, Richard Henderson <rth@xxxxxxxxxxxxxxx>, David Hildenbrand <david@xxxxxxxxxx>, Matt Turner <mattst88@xxxxxxxxx>, Albert Ou <aou@xxxxxxxxxxxxxxxxx>, Michal Simek <monstr@xxxxxxxxx>, Russell King <linux@xxxxxxxxxxxxxxx>, Ivan Kokshaysky <ink@xxxxxxxxxxxxxxxxxxxx>, linux-riscv@xxxxxxxxxxxxxxxxxxx, Alexander Gordeev <agordeev@xxxxxxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>, Jonas Bonn <jonas@xxxxxxxxxxxx>, Will Deacon <will@xxxxxxxxxx>, "James E . J . Bottomley" <James.Bottomley@xxxxxxxxxxxxxxxxxxxxx>, "H . Peter Anvin" <hpa@xxxxxxxxx>, Andrea Arcangeli <aarcange@xxxxxxxxxx>, openrisc@xxxxxxxxxxxxxxxxxxxx, linux-s390@xxxxxxxxxxxxxxx, Ingo Molnar <mingo@xxxxxxxxxx>, linux-m68k@xxxxxxxxxxxxxxx, Palmer Dabbelt <palmer@xxxxxxxxxxx>, Heiko Carstens <hca@xxxxxxxxxxxxx>, Chris Zankel <chris@xxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Alistair Popple <apopple@xxxxxxxxxx>, linux-csky@xxxxxxxxxxxxxxx, linux-hexagon@xxxxxxxxxxxxxxx, Vlastimil Babka <vbabka@xxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, sparclinux@xxxxxxxxxxxxxxx, Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>, Stafford Horne <shorne@xxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, x86@xxxxxxxxxx, Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>, Paul Mackerras <paulus@xxxxxxxxx>, linux-arm-kernel@xxxxxxxxxxxxxxxxxxx, Sven Schnelle <svens@xxxxxxxxxxxxx>, Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>, linux-xtensa@xxxxxxxxxxxxxxxx, Nicholas Piggin <npiggin@xxxxxxxxx>, linux-sh@xxxxxxxxxxxxxxx, Vasily Gorbik <gor@xxxxxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, linux-mips@xxxxxxxxxxxxxxx, Max Filippov <jcmvbkbc@xxxxxxxxx>, Helge Deller <deller@xxxxxx>, Vineet Gupta <vgupta@xxxxxxxxxx>, Al Viro <viro@xxxxxxxxxxxxxxxxxx>, Paul Walmsley <paul.walmsley@xxxxxxxxxx>, Johannes Weiner <hannes@xxxxxxxxxxx>, Anton Ivanov <anton.ivanov@xxxxxxxxxxxxxxxxxx>, Catalin Marinas <catalin.marinas@xxxxxxx>, linux-um@xxxxxxxxxxxxxxxxxxx, 
linux-alpha@xxxxxxxxxxxxxxx, Johannes Berg <johannes@xxxxxxxxxxxxxxxx>, linux-ia64@xxxxxxxxxxxxxxx, Geert Uytterhoeven <geert@xxxxxxxxxxxxxx>, Dinh Nguyen <dinguyen@xxxxxxxxxx>, Guo Ren <guoren@xxxxxxxxxx>, linux-snps-arc@xxxxxxxxxxxxxxxxxxx, Hugh Dickins <hughd@xxxxxxxxxx>, Rich Felker <dalias@xxxxxxxx>, Andy Lutomirski <luto@xxxxxxxxxx>, Richard Weinberger <richard@xxxxxx>, linuxppc-dev@xxxxxxxxxxxxxxxx, Brian Cain <bcain@xxxxxxxxxxx>, Yoshinori Sato <ysato@xxxxxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Stefan Kristiansson <stefan.kristiansson@xxxxxxxxxxxxx>, linux-parisc@xxxxxxxxxxxxxxx, "David S . Miller" <davem@xxxxxxxxxxxxx>
- In-reply-to: <20220524234531.1949-1-peterx@redhat.com>
- References: <20220524234531.1949-1-peterx@redhat.com>
* Peter Xu <peterx@xxxxxxxxxx> wrote:
> This patch provides a ~12% perf boost on my aarch64 test VM with a simple
> program sequentially dirtying a 400MB mmap()ed shmem file; these are the
> times it needs:
>
> Before: 650.980 ms (+-1.94%)
> After: 569.396 ms (+-1.38%)
Nice!
> arch/x86/mm/fault.c | 4 ++++
Reviewed-by: Ingo Molnar <mingo@xxxxxxxxxx>
Minor comment typo:
> + /*
> + * We should do the same as VM_FAULT_RETRY, but let's not
> + * return -EBUSY since that's not reflecting the reality on
> + * what has happened - we've just fully completed a page
> + * fault, with the mmap lock released. Use -EAGAIN to show
> + * that we want to take the mmap lock _again_.
> + */
s/reflecting the reality on what has happened
/reflecting the reality of what has happened
> ret = handle_mm_fault(vma, address, fault_flags, NULL);
> +
> + if (ret & VM_FAULT_COMPLETED) {
> + /*
> + * NOTE: it's a pity that we need to retake the lock here
> + * to pair with the unlock() in the callers. Ideally we
> + * could tell the callers so they do not need to unlock.
> + */
> + mmap_read_lock(mm);
> + *unlocked = true;
> + return 0;
Indeed that's a pity - I guess more performance could be gained here,
especially in highly parallel threaded workloads?
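I.e. if the VM_FAULT_COMPLETED state were propagated out of
fixup_user_fault(), callers could skip both the retake and their own
unlock. Purely a sketch of that idea, not against any real tree:

```
	ret = handle_mm_fault(vma, address, fault_flags, NULL);

	if (ret & VM_FAULT_COMPLETED) {
		/*
		 * The fault path already dropped the mmap lock - report
		 * that instead of retaking it, so the caller knows not
		 * to mmap_read_unlock() again.
		 */
		*unlocked = true;
		return 0;
	}
```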
Thanks,
Ingo