On Sat, May 18, 2024 at 08:20:05AM +0200, Mateusz Guzik wrote:
> Execs of dynamically linked binaries at 20-ish cores are bottlenecked on
> the i_mmap_rwsem semaphore, while the biggest singular contributor is
> free_pgd_range inducing the lock acquire back-to-back for all
> consecutive mappings of a given file.
>
> Tracing the count of said acquires while building the kernel shows:
> [1, 2)     799579 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> [2, 3)          0 |                                                    |
> [3, 4)       3009 |                                                    |
> [4, 5)       3009 |                                                    |
> [5, 6)     326442 |@@@@@@@@@@@@@@@@@@@@@                               |

This makes sense.  A snippet of /proc/self/maps:

7f0a44725000-7f0a4474b000 r--p 00000000 fe:01 100663437  /usr/lib/x86_64-linux-gnu/libc.so.6
7f0a4474b000-7f0a448a0000 r-xp 00026000 fe:01 100663437  /usr/lib/x86_64-linux-gnu/libc.so.6
7f0a448a0000-7f0a448f4000 r--p 0017b000 fe:01 100663437  /usr/lib/x86_64-linux-gnu/libc.so.6
7f0a448f4000-7f0a448f8000 r--p 001cf000 fe:01 100663437  /usr/lib/x86_64-linux-gnu/libc.so.6
7f0a448f8000-7f0a448fa000 rw-p 001d3000 fe:01 100663437  /usr/lib/x86_64-linux-gnu/libc.so.6

so we frequently have the same file mmapped five times in a row.

> The lock remains the main bottleneck, I have not looked at other spots
> yet.

You're not the first to report high contention on this lock.  See
https://lore.kernel.org/all/20240202093407.12536-1-JonasZhou-oc@xxxxxxxxxxx/
for example.

> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b6bdaa18b9e9..443d0c55df80 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h

I do object to this going into mm.h; mm/internal.h would be better.

I haven't reviewed the patch in depth, but I don't have a problem with
the idea.  I think it's only a stopgap, and we really do need a better
data structure than this.