TLDR: I think we're going to need to call synchronize_rcu() in munmap(), and I know this can be quite expensive. Is this a reasonable price to pay for avoiding taking the mmap_sem for page faults?

We're taking the first tentative steps towards moving the page fault handler from being protected by the mmap_sem to being protected by the RCU read lock. The first part is replacing the rbtree with the maple tree, so a walk can be RCU safe. Liam has taken care of that part.

The next step is to introduce an RCU user that's perhaps less critical than the page fault handler, to see the consequences. So I chose /proc/$pid/maps. This also covers smaps and numa_maps. It's probably the simplest user, and taking the mmap_sem in this path actually affects some important workloads.

Here's the tree:
https://git.infradead.org/users/willy/linux-maple.git/shortlog/refs/heads/proc-vma-rcu

The first of the two patches simply frees VMAs through the RCU mechanism. That means we can now walk the tree and read from the VMA, knowing the VMA will remain a VMA until we drop the RCU lock. It might be removed from the tree by a racing munmap(), but the memory will not be reallocated.

The second patch converts the seq_file business of finding the VMAs from being protected by the mmap_sem to being protected by the RCU read lock.

Problem: We're quite free about modifying VMAs, so vm_start and vm_end might get modified while we're in the middle of the walk. I think that's OK in some circumstances (eg a second mmap causes vm_end to be increased and we miss the increase), but others are less OK.
For example, mremap(MREMAP_MAYMOVE) may result in us seeing output from /proc/$pid/maps like this:

7f38db534000-7f38db81a000 r--p 00000000 fe:01 55416 /usr/lib/locale/locale-archive
7f38db81a000-7f38db83f000 r--p 00000000 fe:01 123093081 /usr/lib/x86_64-linux-gnu/libc-2.31.so
55d237566000-55d237567000 rw-p 00000000 00:00 0
7f38db98a000-7f38db9d4000 r--p 00170000 fe:01 123093081 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7f38db9d4000-7f38db9d5000 ---p 001ba000 fe:01 123093081 /usr/lib/x86_64-linux-gnu/libc-2.31.so

and I'm not sure how comfortable we are with userspace parsers coping with that kind of situation. We should probably allocate a new VMA and free the old one in this situation.

Next problem: /proc/$pid/smaps calls walk_page_vma(), which starts out by saying:

	mmap_assert_locked(walk.mm);

which made me realise that smaps is also going to walk the page tables. So the page tables have to be pinned by the existence of the VMA, which means the page tables must be freed by the same RCU callback that frees the VMA. But doing that means that a task which calls mmap(); munmap(); mmap(); must avoid allocating the same address for the second mmap() (until the RCU grace period has elapsed); otherwise threads on other CPUs may see the stale PTEs instead of the new ones.

Solution 1: Move the page table freeing into the RCU callback, and call synchronize_rcu() in munmap().

Solution 2: Refcount the VMA and free the page tables when the refcount drops to zero. This doesn't actually work, because the stale PTE problem still exists.

Solution 3: When unmapping a VMA, instead of erasing the VMA from the maple tree, put a "dead" entry in its place. Once the RCU freeing and the TLB shootdown have happened, erase the entry and the range can then be allocated again. If we do that, MAP_FIXED will have to synchronize_rcu() if it overlaps a dead entry.

Thoughts?