On Fri, Jan 27, 2023 at 3:26 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Fri, Jan 27, 2023 at 02:51:38PM -0800, Andrew Morton wrote:
> > On Fri, 27 Jan 2023 11:40:37 -0800 Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> >
> > > The per-VMA locks idea was discussed during the SPF [1] discussion at LSF/MM
> > > last year [2], which concluded with the suggestion that “a reader/writer
> > > semaphore could be put into the VMA itself; that would have the effect of
> > > using the VMA as a sort of range lock. There would still be contention at
> > > the VMA level, but it would be an improvement.” This patchset implements
> > > this suggested approach.
> >
> > I think I'll await reviewer/tester input for a while.

Sure, I don't expect the review to be very quick considering the
complexity, but I would appreciate any testing that can be done.

> >
> > > The patchset implements per-VMA locking only for anonymous pages which
> > > are not in swap and avoids userfaultfd as their implementation is more
> > > complex. Additional support for file-backed page faults, swapped and user
> > > pages can be added incrementally.
> >
> > This is a significant risk. How can we be confident that these as yet
> > unimplemented parts are implementable and that the result will be good?
>
> They don't need to be implementable for this patchset to be evaluated
> on its own terms. This patchset improves scalability for anon pages
> without making file/swap/uffd pages worse (or if it does, I haven't
> seen the benchmarks to prove it).

Making it work for all kinds of page faults would require much more
time, so this incremental approach, where we tackle the mmap_lock
scalability problem part by part, seems more doable. Even with
anonymous-only support, the patchset shows considerable improvements,
so I would argue it is viable even though it does not yet support the
above-mentioned cases.

>
> That said, I'm confident that I have a good handle on how to make
> file-backed page faults work under RCU.

Looking forward to collaborating on that!
Thanks,
Suren.
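
To make the suggestion quoted above concrete, here is a minimal sketch
of the idea: a reader/writer semaphore embedded in each VMA lets the
fault path lock just that VMA and fall back to mmap_lock when the
trylock fails. This is an illustration under assumptions, not code from
the patchset; the vma_rwsem field, handle_anon_fault() and
do_fault_with_per_vma_lock() are hypothetical names.

/*
 * Minimal sketch of the suggested approach, for illustration only.
 * The helpers and the per-VMA rwsem placement are hypothetical.
 */
#include <linux/mm_types.h>
#include <linux/mmap_lock.h>
#include <linux/rwsem.h>

/* Hypothetical helper that resolves an anonymous fault on @vma. */
extern vm_fault_t handle_anon_fault(struct vm_area_struct *vma,
				    unsigned long addr);

/*
 * Imagine struct vm_area_struct gains a reader/writer semaphore,
 * e.g. "struct rw_semaphore vma_rwsem;", which writers (munmap,
 * mprotect, ...) take for writing while already holding mmap_lock.
 */
static vm_fault_t do_fault_with_per_vma_lock(struct mm_struct *mm,
					     struct vm_area_struct *vma,
					     struct rw_semaphore *vma_rwsem,
					     unsigned long addr)
{
	vm_fault_t ret;

	/*
	 * Fast path: serialize only against modifications of this VMA.
	 * Contention is per-VMA rather than per-address-space, which is
	 * what makes the VMA behave like a range lock.
	 */
	if (down_read_trylock(vma_rwsem)) {
		ret = handle_anon_fault(vma, addr);
		up_read(vma_rwsem);
		return ret;
	}

	/*
	 * Slow path: fall back to the traditional mmap_lock protection.
	 * (In practice the VMA would have to be looked up again here,
	 * since it was originally found without holding any lock.)
	 */
	mmap_read_lock(mm);
	ret = handle_anon_fault(vma, addr);
	mmap_read_unlock(mm);
	return ret;
}

Writers that modify a VMA (unmap, remap, protection changes) would take
the per-VMA semaphore for writing in addition to mmap_lock, so a racing
fault either completes first or observes the lock held and falls back.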