* Michal Hocko <mhocko@xxxxxxxx> wrote:

> On Tue 10-01-23 16:44:42, Suren Baghdasaryan wrote:
> > On Tue, Jan 10, 2023 at 4:39 PM Davidlohr Bueso <dave@xxxxxxxxxxxx> wrote:
> > >
> > > On Mon, 09 Jan 2023, Suren Baghdasaryan wrote:
> > >
> > > > This configuration variable will be used to build the support for VMA
> > > > locking during page fault handling.
> > > >
> > > > This is enabled by default on supported architectures with SMP and MMU
> > > > set.
> > > >
> > > > The architecture support is needed since the page fault handler is called
> > > > from the architecture's page faulting code which needs modifications to
> > > > handle faults under VMA lock.
> > >
> > > I don't think that per-vma locking should be something that is
> > > user-configurable. It should just be dependent on the arch. So maybe
> > > just remove CONFIG_PER_VMA_LOCK?
> >
> > Thanks for the suggestion! I would be happy to make that change if
> > there are no objections. I think the only pushback might have been the
> > vma size increase but with the latest optimization in the last patch
> > maybe that's less of an issue?
>
> Has vma size ever been a real problem? Sure there might be a lot of those
> but your patch increases it by an rwsem (without the last patch) which is
> something like 40B on top of a 136B vma, so we are talking about 176B in
> total, which even with wild mapcount limits shouldn't really be
> prohibitive. With a default map count limit we are talking about a 2M
> increase at most (per address space).
>
> Or are you aware of any specific usecases where vma size is a real
> problem?

40 bytes for the rwsem, plus the patch also adds a 32-bit sequence counter:

+	int vm_lock_seq;
+	struct rw_semaphore lock;

So it's +44 bytes.

Thanks,

	Ingo