On Fri, Apr 07, 2023 at 10:54:00AM -0700, Suren Baghdasaryan wrote:
> On Tue, Apr 4, 2023 at 6:59 AM Matthew Wilcox (Oracle)
> <willy@xxxxxxxxxxxxx> wrote:
> >
> > The fault path will immediately fail in handle_mm_fault(), so this
> > is the minimal step which allows the per-VMA lock to be taken on
> > file-backed VMAs.  There may be a small performance reduction as a
> > little unnecessary work will be done on each page fault.  See later
> > patches for the improvement.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> > ---
> >  mm/memory.c | 9 ++++-----
> >  1 file changed, 4 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index fdaec7772fff..f726f85f0081 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -5223,6 +5223,9 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
> >                                             flags & FAULT_FLAG_REMOTE))
> >                 return VM_FAULT_SIGSEGV;
> >
> > +       if ((flags & FAULT_FLAG_VMA_LOCK) && !vma_is_anonymous(vma))
> > +               return VM_FAULT_RETRY;
> > +
>
> There are count_vm_event(PGFAULT) and count_memcg_event_mm(vma->vm_mm,
> PGFAULT) earlier in this function. Returning here and retrying I think
> will double-count this page fault. Returning before this accounting
> should fix this issue.

You're right, but this will be an issue with later patches in the series
anyway as we move the check further and further down the call-chain.
For that matter, it's an issue in do_swap_page() right now, isn't it?
I suppose we don't care too much because it's the rare case where we
go into do_swap_page() and so the stats are "correct enough".
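
For reference, a rough sketch of the reordering Suren is suggesting:
move the VMA-lock bail-out above the PGFAULT accounting at the top of
handle_mm_fault().  This is only an illustration built from the quoted
hunk, not the actual patch; the exact prologue of handle_mm_fault()
(and whatever sits between the counters and the access check) may
differ in the tree this series is based on, and the rest of the
function body is elided:

	vm_fault_t handle_mm_fault(struct vm_area_struct *vma,
				   unsigned long address,
				   unsigned int flags, struct pt_regs *regs)
	{
		__set_current_state(TASK_RUNNING);

		/*
		 * Bail out before count_vm_event()/count_memcg_event_mm()
		 * so a fault that is retried without the per-VMA lock is
		 * not counted twice.
		 */
		if ((flags & FAULT_FLAG_VMA_LOCK) && !vma_is_anonymous(vma))
			return VM_FAULT_RETRY;

		count_vm_event(PGFAULT);
		count_memcg_event_mm(vma->vm_mm, PGFAULT);

		if (!arch_vma_access_permitted(vma, flags & FAULT_FLAG_WRITE,
					       flags & FAULT_FLAG_INSTRUCTION,
					       flags & FAULT_FLAG_REMOTE))
			return VM_FAULT_SIGSEGV;
		/* ... rest of handle_mm_fault() unchanged ... */
	}

As noted above, though, this only papers over the first instance of the
problem; once the check moves further down the call-chain in the later
patches, the same double-count can happen anywhere VM_FAULT_RETRY is
returned after the counters have been bumped.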