On Fri, Apr 14, 2023 at 11:00 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> When page fault is handled under VMA lock protection, all swap page
> faults are retried with mmap_lock because folio_lock_or_retry
> implementation has to drop and reacquire mmap_lock if folio could
> not be immediately locked.
> Instead of retrying all swapped page faults, retry only when folio
> locking fails.

I just realized that the title of the patch is misleading. It's about
handling page faults under the VMA lock, not swap page faults in
general. A better title would be something like:
"mm: handle swap page faults under vma lock if page is uncontended"

>
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> ---
> Patch applies cleanly over linux-next and mm-unstable
>
>  mm/filemap.c | 6 ++++++
>  mm/memory.c  | 5 -----
>  2 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 6f3a7e53fccf..67b937b0f436 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1706,6 +1706,8 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
>   * mmap_lock has been released (mmap_read_unlock(), unless flags had both
>   * FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
>   * which case mmap_lock is still held.
> + * If flags had FAULT_FLAG_VMA_LOCK set, meaning the operation is performed
> + * with VMA lock only, the VMA lock is still held.
>   *
>   * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
>   * with the folio locked and the mmap_lock unperturbed.
> @@ -1713,6 +1715,10 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
>  bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
>  			 unsigned int flags)
>  {
> +	/* Can't do this if not holding mmap_lock */
> +	if (flags & FAULT_FLAG_VMA_LOCK)
> +		return false;
> +
>  	if (fault_flag_allow_retry_first(flags)) {
>  		/*
>  		 * CAUTION! In this case, mmap_lock is not released
> diff --git a/mm/memory.c b/mm/memory.c
> index d88f370eacd1..3301a8d01820 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3715,11 +3715,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (!pte_unmap_same(vmf))
>  		goto out;
>
> -	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> -		ret = VM_FAULT_RETRY;
> -		goto out;
> -	}
> -
>  	entry = pte_to_swp_entry(vmf->orig_pte);
>  	if (unlikely(non_swap_entry(entry))) {
>  		if (is_migration_entry(entry)) {
> --
> 2.40.0.634.g4ca3ef3211-goog
>
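For anyone following along, here is a standalone userspace sketch (not
kernel code) of the control-flow change above: with FAULT_FLAG_VMA_LOCK
set, the fault now proceeds when the folio lock is uncontended and falls
back to VM_FAULT_RETRY (retry under mmap_lock) only when the trylock
fails, instead of bailing out unconditionally in do_swap_page(). The
flag values, the struct, and the helper names are simplified stand-ins
for illustration, not the real kernel definitions.

```c
#include <assert.h>
#include <stdbool.h>

#define FAULT_FLAG_VMA_LOCK 0x1000  /* illustrative value, not the kernel's */
#define VM_FAULT_RETRY      0x0400  /* illustrative value, not the kernel's */

struct folio { bool locked; };

/* Stand-in for folio_trylock(): succeeds only if nobody holds the lock. */
static bool folio_trylock(struct folio *folio)
{
	if (folio->locked)
		return false;
	folio->locked = true;
	return true;
}

/*
 * Stand-in for the patched folio_lock_or_retry() path: under the VMA
 * lock we cannot sleep on a contended folio lock (that would require
 * dropping and reacquiring mmap_lock), so report failure and let the
 * caller retry the whole fault under mmap_lock.
 */
static bool folio_lock_or_retry(struct folio *folio, unsigned int flags)
{
	if (folio_trylock(folio))
		return true;
	if (flags & FAULT_FLAG_VMA_LOCK)
		return false;
	/* The real code would sleep for the lock here; assume it succeeds. */
	folio->locked = true;
	return true;
}

/* Simplified do_swap_page(): retries only when the folio is contended. */
static int do_swap_page_sketch(struct folio *folio, unsigned int flags)
{
	if (!folio_lock_or_retry(folio, flags))
		return VM_FAULT_RETRY;
	folio->locked = false;  /* folio_unlock() */
	return 0;               /* fault handled */
}
```

The point of the sketch is the ordering: contention is detected first,
and FAULT_FLAG_VMA_LOCK only decides what to do about it, which is what
lets the common uncontended swap fault complete without mmap_lock.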