On Tue, Apr 18, 2023 at 4:35 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> The quilt patch titled
>      Subject: mm: handle swap page faults under vma lock if page is uncontended
> has been removed from the -mm tree.  Its filename was
>      mm-handle-swap-page-faults-if-the-faulting-page-can-be-locked.patch
>
> This patch was dropped because it was merged into the mm-stable branch
> of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Andrew, there are cases which are not properly handled in this patch,
outlined here:
https://lore.kernel.org/all/87sfczuxkc.fsf@xxxxxxxxxx/
Please drop it for now until I post the next version. It's not stable
material yet.
Thanks!

>
> ------------------------------------------------------
> From: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> Subject: mm: handle swap page faults under vma lock if page is uncontended
> Date: Fri, 14 Apr 2023 11:00:43 -0700
>
> When page fault is handled under VMA lock protection, all swap page faults
> are retried with mmap_lock because folio_lock_or_retry implementation has
> to drop and reacquire mmap_lock if folio could not be immediately locked.
>
> Instead of retrying all swapped page faults, retry only when folio locking
> fails.
>
> Link: https://lkml.kernel.org/r/20230414180043.1839745-1-surenb@xxxxxxxxxx
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> Reviewed-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Cc: Davidlohr Bueso <dave@xxxxxxxxxxxx>
> Cc: Jan Kara <jack@xxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Josef Bacik <josef@xxxxxxxxxxxxxx>
> Cc: Laurent Dufour <ldufour@xxxxxxxxxxxxx>
> Cc: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> Cc: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Michel Lespinasse <michel@xxxxxxxxxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Punit Agrawal <punit.agrawal@xxxxxxxxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  mm/filemap.c |    6 ++++++
>  mm/memory.c  |    5 -----
>  2 files changed, 6 insertions(+), 5 deletions(-)
>
> --- a/mm/filemap.c~mm-handle-swap-page-faults-if-the-faulting-page-can-be-locked
> +++ a/mm/filemap.c
> @@ -1706,6 +1706,8 @@ static int __folio_lock_async(struct fol
>   * mmap_lock has been released (mmap_read_unlock(), unless flags had both
>   * FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
>   * which case mmap_lock is still held.
>   * If flags had FAULT_FLAG_VMA_LOCK set, meaning the operation is performed
>   * with VMA lock only, the VMA lock is still held.
>   *
>   * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
>   * with the folio locked and the mmap_lock unperturbed.
> @@ -1713,6 +1715,10 @@ static int __folio_lock_async(struct fol
>  bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
>  			   unsigned int flags)
>  {
> +	/* Can't do this if not holding mmap_lock */
> +	if (flags & FAULT_FLAG_VMA_LOCK)
> +		return false;
> +
>  	if (fault_flag_allow_retry_first(flags)) {
>  		/*
>  		 * CAUTION! In this case, mmap_lock is not released
> --- a/mm/memory.c~mm-handle-swap-page-faults-if-the-faulting-page-can-be-locked
> +++ a/mm/memory.c
> @@ -3711,11 +3711,6 @@ vm_fault_t do_swap_page(struct vm_fault
>  	if (!pte_unmap_same(vmf))
>  		goto out;
>
> -	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> -		ret = VM_FAULT_RETRY;
> -		goto out;
> -	}
> -
>  	entry = pte_to_swp_entry(vmf->orig_pte);
>  	if (unlikely(non_swap_entry(entry))) {
>  		if (is_migration_entry(entry)) {
> _
>
> Patches currently in -mm which might be from surenb@xxxxxxxxxx are
>