On Mon, Apr 06, 2020 at 11:29:11AM -0700, Yang Shi wrote:
> 
> 
> On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
> > __collapse_huge_page_isolate() may fail due to an extra pin in the LRU add
> > pagevec. It's pretty common for the swapin case: we swap in pages just to
> > fail due to the extra pin.
> >
> > Drain the LRU add pagevec on successful swapin.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > ---
> >  mm/khugepaged.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index fdc10ffde1ca..57ff287caf6b 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
> >  	}
> >  	vmf.pte--;
> >  	pte_unmap(vmf.pte);
> > +
> > +	/* Drain LRU add pagevec to remove extra pin on the swapped in pages */
> > +	if (swapped_in)
> > +		lru_add_drain();
> 
> There is already lru_add_drain() called in the swap readahead path, please
> see swap_vma_readahead() and swap_cluster_readahead().

But not for the synchronous case. See the SWP_SYNCHRONOUS_IO branch in
do_swap_page().

Maybe we should drain it in swap_readpage() or in do_swap_page() after
swap_readpage()? I don't know.

-- 
 Kirill A. Shutemov