On Mon 10-02-20 16:19:58, Minchan Kim wrote:
> Basically, the fault handler releases mmap_sem before requesting readahead
> and is then supposed to retry the page cache lookup with FAULT_FLAG_TRIED
> so that it avoids the livelock of infinite retry.
>
> However, what happens if the fault handler finds a page in the page
> cache and the page has the readahead marker but is waiting under
> writeback? Add one more condition: it happens under mm_populate,
> which repeats faulting unless it encounters an error. So let's assemble
> the conditions below.
>
> __mm_populate
>   for (...)
>     get_user_pages(faulty_address)
>       handle_mm_fault
>         filemap_fault
>           finds a page in the page cache (PG_uptodate|PG_readahead|PG_writeback)
>           it will return VM_FAULT_RETRY
>     continue with faulty_address
>
> IOW, it will repeat the fault retry logic until the page is finally
> written back. This causes big latency spikes of several seconds.
>
> This patch solves the issue by turning off the fault retry logic on the
> second trial.
>
> Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
> ---
> It originated from code review once I had seen several user reports,
> but I haven't confirmed yet that it's the root cause.

Yes, I think the immediate problem is actually elsewhere but I agree that
__mm_populate() should follow the general protocol of retrying only once,
so your change should make it more robust.
The patch looks good to me, you can add:

Reviewed-by: Jan Kara <jack@xxxxxxx>

								Honza

>
>  mm/gup.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 1b521e0ac1de..b3f825092abf 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1196,6 +1196,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
>  	struct vm_area_struct *vma = NULL;
>  	int locked = 0;
>  	long ret = 0;
> +	bool tried = false;
>
>  	end = start + len;
>
> @@ -1226,14 +1227,18 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
>  		 * double checks the vma flags, so that it won't mlock pages
>  		 * if the vma was already munlocked.
>  		 */
> -		ret = populate_vma_page_range(vma, nstart, nend, &locked);
> +		ret = populate_vma_page_range(vma, nstart, nend,
> +						tried ? NULL : &locked);
>  		if (ret < 0) {
>  			if (ignore_errors) {
>  				ret = 0;
>  				continue; /* continue at next VMA */
>  			}
>  			break;
> -		}
> +		} else if (ret == 0)
> +			tried = true;
> +		else
> +			tried = false;
>  		nend = nstart + ret * PAGE_SIZE;
>  		ret = 0;
>  	}
> --
> 2.25.0.225.g125e21ebc7-goog

--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR