On Mon, Jun 24, 2019 at 02:42:13PM +0000, Song Liu wrote:
> 
> 
> > On Jun 24, 2019, at 7:27 AM, Kirill A. Shutemov <kirill@xxxxxxxxxxxxx> wrote:
> > 
> > On Mon, Jun 24, 2019 at 02:01:05PM +0000, Song Liu wrote:
> >>>> @@ -1392,6 +1403,23 @@ static void collapse_file(struct mm_struct *mm,
> >>>>  				result = SCAN_FAIL;
> >>>>  				goto xa_unlocked;
> >>>>  			}
> >>>> +		} else if (!page || xa_is_value(page)) {
> >>>> +			xas_unlock_irq(&xas);
> >>>> +			page_cache_sync_readahead(mapping, &file->f_ra, file,
> >>>> +						  index, PAGE_SIZE);
> >>>> +			lru_add_drain();
> >>> 
> >>> Why?
> >> 
> >> isolate_lru_page() is likely to fail if we don't drain the pagevecs.
> > 
> > Please add a comment.
> 
> Will do.
> 
> > 
> >>>> +			page = find_lock_page(mapping, index);
> >>>> +			if (unlikely(page == NULL)) {
> >>>> +				result = SCAN_FAIL;
> >>>> +				goto xa_unlocked;
> >>>> +			}
> >>>> +		} else if (!PageUptodate(page)) {
> >>> 
> >>> Maybe we should try wait_on_page_locked() here before giving up?
> >> 
> >> Are you referring to the "if (!PageUptodate(page))" case?
> > 
> > Yes.
> 
> I think this case happens when another thread is reading the page in.
> I could not think of a way to trigger this condition for testing.
> 
> On the other hand, with the current logic, we will retry the page on
> the next scan, so I guess this is OK.

What I meant is that calling wait_on_page_locked() on a !PageUptodate()
page will likely make it up-to-date, so we don't need to SCAN_FAIL the
attempt.

-- 
 Kirill A. Shutemov
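
[Editorial illustration] A minimal sketch of what that suggestion could
look like in the !PageUptodate() branch of collapse_file(). This is only
an illustration under stated assumptions, not the patch that was posted
or merged: it reuses the names from the quoted hunk (xas, page, result,
xa_unlocked), assumes the page was found under xas_lock_irq(), and elides
the retry handling for the index.

		} else if (!PageUptodate(page)) {
			/*
			 * Another thread is likely still reading this page in;
			 * it holds the page lock until the read completes.
			 * Instead of failing the collapse outright, wait for
			 * the read to finish and re-check (sketch only).
			 */
			get_page(page);		/* pin before dropping the xa lock */
			xas_unlock_irq(&xas);
			wait_on_page_locked(page);
			if (!PageUptodate(page)) {
				put_page(page);
				result = SCAN_FAIL;
				goto xa_unlocked;
			}
			put_page(page);
			/* ... retry this index now that the page is up to date ... */
		}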