On Tue, Aug 11, 2020 at 9:04 PM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
>
> lru_cache_add_active_or_unevictable() already adds new ksm pages to
> active lru. Calling activate_page() isn't really necessary in this
> case.
>
> Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
> ---
>  mm/swapfile.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6c26916e95fd..cf115ea26a20 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1913,16 +1913,16 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>  		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
>  	if (page == swapcache) {
>  		page_add_anon_rmap(page, vma, addr, false);
> +		/*
> +		 * Move the page to the active list so it is not
> +		 * immediately swapped out again after swapon.
> +		 */
> +		activate_page(page);

Actually, I think we could just remove this activate_page() call now
that Joonsoo's anonymous page workingset series is merged. The active
bit will be taken care of by workingset_refault().

>  	} else { /* ksm created a completely new copy */
>  		page_add_new_anon_rmap(page, vma, addr, false);
>  		lru_cache_add_active_or_unevictable(page, vma);

And it looks like the latest Linus tree has already changed this to
lru_cache_add_inactive_or_unevictable() in commit b518154e59
("mm/vmscan: protect the workingset on anonymous LRU").

Added Joonsoo to the loop.

>  	}
>  	swap_free(entry);
> -	/*
> -	 * Move the page to the active list so it is not
> -	 * immediately swapped out again after swapon.
> -	 */
> -	activate_page(page);
>  out:
>  	pte_unmap_unlock(pte, ptl);
>  	if (page != swapcache) {
> --
> 2.28.0.236.gb10cc79966-goog
>
>
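For reference, on top of b518154e59 and with the activate_page() call
dropped as suggested above, I'd expect this part of unuse_pte() to end
up looking roughly like the following. This is an untested sketch for
discussion, not a formal patch:

	if (page == swapcache) {
		/*
		 * The page came back in through the swap cache, so
		 * workingset_refault() has already decided whether it
		 * belongs on the active list; no activate_page() needed.
		 */
		page_add_anon_rmap(page, vma, addr, false);
	} else { /* ksm created a completely new copy */
		page_add_new_anon_rmap(page, vma, addr, false);
		/*
		 * Since b518154e59, new anon pages start on the
		 * inactive list to protect the existing workingset.
		 */
		lru_cache_add_inactive_or_unevictable(page, vma);
	}
	swap_free(entry);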