The patch titled
     Subject: mm: remove activate_page() from unuse_pte()
has been added to the -mm tree.  Its filename is
     mm-remove-activate_page-from-unuse_pte.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-remove-activate_page-from-unuse_pte.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-remove-activate_page-from-unuse_pte.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Yu Zhao <yuzhao@xxxxxxxxxx>
Subject: mm: remove activate_page() from unuse_pte()

We no longer initially add anon pages to the active lruvec since commit
b518154e59aa ("mm/vmscan: protect the workingset on anonymous LRU").
Remove the activate_page() call from unuse_pte(), which that commit seems
to have missed, and make the function static while we are at it.

Before that commit, we called lru_cache_add_active_or_unevictable() to
add new ksm pages to the active lruvec, so activate_page() was never
necessary for them in the first place.

Link: http://lkml.kernel.org/r/20200818184704.3625199-1-yuzhao@xxxxxxxxxx
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
Cc: Qian Cai <cai@xxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/swap.h |    1 -
 mm/swap.c            |    4 ++--
 mm/swapfile.c        |    5 -----
 3 files changed, 2 insertions(+), 8 deletions(-)

--- a/include/linux/swap.h~mm-remove-activate_page-from-unuse_pte
+++ a/include/linux/swap.h
@@ -340,7 +340,6 @@ extern void lru_note_cost_page(struct pa
 extern void lru_cache_add(struct page *);
 extern void lru_add_page_tail(struct page *page, struct page *page_tail,
 			 struct lruvec *lruvec, struct list_head *head);
-extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
 extern void lru_add_drain_cpu(int cpu);
--- a/mm/swap.c~mm-remove-activate_page-from-unuse_pte
+++ a/mm/swap.c
@@ -348,7 +348,7 @@ static bool need_activate_page_drain(int
 	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-void activate_page(struct page *page)
+static void activate_page(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -368,7 +368,7 @@ static inline void activate_page_drain(i
 {
 }
 
-void activate_page(struct page *page)
+static void activate_page(struct page *page)
 {
 	pg_data_t *pgdat = page_pgdat(page);
 
--- a/mm/swapfile.c~mm-remove-activate_page-from-unuse_pte
+++ a/mm/swapfile.c
@@ -1925,11 +1925,6 @@ static int unuse_pte(struct vm_area_stru
 		lru_cache_add_inactive_or_unevictable(page, vma);
 	}
 	swap_free(entry);
-	/*
-	 * Move the page to the active list so it is not
-	 * immediately swapped out again after swapon.
-	 */
-	activate_page(page);
 out:
 	pte_unmap_unlock(pte, ptl);
 	if (page != swapcache) {
_

Patches currently in -mm which might be from yuzhao@xxxxxxxxxx are

mm-remove-activate_page-from-unuse_pte.patch
mm-remove-superfluous-__clearpageactive.patch
mm-remove-superfluous-__clearpagewaiters.patch
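
As a rough illustration of the policy the changelog leans on, here is a
minimal, self-contained sketch in plain userspace C (the names toy_page,
toy_add_inactive and toy_mark_accessed are invented for the example; this
is not kernel code): a newly swapped-in page starts on the inactive list,
as lru_cache_add_inactive_or_unevictable() now arranges, and is promoted
to the active list only after it has been referenced again, which is why
an unconditional activate_page() at swap-in time is redundant.

/*
 * Toy model of a two-list LRU: new entries start inactive and are
 * activated only on a second access, roughly mirroring what
 * mark_page_accessed() does for pages on the inactive list.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	const char *name;
	bool active;		/* sits on the active list? */
	bool referenced;	/* touched since it was added or last scanned? */
};

/* Swap-in path after the patch: no unconditional activation. */
static void toy_add_inactive(struct toy_page *page)
{
	page->active = false;
	page->referenced = false;
}

/* Analogue of a reference: promote only a page that was already touched. */
static void toy_mark_accessed(struct toy_page *page)
{
	if (!page->active && page->referenced)
		page->active = true;	/* second touch: join the workingset */
	else
		page->referenced = true;
}

int main(void)
{
	struct toy_page page = { .name = "swapped-in anon page" };

	toy_add_inactive(&page);	/* unuse_pte() brings it back */
	printf("%s: %s\n", page.name, page.active ? "active" : "inactive");

	toy_mark_accessed(&page);	/* first touch */
	toy_mark_accessed(&page);	/* second touch: activated */
	printf("%s: %s\n", page.name, page.active ? "active" : "inactive");
	return 0;
}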