The patch titled
     Subject: mm/swap.c: don't pass "enum lru_list" to trace_mm_lru_insertion()
has been added to the -mm tree.  Its filename is
     mm-dont-pass-enum-lru_list-to-trace_mm_lru_insertion.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-dont-pass-enum-lru_list-to-trace_mm_lru_insertion.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-dont-pass-enum-lru_list-to-trace_mm_lru_insertion.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Yu Zhao <yuzhao@xxxxxxxxxx>
Subject: mm/swap.c: don't pass "enum lru_list" to trace_mm_lru_insertion()

The parameter is redundant in the sense that it can be extracted from the
"struct page" parameter by page_lru() correctly.

Link: https://lore.kernel.org/linux-mm/20201207220949.830352-5-yuzhao@xxxxxxxxxx/
Link: https://lkml.kernel.org/r/20210122220600.906146-5-yuzhao@xxxxxxxxxx
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Reviewed-by: Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/trace/events/pagemap.h |   11 ++++-------
 mm/swap.c                      |    5 +----
 2 files changed, 5 insertions(+), 11 deletions(-)

--- a/include/trace/events/pagemap.h~mm-dont-pass-enum-lru_list-to-trace_mm_lru_insertion
+++ a/include/trace/events/pagemap.h
@@ -27,24 +27,21 @@

 TRACE_EVENT(mm_lru_insertion,

-	TP_PROTO(
-		struct page *page,
-		int lru
-	),
+	TP_PROTO(struct page *page),

-	TP_ARGS(page, lru),
+	TP_ARGS(page),

 	TP_STRUCT__entry(
 		__field(struct page *,	page	)
 		__field(unsigned long,	pfn	)
-		__field(int,		lru	)
+		__field(enum lru_list,	lru	)
 		__field(unsigned long,	flags	)
 	),

 	TP_fast_assign(
 		__entry->page	= page;
 		__entry->pfn	= page_to_pfn(page);
-		__entry->lru	= lru;
+		__entry->lru	= page_lru(page);
 		__entry->flags	= trace_pagemap_flags(page);
 	),

--- a/mm/swap.c~mm-dont-pass-enum-lru_list-to-trace_mm_lru_insertion
+++ a/mm/swap.c
@@ -957,7 +957,6 @@ EXPORT_SYMBOL(__pagevec_release);

 static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 {
-	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
 	int nr_pages = thp_nr_pages(page);

@@ -993,11 +992,9 @@ static void __pagevec_lru_add_fn(struct
 	smp_mb__after_atomic();

 	if (page_evictable(page)) {
-		lru = page_lru(page);
 		if (was_unevictable)
 			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
-		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
@@ -1005,7 +1002,7 @@ static void __pagevec_lru_add_fn(struct
 	}

 	add_page_to_lru_list(page, lruvec);
-	trace_mm_lru_insertion(page, lru);
+	trace_mm_lru_insertion(page);
 }

 /*
_

Patches currently in -mm which might be from yuzhao@xxxxxxxxxx are

mm-swap-dont-setpageworkingset-unconditionally-during-swapin.patch
mm-use-add_page_to_lru_list.patch
mm-shuffle-lru-list-addition-and-deletion-functions.patch
mm-dont-pass-enum-lru_list-to-lru-list-addition-functions.patch
mm-dont-pass-enum-lru_list-to-trace_mm_lru_insertion.patch
mm-dont-pass-enum-lru_list-to-del_page_from_lru_list.patch
mm-add-__clear_page_lru_flags-to-replace-page_off_lru.patch
mm-vm_bug_on-lru-page-flags.patch
mm-fold-page_lru_base_type-into-its-sole-caller.patch
mm-fold-__update_lru_size-into-its-sole-caller.patch
mm-make-lruvec_lru_size-static.patch
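
For context on the commit message above: the "lru" argument is redundant
because page_lru() can recompute the list from the page's own state
(PageUnevictable, PageActive, and whether the page is file-backed), which is
exactly what the tracepoint's TP_fast_assign() now does via page_lru(page).
Below is a minimal, self-contained user-space sketch of that derivation.  The
enum values mirror the kernel's, but struct mock_page and mock_page_lru() are
hypothetical stand-ins for illustration only, not the real mm_inline.h code.

/*
 * Mock-up of how an LRU list can be derived from page state alone,
 * in the spirit of the kernel's page_lru().  Not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

enum lru_list {
	LRU_INACTIVE_ANON,
	LRU_ACTIVE_ANON,
	LRU_INACTIVE_FILE,
	LRU_ACTIVE_FILE,
	LRU_UNEVICTABLE,
	NR_LRU_LISTS,
};

struct mock_page {
	bool unevictable;	/* stand-in for PageUnevictable() */
	bool active;		/* stand-in for PageActive() */
	bool file_backed;	/* stand-in for page_is_file_lru() */
};

/* Derive the LRU list purely from the page's state. */
static enum lru_list mock_page_lru(const struct mock_page *page)
{
	enum lru_list lru;

	if (page->unevictable)
		return LRU_UNEVICTABLE;

	lru = page->file_backed ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
	if (page->active)
		lru += 1;	/* active list sits right after its inactive list */

	return lru;
}

int main(void)
{
	struct mock_page p = { .active = true, .file_backed = true };

	/* Prints "lru = 3", i.e. LRU_ACTIVE_FILE. */
	printf("lru = %d\n", mock_page_lru(&p));
	return 0;
}

Because this derivation needs nothing but the page itself, callers such as
__pagevec_lru_add_fn() no longer have to compute and pass the list just for
tracing, which is what the diff above removes.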