The cleanups are intended to reduce the verbosity of lru list operations and make them less error-prone. A typical example is how the patches change __activate_page():

 static void __activate_page(struct page *page, struct lruvec *lruvec)
 {
 	if (!PageActive(page) && !PageUnevictable(page)) {
-		int lru = page_lru_base_type(page);
 		int nr_pages = thp_nr_pages(page);

-		del_page_from_lru_list(page, lruvec, lru);
+		del_page_from_lru_list(page, lruvec);
 		SetPageActive(page);
-		lru += LRU_ACTIVE;
-		add_page_to_lru_list(page, lruvec, lru);
+		add_page_to_lru_list(page, lruvec);
 		trace_mm_lru_activate(page);

There are a few more places like __activate_page(), and they are unnecessarily repetitive in terms of figuring out which list a page should be added to or deleted from. With the duplicated code removed, they are easier to read, IMO.

Patches 1 to 5 basically cover the above. Patches 6 and 7 make the code more robust by improving bug reporting. Patches 8, 9 and 10 take care of some dangling helpers left in header files.

v1 -> v2:
  Dropped the last patch in this series based on the discussion here:
  https://lore.kernel.org/patchwork/patch/1350552/#1550430

Yu Zhao (10):
  mm: use add_page_to_lru_list()
  mm: shuffle lru list addition and deletion functions
  mm: don't pass "enum lru_list" to lru list addition functions
  mm: don't pass "enum lru_list" to trace_mm_lru_insertion()
  mm: don't pass "enum lru_list" to del_page_from_lru_list()
  mm: add __clear_page_lru_flags() to replace page_off_lru()
  mm: VM_BUG_ON lru page flags
  mm: fold page_lru_base_type() into its sole caller
  mm: fold __update_lru_size() into its sole caller
  mm: make lruvec_lru_size() static

 include/linux/mm_inline.h      | 113 ++++++++++++++-------------
 include/linux/mmzone.h         |   2 -
 include/trace/events/pagemap.h |  11 ++--
 mm/compaction.c                |   2 +-
 mm/mlock.c                     |   3 +-
 mm/swap.c                      |  50 ++++++---------
 mm/vmscan.c                    |  21 ++----
 7 files changed, 77 insertions(+), 125 deletions(-)

-- 
2.30.0.280.ga3ce27912f-goog