The patch titled
     Subject: mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
has been added to the -mm mm-unstable branch.  Its filename is
     mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
Subject: mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
Date: Thu, 26 Sep 2024 13:06:47 +0800

The LRU_REFS_MASK bits of folio->flags are not inherited during migration,
so the new folio restarts from tier 0 when MGLRU is enabled.  Carry over
as many bits of folio->flags as possible, since migrations introduced by
compaction and alloc_contig_range do happen from time to time.

Link: https://lkml.kernel.org/r/20240926050647.5653-1-zhaoyang.huang@xxxxxxxxxx
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
Suggested-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm_inline.h |   10 ++++++++++
 mm/migrate.c              |    1 +
 2 files changed, 11 insertions(+)

--- a/include/linux/mm_inline.h~mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags
+++ a/include/linux/mm_inline.h
@@ -291,6 +291,12 @@ static inline bool lru_gen_del_folio(str
 	return true;
 }
 
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+	unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
+
+	set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
+}
 #else /* !CONFIG_LRU_GEN */
 
 static inline bool lru_gen_enabled(void)
@@ -313,6 +319,10 @@ static inline bool lru_gen_del_folio(str
 	return false;
 }
 
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+
+}
 #endif /* CONFIG_LRU_GEN */
 
 static __always_inline
--- a/mm/migrate.c~mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags
+++ a/mm/migrate.c
@@ -694,6 +694,7 @@ void folio_migrate_flags(struct folio *n
 	if (folio_test_idle(folio))
 		folio_set_idle(newfolio);
 
+	folio_migrate_refs(newfolio, folio);
 	/*
 	 * Copy NUMA information to the new page, to prevent over-eager
 	 * future migrations of this same page.
_

Patches currently in -mm which might be from zhaoyang.huang@xxxxxxxxxx are

mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags.patch
mm-optimization-on-page-allocation-when-cma-enabled.patch
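
A note on the mechanism: the patch hinges on the semantics of the kernel's
set_mask_bits() helper (include/linux/bitops.h), which atomically replaces
only the bits under a mask and leaves the rest of the word untouched.
Below is a minimal user-space C sketch of that behaviour for illustration;
the mask value, the sample flag words and main() are invented here, and
only the (old & ~mask) | bits cmpxchg-loop shape mirrors the real helper.

	/* Illustration only -- not kernel code.  Sketches the
	 * (old->flags & LRU_REFS_MASK) transfer done by folio_migrate_refs().
	 * The mask below is a placeholder; the real LRU_REFS_MASK is derived
	 * from LRU_REFS_WIDTH in the kernel. */
	#include <stdatomic.h>
	#include <stdio.h>

	#define LRU_REFS_MASK	0x0000000000000f00UL	/* placeholder field */

	/* Semantics of the kernel's set_mask_bits(): atomically replace the
	 * bits under @mask with @bits, leaving all other bits untouched. */
	static unsigned long set_mask_bits(_Atomic unsigned long *ptr,
					   unsigned long mask,
					   unsigned long bits)
	{
		unsigned long old = atomic_load(ptr), new;

		do {
			new = (old & ~mask) | bits;
		} while (!atomic_compare_exchange_weak(ptr, &old, new));

		return new;
	}

	int main(void)
	{
		_Atomic unsigned long old_flags = 0x0000000000000700UL; /* tier bits */
		_Atomic unsigned long new_flags = 0x0000000000010000UL; /* unrelated flag */

		/* What folio_migrate_refs(new, old) boils down to: */
		unsigned long refs = atomic_load(&old_flags) & LRU_REFS_MASK;

		set_mask_bits(&new_flags, LRU_REFS_MASK, refs);

		/* Prints 0x10700: the unrelated bit survives, the refs
		 * field is carried over. */
		printf("new flags: 0x%lx\n", atomic_load(&new_flags));
		return 0;
	}

Because only the bits under LRU_REFS_MASK are rewritten, flags such as the
idle bit copied earlier in folio_migrate_flags() are left intact.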