[merged mm-stable] mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags.patch removed from -mm tree

The quilt patch titled
     Subject: mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
has been removed from the -mm tree.  Its filename was
     mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
Subject: mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
Date: Thu, 26 Sep 2024 13:06:47 +0800

The LRU_REFS_MASK bits are not inherited during migration, which causes the
new folio to start from tier 0 when MGLRU is enabled.  Carry over as many
bits of folio->flags as possible, since compaction and alloc_contig_range,
which trigger migration, do happen at times.

Link: https://lkml.kernel.org/r/20240926050647.5653-1-zhaoyang.huang@xxxxxxxxxx
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
Suggested-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm_inline.h |   10 ++++++++++
 mm/migrate.c              |    1 +
 2 files changed, 11 insertions(+)

--- a/include/linux/mm_inline.h~mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags
+++ a/include/linux/mm_inline.h
@@ -291,6 +291,12 @@ static inline bool lru_gen_del_folio(str
 	return true;
 }
 
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+	unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
+
+	set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
+}
 #else /* !CONFIG_LRU_GEN */
 
 static inline bool lru_gen_enabled(void)
@@ -313,6 +319,10 @@ static inline bool lru_gen_del_folio(str
 	return false;
 }
 
+static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+{
+
+}
 #endif /* CONFIG_LRU_GEN */
 
 static __always_inline
--- a/mm/migrate.c~mm-migrate-lru_refs_mask-bits-in-folio_migrate_flags
+++ a/mm/migrate.c
@@ -695,6 +695,7 @@ void folio_migrate_flags(struct folio *n
 	if (folio_test_idle(folio))
 		folio_set_idle(newfolio);
 
+	folio_migrate_refs(newfolio, folio);
 	/*
 	 * Copy NUMA information to the new page, to prevent over-eager
 	 * future migrations of this same page.
_

Patches currently in -mm which might be from zhaoyang.huang@xxxxxxxxxx are

mm-optimization-on-page-allocation-when-cma-enabled.patch




