Re: [PATCH RFC 6/6] mm: madvise: don't split mTHP for MADV_PAGEOUT

>> I'm going to rework this patch and integrate it into my series if that's ok with
>> you?
> 
> This is perfect. Please integrate it into your swap-out series which is the
> perfect place for this MADV_PAGEOUT.

BTW, Ryan, while you integrate this into your swap-out series, could you also
pick up the patch below, which addresses one of Chris's comments?

From: Barry Song <v-songbaohua@xxxxxxxx>
Date: Tue, 27 Feb 2024 22:03:59 +1300
Subject: [PATCH] mm: madvise: extract common function
 folio_deactivate_or_add_to_reclaim_list

madvise_cold_or_pageout_pte_range() currently duplicates the same code for
pmd-mapped folios and pte-mapped normal folios, and future callers, such as
pte-mapped large folios, may want to use it as well. Extract it into a
common function.

Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: SeongJae Park <sj@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
---
 mm/madvise.c | 52 ++++++++++++++++++++--------------------------------
 1 file changed, 20 insertions(+), 32 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 44a498c94158..1812457144ea 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,24 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
 	       file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline void folio_deactivate_or_add_to_reclaim_list(struct folio *folio, bool pageout,
+				struct list_head *folio_list)
+{
+	folio_clear_referenced(folio);
+	folio_test_clear_young(folio);
+
+	if (folio_test_active(folio))
+		folio_set_workingset(folio);
+	if (!pageout)
+		return folio_deactivate(folio);
+	if (folio_isolate_lru(folio)) {
+		if (folio_test_unevictable(folio))
+			folio_putback_lru(folio);
+		else
+			list_add(&folio->lru, folio_list);
+	}
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct mm_walk *walk)
@@ -394,19 +412,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 		}
 
-		folio_clear_referenced(folio);
-		folio_test_clear_young(folio);
-		if (folio_test_active(folio))
-			folio_set_workingset(folio);
-		if (pageout) {
-			if (folio_isolate_lru(folio)) {
-				if (folio_test_unevictable(folio))
-					folio_putback_lru(folio);
-				else
-					list_add(&folio->lru, &folio_list);
-			}
-		} else
-			folio_deactivate(folio);
+		folio_deactivate_or_add_to_reclaim_list(folio, pageout, &folio_list);
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
@@ -498,25 +504,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			tlb_remove_tlb_entry(tlb, pte, addr);
 		}
 
-		/*
-		 * We are deactivating a folio for accelerating reclaiming.
-		 * VM couldn't reclaim the folio unless we clear PG_young.
-		 * As a side effect, it makes confuse idle-page tracking
-		 * because they will miss recent referenced history.
-		 */
-		folio_clear_referenced(folio);
-		folio_test_clear_young(folio);
-		if (folio_test_active(folio))
-			folio_set_workingset(folio);
-		if (pageout) {
-			if (folio_isolate_lru(folio)) {
-				if (folio_test_unevictable(folio))
-					folio_putback_lru(folio);
-				else
-					list_add(&folio->lru, &folio_list);
-			}
-		} else
-			folio_deactivate(folio);
+		folio_deactivate_or_add_to_reclaim_list(folio, pageout, &folio_list);
 	}
 
 	if (start_pte) {
-- 
2.34.1
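
For reference, the helper keeps exactly the behaviour of the two call sites
it replaces. A rough usage sketch follows (illustrative only, not part of the
patch; example_cold_or_pageout() is a made-up name for a caller that has
already cleared the young/dirty bits on the mapping entries):

static void example_cold_or_pageout(struct folio *folio, bool pageout)
{
	LIST_HEAD(folio_list);

	/*
	 * MADV_COLD (pageout == false): drop the reference/young history
	 * and deactivate the folio so reclaim considers it sooner.
	 *
	 * MADV_PAGEOUT (pageout == true): additionally isolate the folio
	 * from the LRU and collect it on folio_list; unevictable folios
	 * are put straight back.
	 */
	folio_deactivate_or_add_to_reclaim_list(folio, pageout, &folio_list);

	/* as madvise_cold_or_pageout_pte_range() already does after the walk */
	if (pageout)
		reclaim_pages(&folio_list);
}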

Thanks
Barry



