+ mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch added to mm-unstable branch

The patch titled
     Subject: mm: swap_pte_batch: add an output argument to return whether all swap entries are exclusive
has been added to the -mm mm-unstable branch.  Its filename is
     mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Barry Song <v-songbaohua@xxxxxxxx>
Subject: mm: swap_pte_batch: add an output argument to return whether all swap entries are exclusive
Date: Tue, 9 Apr 2024 20:26:29 +1200

Add a boolean output argument named any_shared.  If any of the swap
entries in the batch are non-exclusive, set *any_shared to true.  The
function do_swap_page() can then use this information to decide whether
the entire large folio can be reused.
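
For illustration only (this sketch is not part of the patch), a caller
that wants the exclusivity information might use the new argument
roughly as follows; the locals start_ptep, max_nr and pte are assumed
to be set up as at the existing swap_pte_batch() call sites:

	bool any_shared = false;	/* callee only ORs in, so init here */
	int nr;

	/* Batch consecutive swap PTEs, collecting exclusivity info. */
	nr = swap_pte_batch(start_ptep, max_nr, pte, &any_shared);
	if (!any_shared) {
		/*
		 * Every swap entry in the batch is marked exclusive,
		 * so the whole large folio is a reuse candidate.
		 */
	}

Note that swap_pte_batch() only ever ORs into *any_shared; it is the
caller's responsibility to initialize the flag to false.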

Link: https://lkml.kernel.org/r/20240409082631.187483-4-21cnbao@xxxxxxxxx
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Chris Li <chrisl@xxxxxxxxxx>
Cc: Chuanhua Han <hanchuanhua@xxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Gao Xiang <xiang@xxxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kairui Song <kasong@xxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/internal.h |    9 ++++++++-
 mm/madvise.c  |    2 +-
 mm/memory.c   |    2 +-
 3 files changed, 10 insertions(+), 3 deletions(-)

--- a/mm/internal.h~mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive
+++ a/mm/internal.h
@@ -241,7 +241,8 @@ static inline pte_t pte_next_swp_offset(
  *
  * Return: the number of table entries in the batch.
  */
-static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
+static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte,
+				bool *any_shared)
 {
 	pte_t expected_pte = pte_next_swp_offset(pte);
 	const pte_t *end_ptep = start_ptep + max_nr;
@@ -251,12 +252,18 @@ static inline int swap_pte_batch(pte_t *
 	VM_WARN_ON(!is_swap_pte(pte));
 	VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
 
+	if (any_shared)
+		*any_shared |= !pte_swp_exclusive(pte);
+
 	while (ptep < end_ptep) {
 		pte = ptep_get(ptep);
 
 		if (!pte_same(pte, expected_pte))
 			break;
 
+		if (any_shared)
+			*any_shared |= !pte_swp_exclusive(pte);
+
 		expected_pte = pte_next_swp_offset(expected_pte);
 		ptep++;
 	}
--- a/mm/madvise.c~mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive
+++ a/mm/madvise.c
@@ -671,7 +671,7 @@ static int madvise_free_pte_range(pmd_t
 			entry = pte_to_swp_entry(ptent);
 			if (!non_swap_entry(entry)) {
 				max_nr = (end - addr) / PAGE_SIZE;
-				nr = swap_pte_batch(pte, max_nr, ptent);
+				nr = swap_pte_batch(pte, max_nr, ptent, NULL);
 				nr_swap -= nr;
 				free_swap_and_cache_nr(entry, nr);
 				clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
--- a/mm/memory.c~mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive
+++ a/mm/memory.c
@@ -1637,7 +1637,7 @@ static unsigned long zap_pte_range(struc
 			folio_put(folio);
 		} else if (!non_swap_entry(entry)) {
 			max_nr = (end - addr) / PAGE_SIZE;
-			nr = swap_pte_batch(pte, max_nr, ptent);
+			nr = swap_pte_batch(pte, max_nr, ptent, NULL);
 			/* Genuine swap entries, hence a private anon pages */
 			if (!should_zap_cows(details))
 				continue;
_
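
As the madvise.c and memory.c hunks above show, the new argument is
optional: callers with no interest in exclusivity pass NULL, in which
case swap_pte_batch() skips the bookkeeping entirely, so existing call
sites change only by the extra NULL argument:

	/* Batch length only; not interested in exclusivity. */
	nr = swap_pte_batch(pte, max_nr, ptent, NULL);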

Patches currently in -mm which might be from v-songbaohua@xxxxxxxx are

arm64-mm-swap-support-thp_swap-on-hardware-with-mte.patch
mm-hold-ptl-from-the-first-pte-while-reclaiming-a-large-folio.patch
mm-alloc_anon_folio-avoid-doing-vma_thp_gfp_mask-in-fallback-cases.patch
mm-add-per-order-mthp-anon_alloc-and-anon_alloc_fallback-counters.patch
mm-add-per-order-mthp-anon_alloc-and-anon_alloc_fallback-counters-fix.patch
mm-add-per-order-mthp-anon_swpout-and-anon_swpout_fallback-counters.patch
mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch
mm-add-per-order-mthp-swpin_refault-counter.patch




