+ mm-truncate-batch-clear-shadow-entries-v2.patch added to mm-unstable branch

The patch titled
     Subject: mm-truncate-batch-clear-shadow-entries-v2
has been added to the -mm mm-unstable branch.  Its filename is
     mm-truncate-batch-clear-shadow-entries-v2.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-truncate-batch-clear-shadow-entries-v2.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yu Zhao <yuzhao@xxxxxxxxxx>
Subject: mm-truncate-batch-clear-shadow-entries-v2
Date: Wed, 10 Jul 2024 00:09:33 -0600

Restore the code comment and rename clear_shadow_entry() to clear_shadow_entries(), since it now operates on a batch of entries.

Link: https://lkml.kernel.org/r/20240710060933.3979380-1-yuzhao@xxxxxxxxxx
Reported-by: Bharata B Rao <bharata@xxxxxxx>
Closes: https://lore.kernel.org/d2841226-e27b-4d3d-a578-63587a3aa4f3@xxxxxxx/
Tested-by: Bharata B Rao <bharata@xxxxxxx>
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/truncate.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/mm/truncate.c~mm-truncate-batch-clear-shadow-entries-v2
+++ a/mm/truncate.c
@@ -39,11 +39,12 @@ static inline void __clear_shadow_entry(
 	xas_store(&xas, NULL);
 }
 
-static void clear_shadow_entry(struct address_space *mapping,
-			       struct folio_batch *fbatch, pgoff_t *indices)
+static void clear_shadow_entries(struct address_space *mapping,
+				 struct folio_batch *fbatch, pgoff_t *indices)
 {
 	int i;
 
+	/* Handled by shmem itself, or for DAX we do nothing. */
 	if (shmem_mapping(mapping) || dax_mapping(mapping))
 		return;
 
@@ -507,7 +508,7 @@ unsigned long mapping_try_invalidate(str
 		}
 
 		if (xa_has_values)
-			clear_shadow_entry(mapping, &fbatch, indices);
+			clear_shadow_entries(mapping, &fbatch, indices);
 
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
@@ -657,7 +658,7 @@ int invalidate_inode_pages2_range(struct
 		}
 
 		if (xa_has_values)
-			clear_shadow_entry(mapping, &fbatch, indices);
+			clear_shadow_entries(mapping, &fbatch, indices);
 
 		folio_batch_remove_exceptionals(&fbatch);
 		folio_batch_release(&fbatch);
_

Patches currently in -mm which might be from yuzhao@xxxxxxxxxx are

mm-truncate-batch-clear-shadow-entries.patch
mm-truncate-batch-clear-shadow-entries-v2.patch
