+ mm-ksm-convert-break_ksm-to-use-walk_page_range_vma-fix-2.patch added to mm-unstable branch

The patch titled
     Subject: mm/ksm: change break_ksm_pmd|pud_entry() to static
has been added to the -mm mm-unstable branch.  Its filename is
     mm-ksm-convert-break_ksm-to-use-walk_page_range_vma-fix-2.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-ksm-convert-break_ksm-to-use-walk_page_range_vma-fix-2.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yang Yingliang <yangyingliang@xxxxxxxxxx>
Subject: mm/ksm: change break_ksm_pmd|pud_entry() to static
Date: Thu, 20 Oct 2022 15:59:13 +0800

break_ksm_pmd|pud_entry() are now only used in ksm.c, so change them to
static.

Link: https://lkml.kernel.org/r/20221020075913.1046481-1-yangyingliang@xxxxxxxxxx
Fixes: 16ee05ec4698 ("mm/ksm: convert break_ksm() to use walk_page_range_vma()")
Signed-off-by: Yang Yingliang <yangyingliang@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/ksm.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/ksm.c~mm-ksm-convert-break_ksm-to-use-walk_page_range_vma-fix-2
+++ a/mm/ksm.c
@@ -420,7 +420,7 @@ static inline bool ksm_test_exit(struct
 	return atomic_read(&mm->mm_users) == 0;
 }
 
-int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
+static int break_ksm_pud_entry(pud_t *pud, unsigned long addr, unsigned long next,
 			struct mm_walk *walk)
 {
 	/* We only care about page tables to walk to a single base page. */
@@ -429,7 +429,7 @@ int break_ksm_pud_entry(pud_t *pud, unsi
 	return 0;
 }
 
-int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
+static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
 			struct mm_walk *walk)
 {
 	bool *ksm_page = walk->private;
_

Patches currently in -mm which might be from yangyingliang@xxxxxxxxxx are

mm-ksm-convert-break_ksm-to-use-walk_page_range_vma-fix-2.patch
