The patch titled
     Subject: mm/memory: pass PTE to copy_present_pte()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-memory-pass-pte-to-copy_present_pte.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-memory-pass-pte-to-copy_present_pte.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/memory: pass PTE to copy_present_pte()
Date: Mon, 29 Jan 2024 13:46:46 +0100

We already read it, let's just forward it.

This patch is based on work by Ryan Roberts.

Link: https://lkml.kernel.org/r/20240129124649.189745-13-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
Cc: Alexander Gordeev <agordeev@xxxxxxxxxxxxx>
Cc: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Christian Borntraeger <borntraeger@xxxxxxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Dinh Nguyen <dinguyen@xxxxxxxxxx>
Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxxxxx>
Cc: Heiko Carstens <hca@xxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Naveen N. Rao <naveen.n.rao@xxxxxxxxxxxxx>
Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Russell King (Oracle) <linux@xxxxxxxxxxxxxxx>
Cc: Sven Schnelle <svens@xxxxxxxxxxxxx>
Cc: Vasily Gorbik <gor@xxxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |    7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

--- a/mm/memory.c~mm-memory-pass-pte-to-copy_present_pte
+++ a/mm/memory.c
@@ -959,10 +959,9 @@ static inline void __copy_present_pte(st
  */
 static inline int
 copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
-		 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
-		 struct folio **prealloc)
+		 pte_t *dst_pte, pte_t *src_pte, pte_t pte, unsigned long addr,
+		 int *rss, struct folio **prealloc)
 {
-	pte_t pte = ptep_get(src_pte);
 	struct page *page;
 	struct folio *folio;
 
@@ -1103,7 +1102,7 @@ again:
 		}
 		/* copy_present_pte() will clear `*prealloc' if consumed */
 		ret = copy_present_pte(dst_vma, src_vma, dst_pte, src_pte,
-				       addr, rss, &prealloc);
+				       ptent, addr, rss, &prealloc);
 		/*
 		 * If we need a pre-allocated page for this pte, drop the
 		 * locks, allocate, and try again.
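
The idea behind the change, shown here as a minimal stand-alone sketch rather
than kernel code: the caller reads the page-table entry once and forwards the
value, so the callee no longer has to dereference src_pte again via
ptep_get().  All names and types in the sketch (sketch_pte_t,
is_present_before(), is_present_after()) are hypothetical simplifications for
illustration only, not the real mm/memory.c interfaces.

/*
 * Minimal stand-alone sketch (not kernel code): read a value once in the
 * caller and pass it down, instead of having the callee dereference the
 * pointer a second time.  All names here are made up for illustration.
 */
#include <stdio.h>

typedef unsigned long sketch_pte_t;	/* hypothetical stand-in for pte_t */

/* Before: the callee re-reads the entry through the pointer. */
static int is_present_before(const sketch_pte_t *src_pte)
{
	sketch_pte_t pte = *src_pte;	/* second read of the same slot */
	return (int)(pte & 1UL);	/* pretend bit 0 is "present" */
}

/* After: the caller forwards the value it already read. */
static int is_present_after(const sketch_pte_t *src_pte, sketch_pte_t pte)
{
	(void)src_pte;			/* pointer still available if needed */
	return (int)(pte & 1UL);
}

int main(void)
{
	sketch_pte_t slot = 0x1001UL;	/* fake page-table entry, "present" */
	sketch_pte_t ptent = slot;	/* caller reads the entry once */

	printf("before: %d, after: %d\n",
	       is_present_before(&slot),
	       is_present_after(&slot, ptent));
	return 0;
}

The patch applies the same pattern: copy_pte_range() already holds the entry
in ptent, so copy_present_pte() takes it as a parameter instead of calling
ptep_get(src_pte) a second time.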
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

arm-pgtable-define-pfn_pte_shift.patch
nios2-pgtable-define-pfn_pte_shift.patch
powerpc-pgtable-define-pfn_pte_shift.patch
riscv-pgtable-define-pfn_pte_shift.patch
s390-pgtable-define-pfn_pte_shift.patch
sparc-pgtable-define-pfn_pte_shift.patch
mm-pgtable-make-pte_next_pfn-independent-of-set_ptes.patch
arm-mm-use-pte_next_pfn-in-set_ptes.patch
powerpc-mm-use-pte_next_pfn-in-set_ptes.patch
mm-memory-factor-out-copying-the-actual-pte-in-copy_present_pte.patch
mm-memory-pass-pte-to-copy_present_pte.patch
mm-memory-optimize-fork-with-pte-mapped-thp.patch
mm-memory-ignore-dirty-accessed-soft-dirty-bits-in-folio_pte_batch.patch
mm-memory-ignore-writable-bit-in-folio_pte_batch.patch