On Sun, Mar 9, 2025 at 11:15 AM <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>
> The patch below does not apply to the 6.13-stable tree.

Hi Greg,

I just posted a linux-6.13.y backport [1] of an earlier patch. With
that, and with 37b338eed10581784e854d4262da05c8d960c748 which you
already backported into linux-6.13.y, this patch should merge cleanly.
Could you please try cherry-picking it again after merging [1] into
linux-6.13.y?

Thanks,
Suren.

[1] https://lore.kernel.org/all/20250310184033.1205075-1-surenb@xxxxxxxxxx/

> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.
>
> To reproduce the conflict and resubmit, you may use the following commands:
>
> git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.13.y
> git checkout FETCH_HEAD
> git cherry-pick -x 927e926d72d9155fde3264459fe9bfd7b5e40d28
> # <resolve conflicts, build, test, etc.>
> git commit -s
> git send-email --to '<stable@xxxxxxxxxxxxxxx>' --in-reply-to '2025030947-disloyal-bust-0d23@gregkh' --subject-prefix 'PATCH 6.13.y' HEAD^..
>
> Possible dependencies:
>
>
> thanks,
>
> greg k-h
>
> ------------------ original commit in Linus's tree ------------------
>
> From 927e926d72d9155fde3264459fe9bfd7b5e40d28 Mon Sep 17 00:00:00 2001
> From: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> Date: Wed, 26 Feb 2025 10:55:09 -0800
> Subject: [PATCH] userfaultfd: fix PTE unmapping stack-allocated PTE copies
>
> Current implementation of move_pages_pte() copies source and destination
> PTEs in order to detect concurrent changes to PTEs involved in the move.
> However these copies are also used to unmap the PTEs, which will fail if
> CONFIG_HIGHPTE is enabled because the copies are allocated on the stack.
> Fix this by using the actual PTEs which were kmap()ed.
>
> Link: https://lkml.kernel.org/r/20250226185510.2732648-3-surenb@xxxxxxxxxx
> Fixes: adef440691ba ("userfaultfd: UFFDIO_MOVE uABI")
> Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> Reported-by: Peter Xu <peterx@xxxxxxxxxx>
> Reviewed-by: Peter Xu <peterx@xxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Barry Song <21cnbao@xxxxxxxxx>
> Cc: Barry Song <v-songbaohua@xxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Jann Horn <jannh@xxxxxxxxxx>
> Cc: Kalesh Singh <kaleshsingh@xxxxxxxxxx>
> Cc: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
> Cc: Lokesh Gidra <lokeshgidra@xxxxxxxxxx>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
> Cc: Matthew Wilcow (Oracle) <willy@xxxxxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index f5c6b3454f76..d06453fa8aba 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -1290,8 +1290,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
>  		spin_unlock(src_ptl);
>
>  		if (!locked) {
> -			pte_unmap(&orig_src_pte);
> -			pte_unmap(&orig_dst_pte);
> +			pte_unmap(src_pte);
> +			pte_unmap(dst_pte);
>  			src_pte = dst_pte = NULL;
>  			/* now we can block and wait */
>  			folio_lock(src_folio);
> @@ -1307,8 +1307,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
>  		/* at this point we have src_folio locked */
>  		if (folio_test_large(src_folio)) {
>  			/* split_folio() can block */
> -			pte_unmap(&orig_src_pte);
> -			pte_unmap(&orig_dst_pte);
> +			pte_unmap(src_pte);
> +			pte_unmap(dst_pte);
>  			src_pte = dst_pte = NULL;
>  			err = split_folio(src_folio);
>  			if (err)
> @@ -1333,8 +1333,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
>  			goto out;
>  		}
>  		if (!anon_vma_trylock_write(src_anon_vma)) {
> -			pte_unmap(&orig_src_pte);
> -			pte_unmap(&orig_dst_pte);
> +			pte_unmap(src_pte);
> +			pte_unmap(dst_pte);
>  			src_pte = dst_pte = NULL;
>  			/* now we can block and wait */
>  			anon_vma_lock_write(src_anon_vma);
> @@ -1352,8 +1352,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
>  		entry = pte_to_swp_entry(orig_src_pte);
>  		if (non_swap_entry(entry)) {
>  			if (is_migration_entry(entry)) {
> -				pte_unmap(&orig_src_pte);
> -				pte_unmap(&orig_dst_pte);
> +				pte_unmap(src_pte);
> +				pte_unmap(dst_pte);
>  				src_pte = dst_pte = NULL;
>  				migration_entry_wait(mm, src_pmd, src_addr);
>  				err = -EAGAIN;
> @@ -1396,8 +1396,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
>  			src_folio = folio;
>  			src_folio_pte = orig_src_pte;
>  			if (!folio_trylock(src_folio)) {
> -				pte_unmap(&orig_src_pte);
> -				pte_unmap(&orig_dst_pte);
> +				pte_unmap(src_pte);
> +				pte_unmap(dst_pte);
>  				src_pte = dst_pte = NULL;
>  				put_swap_device(si);
>  				si = NULL;
>