The patch titled
     Subject: mm/memory.c: allow different return codes for copy_nonpresent_pte()
has been added to the -mm tree.  Its filename is
     mm-memoryc-allow-different-return-codes-for-copy_nonpresent_pte.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-memoryc-allow-different-return-codes-for-copy_nonpresent_pte.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-memoryc-allow-different-return-codes-for-copy_nonpresent_pte.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Alistair Popple <apopple@xxxxxxxxxx>
Subject: mm/memory.c: allow different return codes for copy_nonpresent_pte()

Currently if copy_nonpresent_pte() returns a non-zero value it is assumed
to be a swap entry which requires further processing outside the loop in
copy_pte_range() after dropping locks.  This prevents other values being
returned to signal conditions such as failure which a subsequent change
requires.

Instead make copy_nonpresent_pte() return an error code if further
processing is required and read the value for the swap entry in the main
loop under the ptl.

Link: https://lkml.kernel.org/r/20210616105937.23201-7-apopple@xxxxxxxxxx
Signed-off-by: Alistair Popple <apopple@xxxxxxxxxx>
Reviewed-by: Peter Xu <peterx@xxxxxxxxxx>
Cc: Ben Skeggs <bskeggs@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |   28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

--- a/mm/memory.c~mm-memoryc-allow-different-return-codes-for-copy_nonpresent_pte
+++ a/mm/memory.c
@@ -717,7 +717,7 @@ copy_nonpresent_pte(struct mm_struct *ds
 
 	if (likely(!non_swap_entry(entry))) {
 		if (swap_duplicate(entry) < 0)
-			return entry.val;
+			return -EIO;
 
 		/* make sure dst_mm is on swapoff's mmlist. */
 		if (unlikely(list_empty(&dst_mm->mmlist))) {
@@ -973,12 +973,14 @@ again:
 			continue;
 		}
 		if (unlikely(!pte_present(*src_pte))) {
-			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
-							dst_pte, src_pte,
-							dst_vma, src_vma,
-							addr, rss);
-			if (entry.val)
+			ret = copy_nonpresent_pte(dst_mm, src_mm,
+						  dst_pte, src_pte,
+						  dst_vma, src_vma,
+						  addr, rss);
+			if (ret == -EIO) {
+				entry = pte_to_swp_entry(*src_pte);
 				break;
+			}
 			progress += 8;
 			continue;
 		}
@@ -1011,20 +1013,24 @@ again:
 	pte_unmap_unlock(orig_dst_pte, dst_ptl);
 	cond_resched();
 
-	if (entry.val) {
+	if (ret == -EIO) {
+		VM_WARN_ON_ONCE(!entry.val);
 		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
 			ret = -ENOMEM;
 			goto out;
 		}
 		entry.val = 0;
-	} else if (ret) {
-		WARN_ON_ONCE(ret != -EAGAIN);
+	} else if (ret == -EAGAIN) {
 		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
 		if (!prealloc)
 			return -ENOMEM;
-		/* We've captured and resolved the error. Reset, try again. */
-		ret = 0;
+	} else if (ret) {
+		VM_WARN_ON_ONCE(1);
 	}
+
+	/* We've captured and resolved the error. Reset, try again. */
+	ret = 0;
+
 	if (addr != end)
 		goto again;
 out:
_
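For readers following the control-flow change, the short self-contained C sketch below models the calling convention the changelog describes, outside the kernel.  Everything in it (the toy_pte type, toy_copy_nonpresent_pte(), toy_swap_duplicate(), the hard-coded failing entry) is invented purely for illustration and is not a kernel API; it only mirrors the contract in the patch above: the helper reports -EIO rather than handing back the raw swap entry value, the caller reads the entry from the source PTE while the lock is notionally still held, and the deferred handling runs after the loop.

/*
 * Simplified, userspace-only model of the return-code convention described
 * in the changelog above.  All names here are invented for illustration;
 * this is not the kernel implementation.
 */
#include <errno.h>
#include <stdio.h>

typedef struct { unsigned long val; } toy_swp_entry_t;
typedef struct { unsigned long swap_val; int present; } toy_pte;

/* Pretend swap_duplicate(): fail for one particular entry value. */
static int toy_swap_duplicate(toy_swp_entry_t entry)
{
	return entry.val == 42 ? -1 : 0;
}

/*
 * Old convention: return entry.val on failure, so the caller could not
 * distinguish other error conditions.  New convention: return 0 on
 * success, -EIO when the caller must do further processing later.
 */
static int toy_copy_nonpresent_pte(const toy_pte *src_pte)
{
	toy_swp_entry_t entry = { .val = src_pte->swap_val };

	if (toy_swap_duplicate(entry) < 0)
		return -EIO;	/* the old code returned entry.val here */
	return 0;
}

static int toy_copy_pte_range(const toy_pte *src, int nr)
{
	toy_swp_entry_t entry = { 0 };
	int i, ret = 0;

	/* pretend the page-table lock is held for this loop */
	for (i = 0; i < nr; i++) {
		if (!src[i].present) {
			ret = toy_copy_nonpresent_pte(&src[i]);
			if (ret == -EIO) {
				/* read the swap entry while still "under the lock" */
				entry.val = src[i].swap_val;
				break;
			}
			continue;
		}
		/* a real implementation would copy the present PTE here */
	}
	/* pretend the lock is dropped here */

	if (ret == -EIO) {
		printf("deferred work for swap entry %lu\n", entry.val);
		ret = 0;	/* captured and resolved, mirroring the patch */
	}
	return ret;
}

int main(void)
{
	toy_pte ptes[] = {
		{ .swap_val = 1, .present = 0 },
		{ .swap_val = 42, .present = 0 },
	};

	return toy_copy_pte_range(ptes, 2);
}

Compiled with any C compiler, the program prints the deferred-work message only for the entry that fails the duplicate step, following the same ordering as the patched copy_pte_range(): capture the entry under the lock, resolve it after the lock is dropped.
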
Patches currently in -mm which might be from apopple@xxxxxxxxxx are

mm-remove-special-swap-entry-functions.patch
mm-swapops-rework-swap-entry-manipulation-code.patch
mm-rmap-split-try_to_munlock-from-try_to_unmap.patch
mm-rmap-split-migration-into-its-own-function.patch
mm-rename-migrate_pgmap_owner.patch
mm-memoryc-allow-different-return-codes-for-copy_nonpresent_pte.patch
mm-device-exclusive-memory-access.patch
mm-selftests-for-exclusive-device-memory.patch
nouveau-svm-refactor-nouveau_range_fault.patch
nouveau-svm-implement-atomic-svm-access.patch