The patch titled
     Subject: iov_iter: add copy_page_to_iter_nofault()
has been added to the -mm mm-unstable branch.  Its filename is
     iov_iter-add-copy_page_to_iter_nofault.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/iov_iter-add-copy_page_to_iter_nofault.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
Subject: iov_iter: add copy_page_to_iter_nofault()
Date: Wed, 22 Mar 2023 18:57:03 +0000

Provide a means to copy a page to user space from an iterator, aborting if
a page fault would occur.  This supports compound pages, but may be passed
a tail page with an offset extending further into the compound page, so we
cannot pass a folio.

This allows the function to be called from atomic context, where it will
_try_ to copy to user pages that are already faulted in, aborting if they
are not.

The function does not use _copy_to_iter() so as to avoid the might_fault()
annotation; this is similar to copy_page_from_iter_atomic().

This is being added so that an iterator-based form of vread() can be
implemented while holding spinlocks.

Link: https://lkml.kernel.org/r/19734729defb0f498a76bdec1bef3ac48a3af3e8.1679511146.git.lstoakes@xxxxxxxxx
Signed-off-by: Lorenzo Stoakes <lstoakes@xxxxxxxxx>
Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Baoquan He <bhe@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>
Cc: Jiri Olsa <jolsa@xxxxxxxxxx>
Cc: Liu Shixin <liushixin2@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
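For reviewers, the intended calling pattern is roughly the sketch below
(not part of this patch).  The names read_page_nofault, src_page and lock
are invented for illustration; fault_in_iov_iter_writeable() is the
existing uio helper for faulting in the destination while no locks are
held.

#include <linux/mm_types.h>
#include <linux/spinlock.h>
#include <linux/uio.h>

/* Illustrative caller: copy one page to userspace under a spinlock. */
static ssize_t read_page_nofault(struct page *src_page, spinlock_t *lock,
				 struct iov_iter *iter)
{
	size_t copied;

	spin_lock(lock);
	/*
	 * Never faults or sleeps, so it is safe under the lock; a short
	 * return means the user buffer was not already faulted in.
	 */
	copied = copy_page_to_iter_nofault(src_page, 0, PAGE_SIZE, iter);
	spin_unlock(lock);

	if (copied == PAGE_SIZE)
		return copied;

	/*
	 * Short copy: with the lock now dropped, fault in the rest of
	 * the user buffer so the caller can retry the remainder.
	 */
	if (fault_in_iov_iter_writeable(iter, PAGE_SIZE - copied))
		return -EFAULT;

	return copied;
}

This is roughly the pattern the later
mm-vmalloc-convert-vread-to-vread_iter.patch relies on to copy out
vmalloc'd memory while holding spinlocks.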
--- a/include/linux/uio.h~iov_iter-add-copy_page_to_iter_nofault
+++ a/include/linux/uio.h
@@ -173,6 +173,8 @@ static inline size_t copy_folio_to_iter(
 {
 	return copy_page_to_iter(&folio->page, offset, bytes, i);
 }
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
+				 size_t bytes, struct iov_iter *i);
 
 static __always_inline __must_check
 size_t copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
--- a/lib/iov_iter.c~iov_iter-add-copy_page_to_iter_nofault
+++ a/lib/iov_iter.c
@@ -172,6 +172,18 @@ static int copyout(void __user *to, cons
 	return n;
 }
 
+static int copyout_nofault(void __user *to, const void *from, size_t n)
+{
+	long res;
+
+	if (should_fail_usercopy())
+		return n;
+
+	res = copy_to_user_nofault(to, from, n);
+
+	return res < 0 ? n : res;
+}
+
 static int copyin(void *to, const void __user *from, size_t n)
 {
 	size_t res = n;
@@ -734,6 +746,42 @@ size_t copy_page_to_iter(struct page *pa
 }
 EXPORT_SYMBOL(copy_page_to_iter);
 
+size_t copy_page_to_iter_nofault(struct page *page, unsigned offset, size_t bytes,
+				 struct iov_iter *i)
+{
+	size_t res = 0;
+
+	if (!page_copy_sane(page, offset, bytes))
+		return 0;
+	if (WARN_ON_ONCE(i->data_source))
+		return 0;
+	if (unlikely(iov_iter_is_pipe(i)))
+		return copy_page_to_iter_pipe(page, offset, bytes, i);
+	page += offset / PAGE_SIZE; // first subpage
+	offset %= PAGE_SIZE;
+	while (1) {
+		void *kaddr = kmap_local_page(page);
+		size_t n = min(bytes, (size_t)PAGE_SIZE - offset);
+
+		iterate_and_advance(i, n, base, len, off,
+			copyout_nofault(base, kaddr + offset + off, len),
+			memcpy(base, kaddr + offset + off, len)
+		)
+		kunmap_local(kaddr);
+		res += n;
+		bytes -= n;
+		if (!bytes || !n)
+			break;
+		offset += n;
+		if (offset == PAGE_SIZE) {
+			page++;
+			offset = 0;
+		}
+	}
+	return res;
+}
+EXPORT_SYMBOL(copy_page_to_iter_nofault);
+
 size_t copy_page_from_iter(struct page *page, size_t offset,
 			   size_t bytes, struct iov_iter *i)
 {
_

Patches currently in -mm which might be from lstoakes@xxxxxxxxx are

mm-prefer-xxx_page-alloc-free-functions-for-order-0-pages.patch
mm-refactor-do_fault_around.patch
mm-pefer-fault_around_pages-to-fault_around_bytes.patch
maintainers-add-myself-as-vmalloc-reviewer.patch
mm-remove-unused-vmf_insert_mixed_prot.patch
mm-remove-vmf_insert_pfn_xxx_prot-for-huge-page-table-entries.patch
drm-ttm-remove-comment-referencing-now-removed-vmf_insert_mixed_prot.patch
fs-proc-kcore-avoid-bounce-buffer-for-ktext-data.patch
fs-proc-kcore-convert-read_kcore-to-read_kcore_iter.patch
iov_iter-add-copy_page_to_iter_nofault.patch
mm-vmalloc-convert-vread-to-vread_iter.patch
mm-mmap-vma_merge-further-improve-prev-next-vma-naming.patch
mm-mmap-vma_merge-fold-curr-next-assignment-logic.patch
mm-mmap-vma_merge-explicitly-assign-res-vma-extend-invariants.patch
mm-mmap-vma_merge-init-cleanup-be-explicit-about-the-non-mergeable-case.patch