The patch titled
     Subject: mm/gup: clean up follow_pfn_pte() slightly
has been added to the -mm tree.  Its filename is
     mm-gup-clean-up-follow_pfn_pte-slightly.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-gup-clean-up-follow_pfn_pte-slightly.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-gup-clean-up-follow_pfn_pte-slightly.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: John Hubbard <jhubbard@xxxxxxxxxx>
Subject: mm/gup: clean up follow_pfn_pte() slightly

Regardless of any FOLL_* flags, get_user_pages() and its variants should
handle PFN-only entries by stopping early, if the caller expected **pages
to be filled in.  This makes for a more reliable API, as compared to the
previous approach of skipping over such entries (and thus leaving them
silently unwritten).

Link: https://lkml.kernel.org/r/20220201101108.306062-3-jhubbard@xxxxxxxxxx
Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Suggested-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/gup.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- a/mm/gup.c~mm-gup-clean-up-follow_pfn_pte-slightly
+++ a/mm/gup.c
@@ -439,10 +439,6 @@ static struct page *no_page_table(struct
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
-	/* No page to get reference */
-	if (flags & (FOLL_GET | FOLL_PIN))
-		return -EFAULT;
-
 	if (flags & FOLL_TOUCH) {
 		pte_t entry = *pte;
 
@@ -1180,8 +1176,14 @@ retry:
 		} else if (PTR_ERR(page) == -EEXIST) {
 			/*
 			 * Proper page table entry exists, but no corresponding
-			 * struct page.
+			 * struct page.  If the caller expects **pages to be
+			 * filled in, bail out now, because that can't be done
+			 * for this page.
 			 */
+			if (pages) {
+				page = ERR_PTR(-EFAULT);
+				goto out;
+			}
 			goto next_page;
 		} else if (IS_ERR(page)) {
 			ret = PTR_ERR(page);
_

Patches currently in -mm which might be from jhubbard@xxxxxxxxxx are

mm-gup-clean-up-follow_pfn_pte-slightly.patch
mm-gup-remove-unused-pin_user_pages_locked.patch
mm-gup-remove-get_user_pages_locked.patch
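
To illustrate the user-visible effect of the change, here is a minimal
caller-side sketch (a hypothetical example, not part of the patch; "uaddr"
is an assumed user virtual address, and the exact return value on the
early stop depends on how many pages were already pinned):

	struct page *page;
	int nr;

	/*
	 * Hypothetical caller: with this patch, a PFN-only entry
	 * (e.g. one backing a VM_PFNMAP mapping) makes GUP stop
	 * early when a non-NULL pages array is passed, because the
	 * array slot cannot be filled in; previously such entries
	 * were silently skipped, leaving the slot unwritten.
	 */
	nr = pin_user_pages_fast(uaddr, 1, FOLL_WRITE, &page);
	if (nr != 1)
		return nr < 0 ? nr : -EFAULT;	/* stopped early or faulted */

	/* ... access the pinned page, then release the pin ... */
	unpin_user_pages(&page, 1);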