On Sat, 11 Nov 2017, Dan Williams wrote:

> Currently only get_user_pages_fast() can safely handle the writable gup
> case due to its use of pud_access_permitted() to check whether the pud
> entry is writable. In the gup slow path pud_write() is used instead of
> pud_access_permitted() and to date it has been unimplemented, just calls
> BUG_ON().
>
>   kernel BUG at ./include/linux/hugetlb.h:244!
>   [..]
>   RIP: 0010:follow_devmap_pud+0x482/0x490
>   [..]
>   Call Trace:
>    follow_page_mask+0x28c/0x6e0
>    __get_user_pages+0xe4/0x6c0
>    get_user_pages_unlocked+0x130/0x1b0
>    get_user_pages_fast+0x89/0xb0
>    iov_iter_get_pages_alloc+0x114/0x4a0
>    nfs_direct_read_schedule_iovec+0xd2/0x350
>    ? nfs_start_io_direct+0x63/0x70
>    nfs_file_direct_read+0x1e0/0x250
>    nfs_file_read+0x90/0xc0
>
> For now this just implements a simple check for the _PAGE_RW bit similar
> to pmd_write. However, this implies that the gup-slow-path check is
> missing the extra checks that the gup-fast-path performs with
> pud_access_permitted. Later patches will align all checks to use the
> 'access_permitted' helper if the architecture provides it. Note that the
> generic 'access_permitted' helper fallback is the simple _PAGE_RW check
> on architectures that do not define the 'access_permitted' helper(s).
>
> Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: Will Deacon <will.deacon@xxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Arnd Bergmann <arnd@xxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Cc: <x86@xxxxxxxxxx>
> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> ---
>  arch/arm64/include/asm/pgtable.h    |    1 +
>  arch/sparc/include/asm/pgtable_64.h |    1 +
>  arch/x86/include/asm/pgtable.h      |    6 ++++++

For the x86 part:

  Acked-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>