The patch titled
     Subject: mm: export follow_pte()
has been added to the -mm tree.  Its filename is
     mm-export-follow_pte.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-export-follow_pte.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-export-follow_pte.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Jan Kara <jack@xxxxxxx>
Subject: mm: export follow_pte()

DAX will need to implement its own version of page_check_address().  To
avoid duplicating page table walking code, export follow_pte() which does
what we need.

Link: http://lkml.kernel.org/r/1479460644-25076-18-git-send-email-jack@xxxxxxx
Signed-off-by: Jan Kara <jack@xxxxxxx>
Reviewed-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |    2 ++
 mm/memory.c        |    4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff -puN include/linux/mm.h~mm-export-follow_pte include/linux/mm.h
--- a/include/linux/mm.h~mm-export-follow_pte
+++ a/include/linux/mm.h
@@ -1210,6 +1210,8 @@ int copy_page_range(struct mm_struct *ds
 		struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
+int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
+	       spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
diff -puN mm/memory.c~mm-export-follow_pte mm/memory.c
--- a/mm/memory.c~mm-export-follow_pte
+++ a/mm/memory.c
@@ -3817,8 +3817,8 @@ out:
 	return -EINVAL;
 }
 
-static inline int follow_pte(struct mm_struct *mm, unsigned long address,
-			     pte_t **ptepp, spinlock_t **ptlp)
+int follow_pte(struct mm_struct *mm, unsigned long address, pte_t **ptepp,
+	       spinlock_t **ptlp)
 {
 	int res;
 
_

Patches currently in -mm which might be from jack@xxxxxxx are

mm-join-struct-fault_env-and-vm_fault.patch
mm-use-vmf-address-instead-of-of-vmf-virtual_address.patch
mm-use-pgoff-in-struct-vm_fault-instead-of-passing-it-separately.patch
mm-use-passed-vm_fault-structure-in-__do_fault.patch
mm-trim-__do_fault-arguments.patch
mm-use-passed-vm_fault-structure-for-in-wp_pfn_shared.patch
mm-add-orig_pte-field-into-vm_fault.patch
mm-allow-full-handling-of-cow-faults-in-fault-handlers.patch
mm-factor-out-functionality-to-finish-page-faults.patch
mm-move-handling-of-cow-faults-into-dax-code.patch
mm-factor-out-common-parts-of-write-fault-handling.patch
mm-pass-vm_fault-structure-into-do_page_mkwrite.patch
mm-use-vmf-page-during-wp-faults.patch
mm-move-part-of-wp_page_reuse-into-the-single-call-site.patch
mm-provide-helper-for-finishing-mkwrite-faults.patch
mm-change-return-values-of-finish_mkwrite_fault.patch
mm-export-follow_pte.patch
dax-make-cache-flushing-protected-by-entry-lock.patch
dax-protect-pte-modification-on-wp-fault-by-radix-tree-entry-lock.patch
dax-clear-dirty-entry-tags-on-cache-flush.patch
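For illustration only, below is a minimal sketch of how a caller could use
follow_pte() once it is exported by this patch.  It is not part of the patch
or of the DAX code that motivated it; the helper name example_addr_to_pfn()
is hypothetical, and the sketch merely shows the walk/inspect/unlock pattern
under the assumption that the caller already holds the usual mm locks
(e.g. mmap_sem) so the mapping cannot change underneath it.

/*
 * Hypothetical example (not from the patch): translate a user address in
 * @mm to a page frame number using the newly exported follow_pte().
 */
#include <linux/mm.h>
#include <linux/spinlock.h>

static int example_addr_to_pfn(struct mm_struct *mm, unsigned long address,
			       unsigned long *pfn)
{
	spinlock_t *ptl;
	pte_t *ptep;
	int ret;

	/*
	 * On success the PTE is mapped and its page table lock is held;
	 * follow_pte() returns an error (-EINVAL in this version) if no
	 * present PTE was found at the address.
	 */
	ret = follow_pte(mm, address, &ptep, &ptl);
	if (ret)
		return ret;

	*pfn = pte_pfn(*ptep);

	/* Every successful follow_pte() call must be paired with this. */
	pte_unmap_unlock(ptep, ptl);
	return 0;
}

This is essentially the same pattern the in-tree follow_pfn() helper uses in
mm/memory.c, which is why exporting follow_pte() lets DAX reuse the existing
page table walk instead of open-coding its own.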