The patch titled
     Subject: mm: factor out functionality to finish page faults
has been added to the -mm tree.  Its filename is
     mm-factor-out-functionality-to-finish-page-faults.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-factor-out-functionality-to-finish-page-faults.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-factor-out-functionality-to-finish-page-faults.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Jan Kara <jack@xxxxxxx>
Subject: mm: factor out functionality to finish page faults

Introduce finish_fault() as a helper function for finishing page faults.
It is a rather thin wrapper around alloc_set_pte(), but since we want to
call this from DAX code and filesystems, it is still useful to avoid some
boilerplate code.
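
As an illustrative sketch (not part of this patch), a read-fault path that
has prepared and locked vmf->page is expected to use the new helper roughly
the way the converted do_read_fault() below does; example_fault() and
example_get_locked_page() are placeholder names standing in for the
filesystem-specific code that fills vmf->page with a locked, referenced page:

/*
 * Illustrative sketch only: example_get_locked_page() stands in for
 * filesystem-specific code that sets vmf->page to a locked page and
 * takes a reference on it.
 */
static int example_fault(struct vm_fault *vmf)
{
	int ret;

	ret = example_get_locked_page(vmf);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;

	/*
	 * finish_fault() takes the PTE lock, installs the PTE and the
	 * reverse mapping and, on success, consumes the page reference
	 * for the new mapping.
	 */
	ret |= finish_fault(vmf);
	unlock_page(vmf->page);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		put_page(vmf->page);
	return ret;
}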
Link: http://lkml.kernel.org/r/1479460644-25076-10-git-send-email-jack@xxxxxxx
Signed-off-by: Jan Kara <jack@xxxxxxx>
Reviewed-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mm.h |    1 
 mm/memory.c        |   44 +++++++++++++++++++++++++++++++++++---------
 2 files changed, 36 insertions(+), 9 deletions(-)

diff -puN include/linux/mm.h~mm-factor-out-functionality-to-finish-page-faults include/linux/mm.h
--- a/include/linux/mm.h~mm-factor-out-functionality-to-finish-page-faults
+++ a/include/linux/mm.h
@@ -620,6 +620,7 @@ static inline pte_t maybe_mkwrite(pte_t
 
 int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 		struct page *page);
+int finish_fault(struct vm_fault *vmf);
 #endif
 
 /*
diff -puN mm/memory.c~mm-factor-out-functionality-to-finish-page-faults mm/memory.c
--- a/mm/memory.c~mm-factor-out-functionality-to-finish-page-faults
+++ a/mm/memory.c
@@ -3074,6 +3074,38 @@ fault_handled:
 	return ret;
 }
 
+
+/**
+ * finish_fault - finish page fault once we have prepared the page to fault
+ *
+ * @vmf: structure describing the fault
+ *
+ * This function handles all that is needed to finish a page fault once the
+ * page to fault in is prepared. It handles locking of PTEs, inserts PTE for
+ * given page, adds reverse page mapping, handles memcg charges and LRU
+ * addition. The function returns 0 on success, VM_FAULT_ code in case of
+ * error.
+ *
+ * The function expects the page to be locked and on success it consumes a
+ * reference of a page being mapped (for the PTE which maps it).
+ */
+int finish_fault(struct vm_fault *vmf)
+{
+	struct page *page;
+	int ret;
+
+	/* Did we COW the page? */
+	if ((vmf->flags & FAULT_FLAG_WRITE) &&
+	    !(vmf->vma->vm_flags & VM_SHARED))
+		page = vmf->cow_page;
+	else
+		page = vmf->page;
+	ret = alloc_set_pte(vmf, vmf->memcg, page);
+	if (vmf->pte)
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	return ret;
+}
+
 static unsigned long fault_around_bytes __read_mostly =
 	rounddown_pow_of_two(65536);
 
@@ -3213,9 +3245,7 @@ static int do_read_fault(struct vm_fault
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
 		return ret;
 
-	ret |= alloc_set_pte(vmf, NULL, vmf->page);
-	if (vmf->pte)
-		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	ret |= finish_fault(vmf);
 	unlock_page(vmf->page);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
 		put_page(vmf->page);
@@ -3250,9 +3280,7 @@ static int do_cow_fault(struct vm_fault
 	copy_user_highpage(vmf->cow_page, vmf->page, vmf->address, vma);
 	__SetPageUptodate(vmf->cow_page);
 
-	ret |= alloc_set_pte(vmf, vmf->memcg, vmf->cow_page);
-	if (vmf->pte)
-		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	ret |= finish_fault(vmf);
 	if (!(ret & VM_FAULT_DAX_LOCKED)) {
 		unlock_page(vmf->page);
 		put_page(vmf->page);
@@ -3293,9 +3321,7 @@ static int do_shared_fault(struct vm_fau
 		}
 	}
 
-	ret |= alloc_set_pte(vmf, NULL, vmf->page);
-	if (vmf->pte)
-		pte_unmap_unlock(vmf->pte, vmf->ptl);
+	ret |= finish_fault(vmf);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
 					VM_FAULT_RETRY))) {
 		unlock_page(vmf->page);
_

Patches currently in -mm which might be from jack@xxxxxxx are

mm-join-struct-fault_env-and-vm_fault.patch
mm-use-vmf-address-instead-of-of-vmf-virtual_address.patch
mm-use-pgoff-in-struct-vm_fault-instead-of-passing-it-separately.patch
mm-use-passed-vm_fault-structure-in-__do_fault.patch
mm-trim-__do_fault-arguments.patch
mm-use-passed-vm_fault-structure-for-in-wp_pfn_shared.patch
mm-add-orig_pte-field-into-vm_fault.patch
mm-allow-full-handling-of-cow-faults-in-fault-handlers.patch
mm-factor-out-functionality-to-finish-page-faults.patch
mm-move-handling-of-cow-faults-into-dax-code.patch
mm-factor-out-common-parts-of-write-fault-handling.patch
mm-pass-vm_fault-structure-into-do_page_mkwrite.patch
mm-use-vmf-page-during-wp-faults.patch
mm-move-part-of-wp_page_reuse-into-the-single-call-site.patch
mm-provide-helper-for-finishing-mkwrite-faults.patch
mm-change-return-values-of-finish_mkwrite_fault.patch
mm-export-follow_pte.patch
dax-make-cache-flushing-protected-by-entry-lock.patch
dax-protect-pte-modification-on-wp-fault-by-radix-tree-entry-lock.patch
dax-clear-dirty-entry-tags-on-cache-flush.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html