On Tue, Apr 16, 2019 at 03:44:56PM +0200, Laurent Dufour wrote:
> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> 
> When speculating faults (without holding mmap_sem) we need to validate
> that the vma against which we loaded pages is still valid when we're
> ready to install the new PTE.
> 
> Therefore, replace the pte_offset_map_lock() calls that (re)take the
> PTL with pte_map_lock() which can fail in case we find the VMA changed
> since we started the fault.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> 
> [Port to 4.12 kernel]
> [Remove the comment about the fault_env structure which has been
>  implemented as the vm_fault structure in the kernel]
> [move pte_map_lock()'s definition upper in the file]
> [move the define of FAULT_FLAG_SPECULATIVE later in the series]
> [review error path in do_swap_page(), do_anonymous_page() and
>  wp_page_copy()]
> Signed-off-by: Laurent Dufour <ldufour@xxxxxxxxxxxxx>

Reviewed-by: Jérôme Glisse <jglisse@xxxxxxxxxx>

> ---
>  mm/memory.c | 87 +++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 58 insertions(+), 29 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index c6ddadd9d2b7..fc3698d13cb5 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2073,6 +2073,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(apply_to_page_range);
>  
> +static inline bool pte_map_lock(struct vm_fault *vmf)

I am not a fan of the name; maybe pte_offset_map_lock_if_valid()? But
that is just a taste thing, so feel free to ignore this comment.

> +{
> +	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
> +				       vmf->address, &vmf->ptl);
> +	return true;
> +}
> +
>  /*
>   * handle_pte_fault chooses page fault handler according to an entry which was
>   * read non-atomically. Before making any commitment, on those architectures
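At this point in the series pte_map_lock() always returns true; later patches make it fail when the VMA has been modified since the speculative fault began, typically by revalidating a per-VMA sequence count before taking the PTL. A minimal userspace C sketch of that validate-then-lock pattern follows. All names here (spec_vma, vm_sequence, spec_fault, pte_map_lock_speculative) are illustrative stand-ins, not the kernel's actual types or API:

```c
#include <stdbool.h>

/* Hypothetical stand-in for a VMA carrying a modification counter. */
struct spec_vma {
	unsigned int vm_sequence;	/* bumped whenever the VMA is changed */
};

/* Hypothetical stand-in for struct vm_fault in a speculative fault. */
struct spec_fault {
	struct spec_vma *vma;
	unsigned int sequence;		/* vm_sequence snapshot at fault start */
	bool pte_locked;		/* stand-in for holding the PTL */
};

/*
 * Revalidate the VMA before taking the PTE lock: succeed only if the
 * VMA is unchanged since the fault started.  On failure the caller is
 * expected to bail out and retry the fault the slow way, which mirrors
 * how a failing pte_map_lock() is meant to be handled in the series.
 */
static bool pte_map_lock_speculative(struct spec_fault *vmf)
{
	if (vmf->vma->vm_sequence != vmf->sequence)
		return false;		/* VMA changed under us: give up */
	vmf->pte_locked = true;		/* stand-in for pte_offset_map_lock() */
	return true;
}
```

The real kernel code additionally has to order the sequence-count reads against the PTL acquisition; this sketch only shows the control flow that lets the fault handler detect a concurrent VMA change and fall back.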