On 25/03/2018 23:50, David Rientjes wrote:
> On Tue, 13 Mar 2018, Laurent Dufour wrote:
>
>> When handling page fault without holding the mmap_sem the fetch of the
>> pte lock pointer and the locking will have to be done while ensuring
>> that the VMA is not touched in our back.
>>
>> So move the fetch and locking operations in a dedicated function.
>>
>> Signed-off-by: Laurent Dufour <ldufour@xxxxxxxxxxxxxxxxxx>
>> ---
>>  mm/memory.c | 15 +++++++++++----
>>  1 file changed, 11 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 8ac241b9f370..21b1212a0892 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -2288,6 +2288,13 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
>>  }
>>  EXPORT_SYMBOL_GPL(apply_to_page_range);
>>
>> +static bool pte_spinlock(struct vm_fault *vmf)
>
> inline?

You're right. This was done in patch 18, "mm: Provide speculative fault
infrastructure", but it has to be done here too; I'll fix that.

>
>> +{
>> +	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
>> +	spin_lock(vmf->ptl);
>> +	return true;
>> +}
>> +
>>  static bool pte_map_lock(struct vm_fault *vmf)
>>  {
>>  	vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
>
> Shouldn't pte_unmap_same() take struct vm_fault * and use the new
> pte_spinlock()?

That is done in the next patch, which you already acked.
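
For reference, the conversion looks roughly like this (a sketch only, not
the exact code of the next patch, with pte_spinlock() also carrying the
inline annotation discussed above):

static inline bool pte_spinlock(struct vm_fault *vmf)
{
	/* Fetch the pte lock pointer, then take the lock. */
	vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd);
	spin_lock(vmf->ptl);
	return true;
}

static inline int pte_unmap_same(struct vm_fault *vmf)
{
	int same = 1;
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
	if (sizeof(pte_t) > sizeof(unsigned long)) {
		/* Recheck the pte under its lock against the snapshot. */
		pte_spinlock(vmf);
		same = pte_same(*vmf->pte, vmf->orig_pte);
		spin_unlock(vmf->ptl);
	}
#endif
	pte_unmap(vmf->pte);
	return same;
}

Once pte_spinlock() can fail on the speculative path, its return value
will have to be checked here as well.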