On Tue, Feb 12, 2019 at 10:56:13AM +0800, Peter Xu wrote:
> From: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> 
> There are several cases in which a write-protection fault can happen:
> a write to the zero page, to a swapped-out page, or to a userfault
> write-protected page. When the fault happens, there is no way to tell
> whether userfaultfd write-protected the page beforehand. Here we just
> blindly issue a userfault notification for any vma with VM_UFFD_WP
> set, regardless of whether the application has actually
> write-protected the page yet. The application should be ready to
> handle such wp faults.
> 
> v1: From: Shaohua Li <shli@xxxxxx>
> 
> v2: Handle the userfault in the common do_wp_page(). If we get there,
> a pagetable entry is present and read-only, so there is no need for
> further processing until the userfault is resolved.
> 
> In the swapin case, always swap in as read-only. This will cause
> false-positive userfaults. We need to decide later whether to
> eliminate them with a flag like soft-dirty in the swap entry (see
> _PAGE_SWP_SOFT_DIRTY).
> 
> hugetlbfs wouldn't need to worry about swapouts, and tmpfs would be
> handled by a swap entry bit like anonymous memory.
> 
> The main problem, with no easy solution for eliminating the false
> positives, will be if/when userfaultfd is extended to real filesystem
> pagecache: when the pagecache is freed by reclaim, we can't leave the
> radix tree pinned if the inode, and in turn the radix tree, is
> reclaimed as well.
> 
> The estimate is that full accuracy and lack of false positives could
> easily be provided only for anonymous memory (as long as there's no
> fork, or as long as MADV_DONTFORK is used on the userfaultfd anonymous
> range), tmpfs and hugetlbfs; it's most certainly worth achieving, but
> in a later incremental patch.
> 
> v3: Add hooking point for THP wrprotect faults.
> 
> CC: Shaohua Li <shli@xxxxxx>
> Signed-off-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>

Reviewed-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>

> ---
>  mm/memory.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index e11ca9dd823f..00781c43407b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2483,6 +2483,11 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>  
> +	if (userfaultfd_wp(vma)) {
> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
> +		return handle_userfault(vmf, VM_UFFD_WP);
> +	}
> +
>  	vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
>  	if (!vmf->page) {
>  		/*
> @@ -2800,6 +2805,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
>  	dec_mm_counter_fast(vma->vm_mm, MM_SWAPENTS);
>  	pte = mk_pte(page, vma->vm_page_prot);
> +	if (userfaultfd_wp(vma))
> +		vmf->flags &= ~FAULT_FLAG_WRITE;
>  	if ((vmf->flags & FAULT_FLAG_WRITE) && reuse_swap_page(page, NULL)) {
>  		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>  		vmf->flags &= ~FAULT_FLAG_WRITE;
> @@ -3684,8 +3691,11 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
>  /* `inline' is required to avoid gcc 4.1.2 build error */
>  static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
>  {
> -	if (vma_is_anonymous(vmf->vma))
> +	if (vma_is_anonymous(vmf->vma)) {
> +		if (userfaultfd_wp(vmf->vma))
> +			return handle_userfault(vmf, VM_UFFD_WP);
>  		return do_huge_pmd_wp_page(vmf, orig_pmd);
> +	}
>  	if (vmf->vma->vm_ops->huge_fault)
>  		return vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
>  
> -- 
> 2.17.1

-- 
Sincerely yours,
Mike.
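
P.S. In case it helps review, here is a minimal (untested) sketch of the
userspace side: registering an anonymous range in write-protect mode and
then arming the protection. It assumes the uapi proposed by this series
(UFFDIO_REGISTER_MODE_WP, UFFDIO_WRITEPROTECT, UFFDIO_WRITEPROTECT_MODE_WP),
so treat it as an illustration rather than a definitive example.

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *area;
	int uffd;

	uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
	if (uffd < 0) {
		perror("userfaultfd");
		return 1;
	}

	/* Mandatory API handshake before any other uffd ioctl. */
	struct uffdio_api api = { .api = UFFD_API };
	if (ioctl(uffd, UFFDIO_API, &api)) {
		perror("UFFDIO_API");
		return 1;
	}

	area = mmap(NULL, page, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Fault the page in, so a later write raises a wp fault, not a missing one. */
	area[0] = 1;

	/* Register for write-protect tracking: sets VM_UFFD_WP on the vma. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = page },
		.mode = UFFDIO_REGISTER_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		perror("UFFDIO_REGISTER");
		return 1;
	}

	/* Arm write protection; the next write traps into do_wp_page() above. */
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)area, .len = page },
		.mode = UFFDIO_WRITEPROTECT_MODE_WP,
	};
	if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp)) {
		perror("UFFDIO_WRITEPROTECT");
		return 1;
	}

	/* A write to area[] would now block until a uffd reader resolves it. */
	return 0;
}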
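
And the matching fault-handling loop, to run from a separate thread (the
faulting writer sleeps in handle_userfault() until the fault is resolved).
Note the caveat from the commit message: because swapin installs the pte
read-only under VM_UFFD_WP, this loop may also see false-positive wp faults
for pages the application never protected; resolving them the same way is
harmless. Same disclaimer as above, this is a sketch against the uapi of
this series, not tested code.

#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

static void handle_wp_faults(int uffd, long page)
{
	struct uffd_msg msg;

	for (;;) {
		/* Blocking read; one struct uffd_msg per fault event. */
		if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
			continue;
		if (msg.event != UFFD_EVENT_PAGEFAULT ||
		    !(msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP))
			continue;

		/*
		 * Resolve the fault: drop the protection on the faulting
		 * page and wake the writer, which retries and succeeds.
		 */
		struct uffdio_writeprotect wp = {
			.range = {
				.start = msg.arg.pagefault.address & ~(page - 1),
				.len = page,
			},
			.mode = 0,	/* clear UFFDIO_WRITEPROTECT_MODE_WP */
		};
		if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
			perror("UFFDIO_WRITEPROTECT (resolve)");
	}
}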