On 08/11/22 12:34, David Hildenbrand wrote:
> If we ever get a write-fault on a write-protected page in a shared mapping,
> we'd be in trouble (again). Instead, we can simply map the page writable.
>
<snip>
>
> Reason is that uffd-wp doesn't clear the uffd-wp PTE bit when
> unregistering and consequently keeps the PTE writeprotected. Reason for
> this is to avoid the additional overhead when unregistering. Note
> that this is the case also for !hugetlb and that we will end up with
> writable PTEs that still have the uffd-wp PTE bit set once we return
> from hugetlb_wp(). I'm not touching the uffd-wp PTE bit for now, because it
> seems to be a generic thing -- wp_page_reuse() also doesn't clear it.
>
> VM_MAYSHARE handling in hugetlb_fault() for FAULT_FLAG_WRITE
> indicates that MAP_SHARED handling was at least envisioned, but could never
> have worked as expected.
>
> While at it, make sure that we never end up in hugetlb_wp() on write
> faults without VM_WRITE, because we don't support maybe_mkwrite()
> semantics as commonly used in the !hugetlb case -- for example, in
> wp_page_reuse().

Nit: to me, 'make sure that we never end up in hugetlb_wp()' implies that
we would check for the condition in the callers, as opposed to the first
thing in hugetlb_wp(). However, I am OK with the description as is.

> Note that there is no need to do any kind of reservation in hugetlb_fault()
> in this case ... because we already have a hugetlb page mapped R/O
> that we will simply map writable and we are not dealing with COW/unsharing.

Note that we are not really doing any reservation adjustment in
hugetlb_fault(). That code pre-allocates reservation data in case we
might need it in hugetlb_wp(). Since hugetlb_wp() will certainly not do
an allocation in this case, we do not even need the pre-allocation here.
This change is more of an optimization. I am still happy with it.
>
> Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs")
> Cc: <stable@xxxxxxxxxxxxxxx> # v5.19
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/hugetlb.c | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
-- 
Mike Kravetz

>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0aee2f3ae15c..2480ba627aa5 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5241,6 +5241,21 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>  	VM_BUG_ON(unshare && (flags & FOLL_WRITE));
>  	VM_BUG_ON(!unshare && !(flags & FOLL_WRITE));
>
> +	/*
> +	 * hugetlb does not support FOLL_FORCE-style write faults that keep the
> +	 * PTE mapped R/O such as maybe_mkwrite() would do.
> +	 */
> +	if (WARN_ON_ONCE(!unshare && !(vma->vm_flags & VM_WRITE)))
> +		return VM_FAULT_SIGSEGV;
> +
> +	/* Let's take out MAP_SHARED mappings first. */
> +	if (vma->vm_flags & VM_MAYSHARE) {
> +		if (unlikely(unshare))
> +			return 0;
> +		set_huge_ptep_writable(vma, haddr, ptep);
> +		return 0;
> +	}
> +
>  	pte = huge_ptep_get(ptep);
>  	old_page = pte_page(pte);
>
> @@ -5781,12 +5796,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>  	 * If we are going to COW/unshare the mapping later, we examine the
>  	 * pending reservations for this page now. This will ensure that any
>  	 * allocations necessary to record that reservation occur outside the
> -	 * spinlock. For private mappings, we also lookup the pagecache
> -	 * page now as it is used to determine if a reservation has been
> -	 * consumed.
> +	 * spinlock. Also lookup the pagecache page now as it is used to
> +	 * determine if a reservation has been consumed.
>  	 */
>  	if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
> -	    !huge_pte_write(entry)) {
> +	    !(vma->vm_flags & VM_MAYSHARE) && !huge_pte_write(entry)) {
>  		if (vma_needs_reservation(h, vma, haddr) < 0) {
>  			ret = VM_FAULT_OOM;
>  			goto out_mutex;
> @@ -5794,9 +5808,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
>  		/* Just decrements count, does not deallocate */
>  		vma_end_reservation(h, vma, haddr);
>
> -		if (!(vma->vm_flags & VM_MAYSHARE))
> -			pagecache_page = hugetlbfs_pagecache_page(h,
> -							vma, haddr);
> +		pagecache_page = hugetlbfs_pagecache_page(h, vma, haddr);
>  	}
>
>  	ptl = huge_pte_lock(h, mm, ptep);
> -- 
> 2.35.3
>
>