On Mon, 2023-09-25 at 15:25 -0700, Mike Kravetz wrote:
> On 09/25/23 16:28, riel@xxxxxxxxxxx wrote:
> >
> > -void __unmap_hugepage_range_final(struct mmu_gather *tlb,
> > -                                  struct vm_area_struct *vma, unsigned long start,
> > -                                  unsigned long end, struct page *ref_page,
> > -                                  zap_flags_t zap_flags)
> > +void __hugetlb_zap_begin(struct vm_area_struct *vma,
> > +                         unsigned long *start, unsigned long *end)
> >  {
> > +       adjust_range_if_pmd_sharing_possible(vma, start, end);
> >         hugetlb_vma_lock_write(vma);
> >         i_mmap_lock_write(vma->vm_file->f_mapping);
> > +}
>
> __unmap_hugepage_range_final() was called from unmap_single_vma.
> unmap_single_vma has two callers, zap_page_range_single and unmap_vmas.
>
> When the locking was moved into hugetlb_zap_begin, it was only added to
> the zap_page_range_single call path. Calls from unmap_vmas are missing
> the locking.

Oh, that's a fun one.

I suppose taking the f_mapping lock, and calling
adjust_range_if_pmd_sharing_possible, matter for the call from
unmap_vmas, while the call to hugetlb_vma_lock_write really doesn't,
since unmap_vmas is called with the mmap_sem held for write, which
already excludes page faults.

I'll add the call there for v4. Good catch.
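
For illustration, the v4 fix described above might look roughly like the
sketch below. This is not the actual patch: the zap_one_vma() wrapper name
and the __hugetlb_zap_end() counterpart are assumptions (only
__hugetlb_zap_begin() appears in the quoted hunk), and argument lists are
simplified relative to mm/memory.c.

	/*
	 * Sketch only, not the actual v4 patch. The idea is that both
	 * callers of unmap_single_vma(), zap_page_range_single() and
	 * unmap_vmas(), funnel hugetlb VMAs through the same begin/end
	 * bracket, so the f_mapping lock is taken and the range is widened
	 * for shared PMDs on the unmap_vmas() path as well.
	 */
	static void zap_one_vma(struct mmu_gather *tlb,
				struct vm_area_struct *vma,
				unsigned long start, unsigned long end,
				struct zap_details *details)
	{
		if (is_vm_hugetlb_page(vma))
			/* takes f_mapping lock, widens range for PMD sharing */
			__hugetlb_zap_begin(vma, &start, &end);

		unmap_single_vma(tlb, vma, start, end, details);

		if (is_vm_hugetlb_page(vma))
			/* hypothetical counterpart dropping the locks */
			__hugetlb_zap_end(vma, details);
	}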

--
All Rights Reversed.