* Jann Horn <jannh@xxxxxxxxxx> [241007 15:06]:
> On Fri, Aug 30, 2024 at 6:00 AM Liam R. Howlett <Liam.Howlett@xxxxxxxxxx> wrote:
> > Instead of zeroing the vma tree and then overwriting the area, let the
> > area be overwritten and then clean up the gathered vmas using
> > vms_complete_munmap_vmas().
> >
> > To ensure locking is downgraded correctly, the mm is set regardless of
> > MAP_FIXED or not (NULL vma).
> >
> > If a driver is mapping over an existing vma, then clear the ptes before
> > the call_mmap() invocation.  This is done using the vms_clean_up_area()
> > helper.  If there is a close vm_ops, that must also be called to ensure
> > any cleanup is done before mapping over the area.  This also means that
> > calling open has been added to the abort of an unmap operation, for now.
>
> As currently implemented, this is not a valid optimization because it
> violates the (unwritten?) rule that you must not call free_pgd_range()
> on a region in the page tables which can concurrently be walked.  A
> region in the page tables can be concurrently walked if it overlaps a
> VMA which is linked into rmaps which are not write-locked.

Just for clarity, this is the rmap write lock.
> On Linux 6.12-rc2, when you mmap(MAP_FIXED) over an existing VMA, and
> the new mapping is created by expanding an adjacent VMA, the following
> race with an ftruncate() is possible (because page tables for the old
> mapping are removed while the new VMA in the same location is already
> fully set up and linked into the rmap):
>
> task 1 (mmap, MAP_FIXED)           task 2 (ftruncate)
> ========================           ==================
> mmap_region
>   vma_merge_new_range
>     vma_expand
>       commit_merge
>         vma_prepare
>           [take rmap locks]
>         vma_set_range
>           [expand adjacent mapping]
>         vma_complete
>           [drop rmap locks]
>   vms_complete_munmap_vmas
>     vms_clear_ptes
>       unmap_vmas
>         [removes ptes]
>     free_pgtables
>       [unlinks old vma from rmap]
>
>                                    unmap_mapping_range
>                                      unmap_mapping_pages
>                                        i_mmap_lock_read
>                                        unmap_mapping_range_tree
>                                          [loop]
>                                            unmap_mapping_range_vma
>                                              zap_page_range_single
>                                                unmap_single_vma
>                                                  unmap_page_range
>                                                    zap_p4d_range
>                                                      zap_pud_range
>                                                        zap_pmd_range
>                                                          [looks up pmd entry]
>
>       free_pgd_range
>         [frees pmd]
>
>                                                          [UAF pmd entry access]
>
> To reproduce this, apply the attached mmap-vs-truncate-racewiden.diff
> to widen the race windows, then build and run the attached reproducer
> mmap-fixed-race.c.
>
> Under a kernel with KASAN, you should ideally get a KASAN splat like
> this:

Thanks for all the work you did finding the root cause here, I
appreciate it.

I think the correct fix is to take the rmap lock in free_pgtables(), when
necessary.  There are a few code paths (error recovery) that are not
regularly run that will also need to change.

Regards,
Liam