On Wed, Oct 9, 2024 at 1:58 AM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> On Mon, Oct 7, 2024 at 5:42 PM Jann Horn <jannh@xxxxxxxxxx> wrote:
> Not to overthink it, but do you have any insight into why copy_vma()
> only requires the rmap lock under this condition?
>
> *need_rmap_locks = (new_vma->vm_pgoff <= vma->vm_pgoff);
>
> Could a collapse still occur when need_rmap_locks is false,
> potentially triggering the bug you described? My assumption is no, but
> I wanted to double-check.

Ah, that code is a bit confusing. There are actually two circumstances
under which we take rmap locks, and that condition only captures (part
of) the first one:

1. when we might move PTEs against rmap traversal order (we need the
   lock so that concurrent rmap traversal can't miss the PTEs)
2. when we move page tables (otherwise concurrent rmap traversal could
   race with page table changes)

If you look at the four callsites of move_pgt_entry(), you can see that
its parameter "need_rmap_locks" sometimes comes from the caller's
"need_rmap_locks" variable (in the HPAGE_PUD and HPAGE_PMD cases), but
other times it is just hardcoded to true (in the NORMAL_PUD and
NORMAL_PMD cases). So move_normal_pmd() always holds rmap locks.

(This code would probably be a bit clearer if we moved the rmap locking
into the helpers move_{normal,huge}_{pmd,pud} and got rid of the helper
move_pgt_entry()...)

(Also, note that when undoing the PTE moves with the second
move_page_tables() call, the "need_rmap_locks" parameter to
move_page_tables() is hardcoded to true.)

> The patch looks good to me overall. I was also curious if
> move_normal_pud() would require a similar change, though I'm inclined
> to think that path doesn't lead to a bug.

Yeah, there is no path that would remove PUD entries pointing to page
tables through the rmap, that's a special PMD entry thing.
(Well, at least not in non-hugetlb code, I haven't looked at hugetlb in a long time - but hugetlb has an entirely separate codepath for moving page tables, move_hugetlb_page_tables().)