On Sun, 6 Feb 2022 13:42:09 -0800 (PST) Hugh Dickins wrote:
> +static void mlock_vma_pages_range(struct vm_area_struct *vma,
> +	unsigned long start, unsigned long end, vm_flags_t newflags)
>  {
> -	/* Reimplementation to follow in later commit */
> +	static const struct mm_walk_ops mlock_walk_ops = {
> +		.pmd_entry = mlock_pte_range,
> +	};
> +
> +	/*
> +	 * There is a slight chance that concurrent page migration,
> +	 * or page reclaim finding a page of this now-VM_LOCKED vma,
> +	 * will call mlock_vma_page() and raise page's mlock_count:
> +	 * double counting, leaving the page unevictable indefinitely.
> +	 * Communicate this danger to mlock_vma_page() with VM_IO,
> +	 * which is a VM_SPECIAL flag not allowed on VM_LOCKED vmas.
> +	 * mmap_lock is held in write mode here, so this weird
> +	 * combination should not be visible to others.
> +	 */
> +	if (newflags & VM_LOCKED)
> +		newflags |= VM_IO;
> +	WRITE_ONCE(vma->vm_flags, newflags);

Nit: the WRITE_ONCE() is not needed, given the certainty of invisibility
to others; it will quiesce syzbot reporting the case of visibility.

Hillf

> +
> +	lru_add_drain();
> +	walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
> +	lru_add_drain();
> +
> +	if (newflags & VM_IO) {
> +		newflags &= ~VM_IO;
> +		WRITE_ONCE(vma->vm_flags, newflags);
> +	}
> +}