On Wed, Jul 26, 2023 at 11:42 PM Jann Horn <jannh@xxxxxxxxxx> wrote:
> A read of vma->anon_vma under mmap_lock in read mode (in particular in
> anon_vma_prepare()) can race with a concurrent update under mmap_lock
> in read mode plus pagetable lock (in __anon_vma_prepare()).
> However, the only allowed concurrent update is one that changes
> vma->anon_vma from NULL to a non-NULL pointer; once vma->anon_vma has
> been set to a non-NULL value, it will keep that value as long as the
> mmap lock is held in read mode.
[...]
> @@ -1072,7 +1071,15 @@ static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *
>  static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old, struct vm_area_struct *a, struct vm_area_struct *b)
>  {
>         if (anon_vma_compatible(a, b)) {
> -               struct anon_vma *anon_vma = READ_ONCE(old->anon_vma);
> +               /*
> +                * Pairs with smp_store_release() in __anon_vma_prepare().
> +                *
> +                * We could get away with a READ_ONCE() here, but
> +                * smp_load_acquire() ensures that the following
> +                * list_is_singular() check on old->anon_vma_chain doesn't race
> +                * with __anon_vma_prepare().

Of course I only realize directly after sending this patch that this
comment only holds...

> +                */
> +               struct anon_vma *anon_vma = smp_load_acquire(&old->anon_vma);
>
>                 if (anon_vma && list_is_singular(&old->anon_vma_chain))
>                         return anon_vma;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0c0d8857dfce..83bc4267269f 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -210,8 +210,9 @@ int __anon_vma_prepare(struct vm_area_struct *vma)
>         anon_vma_lock_write(anon_vma);
>         /* page_table_lock to protect against threads */
>         spin_lock(&mm->page_table_lock);
> +       /* no need for smp_load_acquire() here, the lock prevents concurrency */
>         if (likely(!vma->anon_vma)) {
> -               vma->anon_vma = anon_vma;
> +               smp_store_release(&vma->anon_vma, anon_vma);
>                 anon_vma_chain_link(vma, avc, anon_vma);

... if we move the smp_store_release() down by one line here (rough
sketch below).

>                 anon_vma->num_active_vmas++;
>                 allocated = NULL;
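
Concretely, the locked section in __anon_vma_prepare() would then look
roughly like this (untested sketch; the comment wording is only
illustrative, the point is that the release store becomes the last
write before a reader can see a non-NULL vma->anon_vma):

        anon_vma_lock_write(anon_vma);
        /* page_table_lock to protect against threads */
        spin_lock(&mm->page_table_lock);
        /* no need for smp_load_acquire() here, the lock prevents concurrency */
        if (likely(!vma->anon_vma)) {
                /* link avc into vma->anon_vma_chain before publishing */
                anon_vma_chain_link(vma, avc, anon_vma);
                /*
                 * Pairs with the smp_load_acquire() in reusable_anon_vma().
                 * Publishing vma->anon_vma only after the chain update means
                 * a reader that sees the non-NULL pointer also sees avc on
                 * vma->anon_vma_chain, which is what the comment about the
                 * list_is_singular() check relies on.
                 */
                smp_store_release(&vma->anon_vma, anon_vma);
                anon_vma->num_active_vmas++;
                allocated = NULL;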