* Suren Baghdasaryan <surenb@xxxxxxxxxx> [250110 14:08]:
> On Fri, Jan 10, 2025 at 9:48 AM Liam R. Howlett <Liam.Howlett@xxxxxxxxxx> wrote:
> >
> > * Suren Baghdasaryan <surenb@xxxxxxxxxx> [250108 21:31]:
> > > To enable SLAB_TYPESAFE_BY_RCU for vma cache we need to ensure that
> > > object reuse before RCU grace period is over will be detected by
> > > lock_vma_under_rcu().
> > > Current checks are sufficient as long as vma is detached before it is
> > > freed. The only place this is not currently happening is in exit_mmap().
> > > Add the missing vma_mark_detached() in exit_mmap().
> > > Another issue which might trick lock_vma_under_rcu() during vma reuse
> > > is vm_area_dup(), which copies the entire content of the vma into a new
> > > one, overriding new vma's vm_refcnt and temporarily making it appear as
> > > attached. This might trick a racing lock_vma_under_rcu() to operate on
> > > a reused vma if it found the vma before it got reused. To prevent this
> > > situation, we should ensure that vm_refcnt stays at detached state (0)
> > > when it is copied and advances to attached state only after it is added
> > > into the vma tree. Introduce vma_copy() which preserves new vma's
> > > vm_refcnt and use it in vm_area_dup(). Since all vmas are in detached
> > > state with no current readers when they are freed, lock_vma_under_rcu()
> > > will not be able to take vm_refcnt after vma got detached even if vma
> > > is reused.
> > > Finally, make vm_area_cachep SLAB_TYPESAFE_BY_RCU. This will facilitate
> > > vm_area_struct reuse and will minimize the number of call_rcu() calls.
> > >
> > > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > > ---
> > >  include/linux/mm.h               |  2 -
> > >  include/linux/mm_types.h         | 10 +++--
> > >  include/linux/slab.h             |  6 ---
> > >  kernel/fork.c                    | 72 ++++++++++++++++++++------------
> > >  mm/mmap.c                        |  3 +-
> > >  mm/vma.c                         | 11 ++---
> > >  mm/vma.h                         |  2 +-
> > >  tools/testing/vma/vma_internal.h |  7 +---
> > >  8 files changed, 59 insertions(+), 54 deletions(-)
> > >
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index 1d6b1563b956..a674558e4c05 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -258,8 +258,6 @@ void setup_initial_init_mm(void *start_code, void *end_code,
> > >  struct vm_area_struct *vm_area_alloc(struct mm_struct *);
> > >  struct vm_area_struct *vm_area_dup(struct vm_area_struct *);
> > >  void vm_area_free(struct vm_area_struct *);
> > > -/* Use only if VMA has no other users */
> > > -void __vm_area_free(struct vm_area_struct *vma);
> > >
> > >  #ifndef CONFIG_MMU
> > >  extern struct rb_root nommu_region_tree;
> > > diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> > > index 2d83d79d1899..93bfcd0c1fde 100644
> > > --- a/include/linux/mm_types.h
> > > +++ b/include/linux/mm_types.h
> > > @@ -582,6 +582,12 @@ static inline void *folio_get_private(struct folio *folio)
> > >
> > >  typedef unsigned long vm_flags_t;
> > >
> > > +/*
> > > + * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > > + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > > + */
> > > +typedef struct { unsigned long v; } freeptr_t;
> > > +
> > >  /*
> > >   * A region containing a mapping of a non-memory backed file under NOMMU
> > >   * conditions. These are held in a global tree and are pinned by the VMAs that
> > > @@ -695,9 +701,7 @@ struct vm_area_struct {
> > >  			unsigned long vm_start;
> > >  			unsigned long vm_end;
> > >  		};
> > > -#ifdef CONFIG_PER_VMA_LOCK
> > > -		struct rcu_head vm_rcu;	/* Used for deferred freeing. */
> > > -#endif
> > > +		freeptr_t vm_freeptr; /* Pointer used by SLAB_TYPESAFE_BY_RCU */
> > >  	};
> > >
> > >  /*
> > > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > > index 10a971c2bde3..681b685b6c4e 100644
> > > --- a/include/linux/slab.h
> > > +++ b/include/linux/slab.h
> > > @@ -234,12 +234,6 @@ enum _slab_flag_bits {
> > >  #define SLAB_NO_OBJ_EXT		__SLAB_FLAG_UNUSED
> > >  #endif
> > >
> > > -/*
> > > - * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > > - * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > > - */
> > > -typedef struct { unsigned long v; } freeptr_t;
> > > -
> > >  /*
> > >   * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> > >   *
> > > diff --git a/kernel/fork.c b/kernel/fork.c
> > > index 9d9275783cf8..770b973a099c 100644
> > > --- a/kernel/fork.c
> > > +++ b/kernel/fork.c
> > > @@ -449,6 +449,41 @@ struct vm_area_struct *vm_area_alloc(struct mm_struct *mm)
> > >  	return vma;
> > >  }
> > >
> >
> > There exists a copy_vma() which copies the vma to a new area in the mm
> > in rmap.  Naming this vma_copy() is confusing :)
> >
> > It might be better to just put this code in the vm_area_dup() or call it
> > __vm_area_dup(), or __vma_dup() ?
>
> Hmm. It's not really duplicating a vma but copying its content (no
> allocation). How about __vm_area_copy() to indicate it is copying
> vm_area_struct content?
>
> >
> > > +static void vma_copy(const struct vm_area_struct *src, struct vm_area_struct *dest)
> > > +{
> > > +	dest->vm_mm = src->vm_mm;
> > > +	dest->vm_ops = src->vm_ops;
> > > +	dest->vm_start = src->vm_start;
> > > +	dest->vm_end = src->vm_end;
> > > +	dest->anon_vma = src->anon_vma;
> > > +	dest->vm_pgoff = src->vm_pgoff;
> > > +	dest->vm_file = src->vm_file;
> > > +	dest->vm_private_data = src->vm_private_data;
> > > +	vm_flags_init(dest, src->vm_flags);
> > > +	memcpy(&dest->vm_page_prot, &src->vm_page_prot,
> > > +	       sizeof(dest->vm_page_prot));
> > > +	/*
> > > +	 * src->shared.rb may be modified concurrently, but the clone
> > > +	 * will be reinitialized.
> > > +	 */
> > > +	data_race(memcpy(&dest->shared, &src->shared, sizeof(dest->shared)));
> > > +	memcpy(&dest->vm_userfaultfd_ctx, &src->vm_userfaultfd_ctx,
> > > +	       sizeof(dest->vm_userfaultfd_ctx));
> > > +#ifdef CONFIG_ANON_VMA_NAME
> > > +	dest->anon_name = src->anon_name;
> > > +#endif
> > > +#ifdef CONFIG_SWAP
> > > +	memcpy(&dest->swap_readahead_info, &src->swap_readahead_info,
> > > +	       sizeof(dest->swap_readahead_info));
> > > +#endif
> > > +#ifndef CONFIG_MMU
> > > +	dest->vm_region = src->vm_region;
> > > +#endif
> > > +#ifdef CONFIG_NUMA
> > > +	dest->vm_policy = src->vm_policy;
> > > +#endif
> > > +}
> > > +
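Aside, for anyone following along: the reason for copying members
individually rather than keeping the whole-struct memcpy() is that
memcpy() also copies the source's vm_refcnt, so under
SLAB_TYPESAFE_BY_RCU a reused object could transiently look attached.
An illustrative contrast (the helper names are made up and most members
are elided):

	/* Mirrors the old data_race(memcpy()) behaviour. */
	static void unsafe_dup(struct vm_area_struct *new,
			       const struct vm_area_struct *orig)
	{
		/*
		 * Copies orig->vm_refcnt as well: if orig is attached
		 * (vm_refcnt > 0), new transiently appears attached before
		 * it is inserted into the vma tree, and a racing
		 * lock_vma_under_rcu() could take a reference on it.
		 */
		memcpy(new, orig, sizeof(*new));
	}

	/* Mirrors the new vma_copy(): members are copied individually. */
	static void safe_dup(struct vm_area_struct *new,
			     const struct vm_area_struct *orig)
	{
		new->vm_start = orig->vm_start;
		new->vm_end = orig->vm_end;
		/* ...the remaining members, as in vma_copy() above... */
		/*
		 * new->vm_refcnt is never written here; it stays at 0
		 * (detached) until the vma is marked attached after tree
		 * insertion.
		 */
	}

With the field-by-field version, a reader that finds a stale pointer
fails the refcount check instead of operating on the reused vma.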
> > >  struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> > >  {
> > >  	struct vm_area_struct *new = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
> > > @@ -458,11 +493,7 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
> > >
> > >  	ASSERT_EXCLUSIVE_WRITER(orig->vm_flags);
> > >  	ASSERT_EXCLUSIVE_WRITER(orig->vm_file);
> > > -	/*
> > > -	 * orig->shared.rb may be modified concurrently, but the clone
> > > -	 * will be reinitialized.
> > > -	 */
> > > -	data_race(memcpy(new, orig, sizeof(*new)));
> > > +	vma_copy(orig, new);
> > >  	vma_lock_init(new, true);
> >
> > I think this suffers from a race still?
> >
> > That is, we can still race between the vm_lock_seq == mm_lock_seq check
> > and the lock acquire, where a free and reuse happens.  In the event that
> > the reader is caught between the sequence check and the lock taking, the
> > vma->vmlock_dep_map may not be replaced and it could see the old lock
> > (or zero?) and things go bad:
> >
> > It could try to take vmlock_dep_map == 0 in read mode.
> >
> > It can take the old lock, detect the refcnt is wrong and release the new
> > lock.
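To spell the window out, an illustrative interleaving (not taken from
the patch; the step numbering and layout are mine):

	reader: lock_vma_under_rcu()        writer: unmap + reuse
	----------------------------        ----------------------------
	1. vma = mas_walk(&mas)
	2. vm_lock_seq != mm_lock_seq,
	   so proceed
	                                    3. vma_mark_detached(),
	                                       vm_refcnt drops to 0
	                                    4. vm_area_free(); slab reuses
	                                       the object, vma_copy() +
	                                       vma_lock_init() run
	5. reader resumes and acquires,
	   against which vmlock_dep_map
	   state?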
>
> I don't think this race can happen. Notice a call to
> vma_assert_detached() inside vm_area_free(), so before vma is freed
> and possibly reused, it has to be detached. vma_mark_detached()
> ensures that there are no current or future readers by executing the
> __vma_enter_locked() + __vma_exit_locked() sequence if vm_refcnt is
> not already at 0. Once __vma_exit_locked() is done, vm_refcnt is at 0
> and any new reader will be rejected on
> __refcount_inc_not_zero_limited(), before even checking vm_lock_seq ==
> mm_lock_seq.

Isn't the vm_lock_seq check before the ref count in vma_start_read()?
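That is, the reader-side ordering I am looking at is roughly the
following (a condensed sketch; the names follow this series but it is
not the literal code, and barriers are elided):

	static bool vma_start_read_condensed(struct vm_area_struct *vma)
	{
		int oldcnt;

		/* (1) the sequence check comes first... */
		if (READ_ONCE(vma->vm_lock_seq) ==
		    READ_ONCE(vma->vm_mm->mm_lock_seq.sequence))
			return false;

		/* ...a free + reuse can slot in right here... */

		/*
		 * (2) ...and only then is a detached vma (vm_refcnt == 0)
		 * rejected by the refcount increment.
		 */
		if (unlikely(!__refcount_inc_not_zero_limited(&vma->vm_refcnt,
							      &oldcnt,
							      VMA_REF_LIMIT)))
			return false;

		rwsem_acquire_read(&vma->vmlock_dep_map, 0, 1, _RET_IP_);
		return true;
	}

i.e. the vm_lock_seq comparison happens before vm_refcnt is ever
examined, not after it.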