On Tue, Aug 27, 2024 at 11:10:10PM GMT, Vlastimil Babka wrote:
> On 8/27/24 17:59, Christian Brauner wrote:
> > When a kmem cache is created with SLAB_TYPESAFE_BY_RCU the free pointer
> > must be located outside of the object because we don't know what part of
> > the memory can safely be overwritten as it may be needed to prevent
> > object recycling.
> >
> > That has the consequence that SLAB_TYPESAFE_BY_RCU may end up adding a
> > new cacheline. This is the case for, e.g., struct file. After having it
> > shrunk down by 40 bytes and having it fit in three cachelines we still
> > have SLAB_TYPESAFE_BY_RCU adding a fourth cacheline because it needs to
> > accommodate the free pointer and is hardware cacheline aligned.
> >
> > I tried to find ways to rectify this as struct file is pretty much
> > everywhere and having it use less memory is a good thing. So here's a
> > proposal.
> >
> > Signed-off-by: Christian Brauner <brauner@xxxxxxxxxx>
>
> So logistically patch 3 needs stuff in the vfs tree and having 1+2 in slab
> tree and 3 in vfs that depends on 1+2 elsewhere is infeasible, so it will be
> easiest for whole series to be in vfs, right?

Yeah, that's fine by me.

>
> > ---
> >  include/linux/slab.h |   9 ++++
> >  mm/slab.h            |   1 +
> >  mm/slab_common.c     | 133 ++++++++++++++++++++++++++++++++++++---------------
> >  mm/slub.c            |  17 ++++---
> >  4 files changed, 114 insertions(+), 46 deletions(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index eb2bf4629157..5b2da2cf31a8 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -212,6 +212,12 @@ enum _slab_flag_bits {
> >  #define SLAB_NO_OBJ_EXT		__SLAB_FLAG_UNUSED
> >  #endif
> >
> > +/*
> > + * freeptr_t represents a SLUB freelist pointer, which might be encoded
> > + * and not dereferenceable if CONFIG_SLAB_FREELIST_HARDENED is enabled.
> > + */
> > +typedef struct { unsigned long v; } freeptr_t;
> > +
> >  /*
> >   * ZERO_SIZE_PTR will be returned for zero sized kmalloc requests.
> >   *
> > @@ -242,6 +248,9 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
> >  			slab_flags_t flags,
> >  			unsigned int useroffset, unsigned int usersize,
> >  			void (*ctor)(void *));
> > +struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
> > +					 unsigned int freeptr_offset,
> > +					 slab_flags_t flags);
> >  void kmem_cache_destroy(struct kmem_cache *s);
> >  int kmem_cache_shrink(struct kmem_cache *s);
> >
> > diff --git a/mm/slab.h b/mm/slab.h
> > index dcdb56b8e7f5..b05512a14f07 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -261,6 +261,7 @@ struct kmem_cache {
> >  	unsigned int object_size;	/* Object size without metadata */
> >  	struct reciprocal_value reciprocal_size;
> >  	unsigned int offset;		/* Free pointer offset */
> > +	unsigned int rcu_freeptr_offset; /* Specific free pointer requested */
>
> More precisely something like:
>
> 	Specific offset requested (if not UINT_MAX)
>
> ?

Yep, added that.
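
FWIW, that UINT_MAX sentinel is exactly what the little has_freeptr_offset()
helper elsewhere in this patch keys off of. Roughly like this (quoting from
memory, so treat it as a sketch rather than the literal hunk):

	/* Was a specific free pointer offset requested for this cache? */
	static inline bool has_freeptr_offset(const struct kmem_cache *s)
	{
		return s->rcu_freeptr_offset != UINT_MAX;
	}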
> >  #ifdef CONFIG_SLUB_CPU_PARTIAL
> >  	/* Number of per cpu partial objects to keep around */
> >  	unsigned int cpu_partial;
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index c8dd7e08c5f6..c4beff642fff 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -202,9 +202,10 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
> >  }
> >
> >  static struct kmem_cache *create_cache(const char *name,
> > -				       unsigned int object_size, unsigned int align,
> > -				       slab_flags_t flags, unsigned int useroffset,
> > -				       unsigned int usersize, void (*ctor)(void *))
> > +				       unsigned int object_size, unsigned int freeptr_offset,
> > +				       unsigned int align, slab_flags_t flags,
> > +				       unsigned int useroffset, unsigned int usersize,
> > +				       void (*ctor)(void *))
> >  {
> >  	struct kmem_cache *s;
> >  	int err;
> >
> > @@ -212,6 +213,12 @@ static struct kmem_cache *create_cache(const char *name,
> >  	if (WARN_ON(useroffset + usersize > object_size))
> >  		useroffset = usersize = 0;
> >
> > +	err = -EINVAL;
> > +	if (freeptr_offset < UINT_MAX &&
>
> freeptr_offset != UINT_MAX to be more obvious and match has_freeptr_offset() ?

Done.

>
> > +	    (freeptr_offset >= object_size ||
> > +	     (freeptr_offset && !(flags & SLAB_TYPESAFE_BY_RCU))))
>
> and here drop the "freeptr_offset &&" as zero is a valid value

Yes, thank you.

>
> instead we could want alignment to sizeof(freeptr_t) if we were paranoid?

Added a check for that.

>
> > +		goto out;
>
> The rest seems good to me now.

Thanks for the review!
v3 incoming
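
For v3, with all three of your suggestions folded in, the sanity check in
create_cache() will read roughly like this (modulo final formatting):

	/* If a custom free pointer offset was requested, make sure it is sane. */
	err = -EINVAL;
	if (freeptr_offset != UINT_MAX &&
	    (freeptr_offset >= object_size ||
	     !(flags & SLAB_TYPESAFE_BY_RCU) ||
	     !IS_ALIGNED(freeptr_offset, sizeof(freeptr_t))))
		goto out;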
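
And in case it helps review of the vfs side: the intended calling convention
is just to reserve a freeptr_t inside the object at a spot the cache owner
knows is safe to overwrite while the object sits on a freelist, and hand its
offset to the new helper. A made-up example (names invented purely for
illustration; the real user is struct file in patch 3, and I'm spelling out
SLAB_TYPESAFE_BY_RCU explicitly just to make the requirement from the check
above obvious):

	struct foo {
		unsigned long	key;
		void		*data;
		freeptr_t	slub_free;	/* never read by RCU lookups */
	};

	static struct kmem_cache *foo_cachep;

	static int __init foo_cache_init(void)
	{
		foo_cachep = kmem_cache_create_rcu("foo", sizeof(struct foo),
						   offsetof(struct foo, slub_free),
						   SLAB_TYPESAFE_BY_RCU | SLAB_ACCOUNT);
		return foo_cachep ? 0 : -ENOMEM;
	}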