On Tue, Aug 8, 2023 at 10:54 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> kmem_cache_setup_percpu_array() will allocate a per-cpu array for
> caching alloc/free objects of given size for the cache. The cache
> has to be created with SLAB_NO_MERGE flag.
>
> The array is filled by freeing. When empty for alloc or full for
> freeing, it's simply bypassed by the operation, there's currently no
> batch freeing/allocations.
>
> The locking is copied from the page allocator's pcplists, based on
> embedded spin locks. Interrupts are not disabled, only preemption (cpu
> migration on RT). Trylock is attempted to avoid deadlock due to
> an interrupt; trylock failure means the array is bypassed.
>
> Sysfs stat counters alloc_cpu_cache and free_cpu_cache count operations
> that used the percpu array.
>
> Bulk allocation bypasses the array, bulk freeing does not.
>
> kmem_cache_prefill_percpu_array() can be called to ensure the array on
> the current cpu is filled to at least the given number of objects. However
> this is only opportunistic as there's no cpu pinning and the trylocks may
> always fail. Therefore allocations cannot rely on the array for success
> even after the prefill. But misses should be rare enough that e.g.
> GFP_ATOMIC allocations should be acceptable after the prefill.
> The operation is currently not optimized.

As I asked on IRC, I'm curious about three questions:

1) How does this affect SLUB's anti-queueing ideas?

2) Since this is so similar to SLAB's caching, is it realistic to make
   this opt-out instead?

3) What performance difference do you expect/see from benchmarks?

> More TODO/FIXMEs:
>
> - NUMA awareness - preferred node currently ignored, __GFP_THISNODE not
>   honored
> - slub_debug - will not work for allocations from the array. Normally in
>   the SLUB implementation slub_debug kills all fast paths, but that
>   could lead to depleting the reserves if we ignore the prefill and use
>   GFP_ATOMIC. Needs more thought.
> ---
>  include/linux/slab.h     |   4 +
>  include/linux/slub_def.h |  10 ++
>  mm/slub.c                | 210 ++++++++++++++++++++++++++++++++++++++-
>  3 files changed, 223 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 848c7c82ad5a..f6c91cbc1544 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -196,6 +196,8 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
>  void kmem_cache_destroy(struct kmem_cache *s);
>  int kmem_cache_shrink(struct kmem_cache *s);
>
> +int kmem_cache_setup_percpu_array(struct kmem_cache *s, unsigned int count);
> +
>  /*
>   * Please use this macro to create slab caches. Simply specify the
>   * name of the structure and maybe some flags that are listed above.
> @@ -494,6 +496,8 @@ void kmem_cache_free(struct kmem_cache *s, void *objp);
>  void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p);
>  int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p);
>
> +int kmem_cache_prefill_percpu_array(struct kmem_cache *s, unsigned int count, gfp_t gfp);
> +
>  static __always_inline void kfree_bulk(size_t size, void **p)
>  {
>  	kmem_cache_free_bulk(NULL, size, p);
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index deb90cf4bffb..c85434668419 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -13,8 +13,10 @@
>  #include <linux/local_lock.h>
>
>  enum stat_item {
> +	ALLOC_PERCPU_CACHE,	/* Allocation from percpu array cache */
>  	ALLOC_FASTPATH,		/* Allocation from cpu slab */
>  	ALLOC_SLOWPATH,		/* Allocation by getting a new cpu slab */
> +	FREE_PERCPU_CACHE,	/* Free to percpu array cache */
>  	FREE_FASTPATH,		/* Free to cpu slab */
>  	FREE_SLOWPATH,		/* Freeing not to cpu slab */
>  	FREE_FROZEN,		/* Freeing to frozen slab */
> @@ -66,6 +68,13 @@ struct kmem_cache_cpu {
>  };
>  #endif /* CONFIG_SLUB_TINY */
>
> +struct slub_percpu_array {
> +	spinlock_t lock;

Since this is a percpu array, you probably want to avoid a lock here.
An idea would be to have some sort of bool accessing flag, and then do:

preempt_disable();
WRITE_ONCE(accessing, 1);
/* doing pcpu array stuff */
WRITE_ONCE(accessing, 0);
preempt_enable();

which would avoid the atomic in the fast path while still giving you
safety on IRQ paths. Although reclamation gets harder as you stop being
able to reclaim these pcpu arrays from other CPUs.

--
Pedro
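
To make that a bit more concrete, the alloc-side fast path could look
roughly like the sketch below. This is untested, and the struct layout,
field and function names are invented for illustration rather than taken
from the patch:

#include <linux/percpu.h>
#include <linux/preempt.h>

struct slub_percpu_array {
	bool accessing;		/* replaces the spinlock */
	unsigned int count;
	void *objects[];
};

static void *pcpu_array_alloc(struct slub_percpu_array __percpu *pcp)
{
	struct slub_percpu_array *pca;
	void *object = NULL;

	preempt_disable();
	pca = this_cpu_ptr(pcp);

	/* An interrupt that finds the flag already set just bypasses the array. */
	if (READ_ONCE(pca->accessing))
		goto out;

	WRITE_ONCE(pca->accessing, true);
	barrier();	/* keep the array access inside the flagged section */
	if (pca->count)
		object = pca->objects[--pca->count];
	barrier();
	WRITE_ONCE(pca->accessing, false);
out:
	preempt_enable();
	return object;
}

Bypassing on a set flag would mirror what a failed trylock already does
in the posted version, so behaviour on misses stays the same.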
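
For reference, the caller-side usage described in the changelog (create
the cache with SLAB_NO_MERGE, attach the per-cpu array, opportunistically
prefill before an atomic allocation) would presumably look something like
the following; "foo", the array size and the prefill count are made up:

#include <linux/init.h>
#include <linux/gfp.h>
#include <linux/slab.h>

struct foo {
	unsigned long a, b;
};

static struct kmem_cache *foo_cache;

static int __init foo_init(void)
{
	/* The cache has to be unmergeable for the per-cpu array to be usable. */
	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
				      SLAB_NO_MERGE, NULL);
	if (!foo_cache)
		return -ENOMEM;

	/* Attach a per-cpu array caching up to 32 objects on each CPU. */
	return kmem_cache_setup_percpu_array(foo_cache, 32);
}

static void foo_prepare(void)
{
	/*
	 * Called from sleepable context: opportunistically fill this CPU's
	 * array so that a later atomic allocation is likely (but not
	 * guaranteed) to be served from it.
	 */
	kmem_cache_prefill_percpu_array(foo_cache, 8, GFP_KERNEL);
}

static struct foo *foo_alloc_locked(void)
{
	/* Later, e.g. under a spinlock, hence GFP_ATOMIC. */
	return kmem_cache_alloc(foo_cache, GFP_ATOMIC);
}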