On Tue, Sep 21, 2021 at 09:37:40AM -0600, Jens Axboe wrote:
> > @@ -424,6 +431,57 @@ kmem_cache_create(const char *name, unsigned int size, unsigned int align,
> >  }
> >  EXPORT_SYMBOL(kmem_cache_create);
> >  
> > +/**
> > + * kmem_cache_alloc_cached - try to allocate from cache without lock
> > + * @s: slab cache
> > + * @flags: SLAB flags
> > + *
> > + * Try to allocate from cache without lock. If fails, fill the lockless cache
> > + * using bulk alloc API
> > + *
> > + * Be sure that there's no race condition.
> > + * Must create slab cache with SLAB_LOCKLESS_CACHE flag to use this function.
> > + *
> > + * Return: a pointer to free object on allocation success, NULL on failure.
> > + */
> > +void *kmem_cache_alloc_cached(struct kmem_cache *s, gfp_t gfpflags)
> > +{
> > +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> > +
> > +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
> > +
> > +	if (cache->size) /* fastpath without lock */
> > +		return cache->queue[--cache->size];
> > +
> > +	/* slowpath */
> > +	cache->size = kmem_cache_alloc_bulk(s, gfpflags,
> > +			KMEM_LOCKLESS_CACHE_QUEUE_SIZE, cache->queue);
> > +	if (cache->size)
> > +		return cache->queue[--cache->size];
> > +	else
> > +		return NULL;
> > +}
> > +EXPORT_SYMBOL(kmem_cache_alloc_cached);

Hello Jens, I'm happy that you commented.

> What I implemented for IOPOLL doesn't need to care about interrupts,
> hence preemption disable is enough. But we do need that, at least.

To be honest, that was my mistake: I was using the percpu API incorrectly.
It's a bit embarrassing :> Thank you for pointing it out.

It is fixed in v3 (work in progress now).

> There are basically two types of use cases for this:
>
> 1) Freeing can happen from interrupts
> 2) Freeing cannot happen from interrupts
>

I considered only case 2) when writing the code.
To support 1), I think there are two ways:

	a) internally call kmem_cache_free when in_interrupt() is true
	b) require the caller to disable interrupts when freeing

I think a) is okay; what do you think? (A rough sketch of a) is appended
at the end of this mail.)

Note that b) can be problematic with kmem_cache_free_bulk,
as its documentation says interrupts must be enabled.

> How does this work for preempt? You seem to assume that the function is
> invoked with preempt disabled, but then it could only be used with
> GFP_ATOMIC.

I gave it the same prototype as kmem_cache_alloc, but as you said,
the gfpflags parameter is unnecessary. Let's remove it in v3.

> And if you don't care about users that free from irq/softirq, then that
> should be mentioned. Locking context should be mentioned, too. The above
> may be just fine IFF both alloc and free are protected by a lock higher
> up. If not, both need preemption disabled and GFP_ATOMIC. I'd suggest
> making the get/put cpu part of the API internally.

Honestly, I didn't put much effort into the documentation,
especially regarding what context is expected when calling these functions.

The comments will be updated in v3 with your feedback in mind.

> > +/**
> > + * kmem_cache_free_cached - return object to cache
> > + * @s: slab cache
> > + * @p: pointer to free
> > + */
> > +void kmem_cache_free_cached(struct kmem_cache *s, void *p)
> > +{
> > +	struct kmem_lockless_cache *cache = this_cpu_ptr(s->cache);
> > +
> > +	BUG_ON(!(s->flags & SLAB_LOCKLESS_CACHE));
>
> Don't use BUG_ON, just do:
>
> if (WARN_ON_ONCE(!(s->flags & SLAB_LOCKLESS_CACHE))) {
> 	kmem_cache_free(s, p);
> 	return;
> }
>

OK, I agree that WARN is better than BUG.

Thanks,
Hyeonggon Yoo

> --
> Jens Axboe
>
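
For reference, a rough, untested sketch of what option a) above could look
like. It reuses the queue fields from the posted patch (struct
kmem_lockless_cache, KMEM_LOCKLESS_CACHE_QUEUE_SIZE) and folds in the
WARN_ON_ONCE and internal get/put-cpu suggestions; the in_interrupt()
fallback and the half-queue flush policy are assumptions for illustration,
not part of the posted patch.

void kmem_cache_free_cached(struct kmem_cache *s, void *p)
{
	struct kmem_lockless_cache *cache;

	if (WARN_ON_ONCE(!(s->flags & SLAB_LOCKLESS_CACHE))) {
		kmem_cache_free(s, p);
		return;
	}

	/* option a): interrupt context takes the ordinary free path */
	if (in_interrupt()) {
		kmem_cache_free(s, p);
		return;
	}

	/* disable preemption while touching the per-cpu queue */
	cache = get_cpu_ptr(s->cache);

	if (cache->size == KMEM_LOCKLESS_CACHE_QUEUE_SIZE) {
		/*
		 * Queue is full: flush the first half back to the allocator
		 * (interrupts are still enabled here, as kmem_cache_free_bulk
		 * requires) and keep the second half.
		 */
		kmem_cache_free_bulk(s, KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2,
				     cache->queue);
		memmove(cache->queue,
			&cache->queue[KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2],
			(KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2) * sizeof(void *));
		cache->size = KMEM_LOCKLESS_CACHE_QUEUE_SIZE / 2;
	}

	cache->queue[cache->size++] = p;

	put_cpu_ptr(s->cache);
}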