On 9/20/21 03:53, Matthew Wilcox wrote:
> On Mon, Sep 20, 2021 at 01:09:38AM +0000, Hyeonggon Yoo wrote:
>> Hello Matthew, Thanks to give me a comment! I appreciate it.
>> Yeah, we can implement lockless cache using kmem_cache_alloc_{bulk, free}
>> but kmem_cache_alloc_{free,bulk} is not enough.
>>
>> > I'd rather see this be part of the slab allocator than a separate API.
>>
>> And I disagree on this. for because most of situation, we cannot
>> allocate without lock, it is special case for IO polling.
>>
>> To make it as part of slab allocator, we need to modify existing data
>> structure. But making it part of slab allocator will be waste of memory
>> because most of them are not using this.
>
> Oh, it would have to be an option. Maybe as a new slab_flags_t flag.
> Or maybe a kmem_cache_alloc_percpu_lockless().

I've recently found out that similar attempts (introducing queueing to SLUB)
were made around 2010. See e.g. [1], but there will be other threads to
search for at lore too. I haven't checked yet why it wasn't ultimately
merged; I guess Christoph and David could remember (this was before my
time).

I guess making it opt-in only for caches where a performance improvement was
measured would make it easier to add, as for some caches it would mean no
improvement, only increased memory usage. But of course that makes the API
harder to use.

I'd be careful about the name "lockless", as that's ambiguous. Is it "mostly
lockless", therefore fast, but if the cache is empty it will still take
locks as part of the refill? Or is it always lockless, therefore useful in
contexts that can take no locks, but then the caller has to have a fallback
in case the cache is empty and nothing is allocated?

[1] https://lore.kernel.org/linux-mm/20100804024531.914852850@xxxxxxxxx/T/#u
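
To make that distinction concrete, here's a rough, purely illustrative
sketch of the two possible semantics. None of these names exist in the slab
code (demo_* and struct demo_lockless_cache are made up); the only real API
reused here is kmem_cache_alloc_bulk()/kmem_cache_free_bulk() for the refill
path:

#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/slab.h>

/* Made-up per-cpu array cache, roughly what the posted patch adds. */
struct demo_lockless_cache {
	void *objects[32];
	unsigned int nr;
};

static DEFINE_PER_CPU(struct demo_lockless_cache, demo_cache);

/*
 * Variant 1: lockless always. Never takes locks and never refills;
 * returns NULL once the per-cpu array is empty, so the caller must
 * bring its own fallback.
 */
static void *demo_alloc_strict(void)
{
	struct demo_lockless_cache *c;
	void *obj = NULL;

	preempt_disable();
	c = this_cpu_ptr(&demo_cache);
	if (c->nr)
		obj = c->objects[--c->nr];
	preempt_enable();

	return obj;
}

/*
 * Variant 2: mostly lockless. Same fast path, but an empty cache is
 * refilled via kmem_cache_alloc_bulk(), which does take the usual
 * slab locks, so this is not usable in contexts that can take no locks.
 */
static void *demo_alloc_refill(struct kmem_cache *s, gfp_t gfp)
{
	void *objs[16];
	struct demo_lockless_cache *c;
	void *obj;
	int i, n;

	obj = demo_alloc_strict();
	if (obj)
		return obj;

	n = kmem_cache_alloc_bulk(s, gfp, ARRAY_SIZE(objs), objs);
	if (!n)
		return NULL;

	obj = objs[--n];

	/* Stash the surplus for later fast-path hits on this cpu. */
	preempt_disable();
	c = this_cpu_ptr(&demo_cache);
	for (i = 0; i < n && c->nr < ARRAY_SIZE(c->objects); i++)
		c->objects[c->nr++] = objs[i];
	preempt_enable();

	/* Return whatever didn't fit. */
	if (i < n)
		kmem_cache_free_bulk(s, n - i, &objs[i]);

	return obj;
}

With the first variant the caller needs its own fallback (e.g. a plain
kmem_cache_alloc()) when NULL is returned; with the second, the refill means
it's not actually lock-free when the cache runs empty.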