On 2/28/25 18:18, Vlastimil Babka wrote:
> On 2/28/25 17:34, Johannes Weiner wrote:
>> On Fri, Feb 28, 2025 at 07:38:36PM +0800, Jingxiang Zeng wrote:
>>> @@ -84,6 +86,9 @@ lock_list_lru_of_memcg(struct list_lru *lru, int nid, struct mem_cgroup *memcg,
>>>  			spin_unlock_irq(&l->lock);
>>>  		else
>>>  			spin_unlock(&l->lock);
>>> +	} else {
>>> +		if (!memcg_list_lru_alloc(memcg, lru))
>>> +			goto again;
>>>  	}
>>
>> Unfortunately, you can't do allocations from this path :(
>>
>> list_lru_add() is called from many places with spinlocks, rcu locks
>> etc. held.
> Aww, I was hoping we'd get rid of all the plumbing of lru through the slab
> allocator.
> But maybe we could, anyway? In __memcg_slab_post_alloc_hook() AFAICS the
> only part that lru handling reuses is the objcg pointer. Moving the code to
> kmem_cache_alloc_lru() would mean just another current_obj_cgroup() lookup
> and that's not that expensive in the likely() cases, or is it?

At the time it was introduced by commit 88f2ef73fd66 ("mm: introduce
kmem_cache_alloc_lru") there was get_obj_cgroup_from_current(), which seems
much more involved - looking up memcg first, then objcg from that, with an
obj_cgroup_tryget(). The tradeoff might be different today and might not
warrant the lru parameter anymore.
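
To make that concrete, here's a rough, untested sketch of what that could
look like in mm/slub.c (ignoring the alloc_hooks()/_noprof wrappers for
brevity): kmem_cache_alloc_lru() does the list_lru allocation up front
itself, mirroring the lru branch that sits in __memcg_slab_post_alloc_hook()
today, so the lru pointer would no longer need to be plumbed down through
slab_alloc():

void *kmem_cache_alloc_lru(struct kmem_cache *s, struct list_lru *lru,
			   gfp_t gfpflags)
{
	/* Same accounting gate the post-alloc hook is subject to today. */
	if (lru && memcg_kmem_online() &&
	    ((s->flags & SLAB_ACCOUNT) || (gfpflags & __GFP_ACCOUNT))) {
		struct obj_cgroup *objcg = current_obj_cgroup();

		if (objcg) {
			struct mem_cgroup *memcg;
			int ret;

			/*
			 * The extra lookup discussed above: resolve the memcg
			 * here instead of reusing the objcg obtained in the
			 * slab hot path.
			 */
			memcg = get_mem_cgroup_from_objcg(objcg);
			ret = memcg_list_lru_alloc(memcg, lru,
						   gfpflags & gfp_allowed_mask);
			css_put(&memcg->css);
			if (ret)
				return NULL;
		}
	}

	return kmem_cache_alloc(s, gfpflags);
}

If that flies, slab_alloc() and memcg_slab_post_alloc_hook() could drop the
lru argument entirely, and the only cost in the likely() case is the extra
current_obj_cgroup() lookup questioned above.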