On Fri, Mar 01, 2024 at 06:07:08PM +0100, Vlastimil Babka wrote:
> The MEMCG_KMEM integration with slab currently relies on two hooks
> during allocation. memcg_slab_pre_alloc_hook() determines the objcg and
> charges it, and memcg_slab_post_alloc_hook() assigns the objcg pointer
> to the allocated object(s).
>
> As Linus pointed out, this is unnecessarily complex. Failing to charge
> due to memcg limits should be rare, so we can optimistically allocate
> the object(s) and do the charging together with assigning the objcg
> pointer in a single post_alloc hook. In the rare case the charging
> fails, we can free the object(s) back.
>
> This simplifies the code (no need to pass around the objcg pointer) and
> potentially allows to separate charging from allocation in cases where
> it's common that the allocation would be immediately freed, and the
> memcg handling overhead could be saved.
>
> Suggested-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Link: https://lore.kernel.org/all/CAHk-=whYOOdM7jWy5jdrAm8LxcgCMFyk2bt8fYYvZzM4U-zAQA@xxxxxxxxxxxxxx/
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>

Nice cleanup, Vlastimil!

Couple of small nits below, but otherwise, please, add my

Reviewed-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>

Thanks!

---

>  mm/slub.c | 180 +++++++++++++++++++++++++++-----------------------------------
>  1 file changed, 77 insertions(+), 103 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 2ef88bbf56a3..7022a1246bab 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1897,23 +1897,36 @@ static inline size_t obj_full_size(struct kmem_cache *s)
>  	return s->size + sizeof(struct obj_cgroup *);
>  }
>
> -/*
> - * Returns false if the allocation should fail.
> - */
> -static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
> -					struct list_lru *lru,
> -					struct obj_cgroup **objcgp,
> -					size_t objects, gfp_t flags)
> +static bool __memcg_slab_post_alloc_hook(struct kmem_cache *s,
> +					 struct list_lru *lru,
> +					 gfp_t flags, size_t size,
> +					 void **p)
>  {
> +	struct obj_cgroup *objcg;
> +	struct slab *slab;
> +	unsigned long off;
> +	size_t i;
> +
>  	/*
>  	 * The obtained objcg pointer is safe to use within the current scope,
>  	 * defined by current task or set_active_memcg() pair.
>  	 * obj_cgroup_get() is used to get a permanent reference.
>  	 */
> -	struct obj_cgroup *objcg = current_obj_cgroup();
> +	objcg = current_obj_cgroup();
>  	if (!objcg)
>  		return true;
>
> +	/*
> +	 * slab_alloc_node() avoids the NULL check, so we might be called with a
> +	 * single NULL object. kmem_cache_alloc_bulk() aborts if it can't fill
> +	 * the whole requested size.
> +	 * return success as there's nothing to free back
> +	 */
> +	if (unlikely(*p == NULL))
> +		return true;

Probably better to move this check up? current_obj_cgroup() != NULL check
is more expensive.

> +
> +	flags &= gfp_allowed_mask;
> +
>  	if (lru) {
>  		int ret;
>  		struct mem_cgroup *memcg;
> @@ -1926,71 +1939,51 @@ static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
>  			return false;
>  	}
>
> -	if (obj_cgroup_charge(objcg, flags, objects * obj_full_size(s)))
> +	if (obj_cgroup_charge(objcg, flags, size * obj_full_size(s)))
>  		return false;
>
> -	*objcgp = objcg;
> +	for (i = 0; i < size; i++) {
> +		slab = virt_to_slab(p[i]);

Not specific to this change, but I wonder if it makes sense to introduce
a virt_to_slab() variant without any extra checks for this and similar
cases, where we know for sure that p resides on a slab page. What do you
think?
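
Something like this is what I have in mind, as a completely untested
sketch (the _unchecked name is just a placeholder):

static inline struct slab *virt_to_slab_unchecked(const void *addr)
{
	/* the caller guarantees that addr points into a slab page */
	return folio_slab(virt_to_folio(addr));
}

i.e. the same as virt_to_slab(), just without the folio_test_slab()
check, for callers like this loop that already know they are looking at
a slab object.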
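
And to spell out the earlier nit about moving the NULL check up, I mean
roughly this (again untested):

	/* do the cheap NULL check before the more expensive current_obj_cgroup() */
	if (unlikely(*p == NULL))
		return true;

	objcg = current_obj_cgroup();
	if (!objcg)
		return true;

with the existing comment about slab_alloc_node() and
kmem_cache_alloc_bulk() kept next to the NULL check.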