On Tue, Jan 5, 2021 at 8:22 PM Roman Gushchin <guro@xxxxxx> wrote:
>
> Imran Khan reported a regression in hackbench results caused by the
> commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> instead of pages"). The regression is noticeable in the case of
> a consecutive allocation of several relatively large slab objects,
> e.g. skb's. As soon as the amount of stocked bytes exceeds PAGE_SIZE,
> drain_obj_stock() and __memcg_kmem_uncharge() are called, which leads
> to a number of atomic operations in page_counter_uncharge().
>
> The corresponding call graph is below (provided by Imran Khan):
>     |__alloc_skb
>     |    |
>     |    |__kmalloc_reserve.isra.61
>     |    |    |
>     |    |    |__kmalloc_node_track_caller
>     |    |    |    |
>     |    |    |    |slab_pre_alloc_hook.constprop.88
>     |    |    |     obj_cgroup_charge
>     |    |    |    |    |
>     |    |    |    |    |__memcg_kmem_charge
>     |    |    |    |    |    |
>     |    |    |    |    |    |page_counter_try_charge
>     |    |    |    |    |
>     |    |    |    |    |refill_obj_stock
>     |    |    |    |    |    |
>     |    |    |    |    |    |drain_obj_stock.isra.68
>     |    |    |    |    |    |    |
>     |    |    |    |    |    |    |__memcg_kmem_uncharge
>     |    |    |    |    |    |    |    |
>     |    |    |    |    |    |    |    |page_counter_uncharge
>     |    |    |    |    |    |    |    |    |
>     |    |    |    |    |    |    |    |    |page_counter_cancel
>     |    |    |    |
>     |    |    |    |
>     |    |    |    |__slab_alloc
>     |    |    |    |    |
>     |    |    |    |    |___slab_alloc
>     |    |    |    |    |
>     |    |    |    |slab_post_alloc_hook
>
> Instead of directly uncharging the accounted kernel memory, it's
> possible to refill the generic page-sized per-cpu stock. That is
> a much faster operation, especially on the default hierarchy.
> As a bonus, __memcg_kmem_uncharge_page() will also get faster,
> so the freeing of page-sized kernel allocations (e.g. large
> kmallocs) will speed up as well.
>
> A similar change was done earlier for the socket memory by
> the commit 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for
> socket memory uncharging").
>
> Signed-off-by: Roman Gushchin <guro@xxxxxx>
> Reported-by: Imran Khan <imran.f.khan@xxxxxxxxxx>

I remember seeing this somewhere:
https://lore.kernel.org/linux-mm/20190423154405.259178-1-shakeelb@xxxxxxxxxx/

Reviewed-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
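
For anyone following the thread, the win comes from batching: the
uncharged pages accumulate in a per-cpu cache so the shared atomic
page counter is touched once per batch rather than once per drain.
Below is a toy userspace model of that idea only; it is not the
kernel code, and refill_stock()/STOCK_BATCH here are illustrative
stand-ins for the real memcg helpers (the kernel's batch constant
is MEMCG_CHARGE_BATCH):

/*
 * Toy model of the batching described above: instead of hitting the
 * shared (atomic) counter on every uncharge, park the pages in a
 * per-cpu-style local cache and flush only when a batch accumulates.
 * All names are illustrative, not the kernel API.
 */
#include <stdatomic.h>
#include <stdio.h>

#define STOCK_BATCH 32		/* flush threshold (stand-in) */

static atomic_long page_counter;	/* shared counter: atomic RMW is the costly part */
static long stock_nr_pages;		/* local "per-cpu" cache: plain arithmetic */

/* Slow path: one atomic RMW on the shared counter. */
static void page_counter_uncharge(long nr_pages)
{
	atomic_fetch_sub(&page_counter, nr_pages);
}

/* Fast path: accumulate locally, flush only when the batch overflows. */
static void refill_stock(long nr_pages)
{
	stock_nr_pages += nr_pages;
	if (stock_nr_pages > STOCK_BATCH) {
		page_counter_uncharge(stock_nr_pages);
		stock_nr_pages = 0;
	}
}

int main(void)
{
	atomic_store(&page_counter, 1024);

	/* 100 page-sized uncharges now cost only a few atomic ops. */
	for (int i = 0; i < 100; i++)
		refill_stock(1);

	printf("counter=%ld stock=%ld\n",
	       (long)atomic_load(&page_counter), stock_nr_pages);
	return 0;
}

Builds with "cc -std=c11"; the point is just that most uncharges in
the loop become plain additions instead of atomic operations on a
shared cacheline.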