On Tue, Jan 05, 2021 at 08:22:39PM -0800, Roman Gushchin <guro@xxxxxx> wrote:
> Imran Khan reported a regression in hackbench results caused by the
> commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects
> instead of pages"). The regression is noticeable in the case of
> a consequent allocation of several relatively large slab objects,
> e.g. skb's. As soon as the amount of stocked bytes exceeds PAGE_SIZE,
> drain_obj_stock() and __memcg_kmem_uncharge() are called, and it leads
> to a number of atomic operations in page_counter_uncharge().
>
> The corresponding call graph is below (provided by Imran Khan):
> |__alloc_skb
> |    |
> |    |__kmalloc_reserve.isra.61
> |    |    |
> |    |    |__kmalloc_node_track_caller
> |    |    |    |
> |    |    |    |slab_pre_alloc_hook.constprop.88
> |    |    |    obj_cgroup_charge
> |    |    |    |    |
> |    |    |    |    |__memcg_kmem_charge
> |    |    |    |    |    |
> |    |    |    |    |    |page_counter_try_charge
> |    |    |    |    |
> |    |    |    |    |refill_obj_stock
> |    |    |    |    |    |
> |    |    |    |    |    |drain_obj_stock.isra.68 <--- draining old memcg
> |    |    |    |    |    |    |
> |    |    |    |    |    |    |__memcg_kmem_uncharge
> |    |    |    |    |    |    |    |
> |    |    |    |    |    |    |    |page_counter_uncharge
> |    |    |    |    |    |    |    |    |
> |    |    |    |    |    |    |    |    |page_counter_cancel
[...]
> -	page_counter_uncharge(&memcg->memory, nr_pages);
> -	if (do_memsw_account())
> -		page_counter_uncharge(&memcg->memsw, nr_pages);
> +	refill_stock(memcg, nr_pages);

I noticed that refill_stock(memcg, ...) switches the local stock to
memcg (see the simplified sketch appended below). In this particular
call chain, the uncharged memcg is the old memcg of
stock->cached_objcg. refill_stock() may therefore switch stock->cached
to that old memcg as well.

If the patch leads to better performance, the switch probably does not
happen at this point (and I guess stock->cached_objcg and stock->cached
can be independent to some extent, so the memcg that is old for one
need not be the old one for the other).

In conclusion,

Reviewed-by: Michal Koutný <mkoutny@xxxxxxxx>

Michal
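
For readers without the tree at hand, here is a simplified paraphrase of
refill_stock() as it looked in mm/memcontrol.c around that time. This is
a sketch with details elided, not the exact upstream code; it only
illustrates the switch of stock->cached discussed above:

	/*
	 * Simplified sketch of the per-cpu stock refill path,
	 * paraphrased from mm/memcontrol.c circa v5.10/v5.11.
	 * When the uncharging memcg differs from the cached one,
	 * the stock is drained and switched to the new memcg.
	 */
	static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
	{
		struct memcg_stock_pcp *stock;
		unsigned long flags;

		local_irq_save(flags);

		stock = this_cpu_ptr(&memcg_stock);
		if (stock->cached != memcg) {	/* reset if necessary */
			drain_stock(stock);	/* flush the previously cached memcg */
			css_get(&memcg->css);
			stock->cached = memcg;	/* the switch noted above */
		}
		stock->nr_pages += nr_pages;

		if (stock->nr_pages > MEMCG_CHARGE_BATCH)
			drain_stock(stock);

		local_irq_restore(flags);
	}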