On Tue, Dec 20, 2022 at 10:28 AM Roman Gushchin <roman.gushchin@xxxxxxxxx> wrote:
>
> Manually inline memcg_kmem_bypass() and active_memcg() to speed up
> get_obj_cgroup_from_current() by avoiding duplicate in_task() checks
> and active_memcg() readings.
>
> Also add a likely() macro to __get_obj_cgroup_from_memcg():
> obj_cgroup_tryget() should succeed at almost all times except
> a very unlikely race with the memcg deletion path.
>
> Signed-off-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>

Could you please also add your performance experiment setup and results to the commit description of this patch?

Acked-by: Shakeel Butt <shakeelb@xxxxxxxxxx>