On Tue, Dec 20, 2022 at 11:55:34AM -0800, Shakeel Butt wrote:
> On Tue, Dec 20, 2022 at 10:28 AM Roman Gushchin
> <roman.gushchin@xxxxxxxxx> wrote:
> >
> > Manually inline memcg_kmem_bypass() and active_memcg() to speed up
> > get_obj_cgroup_from_current() by avoiding duplicate in_task() checks
> > and active_memcg() readings.
> >
> > Also add a likely() macro to __get_obj_cgroup_from_memcg():
> > obj_cgroup_tryget() should succeed at almost all times except
> > a very unlikely race with the memcg deletion path.
> >
> > Signed-off-by: Roman Gushchin <roman.gushchin@xxxxxxxxx>
>
> Can you please add your performance experiment setup and the results
> of this patch to the commit description as well?

Sure. I used a small hack to just do a bunch of allocations in a row
and measured the time. I'll include it in the commit message.

I'll also fix the #ifdef issue in the second patch, thanks for
spotting it.

>
> Acked-by: Shakeel Butt <shakeelb@xxxxxxxxxx>

Thank you for taking a look!