On Wed 22-11-23 13:58:36, Dmitry Rokosov wrote:
> Hello Michal,
>
> Thank you for the quick review!
>
> On Wed, Nov 22, 2023 at 11:23:24AM +0100, Michal Hocko wrote:
> > On Wed 22-11-23 13:01:56, Dmitry Rokosov wrote:
> > > The shrink_memcg flow plays a crucial role in memcg reclamation.
> > > Currently, it is not possible to trace this point from non-direct
> > > reclaim paths.
> >
> > Is this really true? AFAICS we have
> > 	mm_vmscan_lru_isolate
> > 	mm_vmscan_lru_shrink_active
> > 	mm_vmscan_lru_shrink_inactive
> >
> > which are in the very core of the memory reclaim. Sure, post-processing
> > those is some work.
>
> Sure, you are absolutely right. In the usual scenario, the memcg
> shrinker utilizes two sub-shrinkers: slab and LRU. We can enable the
> tracepoints you mentioned and analyze them. However, there is one
> potential issue: enabling these tracepoints will cause reclaim events
> to fire for all pages. Although we can filter them per pid, we cannot
> filter them per cgroup. Nevertheless, there are times when it would be
> extremely beneficial to understand the effectiveness of the reclaim
> process within the relevant cgroup. For this reason, I am adding the
> cgroup name to the memcg tracepoints and implementing a cumulative
> tracepoint for memcg shrink (LRU + slab).

I can see how printing memcg in mm_vmscan_memcg_reclaim_begin makes it
easier to post-process per memcg reclaim. But you could do that just by
adding that to mm_vmscan_memcg_reclaim_{begin, end}, no? Why exactly
does this matter for kswapd and other global reclaim contexts?
-- 
Michal Hocko
SUSE Labs
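
A minimal sketch of what recording the memcg in a vmscan tracepoint
could look like, assuming the standard TRACE_EVENT machinery from
include/trace/events/vmscan.h. The event name, prototype, and the choice
of the cgroup inode number as the identifier are illustrative
assumptions, not the actual patch under discussion in this thread:

/*
 * Hypothetical event -- not the patch from this thread. It records the
 * cgroup inode number of the memcg being reclaimed so that events can
 * be filtered per cgroup rather than only per pid.
 */
TRACE_EVENT(mm_vmscan_memcg_shrink_begin,

	TP_PROTO(int order, gfp_t gfp_flags, struct mem_cgroup *memcg),

	TP_ARGS(order, gfp_flags, memcg),

	TP_STRUCT__entry(
		__field(int, order)
		__field(unsigned long, gfp_flags)
		__field(ino_t, memcg_ino)
	),

	TP_fast_assign(
		__entry->order = order;
		__entry->gfp_flags = (__force unsigned long)gfp_flags;
		/* cgroup_ino() maps a cgroup to its kernfs inode number */
		__entry->memcg_ino = cgroup_ino(memcg->css.cgroup);
	),

	TP_printk("order=%d gfp_flags=%s memcg_ino=%lu",
		  __entry->order,
		  show_gfp_flags(__entry->gfp_flags),
		  (unsigned long)__entry->memcg_ino)
);

With such a field in place, ftrace's per-event filters provide the
per-cgroup selection discussed above, e.g.:

	# the cgroup directory inode matches cgroup_ino()
	stat -c %i /sys/fs/cgroup/mygroup
	echo 'memcg_ino == 1234' > \
		/sys/kernel/tracing/events/vmscan/mm_vmscan_memcg_shrink_begin/filter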