Re: [PATCH v2 2/2] mm: memcg: introduce new event to trace shrink_memcg

Michal, Shakeel,

Sorry for pinging you here, but I don't quite understand your decision
on this patchset.

Is it a NAK or not? If it's not, should I consider redesigning
something? For instance, introducing stub functions to remove the
#ifdefs from shrink_node_memcgs().
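
To make the stub idea concrete, here is a rough sketch of what I have
in mind (the helper name and arguments are placeholders, not taken
from the patch):

#ifdef CONFIG_MEMCG
/* fires the proposed tracepoint; only built when memcg is enabled */
static void memcg_shrink_begin(struct mem_cgroup *memcg, int priority)
{
	trace_mm_vmscan_memcg_shrink_begin(priority,
					   cgroup_name(memcg->css.cgroup));
}
#else
/* no-op stub, so shrink_node_memcgs() can call it unconditionally */
static inline void memcg_shrink_begin(struct mem_cgroup *memcg,
				      int priority)
{
}
#endif

With a matching memcg_shrink_end() stub, the #ifdef blocks in
shrink_node_memcgs() would collapse into plain calls.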

Thank you for taking the time to look into this!

On Wed, Nov 22, 2023 at 09:57:27PM +0300, Dmitry Rokosov wrote:
> On Wed, Nov 22, 2023 at 02:24:59PM +0100, Michal Hocko wrote:
> > On Wed 22-11-23 13:58:36, Dmitry Rokosov wrote:
> > > Hello Michal,
> > > 
> > > Thank you for the quick review!
> > > 
> > > On Wed, Nov 22, 2023 at 11:23:24AM +0100, Michal Hocko wrote:
> > > > On Wed 22-11-23 13:01:56, Dmitry Rokosov wrote:
> > > > > The shrink_memcg flow plays a crucial role in memcg reclamation.
> > > > > Currently, it is not possible to trace this point from non-direct
> > > > > reclaim paths.
> > > > 
> > > > Is this really true? AFAICS we have
> > > > mm_vmscan_lru_isolate
> > > > mm_vmscan_lru_shrink_active
> > > > mm_vmscan_lru_shrink_inactive
> > > > 
> > > > which are in the very core of the memory reclaim. Sure,
> > > > post-processing those is some work.
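
For context, those three tracepoints sit in the common LRU reclaim
path in mm/vmscan.c; roughly (heavily simplified, function names as in
current mm):

shrink_lruvec()
  -> shrink_active_list()    /* fires mm_vmscan_lru_shrink_active */
  -> shrink_inactive_list()  /* fires mm_vmscan_lru_shrink_inactive */

both of which take pages off the LRU via isolate_lru_folios(), which
fires mm_vmscan_lru_isolate.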
> > > 
> > > Sure, you are absolutely right. In the usual scenario, the memcg
> > > shrinker utilizes two sub-shrinkers: slab and LRU. We can enable the
> > > tracepoints you mentioned and analyze them. However, there is one
> > > potential issue: enabling these tracepoints will emit reclaim events
> > > for all pages. Although we can filter them per pid, we cannot filter
> > > them per cgroup. Nevertheless, there are times when it is extremely
> > > useful to understand how effective the reclaim process was within the
> > > relevant cgroup. For this reason, I am adding the cgroup name to the
> > > memcg tracepoints and implementing a cumulative tracepoint for memcg
> > > shrink (LRU + slab).
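
To illustrate the direction, a minimal sketch of such an event with
the cgroup name attached (the field set is illustrative only; the
final patch may differ):

/* in include/trace/events/vmscan.h */
TRACE_EVENT(mm_vmscan_memcg_shrink_begin,

	TP_PROTO(int priority, const char *memcg_name),

	TP_ARGS(priority, memcg_name),

	TP_STRUCT__entry(
		__field(int, priority)
		__string(name, memcg_name)
	),

	TP_fast_assign(
		__entry->priority = priority;
		__assign_str(name, memcg_name);
	),

	TP_printk("priority=%d memcg=%s",
		  __entry->priority, __get_str(name))
);

A matching _end event could carry nr_reclaimed, so the pair would
summarize one LRU + slab pass for a single cgroup.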
> > 
> > I can see how printing the memcg in mm_vmscan_memcg_reclaim_begin
> > makes it easier to post-process per-memcg reclaim. But you could do
> > that just by adding it to mm_vmscan_memcg_reclaim_{begin, end}, no?
> > Why exactly does this matter for kswapd and other global reclaim
> > contexts?
> 
> From my point of view, kswapd and other non-direct reclaim paths are
> important for memcg analysis because they also influence the memcg
> reclaim statistics.
> 
> The mm_vmscan_memcg_reclaim_{begin, end} tracepoints are called only
> from the direct memcg reclaim paths, such as:
>     - a direct write to the 'reclaim' node
>     - changing the 'max' and 'high' thresholds
>     - triggering the 'force_empty' mechanism
>     - the charge path
>     - etc.
> 
> However, they do not cover global reclaim contexts, so they do not
> provide the full memcg reclaim statistics.
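
What makes the proposed event different is its placement:
shrink_node_memcgs() is shared by direct and global reclaim, so a
begin/end pair there fires for kswapd as well. A simplified sketch of
the placement (not the literal patch; the cgroup_name() lookup is just
for illustration):

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
	struct mem_cgroup *memcg;

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

		/* reached from kswapd and global reclaim too, not only
		 * from the direct entry points listed above */
		trace_mm_vmscan_memcg_shrink_begin(sc->priority,
					cgroup_name(memcg->css.cgroup));

		shrink_lruvec(lruvec, sc);
		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
			    sc->priority);

		trace_mm_vmscan_memcg_shrink_end(sc->priority,
					cgroup_name(memcg->css.cgroup));
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}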
> 
> -- 
> Thank you,
> Dmitry

-- 
Thank you,
Dmitry



