Re: [PATCH v2] mm/madvise: add vmstat statistics for madvise_[cold|pageout]

On Wed, Jan 25, 2023 at 09:04:16AM +0100, Michal Hocko wrote:
> On Tue 24-01-23 16:54:57, Minchan Kim wrote:
> > madvise LRU manipulation APIs need to scan address ranges to find
> > pages present in the page table and provide advice hints for them.
> > 
> > Like the pg[scan/steal] counts in vmstat, madvise_pg[scanned/hinted]
> > shows the proactive reclaim efficiency, so this patch adds those
> > two statistics to vmstat:
> > 
> > 	madvise_pgscanned, madvise_pghinted
> > 
> > Since proactive reclaim using process_madvise(2) as a userland
> > memory policy is popular (e.g., Android ActivityManagerService),
> > those stats are helpful to know how well the policy works.
> 
> The usecase description is still too vague. What are those values useful
> for? Is there anything actionable based on those numbers? How do you
> deal with multiple parties using madvise resp. process_madvise so that
> their stats are combined?

The metric helps monitoring system MM health across a fleet and
experimenting with different policies from a centralized userland memory
daemon. That's a really good fit for vmstat, alongside the other MM metrics.
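To make the fleet-monitoring use concrete, here is a minimal sketch of how an agent that already scrapes /proc/vmstat might derive a proactive-reclaim efficiency figure from the two proposed counters. The counter names come from the patch; the parsing helper, function names, and sample values are illustrative assumptions, not part of the patch.

```python
# Sketch: deriving proactive-reclaim hint efficiency from the two
# vmstat counters proposed in this patch. The counter names
# (madvise_pgscanned, madvise_pghinted) are from the patch; the
# helpers below are hypothetical monitoring-agent code.

def parse_vmstat(text):
    """Parse /proc/vmstat-style 'name value' lines into a dict."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if value.strip().isdigit():
            stats[name] = int(value.strip())
    return stats

def hint_efficiency(stats):
    """Fraction of scanned pages that actually received a hint."""
    scanned = stats.get("madvise_pgscanned", 0)
    hinted = stats.get("madvise_pghinted", 0)
    return hinted / scanned if scanned else 0.0

if __name__ == "__main__":
    # In a real agent this text would be read from /proc/vmstat;
    # the numbers here are made up for illustration.
    sample = "madvise_pgscanned 2048\nmadvise_pghinted 1536\n"
    stats = parse_vmstat(sample)
    print(f"hint efficiency: {hint_efficiency(stats):.0%}")  # 75%
```

An agent would sample these counters periodically and compare deltas across policy experiments, which is exactly the kind of aggregate signal vmstat is meant to carry.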

> 
> In the previous version I have also pointed out that this might be
> easily achieved by tracepoints. Your counterargument was a convenience
> in a large scale monitoring without going much into details. Presumably
> this is because your fleet based monitoring already collects
> /proc/vmstat while tracepoints based monitoring would require additional
> changes. This alone is rather weak argument to be honest because
> deploying tracepoints monitoring is quite trivial and can be done
> outside of the said memory reclaim agent.

The convenience matters, but that's not my argument.

I think using tracepoints for a system-wide metric makes no sense, even
though a tracepoint could be extended via BPF or a histogram trigger to
accumulate counters for such a metric.

A tracepoint is the next step when we want a further breakdown once
something strange happens. That's why we have separate levels of metric
infrastructure to narrow problems down, rather than implementing every
metric with tracepoints. Please look at the vmstat fields: almost every
field would face the same question you asked, "how do you break it down
if multiple processes contributed to the metric?"

I am fine with adding tracepoints in addition to the vmstat fields for
further breakdown, but relying only on tracepoints and friends for a
system-global metric doesn't make sense.



