Re: [PATCH v2] mm/madvise: add vmstat statistics for madvise_[cold|pageout]

On Fri, Jan 27, 2023 at 10:48:25AM +0100, Michal Hocko wrote:
> On Thu 26-01-23 16:08:43, Minchan Kim wrote:
> > On Thu, Jan 26, 2023 at 08:58:57PM +0100, Michal Hocko wrote:
> > > On Thu 26-01-23 09:10:46, Minchan Kim wrote:
> > > > On Thu, Jan 26, 2023 at 09:50:37AM +0100, Michal Hocko wrote:
> > > [...]
> > > > > I suspect you are trying to mimic the pgscan/pgsteal effectiveness metric on the
> > > > > address space, but that is a fundamentally different thing.
> > > > 
> > > > I don't see anything different, fundamentally.
> > > 
> > > OK, this really explains our disconnect here. Your metric reports
> > > nr_page_tables (nr_scanned) and the number of aged and potentially reclaimed
> > > pages. You do not know whether that reclaim was successful. So you
> > > effectively learn how many pages have already been unmapped before your
> > > call. Can this be sometimes useful? Probably yes. Does it say anything
> > > about the reclaim efficiency? I do not think so. You could have hit
> > > pinned pages or countless other conditions due to which those pages couldn't
> > > have been reclaimed and have stayed mapped after the madvise call.
> > > 
> > > pgsteal tells you how many pages from those scanned have been reclaimed.
> > > See the difference?
> > 
> > That's why my previous version kept counting the exact number of reclaimed/
> > deactivated pages, but I changed my mind since, in real practice, I observed
> > that the majority of failures came from already-paged-out ranges and shared
> > pages rather than from the countless other minor conditions. Without finding
> > present pages, the madvise hints couldn't do anything in the first place, and
> > that's the major cost we are facing.
> 
> I cannot really comment on your user space reclaim policy but I would
> have expected that you at least check for rss before trying to use
> madvise on the range. Learning that from the operation sounds like a
> suboptimal policy to me.

The current rss can't tell where the present pages are within a huge address
space. And learning that is not what I want from the operation; what I want is
to keep monitoring the trend across the fleet.


> 
> > Saying it again, I don't think the global stat can cover all the minor cases
> > you are insisting on, and I agree a tracepoint could do a better job of
> > pinpointing root causes, but the global stat still has a role: it provides
> > basic ground to sense something abnormal and guides our next steps via an
> > easier and more efficient interface.
> 
> I hate to repeat myself, but the more we discuss this the more I am
> convinced that vmstat is a bad fit. Sooner or later you will end up realizing
> that nr_reclaimed/nr_scanned is an insufficient metric because you would
> need to learn more about those reclaim failures. Really, what you want is
> to have a tracepoint with a full reclaim metric and grow monitoring tooling
> around that. This will deal with the major design flaw of a global stat
> mentioned earlier (that you cannot attribute specific stats to the
> corresponding madvise caller).

Then let me turn the question back to you.

From your perspective, which of the accumulated counter statistics among the
current vmstat fields, or the pending ones to be merged, are reasonable as
vmstat fields rather than tracepoints?

Almost every stat has corner cases for various reasons, and people will want
to know the cause at process, context, function or block scope depending on
how they use the stat. Even the tracepoints you love so much cannot tell them
every detail they want without adding more and more of them as the code keeps
changing. However, unlike what you worry about, people have used such
high-level, vague vmstat fields very well to understand and monitor system
health, even with their various miscounting cases, because they know those
corner cases are really minor.
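
To be concrete about the kind of coarse monitoring I mean, the global reclaim
efficiency people already watch comes straight out of /proc/vmstat, roughly
like the sketch below (only the kswapd and direct counters are summed, to
avoid double counting the per-type breakdowns on newer kernels):

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	unsigned long long val, scanned = 0, stolen = 0;

	if (!f)
		return 1;
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "pgscan_kswapd") ||
		    !strcmp(name, "pgscan_direct"))
			scanned += val;
		else if (!strcmp(name, "pgsteal_kswapd") ||
			 !strcmp(name, "pgsteal_direct"))
			stolen += val;
	}
	fclose(f);
	printf("reclaim efficiency: %.1f%%\n",
	       scanned ? 100.0 * stolen / scanned : 0.0);
	return 0;
}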

I am really curious what metric we could ever add to vmstat instead of a
tracepoint in the future if we follow your logic.



