Re: [PATCH] mm: vmscan: split khugepaged stats from direct reclaim stats

On Tue, Oct 25, 2022 at 02:53:01PM -0700, Yang Shi wrote:
> On Tue, Oct 25, 2022 at 1:54 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> >
> > On Tue, Oct 25, 2022 at 12:40:15PM -0700, Yang Shi wrote:
> > > On Tue, Oct 25, 2022 at 10:05 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > > >
> > > > Direct reclaim stats are useful for identifying a potential source for
> > > > application latency, as well as spotting issues with kswapd. However,
> > > > khugepaged currently distorts the picture: as a kernel thread it
> > > > doesn't impose allocation latencies on userspace, and it explicitly
> > > > opts out of kswapd reclaim. Its activity showing up in the direct
> > > > reclaim stats is misleading. Counting it as kswapd reclaim could also
> > > > cause confusion when trying to understand actual kswapd behavior.
> > > >
> > > > Break out khugepaged from the direct reclaim counters into new
> > > > pgsteal_khugepaged, pgdemote_khugepaged, pgscan_khugepaged counters.
> > > >
> > > > Test with a huge executable (CONFIG_READ_ONLY_THP_FOR_FS):
> > > >
> > > > pgsteal_kswapd 1342185
> > > > pgsteal_direct 0
> > > > pgsteal_khugepaged 3623
> > > > pgscan_kswapd 1345025
> > > > pgscan_direct 0
> > > > pgscan_khugepaged 3623
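
For reference, the shape of the change is roughly the sketch below (not
the exact diff). It assumes the new PG{SCAN,STEAL}_KHUGEPAGED items sit
directly next to the existing _KSWAPD/_DIRECT ones in enum vm_event_item,
and that khugepaged exposes a current_is_khugepaged() helper:

/*
 * Pick the counter group for whoever is doing the reclaim.  The offset
 * trick only works if the _KSWAPD/_DIRECT/_KHUGEPAGED items are spaced
 * the same way in the PGSCAN and PGSTEAL groups.
 */
static int reclaimer_offset(void)
{
	if (current_is_kswapd())
		return 0;
	if (current_is_khugepaged())
		return PGSTEAL_KHUGEPAGED - PGSTEAL_KSWAPD;
	return PGSTEAL_DIRECT - PGSTEAL_KSWAPD;
}

/* at the counting sites in mm/vmscan.c: */
__count_vm_events(PGSCAN_KSWAPD + reclaimer_offset(), nr_scanned);
__count_vm_events(PGSTEAL_KSWAPD + reclaimer_offset(), nr_reclaimed);
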
> > >
> > > There are other kernel threads or work items that may allocate
> > > memory and then trigger memory reclaim, so they may have similar
> > > problems, and someone may try to add a new stat for them. So how
> > > about we make the stats more general, for example, call them
> > > "pg{steal|scan}_kthread"?
> >
> > I'm not convinced that's a good idea.
> >
> > Can you generally say that userspace isn't indirectly waiting for one
> > of those allocating threads? With khugepaged, we know.
> 
> AFAIK, ksm may do slab allocation with __GFP_DIRECT_RECLAIM.

Right, but ksm also uses __GFP_KSWAPD_RECLAIM. So while userspace
isn't directly waiting for ksm, when ksm enters direct reclaim it's
because kswapd failed. This is of interest to kernel developers.
Userspace will likely see direct reclaim in that scenario as well, so
the ksm direct reclaim counts aren't liable to confuse users.

Khugepaged on the other hand will *always* reclaim directly, even if
there is no memory pressure or kswapd failure. The direct reclaim
counts there are misleading to both developers and users.
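
To make the gfp difference concrete: ksm's slab allocations are
GFP_KERNEL-style, with both reclaim bits set, while khugepaged allocates
its huge pages with GFP_TRANSHUGE or GFP_TRANSHUGE_LIGHT, which mask the
kswapd bit out. Simplified from memory, so double-check
include/linux/gfp_types.h:

#define __GFP_RECLAIM		(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

/* GFP_KERNEL can wake kswapd *and* enter direct reclaim */
#define GFP_KERNEL		(__GFP_RECLAIM | __GFP_IO | __GFP_FS)

/*
 * THP allocation masks: kswapd reclaim is stripped, and GFP_TRANSHUGE
 * adds only direct reclaim back.  khugepaged picks one of these
 * depending on the defrag setting.
 */
#define GFP_TRANSHUGE_LIGHT	((GFP_HIGHUSER_MOVABLE | __GFP_COMP | \
				 __GFP_NOMEMALLOC | __GFP_NOWARN) & ~__GFP_RECLAIM)
#define GFP_TRANSHUGE		(GFP_TRANSHUGE_LIGHT | __GFP_DIRECT_RECLAIM)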

What it really should be is pgscan_nokswapd_nouserprocesswaiting, but
that just seems kind of long ;-)

I'm also not sure anybody but khugepaged is doing direct reclaim
without kswapd reclaim. It seems unlikely we'll get more of those.

> Some device mapper drivers may do heavy lifting in a workqueue, for
> example dm-crypt, particularly for writes.

Userspace will wait for those through dirty throttling. We'd want to
know about kswapd failures in that case - again, without them being
muddied by khugepaged.

> > And those other allocations are usually __GFP_KSWAPD_RECLAIM, so if
> > they do direct reclaim, we'd probably want to know that kswapd is
> > failing to keep up (doubly so if userspace is waiting). In a shared
> > kthread counter, khugepaged would again muddy the waters.



