On Fri, Jan 10, 2025 at 5:20 AM Li Zhijian <lizhijian@xxxxxxxxxxx> wrote:
>
> Commit f77f0c751478 ("mm,memcg: provide per-cgroup counters for NUMA balancing operations")
> moved the accounting of PGDEMOTE_* statistics to shrink_inactive_list().
> However, shrink_inactive_list() is not called when lru_gen_enabled is true,
> leading to incorrect demotion statistics despite actual demotion events
> occurring.
>
> Add the PGDEMOTE_* accounting in evict_folios(), ensuring that demotion
> statistics are correctly updated regardless of the lru_gen_enabled state.
> This fix is crucial for systems that rely on accurate NUMA balancing
> metrics for performance tuning and resource management.
>
> Fixes: f77f0c751478 ("mm,memcg: provide per-cgroup counters for NUMA balancing operations")
> Signed-off-by: Li Zhijian <lizhijian@xxxxxxxxxxx>
> ---
>  mm/vmscan.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 430d580e37dd..f2d279de06c4 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4642,6 +4642,8 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
>  		reset_batch_size(walk);
>  	}
>
> +	__mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
> +			   stat.nr_demoted);

The mm-hotfixes-stable branch already has the same fix from Donet:
https://lore.kernel.org/linux-mm/20250109060540.451261-1-donettom@xxxxxxxxxxxxx/

Andrew, can you please drop this one from mm-unstable so that we won't
increment the counter twice?

Thanks!
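
For reference, a minimal userspace sketch of the double-counting concern: if
both equivalent hunks were merged, each demotion batch would be added to the
same per-lruvec counter from two call sites, so the reported PGDEMOTE value
would be twice the number of folios actually demoted. The names below
(demote_stat, account_demotion) are hypothetical stand-ins, not kernel code.

#include <stdio.h>

/* Toy stand-in for a per-lruvec PGDEMOTE_* counter (hypothetical name). */
static unsigned long demote_stat;

/* Hypothetical helper modelling one accounting call site. */
static void account_demotion(unsigned long nr_demoted)
{
	demote_stat += nr_demoted;
}

int main(void)
{
	unsigned long nr_demoted = 32;	/* one reclaim pass demoted 32 folios */

	/* Accounting site from the fix already in mm-hotfixes-stable (modelled). */
	account_demotion(nr_demoted);

	/* Duplicate site from the hunk quoted above (modelled). */
	account_demotion(nr_demoted);

	/* Prints 64 for 32 actual demotions: the statistic is doubled. */
	printf("PGDEMOTE (toy) = %lu for %lu demoted folios\n",
	       demote_stat, nr_demoted);
	return 0;
}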