On Thu, Feb 09, 2023 at 12:01:50PM -0300, Marcelo Tosatti wrote:
> This patch series addresses the following two problems:
>
> 1. A customer provided evidence indicating that the idle tick
>    was stopped, yet the CPU-specific vmstat counters remained
>    populated.
>
>    One can only assume quiet_vmstat() was not invoked on return
>    to the idle loop. If I understand correctly, this divergence
>    might erroneously prevent a reclaim attempt by kswapd: if the
>    number of zone-specific free pages is below the per-CPU drift
>    value, then zone_page_state_snapshot() is used to compute a
>    more accurate view of that statistic. Any task blocked on the
>    NUMA-node-specific pfmemalloc_wait queue will then be unable
>    to make significant progress via direct reclaim unless it is
>    killed after being woken up by kswapd
>    (see throttle_direct_reclaim()).
>
> 2. With a SCHED_FIFO task that busy loops on a given CPU, and a
>    kworker for that CPU at SCHED_OTHER priority, queueing work to
>    sync the per-CPU vmstats will either cause that work to never
>    execute, or stalld (i.e. the stall daemon) boosts the kworker
>    priority, which causes a latency violation.
>
> Both problems are addressed by having vmstat_shepherd flush the
> per-CPU counters to the global counters from remote CPUs.
>
> This is done using cmpxchg to manipulate the counters, both
> CPU-locally (via the account functions) and remotely (via
> cpu_vm_stats_fold).

Frankly, another case of bandaid [1]?

[1] https://lore.kernel.org/lkml/20230223150624.GA29739@xxxxxx/