On Mon, Mar 20, 2023 at 07:21:04PM +0100, Michal Hocko wrote:
> On Mon 20-03-23 15:03:33, Marcelo Tosatti wrote:
> > A customer provided evidence indicating that a process
> > was stalled in direct reclaim:
> >
> > - The process was trapped in throttle_direct_reclaim().
> >   wait_event_killable() was called to wait for the condition
> >   allow_direct_reclaim(pgdat) to become true for the current node.
> >   allow_direct_reclaim(pgdat) examines the number of free pages
> >   on the node via zone_page_state(), which just returns the value
> >   in zone->vm_stat[NR_FREE_PAGES].
> >
> > - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
> >   However, the freelist on this node was not empty.
> >
> > - This inconsistency in the vmstat value was caused by the per-CPU
> >   vmstat counters on nohz_full CPUs. Every increment/decrement of a
> >   vmstat counter is performed on the per-CPU counter first, and the
> >   pending diffs are then folded into the zone's vmstat counter
> >   periodically. However, on nohz_full CPUs (48 of 52 CPUs on this
> >   customer's system) these pending diffs were not folded once a CPU
> >   had no events on it, so the CPU kept sleeping indefinitely.
> >   I checked the per-CPU vmstat counters and found a total of 69
> >   counts not yet folded into the zone's vmstat counter.
> >
> > - In this situation, kswapd did not help the trapped process.
> >   In pgdat_balanced(), zone_watermark_ok_safe() examines the number
> >   of free pages on the node via zone_page_state_snapshot(), which
> >   includes the pending per-CPU counts. Therefore kswapd correctly
> >   saw the 69 free pages. Since zone->_watermark = {8, 20, 32},
> >   kswapd did not run because 69 was above the high watermark of 32.
> >
> > Change allow_direct_reclaim to use zone_page_state_snapshot, which
> > allows a more precise version of the vmstat counters to be used.
> >
> > allow_direct_reclaim will only be called from try_to_free_pages,
> > which is not a hot path.
>
> Have you managed to test this patch to confirm it addresses the above
> issue? It should but better double check that.
>
> > Suggested-by: Michal Hocko <mhocko@xxxxxxxx>
> > Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
>
> The patch makes sense regardless but a note about testing should be
> added.
>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>

Michal,

The patch has not been tested in the original setup where the problem
was found, but I don't think that validation is easy to do (I am
checking with the reporter anyway). Perhaps one could find a synthetic
reproducer.

It is pretty easy to observe that, on an isolated nohz_full CPU, the
deferrable timer queued on it (the timer that should queue
vmstat_update on that CPU) does not execute for long periods. This
leaves the global stats stale, since per-CPU free page counts can
remain unfolded for as long as the CPU has tick processing stopped.
Which matches the data available.

A small toy model of the plain read versus the snapshot read is
sketched at the end of this mail.

Thanks!
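
For illustration only, below is a self-contained userspace toy model
(not kernel code; the names page_state()/page_state_snapshot(),
NR_CPUS and the counter layout are made up for this sketch) of why a
plain zone_page_state()-style read can return 0 while a
zone_page_state_snapshot()-style read still sees the 69 pages pending
in per-CPU diffs. Built with a plain C compiler, it should print 0 for
the plain read and 69 for the snapshot read, mirroring the numbers in
the report above.

	/* Toy model of a global vmstat counter plus per-CPU pending diffs. */
	#include <stdio.h>

	#define NR_CPUS 52

	static long vm_stat_nr_free;		/* models zone->vm_stat[NR_FREE_PAGES] */
	static long vm_stat_diff[NR_CPUS];	/* models per-CPU pending diffs */

	/* Plain read: global counter only (what allow_direct_reclaim used). */
	static long page_state(void)
	{
		return vm_stat_nr_free;
	}

	/* Snapshot read: global counter plus unfolded per-CPU diffs. */
	static long page_state_snapshot(void)
	{
		long x = vm_stat_nr_free;
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			x += vm_stat_diff[cpu];
		return x < 0 ? 0 : x;
	}

	int main(void)
	{
		int cpu;

		/*
		 * The nohz_full CPUs freed pages but never folded their diffs,
		 * so all 69 free pages are accounted only per CPU.
		 */
		vm_stat_nr_free = 0;
		for (cpu = 4; cpu < NR_CPUS; cpu++)	/* 48 nohz_full CPUs */
			vm_stat_diff[cpu] = 1;
		vm_stat_diff[4] += 21;			/* total pending: 69 */

		printf("plain read:    %ld free pages\n", page_state());
		printf("snapshot read: %ld free pages\n", page_state_snapshot());
		return 0;
	}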