Re: [PATCH v5 00/12] fold per-CPU vmstats remotely

On Tue 14-03-23 09:59:37, Marcelo Tosatti wrote:
> On Tue, Mar 14, 2023 at 01:25:53PM +0100, Michal Hocko wrote:
> > On Mon 13-03-23 13:25:07, Marcelo Tosatti wrote:
> > > This patch series addresses the following two problems:
> > > 
> > >     1. A customer provided evidence indicating that the idle
> > >        tick was stopped, yet the CPU-specific vmstat counters
> > >        still remained populated.
> > > 
> > >        One can therefore only assume quiet_vmstat() was not
> > >        invoked on return to the idle loop. If I understand
> > >        correctly, this divergence can erroneously prevent a
> > >        reclaim attempt by kswapd: if the number of zone-specific
> > >        free pages is below the per-cpu drift value, then
> > >        zone_page_state_snapshot() is used to compute a more
> > >        accurate view of that statistic. Thus any task blocked
> > >        on the NUMA-node-specific pfmemalloc_wait queue will be
> > >        unable to make significant progress via direct reclaim
> > >        unless it is killed after being woken up by kswapd
> > >        (see throttle_direct_reclaim(), abridged below).
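> > > 
> > >        The blocking point is roughly this (abridged from
> > >        throttle_direct_reclaim() in mm/vmscan.c):
> > > 
> > >            /* Throttle until kswapd wakes the process */
> > >            wait_event_killable(zone->zone_pgdat->pfmemalloc_wait,
> > >                    allow_direct_reclaim(pgdat));
> > > 
> > >        so the task sleeps until allow_direct_reclaim() becomes
> > >        true or the task is killed.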
> > 
> > I have a hard time following the actual problem described above. Are
> > you suggesting that a lack of pcp vmstat counter updates has led to
> > reclaim issues? What is the said "evidence"? Could you share more of
> > the story please?
> 
> 
>   - The process was trapped in throttle_direct_reclaim().
>     The function wait_event_killable() was called to wait for the
>     condition allow_direct_reclaim(pgdat) to become true for the
>     current node. allow_direct_reclaim(pgdat) examined the number
>     of free pages on the node via zone_page_state(), which just
>     returns the value in zone->vm_stat[NR_FREE_PAGES].
> 
>   - On node #1, zone->vm_stat[NR_FREE_PAGES] was 0.
>     However, the freelist on this node was not empty.
> 
>   - This inconsistency in the vmstat value was caused by the percpu
>     vmstat handling on nohz_full cpus. Every increment/decrement of
>     vmstat is performed on a percpu vmstat counter first, and the
>     pooled diffs are then folded into the zone's vmstat counter in a
>     timely manner (see the sketch after this list). However, on
>     nohz_full cpus (48 of the 52 cpus on this customer's system)
>     these pooled diffs were no longer folded once the cpu had no
>     events on it and went to sleep indefinitely.
>     I checked the percpu vmstat and found a total of 69 counts not
>     yet folded into the zone's vmstat counter.
> 
>   - In this situation, kswapd did not help the trapped process.
>     In pgdat_balanced(), zone_watermark_ok_safe() examined the number
>     of free pages on the node via zone_page_state_snapshot(), which
>     does account for the pending counts in the percpu vmstat.
>     Therefore kswapd correctly knew there were 69 free pages.
>     Since zone->_watermark = {8, 20, 32}, kswapd did not run because
>     69 was above the high watermark of 32.
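> 
> For reference, the fast path that pools these diffs per-CPU looks
> roughly like this (paraphrasing __mod_zone_page_state() in
> mm/vmstat.c; details vary by kernel version):
> 
> 	/* p points at this CPU's vm_stat_diff[item] */
> 	x = delta + __this_cpu_read(*p);
> 	t = __this_cpu_read(pcp->stat_threshold);
> 
> 	if (unlikely(abs(x) > t)) {
> 		/* fold the pooled diff into zone->vm_stat[item] */
> 		zone_page_state_add(x, zone, item);
> 		x = 0;
> 	}
> 	/* otherwise the delta stays in the percpu counter */
> 	__this_cpu_write(*p, x);
> 
> So up to stat_threshold pages per CPU can sit in vm_stat_diff until
> something folds them, which is exactly what stopped happening here.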

If the imprecision of allow_direct_reclaim is the underlying problem,
why haven't you used zone_page_state_snapshot instead?
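
I.e., something along these lines (completely untested):

	diff --git a/mm/vmscan.c b/mm/vmscan.c
	--- a/mm/vmscan.c
	+++ b/mm/vmscan.c
	@@ allow_direct_reclaim() @@
	-		free_pages += zone_page_state(zone, NR_FREE_PAGES);
	+		free_pages += zone_page_state_snapshot(zone, NR_FREE_PAGES);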

Anyway, this is the kind of information that is really helpful to have
in the patch description.

[...]
> > >     2. With a SCHED_FIFO task that busy loops on a given CPU,
> > >        and the kworker for that CPU at SCHED_OTHER priority,
> > >        work queued to sync the per-CPU vmstats will either never
> > >        execute, or stalld (i.e. the stall daemon) boosts the
> > >        kworker priority, which causes a latency violation
> > 
> > Why is that a problem? Out-of-sync stats shouldn't cause major problems.
> > Or can they?
> 
> Consider a SCHED_FIFO task that is polling the network queue (say
> testpmd):
> 
> 	do {
> 		if (net_registers->state & DATA_AVAILABLE)
> 			process_data();
> 	} while (!stopped);
> 
> Since this task runs at SCHED_FIFO priority, the kworker won't be
> scheduled to run (and therefore the per-CPU vmstats won't be flushed
> to the global vmstats).
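> 
> That flush depends on the vmstat shepherd queueing per-CPU work,
> roughly like this (paraphrasing vmstat_shepherd() in mm/vmstat.c):
> 
> 	for_each_online_cpu(cpu) {
> 		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
> 
> 		if (!delayed_work_pending(dw) && need_update(cpu))
> 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
> 	}
> 
> and that work item runs in a kworker bound to the isolated CPU, so a
> SCHED_FIFO busy loop starves it indefinitely.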

Yes, that is certainly possible. But my main point is that vmstat
imprecision shouldn't cause functional problems. That is why we have
_snapshot readers to get an exact value where it matters for
consistency.
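
For completeness, the _snapshot variant also folds in the pending
per-CPU diffs, roughly (paraphrasing zone_page_state_snapshot() in
include/linux/vmstat.h; field names as in recent kernels):

	static inline unsigned long zone_page_state_snapshot(struct zone *zone,
						enum zone_stat_item item)
	{
		long x = atomic_long_read(&zone->vm_stat[item]);
	#ifdef CONFIG_SMP
		int cpu;

		/* add in what each CPU has not folded yet */
		for_each_online_cpu(cpu)
			x += per_cpu_ptr(zone->per_cpu_zonestats,
					 cpu)->vm_stat_diff[item];

		if (x < 0)
			x = 0;
	#endif
		return x;
	}

so readers that need a precise value pay the cost of walking all CPUs
instead of forcing remote CPUs to flush.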

> Or, if testpmd runs at SCHED_OTHER, then the work item that flushes
> the per-CPU vmstats causes the following context switches:
> 
> 	testpmd -> kworker
> 	kworker: flush per-CPU vmstats
> 	kworker -> testpmd
> 
> And this might cause undesired latency in the packets being
> processed by the testpmd task.

Right, but can you have any latency expectations in a situation like
that?

-- 
Michal Hocko
SUSE Labs



