Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> writes:

> On Fri, Jul 21, 2023 at 03:28:43PM +0800, Huang, Ying wrote:
>> Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> writes:
>>
>> > On Wed, Jul 19, 2023 at 01:59:00PM +0800, Huang, Ying wrote:
>> >> > The big remaining corner case to watch out for is where the sum
>> >> > of the boosted pcp->high exceeds the low watermark. If that should
>> >> > ever happen then potentially a premature OOM happens because the
>> >> > watermarks are fine so no reclaim is active but no pages are
>> >> > available. It may even be the case that the sum of pcp->high should
>> >> > not exceed *min* as that corner case means that processes may
>> >> > prematurely enter direct reclaim (not as bad as OOM but still bad).
>> >>
>> >> Sorry, I don't understand this. When pages are moved from the buddy
>> >> allocator to the PCP, zone NR_FREE_PAGES is decreased in
>> >> rmqueue_bulk(). That is, pages in the PCP are counted as used instead
>> >> of free. And zone_watermark_ok*() and zone_watermark_fast() use zone
>> >> NR_FREE_PAGES to check the watermark. So, if my understanding is
>> >> correct, even if the number of pages in the PCP is larger than the
>> >> low/min watermark, we can still trigger reclaim. Is my understanding
>> >> correct?
>> >>
>> >
>> > You're right, I didn't check the timing of the accounting and all that
>> > occurred to me was "the timing of when watermarks trigger kswapd or
>> > direct reclaim may change as a result of PCP adaptive resizing". Even
>> > though I got the timing wrong, the shape of the problem just changes.
>> > I suspect that an excessively large pcp->high relative to the
>> > watermarks may mean that reclaim happens prematurely if too many pages
>> > are pinned in PCP lists as the zone's free pages approach the
>> > watermark.
>>
>> Yes, I think so too. In addition to reclaim, falling back to a remote
>> NUMA node may happen prematurely too.
>>
>
> Yes, with the added bonus that this is relatively easy to detect from
> the NUMA miss stats.
> I say "relative" because in a lot of cases it'll be difficult to
> distinguish from the noise. Hence, it's better to be explicit in the
> changelog that the potential problem is known and has been considered.
> That way, if bisect points the finger at adaptive resizing, there will
> be some notes on how to investigate the bug.

Sure, will do that.

>> > While disabling the adaptive resizing during reclaim will limit the
>> > worst of the problem, it may still be the case that kswapd is woken
>> > early simply because there are enough CPUs pinning pages in PCP
>> > lists. Similarly, depending on the size of pcp->high and the gap
>> > between the watermarks, it's possible for direct reclaim to happen
>> > prematurely. I could still be wrong because I'm not thinking the
>> > problem through fully, examining the code or thinking about the
>> > implementation. It's simply worth keeping in mind the impact that
>> > elevated pcp->high values have on the timing of watermark failures.
>> > If it's complex enough, it may be necessary to have a separate patch
>> > dealing with the impact of elevated pcp->high on watermarks.
>>
>> Sure, I will keep this in mind. We may need to check the zone
>> watermark when tuning pcp->high and free some pages from the PCP
>> before falling back to another node or reclaiming.
>>
>
> That would certainly be one option, a cap on adaptive resizing as
> memory gets lower. It's not perfect, but ideally the worst-case
> behaviour would be that PCP adaptive sizing returns to the existing
> behaviour when memory usage is persistently high and near the
> watermarks within a zone.

--
Best Regards,
Huang, Ying