On Mon, 6 Sep 2021 16:40:28 +0200 Vlastimil Babka wrote:
>On 9/2/21 20:17, Mike Kravetz wrote:
>>
>> Here is some very high level information from a long stall that was
>> interrupted. This was an order 9 allocation from alloc_buddy_huge_page().
>>
>> [55269.530564] __alloc_pages_slowpath: jiffies 47329325 tries 609673 cpu_tries 1 node 0 FAIL
>> [55269.539893] r_tries 25 c_tries 609647 reclaim 47325161 compact 607
>>
>> Yes, in __alloc_pages_slowpath for 47329325 jiffies before being interrupted.
>> should_reclaim_retry returned true 25 times and should_compact_retry returned
>> true 609647 times.
>> Almost all time (47325161 jiffies) spent in __alloc_pages_direct_reclaim, and
>> 607 jiffies spent in __alloc_pages_direct_compact.
>>
>> Looks like both
>> 	reclaim retries > MAX_RECLAIM_RETRIES
>> and
>> 	compaction retries > MAX_COMPACT_RETRIES
>>
>Yeah AFAICS that's only possible with the scenario I suspected. I guess
>we should put a limit on compact retries (maybe some multiple of
>MAX_COMPACT_RETRIES) even if it thinks that reclaim could help, while
>clearly it doesn't (i.e. because somebody else is stealing the page like
>in your test case).

And/or clamp reclaim retries for costly orders,

	reclaim retries = MAX_RECLAIM_RETRIES - order;

to pull the chance of a stall down as low as possible.
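
To make the clamp concrete, here is a minimal user-space sketch, not a
patch: clamped_reclaim_retries() is a made-up name, and the constants
are copied from current mm code only for illustration.

	#include <stdio.h>

	/* Same values as in current mm code, repeated here for illustration. */
	#define MAX_RECLAIM_RETRIES	16
	#define PAGE_ALLOC_COSTLY_ORDER	3

	/*
	 * Hypothetical helper: costly orders get a reclaim retry budget
	 * shrunk by the order, but never less than one retry.
	 */
	static int clamped_reclaim_retries(unsigned int order)
	{
		int retries;

		if (order <= PAGE_ALLOC_COSTLY_ORDER)
			return MAX_RECLAIM_RETRIES;

		retries = MAX_RECLAIM_RETRIES - (int)order;
		return retries > 0 ? retries : 1;
	}

	int main(void)
	{
		/* An order-9 hugetlb allocation would get 16 - 9 = 7 retries. */
		printf("order 9 -> %d reclaim retries\n",
		       clamped_reclaim_retries(9));
		return 0;
	}

With MAX_RECLAIM_RETRIES at 16, the order-9 case above would be allowed
at most 7 reclaim rounds, while non-costly orders keep the full budget.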