A need_resched warning was reported and already fixed by adding a
need_resched() check to walk_pud_range(). Dial down MAX_LRU_BATCH
anyway in the interest of direct reclaim latency.

  WARNING: CPU: 22 PID: 2771 at kernel/sched/core.c:3637 scheduler_tick+0x339/0x410
  Call Trace:
   <IRQ>
   update_process_times+0x7b/0x90
   tick_sched_timer+0x82/0xd0
   __run_hrtimer+0x81/0x200
   hrtimer_interrupt+0x192/0x450
   smp_apic_timer_interrupt+0xac/0x1d0
   apic_timer_interrupt+0x88/0x90
   </IRQ>
  RIP: 0010:walk_pte_range+0x1c2/0x6a0
   walk_pmd_range+0x1ed/0x490
   walk_pud_range+0xe6/0x2b0
   __walk_page_range+0x111/0x690
   walk_page_range+0x4e/0x150
   walk_mm+0x110/0x200
   try_to_inc_max_seq+0xdb/0xb10
   lru_gen_run_cmd+0x153/0x1c0
   lru_gen_run+0x150/0x210
   stale_page_run+0x62/0x730
   kthread+0x148/0x1b0
   ret_from_fork+0x54/0x60

Reported-by: David Rientjes <rientjes@xxxxxxxxxx>
Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
---
 include/linux/mmzone.h | 2 +-
 mm/vmscan.c            | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 2b3f273faf68..4c8510f26b02 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -381,7 +381,7 @@ enum {
 };
 
 #define MIN_LRU_BATCH	BITS_PER_LONG
-#define MAX_LRU_BATCH	(MIN_LRU_BATCH * 128)
+#define MAX_LRU_BATCH	(MIN_LRU_BATCH * 64)
 
 /* whether to keep historical stats from evicted generations */
 #ifdef CONFIG_LRU_GEN_STATS
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 77d2d08950ba..2add99eecd0c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4400,7 +4400,7 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
 	} while (mm);
 done:
 	if (!success) {
-		if (sc->priority < DEF_PRIORITY - 2)
+		if (sc->priority <= DEF_PRIORITY - 2)
 			wait_event_killable(lruvec->mm_state.wait,
 					    max_seq < READ_ONCE(lrugen->max_seq));
 
-- 
2.37.3.968.ga6b4b080e4-goog