On production we have noticed hard lockups on large machines running
large jobs, caused by kswapd hoarding the LRU lock within
isolate_lru_pages() when sc->reclaim_idx is 0, which is a small zone.
The LRU list was a couple hundred GiBs, and the condition
(page_zonenum(page) > sc->reclaim_idx) in isolate_lru_pages() was
skipping GiBs of pages while holding the LRU spinlock with interrupts
disabled.

On further inspection, there appear to be two issues:

1) If kswapd, on return from balance_pgdat(), could not sleep (e.g.
   all zones are still unbalanced), classzone_idx is unintentionally
   set to 0, and the whole reclaim cycle of kswapd will try to
   reclaim only the lowest and smallest zone while traversing the
   whole memory.

2) Fundamentally, isolate_lru_pages() is really bad when the
   allocation has woken kswapd for a smaller zone on a very large
   machine running very large jobs. It can hoard the LRU spinlock
   while skipping over 100s of GiBs of pages.

This patch only fixes (1). Issue (2) needs a more fundamental
solution.

Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9e3292ee5c7c..786dacfdfe29 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3908,7 +3908,7 @@ static int kswapd(void *p)
 
 		/* Read the new order and classzone_idx */
 		alloc_order = reclaim_order = pgdat->kswapd_order;
-		classzone_idx = kswapd_classzone_idx(pgdat, 0);
+		classzone_idx = kswapd_classzone_idx(pgdat, classzone_idx);
 		pgdat->kswapd_order = 0;
 		pgdat->kswapd_classzone_idx = MAX_NR_ZONES;
 
-- 
2.22.0.410.gd8fdbe21b5-goog
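
For reference, why (1) happens: kswapd_classzone_idx() (added by the
commit in the Fixes tag) looks roughly like the sketch below; the body
is recalled from that era of mm/vmscan.c, so treat it as illustrative
rather than verbatim. pgdat->kswapd_classzone_idx is reset to
MAX_NR_ZONES at the end of each iteration of the kswapd loop, so when
kswapd_try_to_sleep() returns without sleeping and no new wakeup
request has arrived, the hardcoded 0 falls through unchanged:

	/* Sketch of the helper, recalled from the e716f2eb24de-era tree. */
	static enum zone_type kswapd_classzone_idx(pg_data_t *pgdat,
						   enum zone_type classzone_idx)
	{
		/*
		 * MAX_NR_ZONES means "no wakeup request pending"; the
		 * caller-supplied hint is then returned as-is. A literal
		 * 0 here thus narrows the next reclaim cycle to the
		 * lowest zone.
		 */
		if (pgdat->kswapd_classzone_idx == MAX_NR_ZONES)
			return classzone_idx;

		return max(pgdat->kswapd_classzone_idx, classzone_idx);
	}

Passing the previous classzone_idx instead, as the patch does, carries
the last requested zone index over a failed sleep, so kswapd keeps
reclaiming for the zone it was originally woken for.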