Check that sc->nr.unqueued_dirty > 0 before setting the PGDAT_DIRTY flag, so that the flag is not set when sc->nr.unqueued_dirty and sc->nr.file_taken are both zero.

As with the PGDAT_WRITEBACK flag, it cannot be guaranteed that only pages marked for immediate reclaim are on the evictable LRUs in the subsequent shrink passes of the same kswapd reclaim cycle. When a small number of pages marked for immediate reclaim and a large number of pages not marked for immediate reclaim are on the evictable LRUs at the same time, throttling kswapd as soon as a single page marked for immediate reclaim is seen puts kswapd to sleep unnecessarily and increases reclaim overhead. For example, if sc->nr.file_taken is 128 and only one of those pages is marked for immediate reclaim, the remaining 127 pages can still make progress and kswapd should not stall. Fix this by throttling kswapd only when sc->nr.immediate is equal to sc->nr.file_taken.

Signed-off-by: Zhiguo Jiang <justinjiang@xxxxxxxx>
---
 mm/vmscan.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
 mode change 100644 => 100755 mm/vmscan.c

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d8c3338fee0f..5723672bbdc2
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5915,17 +5915,17 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 			set_bit(PGDAT_WRITEBACK, &pgdat->flags);

 		/* Allow kswapd to start writing pages during reclaim.*/
-		if (sc->nr.unqueued_dirty == sc->nr.file_taken)
+		if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.file_taken)
 			set_bit(PGDAT_DIRTY, &pgdat->flags);

 		/*
-		 * If kswapd scans pages marked for immediate
+		 * If kswapd scans massive pages marked for immediate
 		 * reclaim and under writeback (nr_immediate), it
 		 * implies that pages are cycling through the LRU
 		 * faster than they are written so forcibly stall
 		 * until some pages complete writeback.
 		 */
-		if (sc->nr.immediate)
+		if (sc->nr.immediate && sc->nr.immediate == sc->nr.file_taken)
 			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
 	}
--
2.39.0
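
[Not part of the patch: a minimal, stand-alone C sketch of the old and new throttle predicates. The struct and the counts below are illustrative assumptions, not kernel code, chosen only to show when kswapd would be stalled before and after the change.]

#include <stdio.h>

/* Illustrative stand-ins for the scan_control fields used by the check. */
struct nr_stats {
	unsigned long immediate;   /* pages marked for immediate reclaim     */
	unsigned long file_taken;  /* file pages taken off the LRU this pass */
};

/* Old behaviour: stall whenever any page is marked for immediate reclaim. */
static int old_should_throttle(const struct nr_stats *nr)
{
	return nr->immediate != 0;
}

/* New behaviour: stall only when every taken file page is marked immediate. */
static int new_should_throttle(const struct nr_stats *nr)
{
	return nr->immediate && nr->immediate == nr->file_taken;
}

int main(void)
{
	/* Hypothetical pass: 1 immediate page among 128 taken file pages. */
	struct nr_stats few = { .immediate = 1,   .file_taken = 128 };
	/* Hypothetical pass: all taken file pages are marked immediate.   */
	struct nr_stats all = { .immediate = 128, .file_taken = 128 };

	printf("few immediate: old=%d new=%d\n",
	       old_should_throttle(&few), new_should_throttle(&few));
	printf("all immediate: old=%d new=%d\n",
	       old_should_throttle(&all), new_should_throttle(&all));
	return 0;
}

With these made-up numbers, the old predicate stalls kswapd in both passes, while the new one stalls only when every taken file page is cycling back marked for immediate reclaim.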