On Wed, Nov 24, 2021 at 09:44:43PM +0900, Alexey Avramov wrote:
> > can you test this?
> 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> 
> Sorry, I didn't notice the diff you provided right away.
> 
> Now I've tested it and the result is the same: 1 min stall:
> 
> $ mem2log
> Starting mem2log with interval 2s, mode: 1
> Process memory locked with MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT
> All values are in mebibytes
> MemTotal: 11798.5, SwapTotal: 0.0

Curious that it's the same; it reduced the time to OOM for me quite a
bit. Another version is in the diff below. It special-cases NOPROGRESS
to not stall at all if kswapd is disabled and otherwise to stall for
the shortest possible duration. In my tests, it almost always hits OOM
in the same time as 5.15, with one corner case: OOM may still be
delayed if kswapd is active or there are a lot of pages under
writeback, as there is the possibility the system can make forward
progress when writeback completes.

From another mail, you wrote

> My dissatisfaction is caused by the fact that the scale has now
> tipped sharply in favor of stall.

Understandable, but the old throttling mechanism was functionally
broken, and without some sort of throttling, CPU usage due to
excessive LRU scanning causes a different class of bugs.

> Although even before this change, users complained about the inability
> to wait for OOM:
> https://lore.kernel.org/lkml/d9802b6a-949b-b327-c4a6-3dbca485ec20@xxxxxxx/

I think there might be an unwritten mm law now that someone is always
unhappy with OOM behaviour :(

Please let me know if this version works any better.

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07db03883062..d9166e94eb95 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1057,7 +1057,17 @@ void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
 
 		break;
 	case VMSCAN_THROTTLE_NOPROGRESS:
-		timeout = HZ/2;
+		timeout = 1;
+
+		/*
+		 * If kswapd is disabled, reschedule if necessary but do not
+		 * throttle as the system is likely near OOM.
+		 */
+		if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES) {
+			cond_resched();
+			return;
+		}
+
 		break;
 	case VMSCAN_THROTTLE_ISOLATED:
 		timeout = HZ/50;
@@ -3395,7 +3405,7 @@ static void consider_reclaim_throttle(pg_data_t *pgdat, struct scan_control *sc)
 		return;
 
 	/* Throttle if making no progress at high prioities. */
-	if (sc->priority < DEF_PRIORITY - 2)
+	if (sc->priority < DEF_PRIORITY - 2 && !sc->nr_reclaimed)
 		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS);
 }
 
@@ -3415,6 +3425,7 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	unsigned long nr_soft_scanned;
 	gfp_t orig_mask;
 	pg_data_t *last_pgdat = NULL;
+	pg_data_t *first_pgdat = NULL;
 
 	/*
 	 * If the number of buffer_heads in the machine exceeds the maximum
@@ -3478,14 +3489,18 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 			/* need some check for avoid more shrink_zone() */
 		}
 
+		if (!first_pgdat)
+			first_pgdat = zone->zone_pgdat;
+
 		/* See comment about same check for global reclaim above */
 		if (zone->zone_pgdat == last_pgdat)
 			continue;
 		last_pgdat = zone->zone_pgdat;
 		shrink_node(zone->zone_pgdat, sc);
-		consider_reclaim_throttle(zone->zone_pgdat, sc);
 	}
 
+	consider_reclaim_throttle(first_pgdat, sc);
+
 	/*
 	 * Restore to original mask to avoid the impact on the caller if we
 	 * promoted it to __GFP_HIGHMEM.

-- 
Mel Gorman
SUSE Labs