To spread dirty pages, each candidate node is checked against its dirty
limit using the expensive node_dirty_ok(). To reduce the number of
node_dirty_ok() calls, the last node that hit its dirty limit is cached.

Instead of caching only the node, cache both the node and its
node_dirty_ok() result. This avoids repeating the check for a node that
passed as well as for one that failed, reducing the number of
node_dirty_ok() calls even further.

Signed-off-by: Wonhyuk Yang <vvghjk1234@xxxxxxxxx>
---
 mm/page_alloc.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..aba62cf31a0e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4068,7 +4068,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 {
	struct zoneref *z;
	struct zone *zone;
-	struct pglist_data *last_pgdat_dirty_limit = NULL;
+	struct pglist_data *last_pgdat = NULL;
+	bool last_pgdat_dirty_limit = false;
	bool no_fallback;

retry:
@@ -4107,13 +4108,13 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
		 * dirty-throttling and the flusher threads.
		 */
		if (ac->spread_dirty_pages) {
-			if (last_pgdat_dirty_limit == zone->zone_pgdat)
-				continue;
+			if (last_pgdat != zone->zone_pgdat) {
+				last_pgdat = zone->zone_pgdat;
+				last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
+			}

-			if (!node_dirty_ok(zone->zone_pgdat)) {
-				last_pgdat_dirty_limit = zone->zone_pgdat;
+			if (!last_pgdat_dirty_limit)
				continue;
-			}
		}

		if (no_fallback && nr_online_nodes > 1 &&
--
2.30.2