The patch titled
     Subject: mm/page_alloc: cache the result of node_dirty_ok()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-page_alloc-cache-the-result-of-node_dirty_ok.patch

This patch should soon appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Wonhyuk Yang <vvghjk1234@xxxxxxxxx>
Subject: mm/page_alloc: cache the result of node_dirty_ok()

To spread dirty pages, each node is checked against its dirty limit using
the expensive node_dirty_ok().  To reduce the frequency of those calls,
the last node that hit the dirty limit is currently cached.

Instead of caching only the node, caching both the node and its
node_dirty_ok() status further reduces the number of calls to
node_dirty_ok().
Link: https://lkml.kernel.org/r/20220430011032.64071-1-vvghjk1234@xxxxxxxxx
Signed-off-by: Wonhyuk Yang <vvghjk1234@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Donghyeok Kim <dthex5d@xxxxxxxxx>
Cc: JaeSang Yoo <jsyoo5b@xxxxxxxxx>
Cc: Jiyoup Kim <lakroforce@xxxxxxxxx>
Cc: Ohhoon Kwon <ohkwon1043@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/page_alloc.c |   13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-cache-the-result-of-node_dirty_ok
+++ a/mm/page_alloc.c
@@ -4021,7 +4021,8 @@ get_page_from_freelist(gfp_t gfp_mask, u
 {
 	struct zoneref *z;
 	struct zone *zone;
-	struct pglist_data *last_pgdat_dirty_limit = NULL;
+	struct pglist_data *last_pgdat = NULL;
+	bool last_pgdat_dirty_limit = false;
 	bool no_fallback;
 
 retry:
@@ -4060,13 +4061,13 @@ retry:
 		 * dirty-throttling and the flusher threads.
 		 */
 		if (ac->spread_dirty_pages) {
-			if (last_pgdat_dirty_limit == zone->zone_pgdat)
-				continue;
+			if (last_pgdat != zone->zone_pgdat) {
+				last_pgdat = zone->zone_pgdat;
+				last_pgdat_dirty_limit = node_dirty_ok(zone->zone_pgdat);
+			}
 
-			if (!node_dirty_ok(zone->zone_pgdat)) {
-				last_pgdat_dirty_limit = zone->zone_pgdat;
+			if (!last_pgdat_dirty_limit)
 				continue;
-			}
 		}
 
 		if (no_fallback && nr_online_nodes > 1 &&
_

Patches currently in -mm which might be from vvghjk1234@xxxxxxxxx are

mm-page_alloc-cache-the-result-of-node_dirty_ok.patch