The patch titled
     Subject: mm, pcp: decrease PCP high if free pages < high watermark
has been added to the -mm mm-unstable branch.  Its filename is
     mm-pcp-decrease-pcp-high-if-free-pages-high-watermark.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-pcp-decrease-pcp-high-if-free-pages-high-watermark.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Huang Ying <ying.huang@xxxxxxxxx>
Subject: mm, pcp: decrease PCP high if free pages < high watermark
Date: Tue, 26 Sep 2023 14:09:09 +0800

One target of PCP is to minimize the number of pages in PCP when the
system is short of free pages.  To reach that target, when page
reclaiming is active for the zone (ZONE_RECLAIM_ACTIVE), we stop
increasing PCP high in the allocating path, and decrease PCP high and
free some pages in the freeing path.

But this may be too late, because background page reclaim may already
introduce latency for some workloads.  So, in this patch, we detect
during page allocation whether the number of free pages in the zone is
below the high watermark.  If so, we stop increasing PCP high in the
allocating path, and decrease PCP high and free some pages in the
freeing path.  With this, we reduce the possibility of premature
background page reclaim caused by a too-large PCP.

The high watermark check is done in the allocating path to reduce the
overhead in the hotter freeing path.

Link: https://lkml.kernel.org/r/20230926060911.266511-9-ying.huang@xxxxxxxxx
Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Johannes Weiner <jweiner@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Arjan van de Ven <arjan@xxxxxxxxxxxxxxx>
Cc: Sudeep Holla <sudeep.holla@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mmzone.h |    1 +
 mm/page_alloc.c        |   22 ++++++++++++++++++++--
 2 files changed, 21 insertions(+), 2 deletions(-)

--- a/include/linux/mmzone.h~mm-pcp-decrease-pcp-high-if-free-pages-high-watermark
+++ a/include/linux/mmzone.h
@@ -1018,6 +1018,7 @@ enum zone_flags {
 					 * Cleared when kswapd is woken.
 					 */
 	ZONE_RECLAIM_ACTIVE,		/* kswapd may be scanning the zone. */
+	ZONE_BELOW_HIGH,		/* zone is below high watermark. */
 };
 
 static inline unsigned long zone_managed_pages(struct zone *zone)
--- a/mm/page_alloc.c~mm-pcp-decrease-pcp-high-if-free-pages-high-watermark
+++ a/mm/page_alloc.c
@@ -2439,7 +2439,13 @@ static int nr_pcp_high(struct per_cpu_pa
 		return min(batch << 2, pcp->high);
 	}
 
-	if (pcp->count >= high && high_min != high_max) {
+	if (high_min == high_max)
+		return high;
+
+	if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
+		pcp->high = max(high - (batch << pcp->free_factor), high_min);
+		high = max(pcp->count, high_min);
+	} else if (pcp->count >= high) {
 		int need_high = (batch << pcp->free_factor) + batch;
 
 		/* pcp->high should be large enough to hold batch freed pages */
@@ -2489,6 +2495,10 @@ static void free_unref_page_commit(struc
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
 				   pcp, pindex);
+		if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
+		    zone_watermark_ok(zone, 0, high_wmark_pages(zone),
+				      ZONE_MOVABLE, 0))
+			clear_bit(ZONE_BELOW_HIGH, &zone->flags);
 	}
 }
 
@@ -2795,7 +2805,7 @@ static int nr_pcp_alloc(struct per_cpu_p
 	 * If we had larger pcp->high, we could avoid to allocate from
 	 * zone.
 	 */
-	if (high_min != high_max && !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
+	if (high_min != high_max && !test_bit(ZONE_BELOW_HIGH, &zone->flags))
 		high = pcp->high = min(high + batch, high_max);
 
 	if (!order) {
@@ -3256,6 +3266,14 @@ retry:
 			}
 		}
 
+		mark = high_wmark_pages(zone);
+		if (zone_watermark_fast(zone, order, mark,
+					ac->highest_zoneidx, alloc_flags,
+					gfp_mask))
+			goto try_this_zone;
+		else if (!test_bit(ZONE_BELOW_HIGH, &zone->flags))
+			set_bit(ZONE_BELOW_HIGH, &zone->flags);
+
 		mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
 		if (!zone_watermark_fast(zone, order, mark,
 				       ac->highest_zoneidx, alloc_flags,
_

Patches currently in -mm which might be from ying.huang@xxxxxxxxx are

mm-fix-draining-remote-pageset.patch
memory-tiering-add-abstract-distance-calculation-algorithms-management.patch
acpi-hmat-refactor-hmat_register_target_initiators.patch
acpi-hmat-calculate-abstract-distance-with-hmat.patch
dax-kmem-calculate-abstract-distance-with-general-interface.patch
mm-pcp-avoid-to-drain-pcp-when-process-exit.patch
cacheinfo-calculate-per-cpu-data-cache-size.patch
mm-pcp-reduce-lock-contention-for-draining-high-order-pages.patch
mm-restrict-the-pcp-batch-scale-factor-to-avoid-too-long-latency.patch
mm-page_alloc-scale-the-number-of-pages-that-are-batch-allocated.patch
mm-add-framework-for-pcp-high-auto-tuning.patch
mm-tune-pcp-high-automatically.patch
mm-pcp-decrease-pcp-high-if-free-pages-high-watermark.patch
mm-pcp-avoid-to-reduce-pcp-high-unnecessarily.patch
mm-pcp-reduce-detecting-time-of-consecutive-high-order-page-freeing.patch
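
[Editor's note: for readers who want to follow the changelog without
applying the diff, here is a small, self-contained user-space sketch of
the tuning behaviour described above.  It is not kernel code: the
zone_model/pcp_model structs, their field names, and the sample numbers
in main() are simplified stand-ins invented for illustration; the helpers
only mirror, not reproduce, nr_pcp_high() and the ZONE_BELOW_HIGH flag
handling that the patch adds.]

/*
 * Standalone sketch (plain C, not kernel code) of the logic this patch
 * adds: the allocation path notes when the zone drops below its high
 * watermark, and the freeing path then decays the per-CPU "high" target
 * so cached pages are returned to the buddy allocator sooner.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_model {
	unsigned long free_pages;	/* free pages in the zone */
	unsigned long high_wmark;	/* zone high watermark */
	bool below_high;		/* models the ZONE_BELOW_HIGH bit */
};

struct pcp_model {
	int count;			/* pages cached on this CPU */
	int high, high_min, high_max;	/* auto-tuned high target and bounds */
	int batch, free_factor;
};

/* Allocation path: note when zone free pages drop below the high watermark. */
static void alloc_path_check(struct zone_model *z)
{
	if (z->free_pages < z->high_wmark)
		z->below_high = true;	/* models set_bit(ZONE_BELOW_HIGH, ...) */
}

/* Freeing path: mirrors the ZONE_BELOW_HIGH branch added to nr_pcp_high(). */
static int pcp_high_target(struct pcp_model *pcp, struct zone_model *z)
{
	int high = pcp->high;

	if (pcp->high_min == pcp->high_max)
		return high;		/* auto-tuning disabled, nothing to do */

	if (z->below_high) {
		int shrunk = high - (pcp->batch << pcp->free_factor);

		/* decay the per-CPU high target toward high_min */
		pcp->high = shrunk > pcp->high_min ? shrunk : pcp->high_min;
		/*
		 * Report a target no larger than the current count so the
		 * caller's "count >= high" check triggers a bulk free.
		 */
		return pcp->count > pcp->high_min ? pcp->count : pcp->high_min;
	}
	return high;
}

int main(void)
{
	struct zone_model z = { .free_pages = 1000, .high_wmark = 4096 };
	struct pcp_model pcp = {
		.count = 900, .high = 1024, .high_min = 64,
		.high_max = 4096, .batch = 64, .free_factor = 3,
	};
	int high;

	alloc_path_check(&z);
	high = pcp_high_target(&pcp, &z);
	printf("below_high=%d returned high=%d new pcp->high=%d\n",
	       z.below_high, high, pcp.high);
	return 0;
}

With the made-up sample numbers above the sketch prints "below_high=1
returned high=900 new pcp->high=512": once the zone is marked below its
high watermark, the per-CPU high target is decayed toward high_min so
that subsequent frees release cached pages back to the zone instead of
growing the PCP further.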