Furthermore, latency-insensitive applications can keep the default
setting to retain better throughput. In our production environment, we
set this value to 0 for latency-sensitive applications running on
Kubernetes servers, while keeping the default of 5 for
throughput-oriented applications such as big data. It is not practical
to build and release an individual kernel package for each class of
application, which is why this knob is better exposed as a runtime
sysctl than as a compile-time Kconfig option.
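As an illustration (not part of this patch), the sketch below shows the
upper bound this knob places on each PCP refill/drain, batch << N
pages, and how a deployment tool could apply a value at runtime through
the new sysctl. The batch value of 63 is only an assumed example; the
kernel computes the real per-zone batch itself.

  /* pcp_scale_demo.c - illustrative sketch, not part of this patch.
   * Build: gcc -o pcp_scale_demo pcp_scale_demo.c
   * Writing the sysctl requires root.
   */
  #include <stdio.h>

  int main(void)
  {
  	const int batch = 63;	/* assumed example; real batch is per-zone */
  	FILE *f;
  	int n;

  	/* Upper bound on pages moved per PCP refill/drain: batch << N */
  	for (n = 0; n <= 6; n++)
  		printf("vm.pcp_batch_scale_max=%d => at most %d pages\n",
  		       n, batch << n);

  	/* Apply the latency-friendly value used on Kubernetes nodes */
  	f = fopen("/proc/sys/vm/pcp_batch_scale_max", "w");
  	if (!f) {
  		perror("fopen");
  		return 1;
  	}
  	fprintf(f, "0\n");
  	fclose(f);
  	return 0;
  }

The same value can also be applied with sysctl(8), or persisted in
/etc/sysctl.conf, without building a separate kernel per application.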
Future work
===========

To ultimately mitigate the zone->lock contention issue, several
suggestions have been proposed. One approach involves dividing large
zones into multiple smaller zones, as suggested by Matthew [2], while
another entails splitting the zone->lock using a mechanism similar to
memory arenas and moving away from relying solely on zone_id to
identify the range of free lists a particular page belongs to, as
suggested by Mel [3]. However, implementing either solution will likely
require a more extended development effort.

Link: https://lwn.net/Articles/981069/ [0]
Link: https://github.com/iovisor/bcc/blob/master/tools/funclatency.py [1]
Link: https://lore.kernel.org/linux-mm/ZnTrZ9mcAIRodnjx@xxxxxxxxxxxxxxxxxxxx/ [2]
Link: https://lore.kernel.org/linux-mm/20240705130943.htsyhhhzbcptnkcu@xxxxxxxxxxxxxxxxxxx/ [3]
Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
Cc: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
---
 Documentation/admin-guide/sysctl/vm.rst | 17 +++++++++++++++++
 mm/Kconfig                              | 11 -----------
 mm/page_alloc.c                         | 23 +++++++++++++++++------
 3 files changed, 34 insertions(+), 17 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index e86c968a7a0e..aa29f2fdad7c 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -65,6 +65,7 @@ Currently, these files are in /proc/sys/vm:
 - page-cluster
 - page_lock_unfairness
 - panic_on_oom
+- pcp_batch_scale_max
 - percpu_pagelist_high_fraction
 - stat_interval
 - stat_refresh
@@ -845,6 +846,22 @@ panic_on_oom=2+kdump gives you very strong tool to investigate
 why oom happens. You can get snapshot.
 
 
+pcp_batch_scale_max
+===================
+
+In page allocator, PCP (Per-CPU pageset) is refilled and drained in
+batches. The batch number is scaled automatically to improve page
+allocation/free throughput. But too large scale factor may hurt
+latency. This option sets the upper limit of scale factor to limit
+the maximum latency.
+
+The range for this parameter spans from 0 to 6, with a default value of 5.
+The value assigned to 'N' signifies that during each refilling or draining
+process, a maximum of (batch << N) pages will be involved, where "batch"
+represents the default batch size automatically computed by the kernel for
+each zone.
+
+
 percpu_pagelist_high_fraction
 =============================
 
diff --git a/mm/Kconfig b/mm/Kconfig
index b4cb45255a54..41fe4c13b7ac 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -663,17 +663,6 @@ config HUGETLB_PAGE_SIZE_VARIABLE
 config CONTIG_ALLOC
 	def_bool (MEMORY_ISOLATION && COMPACTION) || CMA
 
-config PCP_BATCH_SCALE_MAX
-	int "Maximum scale factor of PCP (Per-CPU pageset) batch allocate/free"
-	default 5
-	range 0 6
-	help
-	  In page allocator, PCP (Per-CPU pageset) is refilled and drained in
-	  batches. The batch number is scaled automatically to improve page
-	  allocation/free throughput. But too large scale factor may hurt
-	  latency. This option sets the upper limit of scale factor to limit
-	  the maximum latency.
-
 config PHYS_ADDR_T_64BIT
 	def_bool 64BIT
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index bfd44b65777c..8d6f9dc99387 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -273,6 +273,8 @@ int min_free_kbytes = 1024;
 int user_min_free_kbytes = -1;
 static int watermark_boost_factor __read_mostly = 15000;
 static int watermark_scale_factor = 10;
+static int pcp_batch_scale_max = 5;
+static int sysctl_6 = 6;
 
 /* movable_zone is the "real" zone pages in ZONE_MOVABLE are taken from */
 int movable_zone;
@@ -2334,7 +2336,7 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 	int count = READ_ONCE(pcp->count);
 
 	while (count) {
-		int to_drain = min(count, pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
+		int to_drain = min(count, pcp->batch << pcp_batch_scale_max);
 		count -= to_drain;
 
 		spin_lock(&pcp->lock);
@@ -2462,7 +2464,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free_high)
 
 	/* Free as much as possible if batch freeing high-order pages. */
 	if (unlikely(free_high))
-		return min(pcp->count, batch << CONFIG_PCP_BATCH_SCALE_MAX);
+		return min(pcp->count, batch << pcp_batch_scale_max);
 
 	/* Check for PCP disabled or boot pageset */
 	if (unlikely(high < batch))
@@ -2494,7 +2496,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
 		return 0;
 
 	if (unlikely(free_high)) {
-		pcp->high = max(high - (batch << CONFIG_PCP_BATCH_SCALE_MAX),
+		pcp->high = max(high - (batch << pcp_batch_scale_max),
 				high_min);
 		return 0;
 	}
@@ -2564,9 +2566,9 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 	} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
 		pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
 	}
-	if (pcp->free_count < (batch << CONFIG_PCP_BATCH_SCALE_MAX))
+	if (pcp->free_count < (batch << pcp_batch_scale_max))
 		pcp->free_count = min(pcp->free_count + (1 << order),
-				      batch << CONFIG_PCP_BATCH_SCALE_MAX);
+				      batch << pcp_batch_scale_max);
 	high = nr_pcp_high(pcp, zone, batch, free_high);
 	if (pcp->count >= high) {
 		free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
@@ -2908,7 +2910,7 @@ static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int order)
 		 * subsequent allocation of order-0 pages without any freeing.
 		 */
 		if (batch <= max_nr_alloc &&
-		    pcp->alloc_factor < CONFIG_PCP_BATCH_SCALE_MAX)
+		    pcp->alloc_factor < pcp_batch_scale_max)
 			pcp->alloc_factor++;
 		batch = min(batch, max_nr_alloc);
 	}
@@ -6275,6 +6277,15 @@ static struct ctl_table page_alloc_sysctl_table[] = {
 		.proc_handler	= percpu_pagelist_high_fraction_sysctl_handler,
 		.extra1		= SYSCTL_ZERO,
 	},
+	{
+		.procname	= "pcp_batch_scale_max",
+		.data		= &pcp_batch_scale_max,
+		.maxlen		= sizeof(pcp_batch_scale_max),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= &sysctl_6,
+	},
 	{
 		.procname	= "lowmem_reserve_ratio",
 		.data		= &sysctl_lowmem_reserve_ratio,
-- 
2.43.5