The quilt patch titled
     Subject: mm/vmstat: defer the refresh_zone_stat_thresholds after all CPUs bringup
has been removed from the -mm tree.  Its filename was
     mm-vmstat-defer-the-refresh_zone_stat_thresholds-after-all-cpus-bringup.patch

This patch was dropped because an updated version will be issued

------------------------------------------------------
From: Saurabh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx>
Subject: mm/vmstat: defer the refresh_zone_stat_thresholds after all CPUs bringup
Date: Fri, 5 Jul 2024 01:48:21 -0700

The refresh_zone_stat_thresholds function has two loops, which become
expensive as the number of CPUs and NUMA nodes grows.

Below is a rough estimate of the total iterations performed by these loops,
based on the number of NUMA nodes and CPUs:

Total number of iterations: nCPU * 2 * Numa * mCPU

Where:
nCPU = total number of CPUs
Numa = total number of NUMA nodes
mCPU = mean number of online CPUs during bringup (e.g., 512 for 1024 total CPUs)

For the system under test with 16 NUMA nodes and 1024 CPUs, this results in a
substantial increase in the number of loop iterations during boot-up when
NUMA is enabled:

No NUMA = 1024*2*1*512  =  1,048,576 : here refresh_zone_stat_thresholds takes
around 224 ms in total for all the CPUs in the system under test.

16 NUMA = 1024*2*16*512 = 16,777,216 : here refresh_zone_stat_thresholds takes
around 4.5 seconds in total for all the CPUs in the system under test.
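As a quick sanity check of the estimate above, here is a minimal standalone
userspace C snippet (illustrative only, not part of the patch; the
iterations() helper and the 1024-CPU / 16-node figures simply mirror the
numbers quoted in the text):

#include <stdio.h>

/* Reproduce the estimate above: total iterations = nCPU * 2 * Numa * mCPU. */
static unsigned long iterations(unsigned long ncpu, unsigned long numa)
{
	unsigned long mcpu = ncpu / 2;	/* mean number of online CPUs during bringup */

	return ncpu * 2 * numa * mcpu;
}

int main(void)
{
	printf("no NUMA : %lu\n", iterations(1024, 1));		/* 1048576 */
	printf("16 NUMA : %lu\n", iterations(1024, 16));	/* 16777216 */
	return 0;
}

The 16x ratio between the two totals roughly tracks the observed jump from
around 224 ms to around 4.5 seconds.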
Calling this for each CPU is expensive when there are a large number of CPUs
along with multiple NUMA nodes.  Fix this by deferring
refresh_zone_stat_thresholds so that it is called only once, after all the
secondary CPUs are up.  Also, register the DYN hooks to keep the existing
hotplug functionality intact (see the sketch at the end of this message).

Without this patch, refresh_zone_stat_thresholds was being called 1024 times.
After applying the patch, it is called only once, which is the same as the
last iteration of the earlier 1024 calls.  In further testing with this
patch, I observed a 4.5-second improvement in overall boot time, which
matches the total time taken by refresh_zone_stat_thresholds without this
patch, leading me to reasonably conclude that refresh_zone_stat_thresholds
now takes a negligible amount of time (likely just a few milliseconds).

[ssengar@xxxxxxxxxxxxxxxxxxx: fix warning]
  Link: https://lkml.kernel.org/r/1723443220-20623-1-git-send-email-ssengar@xxxxxxxxxxxxxxxxxxx
Link: https://lkml.kernel.org/r/1720169301-21002-1-git-send-email-ssengar@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Saurabh Sengar <ssengar@xxxxxxxxxxxxxxxxxxx>
Acked-by: Christoph Lameter <cl@xxxxxxxxx>
Reviewed-by: Srivatsa S. Bhat (Microsoft) <srivatsa@xxxxxxxxxxxxx>
Cc: Wei Liu <wei.liu@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmstat.c |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

--- a/mm/vmstat.c~mm-vmstat-defer-the-refresh_zone_stat_thresholds-after-all-cpus-bringup
+++ a/mm/vmstat.c
@@ -1929,6 +1929,7 @@ static const struct seq_operations vmsta
 #ifdef CONFIG_SMP
 static DEFINE_PER_CPU(struct delayed_work, vmstat_work);
 int sysctl_stat_interval __read_mostly = HZ;
+static int vmstat_late_init_done;
 
 #ifdef CONFIG_PROC_FS
 static void refresh_vm_stats(struct work_struct *work)
@@ -2131,7 +2132,8 @@ static void __init init_cpu_node_state(v
 
 static int vmstat_cpu_online(unsigned int cpu)
 {
-	refresh_zone_stat_thresholds();
+	if (vmstat_late_init_done)
+		refresh_zone_stat_thresholds();
 
 	if (!node_state(cpu_to_node(cpu), N_CPU)) {
 		node_set_state(cpu_to_node(cpu), N_CPU);
@@ -2163,6 +2165,14 @@ static int vmstat_cpu_dead(unsigned int
 	return 0;
 }
 
+static int __init vmstat_late_init(void)
+{
+	refresh_zone_stat_thresholds();
+	vmstat_late_init_done = 1;
+
+	return 0;
+}
+late_initcall(vmstat_late_init);
 #endif
 
 struct workqueue_struct *mm_percpu_wq;
_

Patches currently in -mm which might be from ssengar@xxxxxxxxxxxxxxxxxxx are
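For readers skimming the diff above, the change boils down to a simple
deferral pattern: gate the per-CPU online callback behind a flag, and set
that flag from a late_initcall after doing the expensive recalculation
exactly once.  Below is a minimal self-contained sketch of that pattern.
It is not the mm/vmstat.c code itself: expensive_recalc(), demo_cpu_online()
and the registration initcall are illustrative stand-ins (in mm/vmstat.c the
cpuhp callback registration already exists elsewhere).

#include <linux/cpuhotplug.h>
#include <linux/init.h>

static int demo_late_init_done;

/* Stand-in for refresh_zone_stat_thresholds(): walks every node and CPU. */
static void expensive_recalc(void)
{
}

static int demo_cpu_online(unsigned int cpu)
{
	/*
	 * During boot this runs once per secondary CPU; skip the expensive
	 * work until the late_initcall below has done it once.  CPUs
	 * hotplugged after boot still trigger a recalculation here.
	 */
	if (demo_late_init_done)
		expensive_recalc();
	return 0;
}

static int __init demo_register_hotplug(void)
{
	/* Registered before SMP bringup, as the existing vmstat hook is. */
	int ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
					    "demo:online",
					    demo_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}
early_initcall(demo_register_hotplug);

static int __init demo_late_init(void)
{
	/* All secondary CPUs brought up at boot are online by now. */
	expensive_recalc();
	demo_late_init_done = 1;
	return 0;
}
late_initcall(demo_late_init);

The key property is that late initcalls run after smp_init(), so the single
recalculation already sees every boot CPU, while the DYN hotplug state keeps
later hotplug events correct.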