When checking a performance change for the will-it-scale scalability
mmap test [1], we found very high lock contention on the spinlock of
the percpu counter 'vm_committed_as':

    94.14%     0.35%  [kernel.kallsyms]    [k] _raw_spin_lock_irqsave
      48.21% _raw_spin_lock_irqsave;percpu_counter_add_batch;__vm_enough_memory;mmap_region;do_mmap;
      45.91% _raw_spin_lock_irqsave;percpu_counter_add_batch;__do_munmap;

This heavy lock contention is not always necessary. 'vm_committed_as'
only needs to be very precise when the strict OVERCOMMIT_NEVER policy
is set, which requires a rather small batch number for the percpu
counter. So keep the batch number unchanged for the strict
OVERCOMMIT_NEVER policy, and enlarge it for the not-so-strict
OVERCOMMIT_ALWAYS and OVERCOMMIT_GUESS policies.

A benchmark with the same testcase as [1] shows a 53% improvement on an
8C/16T desktop, and 2097% (20X) on a 4S/72C/144T server. Whether a
platform shows improvement depends on whether the test's mmap size is
bigger than the computed batch number. We tested 10+ platforms in 0day
(server, desktop and laptop): with a 64X lift, 80%+ of the platforms
show improvements, while with a 16X lift, about 1/3 of the platforms
do. In general it should help mmap/munmap heavy usage, as Michal Hocko
mentioned:

"
I believe that there are non-synthetic workloads which would benefit
from a larger batch. E.g. large in memory databases which do large
mmaps during startups from multiple threads.
"

Note: there are some style complaints from checkpatch for patch 4, as
the sysctl handler declaration follows the format of its sibling
functions.

[1] https://lore.kernel.org/lkml/20200305062138.GI5972@shao2-debian/

patch1: a cleanup for /proc/meminfo
patch2: a preparation patch which also improves the accuracy of
        vm_memory_committed()
patch3: add a percpu_counter sync function
patch4: the main change

Please help to review, thanks!

- Feng

----------------------------------------------------------------

Changelog:

  v6:
    * fix the ltp vm-overcommit test case failure reported by the 0day
      test robot, by syncing the percpu counter when changing the
      policy to OVERCOMMIT_NEVER

  v5:
    * rebase onto 5.8-rc1
    * remove the 3/4 patch of v4, which was merged in v5.7
    * add code comments for vm_memory_committed()

  v4:
    * remove the VM_WARN_ONCE check for vm_committed_as underflow,
      thanks to Qian Cai for finding and testing the warning

  v3:
    * refine the commit log and clean up code, per comments from
      Michal Hocko and Matthew Wilcox
    * change the lift from 16X to 64X after testing

  v2:
    * add a sysctl handler to cover runtime overcommit policy change,
      as suggested by Andrew Morton
    * address the accuracy concern about vm_memory_committed() from
      Andi Kleen

Feng Tang (4):
  proc/meminfo: avoid open coded reading of vm_committed_as
  mm/util.c: make vm_memory_committed() more accurate
  percpu_counter: add percpu_counter_sync()
  mm: adjust vm_committed_as_batch according to vm overcommit policy

 fs/proc/meminfo.c              |  2 +-
 include/linux/mm.h             |  2 ++
 include/linux/mman.h           |  4 ++++
 include/linux/percpu_counter.h |  4 ++++
 kernel/sysctl.c                |  2 +-
 lib/percpu_counter.c           | 19 +++++++++++++++++
 mm/mm_init.c                   | 22 +++++++++++++------
 mm/util.c                      | 48 +++++++++++++++++++++++++++++++++++++++++-
 8 files changed, 94 insertions(+), 9 deletions(-)

--
2.7.4
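
P.S. An illustrative note on why the benefit depends on the mmap size
vs. the computed batch. The standalone userspace sketch below is not
code from this series; the pick_batch() helper, the base formula, the
CPU count and the 128KB mmap size are assumptions for illustration
only (the 64X lift matches the number discussed above).

    /*
     * Illustrative userspace sketch, not kernel code from this series.
     * It mimics a policy-dependent percpu counter batch: keep it small
     * for OVERCOMMIT_NEVER, lift it 64X for OVERCOMMIT_ALWAYS/GUESS.
     * percpu_counter_add_batch() only falls back to the spinlock when a
     * local delta reaches the batch, so a per-mmap page count below the
     * batch stays on the lock-free per-CPU fast path.
     */
    #include <stdio.h>

    #define PAGE_SIZE 4096

    enum { OVERCOMMIT_GUESS, OVERCOMMIT_ALWAYS, OVERCOMMIT_NEVER };

    /* Hypothetical helper: choose the batch from policy and CPU count. */
    static long pick_batch(int policy, int ncpus)
    {
            long base = 2 * ncpus > 32 ? 2 * ncpus : 32;

            return policy == OVERCOMMIT_NEVER ? base : base * 64;
    }

    int main(void)
    {
            long mmap_bytes = 128 * 1024;           /* assumed per-call mmap size */
            long pages = mmap_bytes / PAGE_SIZE;    /* delta added to vm_committed_as */
            long batch = pick_batch(OVERCOMMIT_GUESS, 16);

            printf("pages=%ld batch=%ld -> %s\n", pages, batch,
                   pages >= batch ? "slow path (spinlock)" : "fast path (per-CPU only)");
            return 0;
    }

Compiling this with gcc and varying the policy or mmap size shows when
the slow (locked) path would be hit, which is the trade-off between the
strict OVERCOMMIT_NEVER accuracy and the lifted batch for the other two
policies.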