The patch titled
     Subject: percpu_counter: add percpu_counter_sync()
has been added to the -mm tree.  Its filename is
     percpu_counter-add-percpu_counter_sync.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/percpu_counter-add-percpu_counter_sync.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/percpu_counter-add-percpu_counter_sync.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Feng Tang <feng.tang@xxxxxxxxx>
Subject: percpu_counter: add percpu_counter_sync()

A percpu_counter's accuracy is related to its batch size.  For a
percpu_counter with a big batch, its deviation could be big, so when the
counter's batch is decreased at runtime for better accuracy, there can
also be a requirement to flush the accumulated deviation.  So add a
percpu_counter sync function to be run on each CPU.

Link: http://lkml.kernel.org/r/1594389708-60781-4-git-send-email-feng.tang@xxxxxxxxx
Reported-by: kernel test robot <rong.a.chen@xxxxxxxxx>
Signed-off-by: Feng Tang <feng.tang@xxxxxxxxx>
Cc: Dennis Zhou <dennis@xxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Qian Cai <cai@xxxxxx>
Cc: Andi Kleen <andi.kleen@xxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: "K. Y. Srinivasan" <kys@xxxxxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Tim Chen <tim.c.chen@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/percpu_counter.h |    4 ++++
 lib/percpu_counter.c           |   19 +++++++++++++++++++
 2 files changed, 23 insertions(+)

--- a/include/linux/percpu_counter.h~percpu_counter-add-percpu_counter_sync
+++ a/include/linux/percpu_counter.h
@@ -44,6 +44,7 @@ void percpu_counter_add_batch(struct per
 		s32 batch);
 s64 __percpu_counter_sum(struct percpu_counter *fbc);
 int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch);
+void percpu_counter_sync(struct percpu_counter *fbc);
 
 static inline int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
 {
@@ -172,6 +173,9 @@ static inline bool percpu_counter_initia
 	return true;
 }
 
+static inline void percpu_counter_sync(struct percpu_counter *fbc)
+{
+}
 #endif	/* CONFIG_SMP */
 
 static inline void percpu_counter_inc(struct percpu_counter *fbc)
--- a/lib/percpu_counter.c~percpu_counter-add-percpu_counter_sync
+++ a/lib/percpu_counter.c
@@ -99,6 +99,25 @@ void percpu_counter_add_batch(struct per
 EXPORT_SYMBOL(percpu_counter_add_batch);
 
 /*
+ * For a percpu_counter with a big batch, the deviation of its count could
+ * be big, and there is a requirement to reduce the deviation, like when
+ * the counter's batch is decreased at runtime to get better accuracy,
+ * which can be achieved by running this sync function on each CPU.
+ */
+void percpu_counter_sync(struct percpu_counter *fbc)
+{
+	unsigned long flags;
+	s64 count;
+
+	raw_spin_lock_irqsave(&fbc->lock, flags);
+	count = __this_cpu_read(*fbc->counters);
+	fbc->count += count;
+	__this_cpu_sub(*fbc->counters, count);
+	raw_spin_unlock_irqrestore(&fbc->lock, flags);
+}
+EXPORT_SYMBOL(percpu_counter_sync);
+
+/*
  * Add up all the per-cpu counts, return the result.  This is a more accurate
  * but much slower version of percpu_counter_read_positive()
  */
_

Patches currently in -mm which might be from feng.tang@xxxxxxxxx are

proc-meminfo-avoid-open-coded-reading-of-vm_committed_as.patch
mm-utilc-make-vm_memory_committed-more-accurate.patch
percpu_counter-add-percpu_counter_sync.patch
mm-adjust-vm_committed_as_batch-according-to-vm-overcommit-policy.patch