On Thu, May 28, 2020 at 11:21:36PM +0800, Kleen, Andi wrote:
> >
> > If it's true, then there could be two solutions: one is to skip the
> > WARN_ONCE, as it has no practical value (the real check is the code
> > that follows it); the other is to rectify the percpu counter when
> > the policy changes to OVERCOMMIT_NEVER.
>
> I think it's better to fix it up when the policy changes. That's the
> right place. The WARN_ON might be useful to catch other bugs.

If we keep the WARN_ON, then the draft fix patch I have in mind looks
like this:

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index a66595b..02d87fc 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -98,6 +98,20 @@ void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 }
 EXPORT_SYMBOL(percpu_counter_add_batch);
 
+void percpu_counter_sync(struct percpu_counter *fbc)
+{
+	unsigned long flags;
+	s64 count;
+
+	raw_spin_lock_irqsave(&fbc->lock, flags);
+	count = __this_cpu_read(*fbc->counters);
+	fbc->count += count;
+	__this_cpu_sub(*fbc->counters, count);
+	raw_spin_unlock_irqrestore(&fbc->lock, flags);
+}
+EXPORT_SYMBOL(percpu_counter_sync);
+
+
 /*
  * Add up all the per-cpu counts, return the result. This is a more accurate
  * but much slower version of percpu_counter_read_positive()
diff --git a/mm/util.c b/mm/util.c
index 580d268..24322da 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -746,14 +746,24 @@ int overcommit_ratio_handler(struct ctl_table *table, int write, void *buffer,
 	return ret;
 }
 
+static void sync_overcommit_as(struct work_struct *dummy)
+{
+	percpu_counter_sync(&vm_committed_as);
+}
+
 int overcommit_policy_handler(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos)
 {
 	int ret;
 
 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
-	if (ret == 0 && write)
+	if (ret == 0 && write) {
+		if (sysctl_overcommit_memory == OVERCOMMIT_NEVER)
+			schedule_on_each_cpu(sync_overcommit_as);
+
 		mm_compute_batch();
+	}
 
 	return ret;
 }

Any comments?

Thanks,
Feng

> -Andi
>
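P.S. A bit more context on why the sync has to happen when the policy
flips, in case it helps review. percpu_counter_add_batch() lets each CPU
accumulate up to a batch worth of counts in its local counter before
spilling into the shared fbc->count, so the global count can lag the
true sum by up to num_online_cpus * batch. The drafted
percpu_counter_sync() folds only the local CPU's delta (it uses
__this_cpu_read()/__this_cpu_sub()), which is why the handler runs it on
every CPU via schedule_on_each_cpu(). Below is a minimal userspace
sketch of that drift-and-fold behaviour -- it is only an illustration,
not kernel code; the names in it (toy_counter, toy_add, toy_sync_all)
are made up, and only the folding logic mirrors the patch:

#include <stdio.h>

#define NR_CPUS 4

struct toy_counter {
	long count;           /* shared, approximate global count */
	long local[NR_CPUS];  /* per-CPU pending deltas */
};

/* Rough analogue of percpu_counter_add_batch(): stay CPU-local until
 * the pending delta exceeds the batch, then spill into the global. */
static void toy_add(struct toy_counter *c, int cpu, long n, long batch)
{
	c->local[cpu] += n;
	if (c->local[cpu] > batch || c->local[cpu] < -batch) {
		c->count += c->local[cpu];
		c->local[cpu] = 0;
	}
}

/* Rough analogue of percpu_counter_sync() being run on every CPU via
 * schedule_on_each_cpu(): fold all pending deltas into the global. */
static void toy_sync_all(struct toy_counter *c)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		c->count += c->local[cpu];
		c->local[cpu] = 0;
	}
}

int main(void)
{
	struct toy_counter c = { 0 };

	/* Big batch (as with OVERCOMMIT_GUESS): everything stays local,
	 * so the global count under-reports the true sum. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		toy_add(&c, cpu, 50, 100);
	printf("before sync: global=%ld, true sum=%d\n",
	       c.count, NR_CPUS * 50);

	/* Policy switches to OVERCOMMIT_NEVER: fold first, so the
	 * precise checks that follow see the real total. */
	toy_sync_all(&c);
	printf("after sync:  global=%ld\n", c.count);
	return 0;
}

To exercise the handler path itself: writing the sysctl, e.g.
"echo 2 > /proc/sys/vm/overcommit_memory" (2 == OVERCOMMIT_NEVER), is
what makes proc_dointvec_minmax() return with write set, after which the
per-CPU sync runs and mm_compute_batch() recomputes the batch.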