The interruption caused by queueing work on nohz_full CPUs is
undesirable for certain applications.

Fix this by not refreshing the per-CPU stats of nohz_full CPUs.

Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>

---
v2: opencode schedule_on_each_cpu (Michal Hocko)

Index: linux-vmstat-remote/mm/vmstat.c
===================================================================
--- linux-vmstat-remote.orig/mm/vmstat.c
+++ linux-vmstat-remote/mm/vmstat.c
@@ -1881,8 +1881,13 @@ int vmstat_refresh(struct ctl_table *tab
 		   void *buffer, size_t *lenp, loff_t *ppos)
 {
 	long val;
-	int err;
 	int i;
+	int cpu;
+	struct work_struct __percpu *works;
+
+	works = alloc_percpu(struct work_struct);
+	if (!works)
+		return -ENOMEM;
 
 	/*
 	 * The regular update, every sysctl_stat_interval, may come later
@@ -1896,9 +1901,24 @@ int vmstat_refresh(struct ctl_table *tab
 	 * transiently negative values, report an error here if any of
 	 * the stats is negative, so we know to go looking for imbalance.
 	 */
-	err = schedule_on_each_cpu(refresh_vm_stats);
-	if (err)
-		return err;
+	cpus_read_lock();
+	for_each_online_cpu(cpu) {
+		struct work_struct *work;
+
+		if (cpu_is_isolated(cpu))
+			continue;
+		work = per_cpu_ptr(works, cpu);
+		INIT_WORK(work, refresh_vm_stats);
+		schedule_work_on(cpu, work);
+	}
+
+	for_each_online_cpu(cpu) {
+		if (cpu_is_isolated(cpu))
+			continue;
+		flush_work(per_cpu_ptr(works, cpu));
+	}
+	cpus_read_unlock();
+	free_percpu(works);
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 		/*
 		 * Skip checking stats known to go negative occasionally.
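
For reference, the loops above opencode schedule_on_each_cpu() from
kernel/workqueue.c, adding only the cpu_is_isolated() checks. A rough
sketch of that helper, paraphrased rather than a verbatim copy of the
current tree:

int schedule_on_each_cpu(work_func_t func)
{
	int cpu;
	struct work_struct __percpu *works;

	works = alloc_percpu(struct work_struct);
	if (!works)
		return -ENOMEM;

	cpus_read_lock();

	/* Queue one work item per online CPU... */
	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, func);
		schedule_work_on(cpu, work);
	}

	/* ...then wait for all of them to complete. */
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(works, cpu));

	cpus_read_unlock();
	free_percpu(works);

	return 0;
}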
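
The cpu_is_isolated() test consults the housekeeping masks, so CPUs
isolated via either isolcpus= (domain) or nohz_full= (tick) are
skipped. Assuming the definition in include/linux/sched/isolation.h
at the time of this series, it is roughly:

static inline bool cpu_is_isolated(int cpu)
{
	/* Isolated if excluded from either housekeeping mask. */
	return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
	       !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
}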