On Fri, Dec 16, 2022 at 01:16:09PM -0300, Marcelo Tosatti wrote:
> On Wed, Dec 14, 2022 at 02:33:02PM +0100, Frederic Weisbecker wrote:
> > On Tue, Dec 06, 2022 at 01:18:29PM -0300, Marcelo Tosatti wrote:
> > >  static inline void vmstat_mark_dirty(void)
> > >  {
> > > +	int cpu = smp_processor_id();
> > > +
> > > +	if (tick_nohz_full_cpu(cpu) && !this_cpu_read(vmstat_dirty)) {
> > > +		struct delayed_work *dw;
> > > +
> > > +		dw = &per_cpu(vmstat_work, cpu);
> > > +		if (!delayed_work_pending(dw)) {
> > > +			unsigned long delay;
> > > +
> > > +			delay = round_jiffies_relative(sysctl_stat_interval);
> > > +			queue_delayed_work_on(cpu, mm_percpu_wq, dw, delay);
> >
> > Currently the vmstat_work is flushed on cpu_hotplug (CPUHP_AP_ONLINE_DYN).
> > vmstat_shepherd makes sure to not rearm it afterward. But now it looks
> > possible for the above to do that mistake?
>
> Don't think the mistake is an issue. In case of a
> queue_delayed_work_on being called after cancel_delayed_work_sync,
> either vmstat_update executes on the local CPU, or on a
> different CPU (after the bound kworkers have been moved).

But after the CPU goes offline, its workqueue pool becomes UNBOUND.
Which means that the vmstat_update() from the offline CPU can then
execute partly on CPU 0, then gets preempted and executes halfway
on CPU 1, then gets preempted and...

Having a quick look at refresh_cpu_vm_stats(), it doesn't look
ready for that...

Thanks.

> Each case is fine (see vmstat_update).
>
> > > +		}
> > > +	}
> > >  	this_cpu_write(vmstat_dirty, true);
> > >  }
> > > @@ -2009,6 +2028,10 @@ static void vmstat_shepherd(struct work_
> > >  	for_each_online_cpu(cpu) {
> > >  		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);
> > >
> > > +		/* NOHZ full CPUs manage their own vmstat flushing */
> > > +		if (tick_nohz_full_cpu(smp_processor_id()))
> >
> > It should be the remote CPU instead of the current one.
>
> Fixed.
>