On 6 Jan 2023 15:16:23 -0300 Marcelo Tosatti <mtosatti@xxxxxxxxxx>
> On Fri, Jan 06, 2023 at 11:01:54PM +0800, Hillf Danton wrote:
> > On 6 Jan 2023 09:51:00 -0300 Marcelo Tosatti <mtosatti@xxxxxxxxxx>
> > > On Fri, Jan 06, 2023 at 08:12:44AM +0800, Hillf Danton wrote:
> > > >
> > > > Regression wrt V12 if the timer is added on a CPU that is not doing HK_TYPE_TIMER?
> > >
> > > Before this change, the timer was managed (and queued on an isolated
> > > CPU) by vmstat_shepherd. Now it is managed (and queued) by the local
> > > CPU, so there is no regression.
> >
> > Given that vm stats are folded when returning to userspace, queueing the
> > delayed work barely makes sense in the first place. If it can be canceled,
> > queueing it burns cycles with nothing earned. Otherwise the vm stats have
> > been folded already.
>
> Agree, but you can't know whether return to userspace will occur
> before the timer is fired.

There is no way to predict a random timer expiration, no?

> So queueing the timer is to _ensure_ that eventually vmstats will be
> synced (which maintains the current timing behaviour wrt vmstat syncs).

After this change,

> > > > > @@ -1988,13 +2022,19 @@ void quiet_vmstat(void)
> > > > >  	if (!is_vmstat_dirty())
> > > > >  		return;

that sync is only ensured, eventually, by this check instead.

> > > > > +	refresh_cpu_vm_stats(false);
> > > > > +
> > > > > +	if (!IS_ENABLED(CONFIG_FLUSH_WORK_ON_RESUME_USER))
> > > > > +		return;
> > > > > +
> > > > > +	if (!user)
> > > > > +		return;

> Also don't think the queueing cost is significant: it only happens
> for the first vmstat dirty item.

The cost needs to be considered only if the queueing is needed at all.

> > Nor does the shepherd, even without delay. And the right thing is simply
> > to make the shepherd leave isolated CPUs intact.
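
[ For orientation, a minimal sketch of the quiet_vmstat() path under
  discussion, reconstructed around the hunk quoted above. is_vmstat_dirty(),
  refresh_cpu_vm_stats(false), CONFIG_FLUSH_WORK_ON_RESUME_USER and the
  "user" parameter come from this patch series; the tick_nohz_full_cpu()
  guard and the trailing cancel_delayed_work() are guesses at the parts
  elided in the quote, not the literal patch. ]

/* Called on return to userspace (user == true) and on idle entry
 * (user == false) on nohz_full/isolated CPUs. */
void quiet_vmstat(bool user)
{
	/* Only nohz_full CPUs fold their stats here; housekeeping CPUs
	 * keep relying on the periodic vmstat_update work. */
	if (!tick_nohz_full_cpu(smp_processor_id()))
		return;

	if (!is_vmstat_dirty())
		return;

	/* Fold this CPU's per-cpu deltas into the global counters now
	 * rather than waiting for the deferrable timer to fire. */
	refresh_cpu_vm_stats(false);

	if (!IS_ENABLED(CONFIG_FLUSH_WORK_ON_RESUME_USER))
		return;

	if (!user)
		return;

	/* Assumption: with the stats already folded, the pending delayed
	 * work is redundant, so drop it instead of letting it interrupt
	 * the isolated CPU later. */
	cancel_delayed_work(this_cpu_ptr(&vmstat_work));
}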
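
[ And for the last point, a rough sketch of what "make the shepherd leave
  isolated CPUs intact" could look like, modelled on the mainline
  vmstat_shepherd() in mm/vmstat.c; the housekeeping_cpu() filter is the
  suggested change, not existing code. ]

static void vmstat_shepherd(struct work_struct *w)
{
	int cpu;

	cpus_read_lock();
	/* Check processors whose vmstat worker threads have been disabled. */
	for_each_online_cpu(cpu) {
		struct delayed_work *dw = &per_cpu(vmstat_work, cpu);

		/* Suggested: never queue vmstat work on an isolated CPU;
		 * it folds its own stats on return to userspace via
		 * quiet_vmstat() instead. */
		if (!housekeeping_cpu(cpu, HK_TYPE_TIMER))
			continue;

		if (!delayed_work_pending(dw) && need_update(cpu))
			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);

		cond_resched();
	}
	cpus_read_unlock();

	schedule_delayed_work(&shepherd,
		round_jiffies_relative(sysctl_stat_interval));
}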