On Wed 19-04-23 13:35:12, Marcelo Tosatti wrote:
[...]
> This is a burden for application writers and for system configuration.

Yes. And I find it reasonable to expect that burden to be put there, as
there are non-trivial requirements for those workloads anyway. It is not
an out-of-the-box thing, right?

> Or it could be done automatically (from outside of the application).
> Which is what is described and implemented here:
>
> https://lore.kernel.org/lkml/20220204173537.429902988@fedora.localdomain/
>
> "Task isolation is divided in two main steps: configuration and
> activation.
>
> Each step can be performed by an external tool or the latency
> sensitive application itself. util-linux contains the "chisol" tool
> for this purpose."

I cannot say I am a fan of prctl interfaces in general, but I do agree
with the overall idea of forcing a quiescent state on a set of CPUs.

> But not only that, the second thing is:
>
> "> Another important point is this: if an application dirties
> > its own per-CPU vmstat cache, while performing a system call,
>
> Or while handling a VM-exit from a vCPU.

Do you have any specific examples of this?

> This is, in my mind, sufficient reason to discard the "flush per-cpu
> caches" idea. This is also why I chose to abandon the prctl interface
> patchset.
>
> > and a vmstat sync event is triggered on a different CPU, you'd have to:
> >
> > 1) Wait for that CPU to return to userspace and sync its stats
> > (unfeasible).
> >
> > 2) Queue work to execute on that CPU (undesirable, as that causes
> > an interruption).
> >
> > 3) Remotely sync the vmstat for that CPU."
>
> So the only option is to remotely sync vmstat for the CPU
> (unless you have a better suggestion).

`echo 1 > /proc/sys/vm/stat_refresh` achieves essentially the same thing
without any kernel changes. But let me repeat: this is not just about
vmstat. Just have a look at other queue_work_on users. You do not want
to hand-pick each and every one of them, now and in the future as well.
-- 
Michal Hocko
SUSE Labs