On 4/19/23 13:29, Marcelo Tosatti wrote:
> On Wed, Apr 19, 2023 at 08:14:09AM -0300, Marcelo Tosatti wrote:
>> This was tried before:
>> https://lore.kernel.org/lkml/20220127173037.318440631@fedora.localdomain/
>>
>> My conclusion from that discussion (and work) is that a special system
>> call:
>>
>> 1) Does not allow the benefits to be widely applied (only modified
>> applications will benefit), and is not portable across different
>> operating systems.
>>
>> Removing the vmstat_work interruption is a benefit for HPC workloads,
>> for example (in fact, it is a benefit for any kind of application,
>> since the interruption causes cache misses).
>>
>> 2) Increases the system call cost for applications which would use
>> the interface.
>>
>> So avoiding the vmstat_update interruption, without userspace
>> knowledge and modifications, is a better solution than a modified
>> userspace.
>
> Another important point is this: if an application dirties
> its own per-CPU vmstat cache while performing a system call,
> and a vmstat sync event is triggered on a different CPU, you'd have to:
>
> 1) Wait for that CPU to return to userspace and sync its stats
> (unfeasible).
>
> 2) Queue work to execute on that CPU (undesirable, as that causes
> an interruption).

So you're saying the application might do a syscall from the isolcpu, so
IIUC it cannot expect any latency guarantees at that very moment, but then
it immediately starts expecting them again after returning to userspace,
and a single interruption for a one-time flush after the syscall would be
too intrusive? (Elsewhere in the thread you described an RT app
initialization that may generate vmstats to flush and then enter the
userspace loop; again, would a single interruption soon after entering the
loop be so critical?)

> 3) Remotely sync the vmstat for that CPU.
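
To make the trade-off between (2) and (3) concrete, here is a minimal
userspace sketch, not the kernel's vmstat code; the names (cpu_stat,
drain_remote, stat_add) are hypothetical. It shows the idea behind option
(3): the owner CPU only does cheap local increments to its per-CPU diff,
and a remote thread folds that diff into the global total with a
compare-and-exchange, so the owner never has to be interrupted to flush.
Option (2) would instead queue work on the owner and have it flush itself.

/*
 * Build with: gcc -O2 -pthread vmstat_sketch.c -o vmstat_sketch
 * Hypothetical userspace analogy, not kernel code.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

struct pcpu_stat {
	_Atomic long diff;		/* per-CPU delta, owner increments it */
};

static struct pcpu_stat cpu_stat[2];
static _Atomic long global_total;

/* Owner side: cheap local update, no locks, no interruption. */
static void stat_add(int cpu, long v)
{
	atomic_fetch_add_explicit(&cpu_stat[cpu].diff, v,
				  memory_order_relaxed);
}

/*
 * Remote side: atomically take whatever diff has accumulated and fold it
 * into the global total, without making the owner CPU do any work
 * (analogous to option 3 above).
 */
static void drain_remote(int cpu)
{
	long old = atomic_load_explicit(&cpu_stat[cpu].diff,
					memory_order_relaxed);

	/* on CAS failure, old is reloaded with the current value */
	while (!atomic_compare_exchange_weak_explicit(&cpu_stat[cpu].diff,
						      &old, 0,
						      memory_order_acq_rel,
						      memory_order_relaxed))
		;
	atomic_fetch_add_explicit(&global_total, old, memory_order_relaxed);
}

static void *worker(void *arg)
{
	int cpu = (int)(long)arg;

	for (int i = 0; i < 1000000; i++)
		stat_add(cpu, 1);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, worker, (void *)0L);
	pthread_create(&t1, NULL, worker, (void *)1L);

	/* "sync" runs concurrently with the owners, never interrupting them */
	for (int i = 0; i < 10; i++) {
		drain_remote(0);
		drain_remote(1);
	}

	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* final drain picks up whatever was still cached per CPU */
	drain_remote(0);
	drain_remote(1);

	printf("global_total = %ld\n", atomic_load(&global_total));
	return 0;
}

Running it prints global_total = 2000000 regardless of how the drains
interleave with the workers, since each increment ends up counted exactly
once either in a per-CPU diff or in the global total. In the kernel the
analogous approach would let whichever CPU runs the periodic sync fold in
the isolated CPU's vmstat diffs, so no deferred work or IPI has to land on
the isolated CPU.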