On Mon 20-03-23 16:07:29, Marcelo Tosatti wrote:
> On Mon, Mar 20, 2023 at 07:25:55PM +0100, Michal Hocko wrote:
> > On Mon 20-03-23 15:03:32, Marcelo Tosatti wrote:
> > > This patch series addresses the following two problems:
> > >
> > >  1. A customer provided evidence indicating that a process
> > >     was stalled in direct reclaim:
> >
> > This is addressed by the trivial patch 1.
> >
> > [...]
> > >  2. With a task that busy loops on a given CPU,
> > >     the kworker interruption to execute vmstat_update
> > >     is undesired and may exceed latency thresholds
> > >     for certain applications.
> >
> > Yes it can, but why does that matter?
>
> It matters for the application that is executing and expects
> not to be interrupted.

Those workloads shouldn't enter the kernel in the first place, no?
Otherwise, in-kernel execution with all of its direct or indirect
dependencies (e.g. via locks) can throw any latency expectations out
the window.

> > > By having vmstat_shepherd flush the per-CPU counters to the
> > > global counters from remote CPUs.
> > >
> > > This is done using cmpxchg to manipulate the counters,
> > > both CPU locally (via the account functions),
> > > and remotely (via cpu_vm_stats_fold).
> > >
> > > Thanks to Aaron Tomlin for diagnosing issue 1 and writing
> > > the initial patch series.
> > >
> > > Performance details for the kworker interruption:
> > >
> > > oslat   1094.456862: sys_mlock(start: 7f7ed0000b60, len: 1000)
> > > oslat   1094.456971: workqueue_queue_work: ... function=vmstat_update ...
> > > oslat   1094.456974: sched_switch: prev_comm=oslat ... ==> next_comm=kworker/5:1 ...
> > > kworker 1094.456978: sched_switch: prev_comm=kworker/5:1 ==> next_comm=oslat ...
> > >
> > > The example above shows an additional 7us for the
> > >
> > >     oslat -> kworker -> oslat
> > >
> > > switches. In the case of a virtualized CPU, and the vmstat_update
> > > interruption in the host (of a qemu-kvm vcpu), the latency penalty
> > > observed in the guest is higher than 50us, violating the acceptable
> > > latency threshold for certain applications.
> >
> > I do not think we have ever promised any specific latency guarantees
> > for vmstat. These statistics have mostly been used for debugging
> > purposes AFAIK. I am not aware of any specific user space use case that
> > would be latency sensitive. Your changelog doesn't go into details there
> > either.
>
> There is a class of workloads for which response time can be
> of interest. The MAC scheduler is an example:
>
> https://par.nsf.gov/servlets/purl/10090368

Yes, I am not disputing low-latency workloads in general. I am just
saying that you haven't really established a very sound justification
here. Of course there are workloads which do not want to conflict with
any in-kernel housekeeping. Those have to be configured and implemented
very carefully, though. Vmstat as such should not collide with those
workloads as long as they do not interact with the kernel in a way that
updates the counters. Is this hard or impossible to avoid? I can imagine
that those workloads have a start-up sequence where the kernel is
involved and counters are updated, so that deferred flushing could
interfere with the later, latency-sensitive phase. Is that a real
problem in practice?

Please tell us much more about why we need to make the vmstat code more
complex.

Thanks!
-- 
Michal Hocko
SUSE Labs
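
[Editorial note: as a rough illustration of the folding scheme the quoted
cover letter describes (per-CPU counters updated locally with cmpxchg and
drained from a remote CPU by vmstat_shepherd via cpu_vm_stats_fold), here
is a minimal userspace C sketch of the pattern. It is an analogy only, not
the kernel implementation: the names pcpu_diff, global_counter,
account_local() and fold_remote() are invented for this example, and C11
atomics stand in for the kernel's cmpxchg/xchg primitives.]

/*
 * Sketch of remote folding of per-CPU deltas into a global counter.
 * Each CPU accumulates a pending delta with a compare-and-exchange loop;
 * a "shepherd" running elsewhere atomically takes that delta (swapping in
 * zero) and adds it to the global counter, so no kworker has to run on
 * the busy CPU.  Hypothetical names, not kernel vmstat symbols.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static _Atomic long pcpu_diff[NR_CPUS];   /* per-CPU pending deltas */
static _Atomic long global_counter;       /* stands in for a global vm counter */

/* Local update path: a CPU accounts an event into its own diff. */
static void account_local(int cpu, long delta)
{
	long old, val;

	old = atomic_load(&pcpu_diff[cpu]);
	do {
		val = old + delta;
		/* on failure, 'old' is refreshed with the current value */
	} while (!atomic_compare_exchange_weak(&pcpu_diff[cpu], &old, val));
}

/* Remote fold path: drain another CPU's diff into the global counter. */
static void fold_remote(int cpu)
{
	/* Atomically take the pending delta and reset it to zero. */
	long delta = atomic_exchange(&pcpu_diff[cpu], 0);

	if (delta)
		atomic_fetch_add(&global_counter, delta);
}

int main(void)
{
	account_local(1, 3);
	account_local(1, 2);
	fold_remote(1);
	printf("global = %ld\n", atomic_load(&global_counter)); /* prints 5 */
	return 0;
}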