On Thu, Mar 28, 2024, Leonardo Bras wrote:
> I am dealing with a latency issue inside a KVM guest, which is caused by
> a sched_switch to rcuc[1].
>
> During guest entry, kernel code will signal to RCU that the current CPU
> is in a quiescent state, making sure no other CPU is waiting for this one.
>
> If a vcpu just stopped running (guest_exit), and a synchronize_rcu() was
> issued somewhere since guest entry, there is a chance a timer interrupt
> will happen on that CPU, which will cause rcu_sched_clock_irq() to run.
>
> rcu_sched_clock_irq() will check rcu_pending(), which will return true,
> and cause invoke_rcu_core() to be called, which will (in the current
> config) cause rcuc/N to be scheduled onto the current CPU.
>
> In rcu_pending(), I noticed we can avoid returning true (and thus
> invoking rcu_core()) if the current CPU is nohz_full and the CPU came
> from either idle or userspace, since both are considered quiescent
> states.
>
> Since this is also true for guest context, my idea is to solve this
> latency issue by avoiding rcu_core() invocation if the CPU was running a
> guest vcpu.
>
> On the other hand, I could not find a way of reliably saying the current
> CPU was running a guest vcpu, so patch #1 implements a per-cpu variable
> for keeping the time (jiffies) of the last guest exit.
>
> In patch #2 I compare the current time to that time, and if less than a
> second has passed, we just skip rcu_core() invocation, since there is a
> high chance it will just go back to the guest in a moment.

What's the downside if there's a false positive?

> What I know is weird with this patch:
>
> 1 - Not sure if this is the best way of finding out if the cpu was
>     running a guest recently.
>
> 2 - This per-cpu variable needs to get set at each guest_exit(), so it's
>     overhead, even though it's supposed to be in local cache. If that's
>     an issue, I would suggest having this part compiled out on
>     !CONFIG_NO_HZ_FULL, but further checking each cpu for being
>     nohz_full enabled seems more expensive than just setting this
>     variable.

A per-CPU write isn't problematic, but I suspect reading jiffies will be
quite imprecise, e.g. it'll be a full tick "behind" on many exits.

> 3 - It checks if the guest exit happened more than 1 second ago. This 1
>     second value was copied from rcu_nohz_full_cpu(), which checks if
>     the grace period started more than a second ago. If this value is
>     bad, I have no issue changing it.

IMO, checking if a CPU "recently" ran a KVM vCPU is a suboptimal heuristic
regardless of what magic time threshold is used. IIUC, what you want is a
way to detect if a CPU is likely to _run_ a KVM vCPU in the near future.
KVM can provide that information with much better precision, e.g. KVM
knows when it's in the core vCPU run loop.

> 4 - Even though I could detect no issue, I included linux/kvm_host.h
>     into rcu/tree_plugin.h, which is the first time it's getting
>     included outside of kvm or arch code, and can be weird.

Heh, kvm_host.h isn't included outside of KVM because several
architectures can build KVM as a module, which means referencing global
KVM variables from the kernel proper won't work.

> An alternative would be to create a new header for providing data for
> non-kvm code.

I doubt a new .h or .c file is needed just for this, there's gotta be a
decent landing spot for a one-off variable.
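To make that concrete, here is a rough sketch of how the variable could
live in the kernel proper, with KVM only writing to it; all names are
hypothetical and this is only my reading of what patches #1 and #2 are
described as doing, not the actual code:

	/* Kernel proper (e.g. RCU code), so kvm.ko can reference it: */
	DEFINE_PER_CPU(unsigned long, kvm_last_guest_exit);
	EXPORT_PER_CPU_SYMBOL_GPL(kvm_last_guest_exit);

	/* KVM, on the guest_exit() path: */
	__this_cpu_write(kvm_last_guest_exit, jiffies);

	/* RCU, in rcu_pending(), mirroring the 1 second window that
	 * rcu_nohz_full_cpu() uses for the grace-period start: */
	if (time_before(jiffies,
			__this_cpu_read(kvm_last_guest_exit) + HZ))
		return 0;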
E.g. I wouldn't be at all surprised if there is additional usefulness in
knowing when a CPU is in KVM's core run loop and thus likely to do a
VM-Enter in the near future, at which point you could probably make a
good argument for adding a flag in "struct context_tracking" even without
a separate use case.
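Something along these lines is what I have in mind; the field and the
exact set/clear points in KVM are made up and would need more thought:

	/* Hypothetical field in the existing per-CPU struct
	 * context_tracking, which is always built into the kernel proper. */
	struct context_tracking {
		/* existing fields */
		bool			in_guest_run_loop;
	};

	/* KVM, around its core vCPU run loop (exact placement TBD): */
	__this_cpu_write(context_tracking.in_guest_run_loop, true);
	/* ... enter/exit the guest, possibly many times ... */
	__this_cpu_write(context_tracking.in_guest_run_loop, false);

	/* RCU, in rcu_pending(), instead of a "recently exited" check: */
	if (__this_cpu_read(context_tracking.in_guest_run_loop))
		return 0;

That avoids any time-based guess entirely: RCU would defer to the vCPU
task only while KVM is actually in the run loop and about to re-enter the
guest.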