On Tue, May 07, 2024 at 02:00:12PM -0700, Sean Christopherson wrote:
> On Tue, May 07, 2024, Paul E. McKenney wrote:
> > On Tue, May 07, 2024 at 10:55:54AM -0700, Sean Christopherson wrote:
> > > On Fri, May 03, 2024, Paul E. McKenney wrote:
> > > > On Fri, May 03, 2024 at 02:29:57PM -0700, Sean Christopherson wrote:
> > > > > So if we're comfortable relying on the 1 second timeout to guard against a
> > > > > misbehaving userspace, IMO we might as well fully rely on that guardrail.  I.e.
> > > > > add a generic PF_xxx flag (or whatever flag location is most appropriate) to let
> > > > > userspace communicate to the kernel that it's a real-time task that spends the
> > > > > overwhelming majority of its time in userspace or guest context, i.e. should be
> > > > > given extra leniency with respect to rcuc if the task happens to be interrupted
> > > > > while it's in kernel context.
> > > >
> > > > But if the task is executing in host kernel context for quite some time,
> > > > then the host kernel's RCU really does need to take evasive action.
> > >
> > > Agreed, but what I'm saying is that RCU already has the mechanism to do so in the
> > > form of the 1 second timeout.
> >
> > Plus RCU will force-enable that CPU's scheduler-clock tick after about
> > ten milliseconds of that CPU not being in a quiescent state, with
> > the time varying depending on the value of HZ and the number of CPUs.
> > After about ten seconds (halfway to the RCU CPU stall warning), it will
> > resched_cpu() that CPU every few milliseconds.
> >
> > > And while KVM does not guarantee that it will immediately resume the guest after
> > > servicing the IRQ, neither does the existing userspace logic.  E.g. I don't see
> > > anything that would prevent the kernel from preempting the interrupt task.
> >
> > Similarly, the hypervisor could preempt a guest OS's RCU read-side
> > critical section or its preempt_disable() code.
> >
> > Or am I missing your point?
>
> I think you're missing my point?
> I'm talking specifically about host RCU, what
> is or isn't happening in the guest is completely out of scope.

Ah, I was thinking of nested virtualization.

> My overarching point is that the existing @user check in rcu_pending() is optimistic,
> in the sense that the CPU is _likely_ to quickly enter a quiescent state if @user
> is true, but it's not 100% guaranteed.  And because it's not guaranteed, RCU has
> the aforementioned guardrails.

You lost me on this one.

The "user" argument to rcu_pending() comes from the context saved at
the time of the scheduling-clock interrupt.  In other words, the CPU
really was executing in user mode (which is an RCU quiescent state)
when the interrupt arrived.  And that suffices, 100% guaranteed.

The reason that it suffices is that other RCU code such as rcu_qs()
and rcu_note_context_switch() ensure that this CPU does not pay
attention to the user-argument-induced quiescent state unless this
CPU had previously acknowledged the current grace period.  And if the
CPU has previously acknowledged the current grace period, that
acknowledgement must have preceded the interrupt from user-mode
execution.  Thus the prior quiescent state represented by that
user-mode execution applies to that previously acknowledged grace
period.

This is admittedly a bit indirect, but then again this is
Linux-kernel RCU that we are talking about.

> And I'm arguing that, since the @user check isn't bombproof, there's no reason to
> try to harden against every possible edge case in an equivalent @guest check,
> because it's unnecessary for kernel safety, thanks to the guardrails.

And the same argument above would also apply to an equivalent check
for execution in guest mode at the time of the interrupt.

Please understand that I am not saying that we absolutely need an
additional check (you tell me!).
But if we do need RCU to be more aggressive about treating guest
execution as an RCU quiescent state within the host, that additional
check would be an excellent way of making that happen.

							Thanx, Paul