On 19/10/21 19:34, Sean Christopherson wrote:
> The intent of the extra check was to avoid the locked instruction that
> comes with disabling preemption via rcu_read_lock().  But thinking about
> it more, the extra op should be little more than a basic arithmetic
> operation in the grand scheme of things on modern x86, since the cache
> line is going to be locked and written no matter what, either
> immediately before or immediately after.
There should be no locked instructions unless you're using PREEMPT_RT/PREEMPT_RCU, no? The preempt_disable count is in a percpu variable.
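To be concrete, here is a rough sketch of the definitions involved (simplified; the real ones carry lockdep and debug hooks). On a !CONFIG_PREEMPT_RCU kernel, rcu_read_lock() boils down to a percpu counter increment, with no LOCK-prefixed instruction anywhere:

    /* !CONFIG_PREEMPT_RCU: rcu_read_lock() is just preempt_disable() */
    static inline void rcu_read_lock(void)
    {
            preempt_disable();
    }

    /* ... and preempt_disable() only bumps a percpu counter */
    #define preempt_disable() \
    do { \
            preempt_count_inc();    /* percpu add, no LOCK prefix on x86 */ \
            barrier(); \
    } while (0)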
> +	/*
> +	 * Avoid the moderately expensive "should kick" operation if this pCPU
> +	 * is currently running the target vCPU, in which case it's a KVM bug
> +	 * if the vCPU is in the inner run loop.
> +	 */
> +	if (vcpu == __this_cpu_read(kvm_running_vcpu) &&
> +	    !WARN_ON_ONCE(vcpu->mode == IN_GUEST_MODE))
> +		goto out;
> +
It should not even be a problem if vcpu->mode == IN_GUEST_MODE; you can just set it to EXITING_GUEST_MODE, without even needing the atomic_cmpxchg.
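Something along these lines (an untested sketch, just to show what I mean; it reuses the kvm_running_vcpu check from the hunk above):

    if (vcpu == __this_cpu_read(kvm_running_vcpu)) {
            /*
             * The vCPU is loaded on this pCPU, so no other CPU can be
             * racing to flip vcpu->mode; a plain store is sufficient
             * and no IPI is needed.
             */
            if (vcpu->mode == IN_GUEST_MODE)
                    vcpu->mode = EXITING_GUEST_MODE;
            goto out;
    }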
I'll send a few patches out, since I think I found some related issues.

Paolo