On Tue, 2025-02-04 at 18:58 +0100, Paolo Bonzini wrote:
> On 2/4/25 18:51, Sean Christopherson wrote:
> > On Tue, Feb 04, 2025, Naveen N Rao wrote:
> > > On Mon, Feb 03, 2025 at 09:00:05PM -0500, Maxim Levitsky wrote:
> > > > On Mon, 2025-02-03 at 22:33 +0530, Naveen N Rao (AMD) wrote:
> > > > > apicv_update_lock is not required when querying the state of guest
> > > > > debug in all the vcpus. Remove usage of the same, and switch to
> > > > > kvm_set_or_clear_apicv_inhibit() helper to simplify the code.
> > > >
> > > > It might be worth mentioning why the lock is not needed: kvm_vcpu_ioctl,
> > > > from which this function is called, takes 'vcpu->mutex', and thus
> > > > concurrent execution of this function is not really possible.
> > >
> > > Looking at this again, that looks to be a vcpu-specific lock, so I guess
> > > it is possible for multiple vcpus to run this concurrently?
> >
> > Correct.
>
> And this patch is incorrect. Because there is a store and many loads,
> you have the typical race when two vCPUs set blockirq at the same time:
>
>     vcpu 0                          vcpu 1
>     ---------------                 --------------
>     set vcpu0->guest_debug
>                                     clear vcpu1->guest_debug
>     read vcpu0->guest_debug
>     read vcpu1->guest_debug
>     set inhibit
>                                     read stale vcpu0->guest_debug
>                                     read vcpu1->guest_debug
>                                     clear inhibit
>
> But since this is really a slow path, why even bother optimizing it?
>
> Paolo

Paolo, you are absolutely right! The vcpu mutex only prevents concurrent
ioctls on the same vcpu, not on different vcpus, so without locking this
patch isn't going to work.

The per-vcpu mutex is not something I know well, and I was only recently
made aware of it, so I mixed things up.

So yes, some kind of lock is needed here.

Best regards,
	Maxim Levitsky
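
For reference, below is a minimal sketch of the lock-protected pattern the
thread points back to: the scan of every vCPU's guest_debug and the APICv
inhibit update stay inside a single write-side critical section of
kvm->arch.apicv_update_lock, so a concurrent ioctl on another vCPU cannot
interleave between the reads and the write. It is modeled on the pre-patch
upstream helper; the function name used here and the exact helper details
should be treated as assumptions, not as the final fix.

static void guestdbg_update_apicv_inhibit(struct kvm *kvm)
{
	struct kvm_vcpu *vcpu;
	unsigned long i;
	bool inhibit = false;

	/*
	 * Sketch only: hold the write side of apicv_update_lock across both
	 * the scan and the update, so another vCPU's ioctl cannot observe a
	 * stale guest_debug value and then undo the inhibit.
	 */
	down_write(&kvm->arch.apicv_update_lock);

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ) {
			inhibit = true;
			break;
		}
	}

	/* Lockless variant of the helper; the lock is already held here. */
	__kvm_set_or_clear_apicv_inhibit(kvm, APICV_INHIBIT_REASON_BLOCKIRQ,
					 inhibit);

	up_write(&kvm->arch.apicv_update_lock);
}

Keeping the scan and the update in one critical section is exactly the
ordering guarantee that the lockless kvm_set_or_clear_apicv_inhibit()
conversion in the patch gave up.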