On 02/06, Sasha Levin wrote:
>
> Can we modify it slightly to avoid potentially accessing invalid memory:
>
> diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
> index 5315887..cd22d73 100644
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -144,13 +144,13 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock
>          if (TICKET_SLOWPATH_FLAG &&
>              static_key_false(&paravirt_ticketlocks_enabled)) {
>                  __ticket_t prev_head;
> -
> +                bool needs_kick = lock->tickets.tail & TICKET_SLOWPATH_FLAG;
>                  prev_head = lock->tickets.head;
>                  add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>
>                  /* add_smp() is a full mb() */
>
> -                if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) {
> +                if (unlikely(needs_kick)) {

This doesn't look right either...

We need to guarantee that either unlock() sees TICKET_SLOWPATH_FLAG, or the
caller of __ticket_enter_slowpath() sees the result of add_smp().

Suppose that kvm_lock_spinning() is called right before add_smp() and it sets
SLOWPATH. It will block then because .head != want, and it needs
__ticket_unlock_kick().

Oleg.
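
[Editor's sketch, for illustration only.] The guarantee Oleg describes is the
classic store-buffering pairing: each side stores to its own location and then,
after a full barrier, loads the other side's location, so at least one of them
must observe the other's store. The standalone C11 program below models that
pairing; head, slowpath, waiter() and unlocker() are stand-ins (not the kernel
symbols) for lock->tickets.head, TICKET_SLOWPATH_FLAG, kvm_lock_spinning() and
arch_spin_unlock(). Reading the flag before the add, as in the proposed patch,
is what breaks this property.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_uint head = 0;          /* stand-in for lock->tickets.head      */
static atomic_bool slowpath = false;  /* stand-in for TICKET_SLOWPATH_FLAG    */

/* Waiter side: roughly the ordering kvm_lock_spinning() relies on. */
static void *waiter(void *arg)
{
        unsigned int want = 1;                      /* our ticket               */

        atomic_store(&slowpath, true);              /* __ticket_enter_slowpath()*/
        atomic_thread_fence(memory_order_seq_cst);  /* full barrier             */
        if (atomic_load(&head) == want)
                puts("waiter: saw the unlock, no need to block");
        else
                puts("waiter: would block; needs __ticket_unlock_kick()");
        return NULL;
}

/* Unlocker side: roughly the ordering arch_spin_unlock() must preserve. */
static void *unlocker(void *arg)
{
        atomic_fetch_add(&head, 1);                 /* add_smp(): full barrier  */
        /*
         * The SLOWPATH test must come after the barrier.  If it is read
         * before the add, both sides can miss each other's store: the
         * waiter blocks and no kick is ever sent.
         */
        if (atomic_load(&slowpath))
                puts("unlocker: flag seen, kick the waiter");
        return NULL;
}

int main(void)
{
        pthread_t w, u;

        pthread_create(&w, NULL, waiter, NULL);
        pthread_create(&u, NULL, unlocker, NULL);
        pthread_join(w, NULL);
        pthread_join(u, NULL);
        return 0;
}

With every operation seq_cst, the outcome where the waiter misses the head
update *and* the unlocker misses the flag is forbidden, so either the waiter
returns without blocking or the unlocker kicks it.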