On Wed, Jan 11, 2023 at 09:37:50AM +0000, David Woodhouse wrote:
> diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
> index 07e61cc9881e..c444948ab1ac 100644
> --- a/arch/x86/kvm/xen.c
> +++ b/arch/x86/kvm/xen.c
> @@ -272,7 +272,12 @@ static void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, bool atomic)
>        * Attempt to obtain the GPC lock on *both* (if there are two)
>        * gfn_to_pfn caches that cover the region.
>        */
> -     read_lock_irqsave(&gpc1->lock, flags);
> +     local_irq_save(flags);
> +     if (!read_trylock(&gpc1->lock)) {
> +             if (atomic)
> +                     return;
> +             read_lock(&gpc1->lock);
> +     }
>       while (!kvm_gpc_check(gpc1, user_len1)) {
>               read_unlock_irqrestore(&gpc1->lock, flags);

There might be a problem with this pattern that would be alleviated if
written like:

        local_irq_save(flags);
        if (atomic) {
                if (!read_trylock(&gpc1->lock)) {
                        local_irq_restore(flags);
                        return;
                }
        } else {
                read_lock(&gpc1->lock);
        }

(also note you forgot the local_irq_restore() on that early-return exit
path)

Specifically, the problem is that a trylock will not trigger the
regular lockdep machinery, since it doesn't wait and hence cannot cause
a deadlock.

With your form the trylock is the common case, so lockdep will only get
to observe any potential cycles if/when the lock actually hits
contention. Using an unconditional read_lock() for the !atomic case
avoids that.
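
To spell that out against your hunk (annotated by hand, untested,
purely to illustrate where the two issues sit):

        local_irq_save(flags);
        if (!read_trylock(&gpc1->lock)) {
                /*
                 * The trylock above is what runs in the common case,
                 * and lockdep does not validate it for cycles because
                 * it cannot wait and so cannot deadlock.
                 */
                if (atomic)
                        return;         /* leaves IRQs disabled */
                /*
                 * The blocking read_lock() is the only acquisition
                 * that feeds the regular lockdep machinery, but it is
                 * only reached once the lock is already contended.
                 */
                read_lock(&gpc1->lock);
        }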