On Wed, 2023-01-11 at 23:42 +0100, Paolo Bonzini wrote:
> On 1/11/23 19:06, David Woodhouse wrote:
> > From: David Woodhouse <dwmw@xxxxxxxxxxxx>
> > 
> > In commit 14243b387137a ("KVM: x86/xen: Add KVM_IRQ_ROUTING_XEN_EVTCHN
> > and event channel delivery") the clever version of me left some helpful
> > notes for those who would come after him:
> > 
> >         /*
> >          * For the irqfd workqueue, using the main kvm->lock mutex is
> >          * fine since this function is invoked from kvm_set_irq() with
> >          * no other lock held, no srcu. In future if it will be called
> >          * directly from a vCPU thread (e.g. on hypercall for an IPI)
> >          * then it may need to switch to using a leaf-node mutex for
> >          * serializing the shared_info mapping.
> >          */
> >         mutex_lock(&kvm->lock);
> > 
> > In commit 2fd6df2f2b47 ("KVM: x86/xen: intercept EVTCHNOP_send from guests")
> > the other version of me ran straight past that comment without reading it,
> > and introduced a potential deadlock by taking vcpu->mutex and kvm->lock
> > in the wrong order.
> > 
> > Solve this as originally suggested, by adding a leaf-node lock in the Xen
> > state rather than using kvm->lock for it.
> > 
> > Fixes: 2fd6df2f2b47 ("KVM: x86/xen: intercept EVTCHNOP_send from guests")
> > Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> 
> Same as my patch except that this one is for an older tree and is
> missing this:
> 
> @@ -1958,7 +1950,7 @@ static int kvm_xen_eventfd_reset(struct kvm *kvm)
> 
>         all_evtchnfds = kmalloc_array(n, sizeof(struct evtchnfd *), GFP_KERNEL);
>         if (!all_evtchnfds) {
> -               mutex_unlock(&kvm->lock);
> +               mutex_unlock(&kvm->arch.xen.xen_lock);
>                 return -ENOMEM;
>         }

I was basing it on kvm/queue; should I have been using kvm/next?

I had noted the absence of that deadlock fix; I think I said to Sean
that we might even be able to revert it... but we can't. We still take
this new xen_lock inside a kvm->srcu read section, just as we did
kvm->lock, so we still can't call synchronize_srcu() with it held.

> FWIW my commit message was this:
> 
> Using kvm->lock is incorrect because the rule is that kvm->lock is held
> outside vcpu->mutex.  This is relatively rare and generally the locks
> are held independently, which is why the incorrect use did not cause
> deadlocks left and right; on x86 in fact it only happens for SEV's
> move-context ioctl, a niche feature whose intersection with Xen is
> pretty much empty.  But still, locking rules are locking rules and
> the comment in kvm_xen_set_evtchn already hinted that using a separate
> leaf mutex would be needed as soon as event channel hypercalls would
> be supported.

I can be a lot ruder about my own mental capacity in a commit message
than you can :)

> Queued all four, thanks.

Thanks. Did you get the QEMU bits running? If built from my xenfv
branch, it should boot pretty much any stock Linux guest image, as long
as the distro kernel has Xen support (which most do). And use SATA for
the disk, as noted.

 -accel kvm,xen-version=0x4000a,kernel-irqchip=split
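Spelled out in full, that's something along these lines (an untested
sketch; the disk image path, memory size and device IDs here are just
placeholders to adjust):

    qemu-system-x86_64 \
        -accel kvm,xen-version=0x4000a,kernel-irqchip=split \
        -cpu host -m 1G \
        -drive file=guest.qcow2,if=none,id=disk0 \
        -device ahci,id=ahci0 \
        -device ide-hd,drive=disk0,bus=ahci0.0

The -drive/-device split is what puts the disk on an AHCI (SATA)
controller instead of the machine's default IDE one.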
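And going back to the synchronize_srcu() point above, the shape of the
deadlock we still have to avoid looks like this (purely illustrative,
not the actual call sites):

    /* CPU A: e.g. event channel delivery from a vCPU thread */
    idx = srcu_read_lock(&kvm->srcu);
    mutex_lock(&kvm->arch.xen.xen_lock);   /* blocks: CPU B holds it */
    ...
    mutex_unlock(&kvm->arch.xen.xen_lock);
    srcu_read_unlock(&kvm->srcu, idx);

    /* CPU B: hypothetical caller of synchronize_srcu() under xen_lock */
    mutex_lock(&kvm->arch.xen.xen_lock);
    synchronize_srcu(&kvm->srcu);   /* waits for CPU A's read section */
    mutex_unlock(&kvm->arch.xen.xen_lock);

CPU B's grace period can't complete until CPU A leaves its read
section, and CPU A can't do that until CPU B drops xen_lock. So the
new lock is a leaf as far as kvm->lock ordering goes, but it still
nests inside kvm->srcu.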