Re: [PATCH 1/1] KVM: Fix potentially recursively get kvm lock

On Tuesday 12 May 2009 19:55:24 Marcelo Tosatti wrote:
> On Tue, May 12, 2009 at 05:32:09PM +0800, Sheng Yang wrote:
> > kvm_vm_ioctl_deassign_dev_irq() could recursively take kvm->lock,
> > because it calls kvm_deassign_irq(), which implicitly holds kvm->lock
> > by calling deassign_host_irq().
> >
> > Fix it by moving kvm_deassign_irq() out of the critical region, and add
> > the missing lock to deassign_guest_irq().
> >
> > Reported-by: Alex Williamson <alex.williamson@xxxxxx>
> > Signed-off-by: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
> > ---
> >  virt/kvm/kvm_main.c |   14 +++++++-------
> >  1 files changed, 7 insertions(+), 7 deletions(-)
> >
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 4d00942..3c69655 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -215,6 +215,8 @@ static void kvm_assigned_dev_ack_irq(struct kvm_irq_ack_notifier *kian)
> >  static void deassign_guest_irq(struct kvm *kvm,
> >  			       struct kvm_assigned_dev_kernel *assigned_dev)
> >  {
> > +	mutex_lock(&kvm->lock);
> > +
> >  	kvm_unregister_irq_ack_notifier(&assigned_dev->ack_notifier);
> >  	assigned_dev->ack_notifier.gsi = -1;
> >
> > @@ -222,6 +224,8 @@ static void deassign_guest_irq(struct kvm *kvm,
> >  		kvm_free_irq_source_id(kvm, assigned_dev->irq_source_id);
> >  	assigned_dev->irq_source_id = -1;
> >  	assigned_dev->irq_requested_type &= ~(KVM_DEV_IRQ_GUEST_MASK);
> > +
> > +	mutex_unlock(&kvm->lock);
> >  }
> >
> >  /* The function implicit hold kvm->lock mutex due to cancel_work_sync() */
> > @@ -558,20 +562,16 @@ static int kvm_vm_ioctl_deassign_dev_irq(struct kvm *kvm,
> >  					 struct kvm_assigned_irq
> >  					 *assigned_irq)
> >  {
> > -	int r = -ENODEV;
> >  	struct kvm_assigned_dev_kernel *match;
> >
> >  	mutex_lock(&kvm->lock);
> > -
> >  	match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
> >  				      assigned_irq->assigned_dev_id);
> > +	mutex_unlock(&kvm->lock);
>
> assigned_dev list is protected by kvm->lock. So you could have another
> ioctl adding to it at the same time you're searching.

Oh, yes... My fault... 

> Could either have a separate kvm->assigned_devs_lock, to protect
> kvm->arch.assigned_dev_head (users are ioctls that manipulate it), or
> change the IRQ injection to use a separate spinlock, kill the workqueue
> and call kvm_set_irq from the assigned device interrupt handler.
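
For the first option, I guess the deassign ioctl would end up looking roughly
like the sketch below. "assigned_devs_lock" is just a made-up name here for a
new mutex in struct kvm that would guard only kvm->arch.assigned_dev_head, so
this path never has to touch kvm->lock:

/* Sketch only: assigned_devs_lock is hypothetical, and every other user of
 * the assigned-dev list would have to take it the same way. */
static int kvm_vm_ioctl_deassign_dev_irq(struct kvm *kvm,
					 struct kvm_assigned_irq
					 *assigned_irq)
{
	int r = -ENODEV;
	struct kvm_assigned_dev_kernel *match;

	mutex_lock(&kvm->assigned_devs_lock);

	match = kvm_find_assigned_dev(&kvm->arch.assigned_dev_head,
				      assigned_irq->assigned_dev_id);
	if (!match)
		goto out;

	r = kvm_deassign_irq(kvm, match, assigned_irq->flags);
out:
	mutex_unlock(&kvm->assigned_devs_lock);
	return r;
}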

I'd prefer the latter, though it needs more work. But is the only reason for
putting a workqueue here that kvm->lock is a mutex? I can hardly believe it...
If so, I think we made a big mistake - we have had to fix all kinds of races
caused by this, only to find in the end that it was unnecessary...

Maybe another reason was kvm_kick_vcpu(), but that has already been fixed by you.
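
If we kill the workqueue and inject from the interrupt handler instead, I
imagine it would look something like the rough sketch below. assigned_dev_lock
is a made-up per-device spinlock, and this assumes kvm_set_irq() is really
safe to call from hard irq context - which is exactly the part that needs
auditing:

/* Sketch only: raise the guest irq directly from the host interrupt
 * handler under a hypothetical per-device spinlock, instead of bouncing
 * through the workqueue that has to take kvm->lock. */
static irqreturn_t kvm_assigned_dev_intr(int irq, void *dev_id)
{
	struct kvm_assigned_dev_kernel *assigned_dev = dev_id;
	unsigned long flags;

	spin_lock_irqsave(&assigned_dev->assigned_dev_lock, flags);
	if (assigned_dev->irq_requested_type & KVM_DEV_IRQ_HOST_INTX)
		/* keep the host line masked until the guest acks it */
		disable_irq_nosync(irq);
	kvm_set_irq(assigned_dev->kvm, assigned_dev->irq_source_id,
		    assigned_dev->guest_irq, 1);
	spin_unlock_irqrestore(&assigned_dev->assigned_dev_lock, flags);

	return IRQ_HANDLED;
}

The ack notifier would then have to take the same spinlock (instead of
kvm->lock) when it lowers the guest line and re-enables the host irq.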

I'll continue checking the code...

-- 
regards
Yang, Sheng