On Tue, May 12, 2009 at 10:13:36PM +0800, Yang, Sheng wrote:
> > > +	mutex_unlock(&kvm->lock);
> >
> > assigned_dev list is protected by kvm->lock. So you could have another
> > ioctl adding to it at the same time you're searching.
>
> Oh, yes... My fault...
>
> > Could either have a separate kvm->assigned_devs_lock, to protect
> > kvm->arch.assigned_dev_head (users are ioctls that manipulate it), or
> > change the IRQ injection to use a separate spinlock, kill the workqueue
> > and call kvm_set_irq from the assigned device interrupt handler.
>
> Preferred the latter, though it needs more work. But is the only reason
> for putting a workqueue here that kvm->lock is a mutex? I can't believe
> it... If so, I think we made a big mistake - we had to fix all kinds of
> races caused by this, only to find in the end that it was unnecessary...

One issue is that kvm_set_irq can take too long while interrupts are
blocked, and you'd have to disable interrupts in other contexts that
inject interrupts (say qemu->ioctl(SET_INTERRUPT)->...->), so all I can
see is a tradeoff.

<guess mode on>

But the interrupt injection path seems to be pretty short and efficient
to happen in host interrupt context.

<guess mode off>

Avi, Gleb?

> Maybe another reason is kvm_kick_vcpu(), but that has already been
> fixed by you.

Note you tested the spinlock_irq patch with GigE and there was no
significant performance regression, right?

> Continue to check the code...
>
> --
> regards
> Yang, Sheng
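
[Editor's note: a minimal sketch of the two injection paths being weighed
in the thread, for readers following along. The names my_assigned_dev,
my_kvm_set_irq, irq_lock and inject_work are illustrative assumptions,
not KVM's actual structures or functions.]

/*
 * Sketch only: direct injection from the host interrupt handler under a
 * spinlock, versus deferring to a workqueue so injection can run in
 * process context and take a mutex.
 */
#include <linux/spinlock.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct my_assigned_dev {
	int guest_irq;
	void *kvm;			/* opaque handle to the VM */
	spinlock_t irq_lock;		/* would replace the mutex for injection */
	struct work_struct inject_work;	/* the existing workqueue approach */
};

/* Stand-in for the real injection routine (kvm_set_irq). */
static void my_kvm_set_irq(void *kvm, int irq, int level)
{
	/* ... walk the IRQ routing table and raise the interrupt ... */
}

/*
 * Option proposed above: inject directly from the host interrupt
 * handler.  The spinlock protects against ioctls that manipulate the
 * assigned-device list, but my_kvm_set_irq() now runs with interrupts
 * disabled, which is the latency concern raised in the reply.  Any
 * other context taking irq_lock (e.g. those ioctls) would need
 * spin_lock_irqsave().
 */
static irqreturn_t my_assigned_dev_intr(int irq, void *dev_id)
{
	struct my_assigned_dev *dev = dev_id;

	spin_lock(&dev->irq_lock);
	my_kvm_set_irq(dev->kvm, dev->guest_irq, 1);
	spin_unlock(&dev->irq_lock);

	return IRQ_HANDLED;
}

/*
 * Current approach: defer injection to a workqueue so it runs in
 * process context and may sleep under kvm->lock, at the cost of
 * scheduling latency.
 */
static void my_inject_worker(struct work_struct *work)
{
	struct my_assigned_dev *dev =
		container_of(work, struct my_assigned_dev, inject_work);

	my_kvm_set_irq(dev->kvm, dev->guest_irq, 1);
}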