Re: [patch 0/4] move irq protection role to separate lock v2

On Wed, May 20, 2009 at 03:48:41PM -0300, Marcelo Tosatti wrote:
> Addressing comment and covering irqfd (did not remove the deadlock avoidance 
> there since it might be useful later).

I can't see any reason why serializing the vm ioctl is a bad idea.

There is one indication of disagreement here:

http://patchwork.kernel.org/patch/5661/

"The original intent was that kvm_arch_vcpu_create() not "link in" the
vcpu to any registers. That allows most of the vcpu creation to happen
outside a lock."

But I fail to see the case where vcpu creation is a fast path (unless
you're benchmarking cpu hotplug/hotunplug).

Note there is no risk of deadlock if a vcpu thread attempts to execute
vm_ioctl: vm_ioctl_lock is always the first lock grabbed in the
vm_ioctl path, and it is not used in any other path.

The reason for this change is that some ioctl sites, such as
KVM_CREATE_PIT, handle concurrency properly (with custom locking that
now becomes obsolete), but others don't.


Index: kvm/include/linux/kvm_host.h
===================================================================
--- kvm.orig/include/linux/kvm_host.h
+++ kvm/include/linux/kvm_host.h
@@ -131,9 +131,9 @@ struct kvm {
 					KVM_PRIVATE_MEM_SLOTS];
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	struct list_head vm_list;
+	struct mutex vm_ioctl_lock; /* protects concurrent vm fd ioctls */
 	struct mutex lock; /*
 			    * - protects mmio_bus, pio_bus.
-			    * - protects a few concurrent ioctl's (FIXME).
 			    * - protects concurrent create_vcpu, but
 			    *   kvm->vcpus walkers do it locklessly (FIXME).
 			    */
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -988,6 +988,7 @@ static struct kvm *kvm_create_vm(void)
 	INIT_LIST_HEAD(&kvm->irqfds);
 	mutex_init(&kvm->lock);
 	mutex_init(&kvm->irq_lock);
+	mutex_init(&kvm->vm_ioctl_lock);
 	kvm_io_bus_init(&kvm->mmio_bus);
 	init_rwsem(&kvm->slots_lock);
 	atomic_set(&kvm->users_count, 1);
@@ -2053,6 +2054,9 @@ static long kvm_vm_ioctl(struct file *fi
 
 	if (kvm->mm != current->mm)
 		return -EIO;
+
+	mutex_lock(&kvm->vm_ioctl_lock);
+
 	switch (ioctl) {
 	case KVM_CREATE_VCPU:
 		r = kvm_vm_ioctl_create_vcpu(kvm, arg);
@@ -2228,6 +2232,7 @@ static long kvm_vm_ioctl(struct file *fi
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
 	}
 out:
+	mutex_unlock(&kvm->vm_ioctl_lock);
 	return r;
 }
 