On 20/10/2015 11:57, Yacine wrote:
> vcpu_load; start cr3 trapping; vcpu_put
>
> it worked correctly (in my logs I see that vcpu.cpu becomes equal to
> "cpu = raw_smp_processor_id();") but the VM blocks for a long time due
> to the mutex in vcpu_load (up to several seconds and sometimes
> minutes!)

Right, that's because the mutex is taken while the VCPU is running.  If
the VCPU doesn't exit, the mutex is never released.

> I replaced vcpu_load with kvm_sched_in, now everything works perfectly
> and the VM doesn't block at all (logs here:
> http://pastebin.com/h5XNNMcb).
>
> So, what I want to know is: what is the difference between vcpu_load
> and kvm_sched_in?  Both of these functions call kvm_arch_vcpu_load,
> but the latter does it without taking the mutex.

kvm_sched_out and kvm_sched_in are KVM's preemption hooks.  The hooks
are registered only between vcpu_load and vcpu_put, so they know that
the mutex is already held.  The sequence goes like this:

    vcpu_load
        kvm_sched_out
        kvm_sched_in
        kvm_sched_out
        kvm_sched_in
        ...
    vcpu_put

and it all happens with the mutex held.

> Is there a problem in using kvm_sched_in instead of vcpu_load for my
> use case?

Yes, unfortunately there is: you are loading the same VMCS on two
processors, which has undefined results.

To fix the problem, wrap the ioctl in a function and pass that function
to QEMU's run_on_cpu.  run_on_cpu sends the ioctl from the right thread
(the VCPU's own thread), so the kernel will not be holding the vcpu
mutex when it processes the ioctl.  A sketch of both sides is below.

Paolo
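
P.S. For reference, here is roughly what the two code paths look like
(paraphrased from virt/kvm/kvm_main.c circa Linux 4.2; details vary by
kernel version):

    int vcpu_load(struct kvm_vcpu *vcpu)
    {
            int cpu;

            /* This is what blocks your thread: the mutex stays held
             * for as long as the VCPU thread sits in KVM_RUN. */
            if (mutex_lock_killable(&vcpu->mutex))
                    return -EINTR;
            cpu = get_cpu();
            /* Register kvm_sched_in/kvm_sched_out; they only ever
             * fire between here and vcpu_put, i.e. with the mutex
             * already held. */
            preempt_notifier_register(&vcpu->preempt_notifier);
            kvm_arch_vcpu_load(vcpu, cpu);
            put_cpu();
            return 0;
    }

    static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
    {
            struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

            /* No locking here: the hook relies on the vcpu_load
             * caller already holding vcpu->mutex.  Calling it from
             * another thread is what ends up loading the same VMCS
             * on two processors. */
            kvm_arch_vcpu_load(vcpu, cpu);
    }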
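
On the QEMU side, the fix could look like the following.  This is only
a minimal sketch: run_on_cpu uses the current (QEMU 2.4-era) signature
void run_on_cpu(CPUState *, void (*)(void *), void *), and
KVM_START_CR3_TRAP is a stand-in for whatever custom ioctl you added
for CR3 trapping, not a real KVM ioctl:

    #include "qom/cpu.h"
    #include "sysemu/kvm.h"

    /* Runs in the target VCPU's own thread, outside KVM_RUN, so the
     * kernel's vcpu_load in the ioctl path takes the mutex without
     * contention. */
    static void start_cr3_trap_on_vcpu(void *data)
    {
        CPUState *cpu = data;

        /* KVM_START_CR3_TRAP is hypothetical; substitute your own
         * ioctl number. */
        kvm_vcpu_ioctl(cpu, KVM_START_CR3_TRAP, NULL);
    }

    void start_cr3_trapping(CPUState *cpu)
    {
        /* Kicks the VCPU out of guest mode and queues the function
         * to run in that VCPU's thread. */
        run_on_cpu(cpu, start_cr3_trap_on_vcpu, cpu);
    }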