On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
> On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
> > When the guest sets an irq's smp_affinity, a VMEXIT occurs and the
> > vcpu thread returns from the hypervisor to QEMU via ioctl; the vcpu
> > thread then asks the hypervisor to update the irq routing table. In
> > kvm_set_irq_routing, synchronize_rcu is called, and the current vcpu
> > thread is blocked for a long time waiting for the RCU grace period.
> > During this period the vcpu cannot service the VM, so interrupts
> > delivered to this vcpu cannot be handled in time, and the
> > applications running on it cannot be serviced either.
> > This is unacceptable in some real-time scenarios, e.g. telecom.
> >
> > So I want to create a single workqueue for each VM to perform the
> > RCU synchronization for the irq routing table asynchronously,
> > letting the vcpu thread return and VMENTRY to service the VM
> > immediately, with no need to block waiting for the RCU grace period.
> > I have implemented a rough patch and tested it in our telecom
> > environment; the problem described above disappeared.
>
> I don't think a workqueue is even needed. You just need to use call_rcu
> to free "old" after releasing kvm->irq_lock.
>
> What do you think?
>
It should be rate limited somehow. Since it is guest triggerable, a
guest may cause the host to allocate a lot of memory this way.

Is this about MSI interrupt affinity? IIRC, changing INTx interrupt
affinity should not trigger a kvm_set_irq_routing update. If this is
about MSI only, then what about changing userspace to use
KVM_SIGNAL_MSI for MSI injection?

--
			Gleb.
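
For concreteness, here is a minimal sketch of what Paolo's call_rcu
suggestion might look like, assuming a 2013-era kvm_set_irq_routing()
that publishes the new kvm_irq_routing_table under kvm->irq_lock and
then calls synchronize_rcu() before kfree()ing the old one. The rcu
field and the free_irq_routing_table_rcu() callback are invented here
for illustration, not taken from an actual patch:

struct kvm_irq_routing_table {
	/* ... existing fields ... */
	struct rcu_head rcu;	/* added: lets the table be freed via call_rcu */
};

/* Deferred-free callback, run once an RCU grace period has elapsed. */
static void free_irq_routing_table_rcu(struct rcu_head *head)
{
	struct kvm_irq_routing_table *rt =
		container_of(head, struct kvm_irq_routing_table, rcu);

	kfree(rt);
}

/* In kvm_set_irq_routing(), after publishing the new table:
 *
 *	rcu_assign_pointer(kvm->irq_routing, new);
 *	mutex_unlock(&kvm->irq_lock);
 *
 * replace the blocking
 *
 *	synchronize_rcu();
 *	kfree(old);
 *
 * with a non-blocking deferred free, so the vcpu thread can re-enter
 * the guest immediately:
 */
	call_rcu(&old->rcu, free_irq_routing_table_rcu);

This is also where the rate-limiting concern comes from: each update
queues another old table until a grace period elapses, so a guest
hammering smp_affinity could keep a lot of host memory in flight.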
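
And a sketch of the userspace side of the KVM_SIGNAL_MSI idea: the
ioctl injects the MSI directly, bypassing the routing table, so an MSI
affinity change needs no kvm_set_irq_routing() update (and hence no
synchronize_rcu). vmfd is assumed to be an open VM file descriptor; the
address/data values are whatever the guest programmed into the device:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Inject one MSI directly into the VM identified by vmfd. */
static int inject_msi(int vmfd, __u32 address_lo, __u32 address_hi,
		      __u32 data)
{
	struct kvm_msi msi = {
		.address_lo = address_lo,	/* MSI address from the device */
		.address_hi = address_hi,
		.data       = data,		/* vector, delivery mode, etc. */
	};

	/* Returns > 0 if delivered, 0 if blocked (e.g. masked), < 0 on error. */
	return ioctl(vmfd, KVM_SIGNAL_MSI, &msi);
}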