On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
> Hi,
>
> to finally select the approach for adding overdue IRQ sharing support
> for PCI pass-through, I hacked up two versions based on Thomas' patches
> and his suggestion to use a timeout-based mode transition:
>
> git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
> git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>
> git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>
> Both approaches work, but either I'm lacking a sufficiently stressing
> test environment to tickle out a relevant delta, even between masking at
> the irqchip vs. the PCI config space level - or there is none. Yes, there
> are differences at the micro level, but they do not manifest in a
> measurable (i.e. above the noise level) load increase or
> throughput/latency decrease in my limited tests here. If that actually
> turns out to be true, I would happily bury all this dynamic mode
> switching again.
>
> So, if you have a good high-bandwidth test case at hand, I would
> appreciate it if you could give this a try and report your findings.
> Does switching from exclusive to shared IRQ mode decrease the throughput
> or increase the host load? Is there a difference to current kvm?

I think any sufficiently high-bandwidth device will be using MSI and/or
NAPI, so I wouldn't expect we're going to see much change there. Perhaps
you can simply force a 1GbE device to use INTx and do some netperf TCP_RR
tests to try to expose any latency differences.

Thanks,

Alex
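As a rough illustration of the kind of measurement suggested above (not taken from the thread itself), the sketch below runs a few netperf TCP_RR rounds from the guest against a netserver instance and prints the per-round and mean transaction rates; a drop in the rate after switching the assigned NIC from exclusive to shared INTx mode (e.g. by booting the guest with pci=nomsi to keep the device off MSI) would point to added interrupt latency. It assumes netperf is installed in the guest, netserver is running on the peer named by the placeholder NETSERVER_HOST, and the classic human-readable netperf output format; adjust the parsing if your netperf build prints something different.

#!/usr/bin/env python3
# Hedged sketch: repeat netperf TCP_RR runs and report transaction rates.
# NETSERVER_HOST and ROUNDS are placeholders, not values from the thread.
import subprocess

NETSERVER_HOST = "192.168.0.1"   # placeholder: peer running netserver
ROUNDS = 5                       # number of test rounds to average over


def tcp_rr_rate(host, seconds=30):
    """Run one netperf TCP_RR test and return the transaction rate (trans/s)."""
    out = subprocess.run(
        ["netperf", "-H", host, "-t", "TCP_RR", "-l", str(seconds)],
        check=True, capture_output=True, text=True,
    ).stdout
    # In classic netperf output the data line has six numeric fields:
    # send/recv socket sizes, request/response sizes, elapsed time, rate.
    for line in out.splitlines():
        fields = line.split()
        if len(fields) != 6:
            continue
        try:
            values = [float(f) for f in fields]
        except ValueError:
            continue  # header line, not data
        return values[-1]
    raise RuntimeError("could not parse netperf TCP_RR output")


if __name__ == "__main__":
    rates = [tcp_rr_rate(NETSERVER_HOST) for _ in range(ROUNDS)]
    for i, rate in enumerate(rates, 1):
        print(f"round {i}: {rate:.1f} trans/s")
    print(f"mean: {sum(rates) / len(rates):.1f} trans/s")

Comparing the mean rate (together with host CPU load from top or vmstat) between the exclusive and shared IRQ configurations should expose the kind of latency regression that plain throughput tests would hide.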