On 07.01.2011 19:57, Alex Williamson wrote:
> On Fri, 2011-01-07 at 10:33 +0100, Jan Kiszka wrote:
>> Hi,
>>
>> to finally select the approach for adding overdue IRQ sharing support
>> for PCI pass-through, I hacked up two versions based on Thomas' patches
>> and his suggestion to use a timeout-based mode transition:
>>
>> git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
>> git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout
>>
>> git://git.kiszka.org/qemu-kvm.git queues/dev-assign
>>
>> Both approaches work, but I'm either lacking a sufficiently stressing
>> test environment to tease out a relevant delta, even between masking at
>> irqchip vs. PCI config space level - or there is none. Yes, there are
>> differences at the micro level, but they do not manifest in a measurable
>> (i.e. above the noise level) load increase or throughput/latency
>> decrease in my limited tests here. If that actually turns out to be
>> true, I would happily bury all this dynamic mode switching again.
>>
>> So, if you have a good high-bandwidth test case at hand, I would
>> appreciate it if you could give this a try and report your findings.
>> Does switching from exclusive to shared IRQ mode decrease the
>> throughput or increase the host load? Is there a difference compared
>> to current kvm?
>
> I think any sufficiently high-bandwidth device will be using MSI and/or
> NAPI, so I wouldn't expect we're going to see much change there.

That's also why I'm no longer sure it's worth worrying about irq_disable
vs. PCI-level disable (see the sketch at the end of this mail for what
the latter means). Anyone who cares about performance in a large
pass-through scenario will try to use MSI-capable hardware anyway (or
was so far unable to use tons of legacy-IRQ-driven devices due to IRQ
conflicts).

> Perhaps you can simply force a 1GbE device to use INTx and do some
> netperf TCP_RR tests to try to expose any latency differences. Thanks,

I had the same idea, but I'm lacking a 1GbE peer here. :(
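For reference, "masking at PCI config space level" boils down to toggling
the INTx Disable bit in the device's command register instead of disabling
the host IRQ line. A minimal sketch using the kernel's pci_intx() helper;
the function names here are illustrative and not taken from the queues
above:

	/*
	 * Sketch only: mask/unmask a passed-through device's legacy INTx
	 * via PCI config space. pci_intx() sets or clears
	 * PCI_COMMAND_INTX_DISABLE in the device's command register.
	 */
	#include <linux/pci.h>

	static void assigned_dev_intx_mask(struct pci_dev *pdev)
	{
		pci_intx(pdev, 0);	/* set INTx Disable: device stops asserting INTx */
	}

	static void assigned_dev_intx_unmask(struct pci_dev *pdev)
	{
		pci_intx(pdev, 1);	/* clear INTx Disable: INTx may fire again */
	}

Jan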