On 01/07/2011 09:03 PM, Jan Kiszka wrote:
> >> So, if you have a good high-bandwidth test case at hand, I would
> >> appreciate if you could give this a try and report your findings. Does
> >> switching from exclusive to shared IRQ mode decrease the throughput or
> >> increase the host load? Is there a difference to current kvm?
> >
> > I think any sufficiently high-bandwidth device will be using MSI and/or
> > NAPI, so I wouldn't expect we're going to see much change there.
>
> That's also why I'm no longer sure it's worth worrying about irq_disable
> vs. PCI disable. Anyone who cares about performance in a large
> pass-through scenario will try to use MSI-capable hardware anyway (or
> was so far unable to use tons of legacy-IRQ-driven devices due to IRQ
> conflicts).
PCI disable (masking INTx via the command register) is probably only
ridiculously slow with cf8/cfc config space accesses, and significantly
faster (though still slow) with mmconfig. That needs to be taken into
account as well.
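
To make the cost difference concrete, here is a rough sketch (not from
the patches; the helper names and inline asm are illustrative, assuming
GCC on x86) of what masking INTx through the command register looks
like over the two config space mechanisms. The point is the access
count: with cf8/cfc every data access needs a preceding address write
to a shared register, so a read-modify-write of the command word is
four slow port I/O cycles, while an ECAM/mmconfig window needs only two
MMIO accesses.

#include <stdint.h>

/* Port I/O helpers (GCC inline asm, x86). */
static inline void outl(uint16_t port, uint32_t v)
{ __asm__ volatile("outl %0,%1" : : "a"(v), "Nd"(port)); }
static inline void outw(uint16_t port, uint16_t v)
{ __asm__ volatile("outw %0,%1" : : "a"(v), "Nd"(port)); }
static inline uint16_t inw(uint16_t port)
{ uint16_t v; __asm__ volatile("inw %1,%0" : "=a"(v) : "Nd"(port)); return v; }

#define PCI_COMMAND       0x04    /* config space offset of the command word */
#define PCI_CMD_INTX_OFF  0x400   /* INTx disable, bit 10 (PCI 2.3) */

/* Mechanism #1 (0xCF8/0xCFC): each config data access must be preceded
 * by an address write to 0xCF8, so one RMW of the command word costs
 * four port I/O cycles, globally serialized on the address register. */
static void intx_mask_cf8(uint8_t bus, uint8_t dev, uint8_t fn, int mask)
{
	uint32_t addr = 0x80000000u | (bus << 16) | (dev << 11) |
			(fn << 8) | (PCI_COMMAND & 0xfc);
	uint16_t cmd;

	outl(0xcf8, addr);
	cmd = inw(0xcfc + (PCI_COMMAND & 2));
	cmd = mask ? (cmd | PCI_CMD_INTX_OFF) : (cmd & ~PCI_CMD_INTX_OFF);
	outl(0xcf8, addr);
	outw(0xcfc + (PCI_COMMAND & 2), cmd);
}

/* MMCONFIG (ECAM): the same RMW is two plain MMIO accesses into a flat
 * memory-mapped window, with no shared address register to serialize on.
 * ecam is the mapped base of the segment's ECAM region. */
static void intx_mask_mmcfg(volatile uint8_t *ecam, uint8_t bus,
			    uint8_t dev, uint8_t fn, int mask)
{
	volatile uint16_t *cmd = (volatile uint16_t *)
		(ecam + (((uint32_t)bus << 20) | (dev << 15) |
			 (fn << 12) | PCI_COMMAND));

	*cmd = mask ? (*cmd | PCI_CMD_INTX_OFF) : (*cmd & ~PCI_CMD_INTX_OFF);
}

On top of the per-access cost, the cf8/cfc pair is a single global
address/data window, so concurrent config accesses have to be
serialized, which hurts further once you mask/unmask on every
interrupt.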
--
error compiling committee.c: too many arguments to function