Hi,

to finally select the approach for adding the overdue IRQ sharing support for PCI pass-through, I hacked up two versions based on Thomas' patches and his suggestion to use a timeout-based mode transition:

git://git.kiszka.org/linux-kvm.git queues/dev-assign.notify
git://git.kiszka.org/linux-kvm.git queues/dev-assign.timeout

git://git.kiszka.org/qemu-kvm.git queues/dev-assign

Both approaches work, but either I'm lacking a sufficiently stressful test environment to tease out a relevant delta, even between masking at the irqchip level vs. the PCI config space level, or there is none... Yes, there are differences at the micro level, but in my limited tests here they do not manifest in a measurable (i.e. above the noise level) load increase or throughput/latency decrease. If that actually turns out to be true, I would happily bury all this dynamic mode switching again.

So, if you have a good high-bandwidth test case at hand, I would appreciate it if you could give this a try and report your findings. Does switching from exclusive to shared IRQ mode decrease throughput or increase host load? Is there a difference compared to current kvm?

Thanks in advance,
Jan