Re: [PATCH v2 0/9] qemu-kvm: Clean up and enhance MSI irqchip support

On 2011-04-27 11:04, Avi Kivity wrote:
> On 04/27/2011 12:00 PM, Jan Kiszka wrote:
>> On 2011-04-27 09:27, Avi Kivity wrote:
>>>  On 04/26/2011 04:19 PM, Jan Kiszka wrote:
>>>>  I still have plans to consolidate MSI-X mask notifiers and KVM hooks, but
>>>>  that can wait until we go upstream.
>>>>
>>>>  This version still makes classic MSI usable in irqchip mode, now not
>>>>  only for PCI devices (AHCI, HDA) but also for the HPET (with msi=on).
>>>>  Moreover, it contains an additional patch to refresh the MSI IRQ routes
>>>>  after vmload.
>>>>
>>>
>>>  Patches 1-8 applied, thanks.  I'm not sure about 9 (hpet kvm msi
>>>  integration) - it seems very intrusive to do this to every
>>>  msi-supporting device.  At least for pci we get all pci devices done in
>>>  one shot.
>>
>> Right, it is a bit intrusive, but I do not see any real alternative.
>>
>>>
>>>  We could do this transparently in hw/apic.c.  When the message is sent
>>>  for the first time we look it up, fail, and update the kvm routing
>>>  entry.  Next time the lookup succeeds and we just use KVM_IRQ_LINE,
>>>  until the message changes and we need to update the irq entry again.
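
(For illustration, the caching scheme described above might look roughly
like the sketch below. The cache layout and every helper name are
invented for the example; none of this is existing qemu-kvm code.)

#include <stdint.h>
#include <stdbool.h>

#define MSI_CACHE_SIZE 16

typedef struct MSICacheEntry {
    uint64_t addr;     /* MSI address written by the device */
    uint32_t data;     /* MSI data value */
    int gsi;           /* routing entry allocated for this message */
    bool irqfd_bound;  /* true if an irqfd also uses this route */
    bool used;
} MSICacheEntry;

static MSICacheEntry msi_cache[MSI_CACHE_SIZE];

/* Deliver an MSI, installing a routing entry on the first miss. */
static void apic_deliver_msi_cached(uint64_t addr, uint32_t data)
{
    MSICacheEntry *e;
    int i;

    for (i = 0; i < MSI_CACHE_SIZE; i++) {
        e = &msi_cache[i];
        if (e->used && e->addr == addr && e->data == data) {
            /* Hit: inject through the cached route via KVM_IRQ_LINE. */
            kvm_inject_gsi(e->gsi);                /* hypothetical helper */
            return;
        }
    }

    /* Miss: take a free slot, install an MSI routing entry for this
     * message (KVM_SET_GSI_ROUTING), then inject as above. */
    e = msi_cache_get_free_slot();                 /* hypothetical helper */
    e->addr = addr;
    e->data = data;
    e->gsi = kvm_install_msi_route(addr, data);    /* hypothetical helper */
    e->irqfd_bound = false;
    e->used = true;
    kvm_inject_gsi(e->gsi);
}
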
>>
>> I thought about this, also for PCI devices that aren't assigned or
>> vhost-driven, but we would quickly end up with unused and never-freed
>> IRQ routing entries. We still need to track the vector configurations.
> 
> We can simply drop all route entries that are used exclusively in qemu 
> (i.e. not bound to an irqfd) and let the cache rebuild itself.

When should they be dropped?
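
(Purely as illustration of the eviction pass Avi suggests, reusing the
structures from the sketch above: drop every cached route that is not
bound to an irqfd and let later misses rebuild it. When to trigger the
pass, e.g. on a miss with no free slot left, is exactly the open
question; the names remain hypothetical.)

static void msi_cache_flush_qemu_only_routes(void)
{
    int i;

    for (i = 0; i < MSI_CACHE_SIZE; i++) {
        MSICacheEntry *e = &msi_cache[i];

        if (e->used && !e->irqfd_bound) {
            /* The route only served qemu-internal injection; it is
             * safe to drop, the next cache miss re-creates it. */
            kvm_remove_msi_route(e->gsi);  /* hypothetical helper */
            e->used = false;
        }
    }
}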

> 
>> What would help at least in the HPET case is a new DELIVER_MSI syscall
>> that completely skips the IRQ routing thing.
> 
> It would only help users of 2.6.40 kernels.

Exactly.
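
(For reference, a DELIVER_MSI interface as discussed above might look
like the following. No such ioctl exists; the struct layout and the
ioctl number are invented for the example.)

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>  /* KVMIO */

struct kvm_deliver_msi {
    __u64 address;  /* MSI address as the device would write it */
    __u32 data;     /* MSI data value */
    __u32 pad;
};

#define KVM_DELIVER_MSI _IOW(KVMIO, 0xa5, struct kvm_deliver_msi)

/* Userspace side: inject one MSI without touching the routing table. */
static int kvm_deliver_msi(int vm_fd, uint64_t addr, uint32_t data)
{
    struct kvm_deliver_msi msi = {
        .address = addr,
        .data    = data,
    };

    return ioctl(vm_fd, KVM_DELIVER_MSI, &msi);
}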

> 
>> Actually, we only need
>> routing for IRQs that shall be injected directly at kernel level. OTOH,
>> this service would not be available on existing kernels, and we would
>> not be able to simplify the PCI code that way (due to vhost
>> requirements). So I dropped this idea as well and accepted that IRQ
>> routing is the way to go.
> 
> I think that with the cache cleanup as outlined above it can work, no?
> 

I don't yet see how.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux