On 2011-10-18 14:17, Michael S. Tsirkin wrote:
> On Mon, Oct 17, 2011 at 09:19:34PM +0200, Jan Kiszka wrote:
>> On 2011-10-17 17:37, Michael S. Tsirkin wrote:
>>> On Mon, Oct 17, 2011 at 01:19:56PM +0200, Jan Kiszka wrote:
>>>> On 2011-10-17 13:06, Avi Kivity wrote:
>>>>> On 10/17/2011 11:27 AM, Jan Kiszka wrote:
>>>>>> This cache will help us implement KVM in-kernel irqchip support
>>>>>> without spreading hooks all over the place.
>>>>>>
>>>>>> KVM requires us to register it first and then deliver it by raising a
>>>>>> pseudo IRQ line returned on registration. While this could be changed
>>>>>> for QEMU-originated MSI messages by adding direct MSI injection, we will
>>>>>> still need this translation for irqfd-originated messages. The
>>>>>> MSIRoutingCache will allow us to track those registrations and update
>>>>>> them lazily before the actual delivery. This avoids having to track MSI
>>>>>> vectors at device level (like qemu-kvm currently does).
>>>>>>
>>>>>>
>>>>>> +typedef enum {
>>>>>> +    MSI_ROUTE_NONE = 0,
>>>>>> +    MSI_ROUTE_STATIC,
>>>>>> +} MSIRouteType;
>>>>>> +
>>>>>> +struct MSIRoutingCache {
>>>>>> +    MSIMessage msg;
>>>>>> +    MSIRouteType type;
>>>>>> +    int kvm_gsi;
>>>>>> +    int kvm_irqfd;
>>>>>> +};
>>>>>> +
>>>>>> diff --git a/hw/pci.h b/hw/pci.h
>>>>>> index 329ab32..5b5d2fd 100644
>>>>>> --- a/hw/pci.h
>>>>>> +++ b/hw/pci.h
>>>>>> @@ -197,6 +197,10 @@ struct PCIDevice {
>>>>>>      MemoryRegion rom;
>>>>>>      uint32_t rom_bar;
>>>>>>
>>>>>> +    /* MSI routing chaches */
>>>>>> +    MSIRoutingCache *msi_cache;
>>>>>> +    MSIRoutingCache *msix_cache;
>>>>>> +
>>>>>>      /* MSI entries */
>>>>>>      int msi_entries_nr;
>>>>>>      struct KVMMsiMessage *msi_irq_entries;
>>>>>
>>>>> IMO this needlessly leaks kvm information into core qemu. The cache
>>>>> should be completely hidden in kvm code.
>>>>>
>>>>> I think msi_deliver() can hide the use of the cache completely. For
>>>>> pre-registered events like kvm's irqfd, you can use something like
>>>>>
>>>>> qemu_irq qemu_msi_irq(MSIMessage msg)
>>>>>
>>>>> for non-kvm, it simply returns a qemu_irq that triggers a stl_phys();
>>>>> for kvm, it allocates an irqfd and a permanent entry in the cache and
>>>>> returns a qemu_irq that triggers the irqfd.
>>>>
>>>> See my previous mail: you want to track the life-cycle of an MSI
>>>> source to avoid generating routes for identical sources. A message is
>>>> not a source. Two identical messages can come from different sources.
>>>
>>> Since MSI messages are edge triggered, I don't see how this
>>> would work without losing interrupts. And AFAIK,
>>> existing guests do not use the same message for
>>> different sources.
>>
>> Just like we handle shared edge-triggered line-based IRQs, shared MSIs
>> are in principle feasible as well.
>>
>> Jan
>>
>
> For this case it seems quite harmless to use multiple
> routes for identical sources.

Unless we track the source (via the MSIRoutingCache abstraction), there can
be no multiple routes. The core cannot differentiate between identical
messages and thus will not create multiple routes.

But that's actually a corner case, and we could probably live with it. The
real question is whether we want to search for MSI routes on each message
delivery.

Jan

--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
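
For reference, a minimal sketch of the qemu_msi_irq() helper Avi describes
above, under stated assumptions: the MSIMessage and MSIRoutingCache
definitions are abbreviated copies of the ones in the patch,
kvm_msi_route_add() and kvm_irqfd_add() are hypothetical placeholders for
whatever KVM glue would register the route and irqfd, and gating on
kvm_irqchip_in_kernel() is likewise assumed; only the stl_phys() path and
the eventfd write correspond to mechanisms that already exist.

/*
 * Illustrative sketch only, not the proposed implementation.
 * kvm_msi_route_add() and kvm_irqfd_add() are hypothetical helpers.
 */
#include <stdint.h>
#include <unistd.h>
#include <glib.h>
#include "hw/irq.h"      /* qemu_irq, qemu_allocate_irqs() */
#include "kvm.h"         /* kvm_irqchip_in_kernel() (assumed predicate) */
#include "cpu-common.h"  /* stl_phys() */

typedef struct MSIMessage {
    uint64_t address;   /* doorbell address programmed by the guest */
    uint32_t data;      /* MSI data value */
} MSIMessage;

typedef struct MSIRoutingCache {
    MSIMessage msg;
    int kvm_gsi;        /* pseudo GSI returned when the route is registered */
    int kvm_irqfd;      /* eventfd bound to that GSI, -1 without irqchip */
} MSIRoutingCache;

static void msi_irq_handler(void *opaque, int n, int level)
{
    MSIRoutingCache *c = opaque;
    uint64_t one = 1;

    if (!level) {
        return;                                       /* MSIs are edge-triggered */
    }
    if (c->kvm_irqfd >= 0) {
        (void)write(c->kvm_irqfd, &one, sizeof(one)); /* kick the irqfd */
    } else {
        stl_phys(c->msg.address, c->msg.data);        /* userspace delivery */
    }
}

/* Bind one MSI message to a qemu_irq. With an in-kernel irqchip the route
 * and irqfd are set up once here, so raising the IRQ later needs no
 * per-message route lookup. */
qemu_irq qemu_msi_irq(MSIMessage msg)
{
    MSIRoutingCache *c = g_malloc0(sizeof(*c));

    c->msg = msg;
    c->kvm_irqfd = -1;
    if (kvm_irqchip_in_kernel()) {
        c->kvm_gsi = kvm_msi_route_add(&c->msg);      /* hypothetical */
        c->kvm_irqfd = kvm_irqfd_add(c->kvm_gsi);     /* hypothetical */
    }
    return qemu_allocate_irqs(msi_irq_handler, c, 1)[0];
}

The point of contention in the thread shows up directly in such a helper:
the route is bound to one message at allocation time, so either the caller
re-allocates whenever the guest reprograms the message, or delivery has to
look the route up (or lazily update the cache) per message.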