On Wed, Apr 27, 2011 at 03:31:16PM +0200, Jan Kiszka wrote:
> On 2011-04-27 14:12, Avi Kivity wrote:
> > On 04/27/2011 02:21 PM, Jan Kiszka wrote:
> >> On 2011-04-27 11:14, Avi Kivity wrote:
> >>> On 04/27/2011 12:06 PM, Jan Kiszka wrote:
> >>>>>
> >>>>> We can simply drop all route entries that are used exclusively in qemu
> >>>>> (i.e. not bound to an irqfd) and let the cache rebuild itself.
> >>>>
> >>>> When should they be dropped?
> >>>
> >>> Whenever we need to allocate a new routing entry, but cannot because the
> >>> table is full.
> >>>
> >>>   def kvm_send_msi_message(addr, val):
> >>>       gsi = route_cache.get((addr, val), None)
> >>>       if gsi is None:
> >>>           expire_volatile_route_cache_entries_if_full()
> >>>           gsi = alloc_gsi_cache_entry()
> >>>           route_cache[(addr, val)] = gsi
> >>>           update_route_cache()
> >>>       kvm_irq_line(gsi, 1)
> >>>       kvm_irq_line(gsi, 0)
> >>>
> >>> The code would have to be in kvm.c, where it can track whether an entry
> >>> is volatile or persistent.
> >>
> >> Yeah, what I forgot is the other side of this caching: looking up the
> >> GSI from the MSI addr/data tuple. That's more complex than the current
> >> O(1) way via device.msitable[vector]. Well, we could use some hash
> >> table. But is it worth it? IMHO only if we could radically simplify the
> >> PCI device hooking as well (HPET is negligible compared to that). But
> >> I'm not yet sure if we can, given what vhost needs.
> >
> > A hash table is indeed overcomplicated for this.
> >
> > How about a replacement for stl_phys() for the MSI case:
> >
> > -    stl_phys(timer->fsb >> 32, timer->fsb & 0xffffffff);
> > +    msi_stl_phys(timer->fsb >> 32, timer->fsb & 0xffffffff,
> > +                 &timer->msi_cache);
> >
> > msi_stl_phys(target_phys_addr_t addr, uint32_t data, MSICache *cache)

Let's try to use uint64_t for addresses. This is what MSI deals with
anyway.
> > {
> >     if (kvm_msi_enabled() && (addr & MSI_ADDR_MASK) == msi_base_addr) {
> >         if (cache->addr != addr || cache->data != data) {
> >             kvm_update_msi_cache(cache, addr, data);
> >         }
> >         kvm_irq_line(cache->gsi, 1);
> >         kvm_irq_line(cache->gsi, 0);

I think this second ioctl isn't needed on latest kernels. Was it needed
for older ones? Is there a way to detect that?

> >         return;
> >     }
> >     stl_phys(addr, data);
> > }
>
> I was planning for an MSI short path anyway. Also for TCG, it's pointless
> to go through lengthy stl_phys if we know it's supposed to be an MSI
> message.

Well, in theory we don't know that. All MSI does is a store; it could
just go into memory. For PV (virtio), which is what I originally coded
up msix for, the assumption that it will go into the APIC is fine, I
think, but I'm a bit less sure with emulated devices.

> >
> > This bit of code would need to be updated for IOMMU and interrupt
> > remapping,
>
> Quite a few updates are required for that anyway.

Yes, I don't think we need to worry about that just yet.

> > but at least it means that devices don't need significant
> > change for kvm support. We could also allocate a single gsi for use in
> > hw/apic.c so hacks like using DMA to generate an MSI will work (it will
> > be slow, though).
>
> Needs some thought; maybe it will work. Though, it's not yet clear to
> me if we can drop the kvm hooks from msi/msix.c and still support
> vhost/dev-assignment this way. Just to keep hpet.c cleaner, I don't
> think it's worth the effort.
>
> Jan
>
> --
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux
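[mst: To make the eviction scheme sketched above concrete, here is a
runnable toy model of the route cache. This is not QEMU code; the class
name, the bind_irqfd/get_gsi split, and the capacity handling are all
illustrative. The only real point is the policy Avi describes: when the
routing table is full, drop every entry not pinned by an irqfd and let
the cache repopulate on demand.]

```python
class MSIRouteCache:
    """Toy model: (addr, data) -> gsi cache with volatile-entry eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = {}         # (addr, data) -> gsi
        self.persistent = set()  # gsi values bound to an irqfd; never evicted
        self.next_gsi = 0
        self.free_gsis = []

    def bind_irqfd(self, addr, data):
        # irqfd users (vhost, device assignment) pin their route.
        gsi = self.get_gsi(addr, data)
        self.persistent.add(gsi)
        return gsi

    def get_gsi(self, addr, data):
        gsi = self.routes.get((addr, data))
        if gsi is None:
            if len(self.routes) == self.capacity:
                self._expire_volatile_entries()
            gsi = self._alloc_gsi()
            self.routes[(addr, data)] = gsi
            # Real code would now issue KVM_SET_GSI_ROUTING to push the
            # updated table to the kernel.
        return gsi

    def _alloc_gsi(self):
        if self.free_gsis:
            return self.free_gsis.pop()
        gsi = self.next_gsi
        self.next_gsi += 1
        return gsi

    def _expire_volatile_entries(self):
        # Drop every route not bound to an irqfd; the cache rebuilds
        # itself as messages are sent again.
        for key, gsi in list(self.routes.items()):
            if gsi not in self.persistent:
                del self.routes[key]
                self.free_gsis.append(gsi)
```

[mst: Note the degenerate case: if every cached entry is irqfd-bound,
eviction frees nothing and the toy silently grows past capacity; real
code would have to fail the allocation instead.]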