On 2011-04-27 11:14, Avi Kivity wrote:
> On 04/27/2011 12:06 PM, Jan Kiszka wrote:
>>>
>>> We can simply drop all route entries that are used exclusively in qemu
>>> (i.e. not bound to an irqfd) and let the cache rebuild itself.
>>
>> When should they be dropped?
> 
> Whenever we need to allocate a new routing entry, but cannot because it
> is full.
> 
> def kvm_send_msi_message(addr, val):
>     gsi = route_cache.get((addr, val), None)
>     if gsi is None:
>         expire_volatile_route_cache_entries_if_full()
>         gsi = alloc_gsi_cache_entry()
>         route_cache[(addr, val)] = gsi
>         update_route_cache()
>     kvm_irq_line(gsi, 1)
>     kvm_irq_line(gsi, 0)
> 
> The code would have to be in kvm.c, where it can track whether an entry
> is volatile or persistent.

Yeah, what I forgot is the other side of this caching: looking up the GSI
from the MSI addr/data tuple. That's more complex than the current O(1)
lookup via device.msitable[vector]. Well, we could use a hash table. But is
it worth it? IMHO, only if we could radically simplify the PCI device
hooking as well (HPET is negligible compared to that). But I'm not yet sure
whether we can, given what vhost needs.

Jan

-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
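
For readers following along: below is a minimal, self-contained sketch (not
QEMU code) of the scheme Avi outlines above, i.e. a route cache keyed on the
MSI (addr, data) tuple with a fixed number of GSIs, where entries not bound
to an irqfd ("volatile") are expired on demand to make room. The dict lookup
is the hash table Jan mentions as the replacement for the per-device
msitable[vector] indexing. All names (RouteCache, MAX_GSI_ROUTES, etc.) are
invented for illustration only.

# Sketch of Avi's volatile/persistent route-cache idea; names are hypothetical.
MAX_GSI_ROUTES = 8          # hypothetical routing-table capacity


class RouteCache:
    def __init__(self, kvm_irq_line):
        self.kvm_irq_line = kvm_irq_line   # injected callback, e.g. a KVM_IRQ_LINE wrapper
        self.routes = {}                   # (addr, data) -> gsi; the hash-table lookup
        self.persistent = set()            # GSIs bound to an irqfd; never expired
        self.free_gsis = set(range(MAX_GSI_ROUTES))

    def bind_irqfd(self, addr, data):
        """Mark a route as persistent, as an irqfd user (e.g. vhost) would."""
        gsi = self._lookup_or_alloc(addr, data)
        self.persistent.add(gsi)
        return gsi

    def send_msi(self, addr, data):
        """QEMU-internal injection path: resolve the route, then toggle the line."""
        gsi = self._lookup_or_alloc(addr, data)
        self.kvm_irq_line(gsi, 1)
        self.kvm_irq_line(gsi, 0)

    def _lookup_or_alloc(self, addr, data):
        gsi = self.routes.get((addr, data))
        if gsi is None:
            self._expire_volatile_if_full()
            gsi = self.free_gsis.pop()
            self.routes[(addr, data)] = gsi
            # Real code would push the updated table to the kernel here
            # (KVM_SET_GSI_ROUTING).
        return gsi

    def _expire_volatile_if_full(self):
        if self.free_gsis:
            return
        # Drop every route that is only used for QEMU-internal injection
        # and let the cache rebuild itself on demand.
        for key, gsi in list(self.routes.items()):
            if gsi not in self.persistent:
                del self.routes[key]
                self.free_gsis.add(gsi)
        if not self.free_gsis:
            raise RuntimeError("routing table exhausted by persistent entries")

Whether the extra hash table pays off is exactly Jan's open question; the
sketch only shows that the bookkeeping itself stays small once the cache
lives in one place (kvm.c in Avi's proposal) that knows which entries are
irqfd-bound.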