On 07/06/2019 15:15, Auger Eric wrote:
> Hi Marc,
> On 6/7/19 2:44 PM, Marc Zyngier wrote:
>> Hi Eric,
>>
>> On 07/06/2019 13:09, Auger Eric wrote:
>>> Hi Marc,

[...]

>>>> +#define LPI_CACHE_SIZE(kvm) (atomic_read(&(kvm)->online_vcpus) * 4)
>>> Couldn't the cache be a function of the number of allocated LPIs? We
>>> could realloc the list accordingly. I don't see why it depends on the
>>> number of vcpus rather than on the number of assigned devices/MSIs.
>>
>> How do you find out about the number of LPIs? That's really for the
>> guest to decide what it wants to do. Also, KVM itself doesn't have much
>> of a clue about the number of assigned devices or their MSI capability.
>> That's why I've suggested that userspace could be involved here.
>
> Can't we set up a heuristic based on dist->lpi_list_count, which is
> incremented in vgic_add_lpi() on MAPI/MAPTI? Of course not all of those
> are assigned-device LPIs. But currently the cache is used for all LPIs,
> including those injected from userspace via KVM_SIGNAL_MSI.

I'm happy to grow the cache on MAPI, but that doesn't solve the real
problem: how do we cap it to a value that is "good enough"?

> Otherwise, is there an existing interface between KVM and VFIO that
> could be leveraged to pass this information between the two?

Same thing. The problem is in defining the limit. I guess only people
deploying real workloads can tell us what a reasonable default is, and
we can also make that a tuneable parameter...

	M.

-- 
Jazz is not dead. It just smells funny...
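
[Editor's note: to make the trade-off in the thread concrete, here is a
rough sketch, not the posted patch, of the kind of heuristic being
discussed: let the cache track dist->lpi_list_count (Eric's suggestion),
keep the original 4-per-vcpu value as a floor, and cap the result with a
module parameter as the tuneable Marc mentions. The names lpi_cache_max
and lpi_cache_size(), and the default cap of 256, are invented for
illustration only.]

	#include <linux/kvm_host.h>
	#include <linux/moduleparam.h>
	#include <kvm/arm_vgic.h>

	/*
	 * Hypothetical upper bound on cached LPI translations; the
	 * thread leaves the "good enough" default as an open question.
	 */
	static unsigned int lpi_cache_max = 256;
	module_param(lpi_cache_max, uint, 0644);

	static unsigned int lpi_cache_size(struct kvm *kvm)
	{
		struct vgic_dist *dist = &kvm->arch.vgic;
		unsigned int floor = atomic_read(&kvm->online_vcpus) * 4;
		unsigned int want;

		/*
		 * Grow with the number of mapped LPIs (lpi_list_count is
		 * bumped in vgic_add_lpi() on MAPI/MAPTI), never drop
		 * below the per-vcpu floor from the original patch, and
		 * clamp to the tuneable cap.
		 */
		want = max_t(unsigned int, floor, dist->lpi_list_count);
		return min_t(unsigned int, want, lpi_cache_max);
	}

[Whatever formula is chosen, the cap itself remains the contentious
part: as the thread notes, only real deployments can say what a sensible
default is, hence the suggestion to expose it as a tuneable.]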