On Mon, Jan 22, 2018 at 10:57:57 +0000, Daniel P. Berrange wrote:
> On Mon, Jan 22, 2018 at 11:46:14AM +0100, Jiri Denemark wrote:
> > Whenever a different kernel is booted, some capabilities related to KVM
> > (such as CPUID bits) may change. We need to refresh the cache to see the
> > changes.
> >
> > Signed-off-by: Jiri Denemark <jdenemar@xxxxxxxxxx>
> > ---
> >
> > Notes:
> >     The capabilities may also change if a parameter passed to a kvm module
> >     changes (kvm_intel.nested is a good example), so this is not a complete
> >     solution, but we're hopefully getting closer to it :-)
>
> You mean getting closer to a situation where we are effectively storing the
> cache on tmpfs, because we invalidate it on every reboot!

Well, that's a possible result, yes. Although it is incomplete and at the
same time invalidates the cache too often. It's possible we won't be able to
come up with anything more clever anyway.

> I think sometime soon we're going to need to consider whether our cache
> invalidation approach is fundamentally broken. We have a huge amount of
> stuff we query from QEMU, but only a tiny amount is dependent on the host
> kernel / microcode / kvm module options. Should we go back to invalidating
> only when the libvirt/qemu binary changes, but then do partial invalidation
> of specific data items for kernel/microcode changes?

On the other hand, once we have QEMU running, probing for all capabilities
rather than just the limited set that depends on the host shouldn't make a
big difference. I haven't actually measured it, though. And we only
invalidate the cache more often for KVM, which makes the extra invalidation
pretty limited already since it only affects the capabilities of a single
binary.

Jirka

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
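
For illustration only, a minimal sketch of what folding a kvm module
parameter (the kvm_intel.nested example above) into the cache validity check
could look like. The readKvmModuleParam() and kvmParamUnchanged() helpers are
hypothetical and not part of the patch or of libvirt; the only thing assumed
from the kernel is the standard /sys/module/<module>/parameters/<param>
sysfs location.

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: read the current value of a kvm module parameter
 * from sysfs, e.g. /sys/module/kvm_intel/parameters/nested.  Returns 0 on
 * success, -1 if the parameter cannot be read (module not loaded, etc.). */
static int
readKvmModuleParam(const char *module, const char *param,
                   char *buf, size_t buflen)
{
    char path[256];
    FILE *fp;

    snprintf(path, sizeof(path),
             "/sys/module/%s/parameters/%s", module, param);

    if (!(fp = fopen(path, "r")))
        return -1;

    if (!fgets(buf, buflen, fp)) {
        fclose(fp);
        return -1;
    }
    fclose(fp);

    /* Strip the trailing newline so stored and current values compare equal. */
    buf[strcspn(buf, "\n")] = '\0';
    return 0;
}

/* Sketch of a validity check: the cache would store the parameter value seen
 * at probe time ("cached") and compare it against the current one on load,
 * refreshing the capabilities only when the two differ. */
static int
kvmParamUnchanged(const char *cached)
{
    char current[64];

    if (readKvmModuleParam("kvm_intel", "nested",
                           current, sizeof(current)) < 0)
        return 0; /* treat an unreadable parameter as a change */

    return strcmp(cached, current) == 0;
}

int main(void)
{
    /* Pretend the cache was created while nested=N; if the module parameter
     * changed since then, the capabilities cache should be refreshed. */
    printf("cache still valid: %s\n", kvmParamUnchanged("N") ? "yes" : "no");
    return 0;
}

The same comparison generalizes to any other host-dependent input (kernel
release, microcode version): store the value observed when probing and
invalidate only the affected entries when it changes, rather than discarding
the whole cache on every boot.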