Re: [PATCH] qemu: Refresh caps cache after booting a different kernel

On Tue, Jan 30, 2018 at 13:26:49 +0300, Nikolay Shirokovskiy wrote:
> 
> 
> On 22.01.2018 15:36, Daniel P. Berrange wrote:
> > On Mon, Jan 22, 2018 at 01:31:21PM +0100, Jiri Denemark wrote:
> >> On Mon, Jan 22, 2018 at 10:57:57 +0000, Daniel P. Berrange wrote:
> >>> On Mon, Jan 22, 2018 at 11:46:14AM +0100, Jiri Denemark wrote:
> >>>> Whenever a different kernel is booted, some capabilities related to KVM
> >>>> (such as CPUID bits) may change. We need to refresh the cache to see the
> >>>> changes.
> >>>>
> >>>> Signed-off-by: Jiri Denemark <jdenemar@xxxxxxxxxx>
> >>>> ---
> >>>>
> >>>> Notes:
> >>>>     The capabilities may also change if a parameter passed to a kvm module
> >>>>     changes (kvm_intel.nested is a good example), so this is not a complete
> >>>>     solution, but we're hopefully getting closer to it :-)
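
As an aside, kvm module options are exposed under
/sys/module/<module>/parameters/, so a cache entry could record their
values and compare them on the next lookup. A minimal hypothetical
sketch of such a helper (the name is invented here, this is not
libvirt code):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: read one kvm module parameter from sysfs so
     * its value can be stored next to the cached capabilities and
     * compared on the next cache lookup. */
    static int
    readKvmParam(const char *module, const char *param,
                 char *buf, size_t buflen)
    {
        char path[256];
        FILE *fp;

        snprintf(path, sizeof(path),
                 "/sys/module/%s/parameters/%s", module, param);

        if (!(fp = fopen(path, "r")))
            return -1;                  /* module not loaded */

        if (!fgets(buf, (int)buflen, fp)) {
            fclose(fp);
            return -1;
        }
        fclose(fp);

        buf[strcspn(buf, "\n")] = '\0'; /* strip trailing newline */
        return 0;
    }

Storing the result of readKvmParam("kvm_intel", "nested", ...) at probe
time and re-reading it on lookup would catch the module-option case
without throwing the whole cache away.
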
> >>>
> >>> You mean getting closer to a situation where we are effectively storing the
> >>> cache on tmpfs, because we invalidate it on every reboot!
> >>
> >> Well, that's a possible result, yes. Although the approach is incomplete
> >> and at the same time invalidates the cache too often. It's possible we
> >> won't be able to come up with anything more clever anyway.
> >>
> >>> I think sometime soon we're going to need to consider whether our cache
> >>> invalidation approach is fundamentally broken.  We have a huge amount of
> >>> stuff we query from QEMU, but only a tiny amount is dependent on host
> >>> kernel / microcode / kvm mod options. Should we go back to invalidating
> >>> only when the libvirt/qemu binary changes, but then do partial
> >>> invalidation of specific data items for kernel/microcode changes?
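
To make that partial-invalidation idea concrete, the cached data could
be split by what it is keyed on. The struct below is purely
illustrative (the field names are invented for this sketch; this is
not how the libvirt cache is actually laid out):

    #include <time.h>

    /* Illustrative split: the host-dependent subset carries its own
     * validity key so it can be re-probed alone, leaving the (much
     * larger) binary-dependent data intact across reboots. */
    typedef struct {
        /* Scoped to the qemu binary: survives a reboot. */
        time_t binaryCtime;           /* ctime of the qemu binary */
        unsigned long libvirtVersion; /* version that did the probing */
        /* ... QMP schema, device list, machine types, ... */

        /* Scoped to the host: refreshed when the kernel, microcode or
         * kvm module options change. */
        char *kernelRelease;          /* uname -r at probe time */
        unsigned int microcodeVersion;
        char *kvmNested;              /* e.g. value of kvm_intel.nested */
        /* ... host CPU model / CPUID data reported by KVM ... */
    } qemuCapsCacheEntry;
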
> >>
> >> On the other hand, once we have QEMU running anyway, probing for all
> >> capabilities vs just the limited set that depends on the host shouldn't
> >> make a big difference. I haven't actually measured it, though. Moreover,
> >> we only invalidate the cache more often for KVM, which makes the extra
> >> invalidation pretty limited already since it affects the capabilities of
> >> a single binary only.
> > 
> > Oh true, I didn't notice you'd only done invalidation for the KVM code
> > path. That should avoid the major pain that GNOME Boxes saw where we
> > spent ages probing 20 QEMU binaries on every startup.
> > 
> 
> Hi. Hope this topic is not too old...
> 
> If this is what the qemu caps cache is for - dealing with a lot of
> binaries - then we could disable the cache for KVM only and solve the
> issues with kvm_intel.nested.

Not really. If you are going to start a bunch of KVM VMs at the same
time, you'd still pay a considerable penalty for re-detecting the
capabilities every single time.

The conversation with qemu that does the probing exchanges around 300k
of JSON, which libvirt then has to process.

I don't think we can stop caching capabilities for qemu given the volume
of stuff we need to query.
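
For illustration, the cheap validity check that makes the cache pay off
could look like the sketch below, assuming an entry split along the
lines of the earlier struct (hypothetical names again, not the actual
virQEMUCapsIsValid()):

    #include <stdbool.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/utsname.h>

    /* Hypothetical per-lookup check: a stat() plus a uname() instead
     * of spawning qemu and parsing ~300k of JSON on every VM start. */
    static bool
    capsEntryIsValid(const qemuCapsCacheEntry *entry, const char *binary)
    {
        struct stat sb;
        struct utsname ut;

        if (stat(binary, &sb) < 0 || sb.st_ctime != entry->binaryCtime)
            return false;   /* binary replaced: full re-probe needed */

        if (uname(&ut) < 0 ||
            strcmp(ut.release, entry->kernelRelease) != 0)
            return false;   /* new kernel: KVM bits may have changed */

        return true;        /* cache hit: no qemu process, no JSON */
    }
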

