Re: [PATCH 2/2] kvm: Add ioctl for gathering debug counters

On 23/01/20 13:32, Alexander Graf wrote:
>> See above: I am not sure they are the same story because their consumers
>> might be very different from registers.  Registers are generally
>> consumed by programs (to migrate VMs, for example) and only occasionally
>> by humans, while stats are meant to be consumed by humans.  We may
>> disagree on whether this justifies a completely different API...
> 
> I don't fully agree on the "human" part here.

I agree it's not entirely about humans, but in general the consumers are
going to be rules and queries in monitoring tools, where 1) the
monitoring tools' output is generally not KVM-specific, and 2) the rules
and queries will be written by humans.

So if the kernel produces insn_emulation_fail, the plugin for the
monitoring tool will just log kvm.insn_emulation_fail.  If the kernel
produces 0x10042, the plugin will have to convert it first and then log
it.  This is why I'm not sure that providing numbers instead of strings
is actually less work for userspace.
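
To make that concrete: with numeric IDs, every consumer ends up
carrying a table like the one below just to recover a loggable name.
This is only a sketch; the 0x10042 value is made up (as above) and the
names are hypothetical, not anything defined by the patch.

    #include <stddef.h>

    /* Hypothetical userspace lookup table; IDs and names are made up. */
    struct kvm_stat_desc {
            unsigned int id;
            const char *name;
    };

    static const struct kvm_stat_desc kvm_stat_descs[] = {
            { 0x10042, "kvm.insn_emulation_fail" },
            /* ... one entry per counter, kept in sync with the kernel ... */
    };

    /* Map a kernel-reported ID back to the name the monitoring tool logs. */
    static const char *kvm_stat_to_name(unsigned int id)
    {
            size_t i;

            for (i = 0; i < sizeof(kvm_stat_descs) / sizeof(kvm_stat_descs[0]); i++)
                    if (kvm_stat_descs[i].id == id)
                            return kvm_stat_descs[i].name;
            return "kvm.unknown";
    }

With strings coming from the kernel, all of that bookkeeping disappears
and the plugin logs the name directly.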

Paolo

> At the end of the day, you
> want stats because you want to act on stats. Ideally, you want to do
> that fully automatically. Let me give you a few examples:
> 
> 1) insn_emulation_fail triggers
> 
> You may want to feed all the failures into a database to check whether
> there is something wrong in the emulator.
> 
> 2) (remote_)tlb_flush beyond a certain threshold
>
> If you see that you're constantly flushing remote TLBs, there's a good
> chance that you've found a workload that may need tuning in KVM. You want
> to gather those stats across your full fleet of hosts, so that for the
> few occasions when you hit it, you can work with the actual VM owners to
> potentially improve their performance.
> 
> 3) exits beyond a certain threshold
>
> You know roughly how many exits your fleet usually sees, so you can
> configure an upper threshold on that. Once you have an automated way
> to notify you when the threshold is exceeded, you can check what that
> particular guest did to trigger so many exits.
> 
> 
> ... and I'm sure there's room for many more stats that would be
> useful for determining the health of a KVM environment, such as a
> "vcpu steal time" one or a "maximum time between two VMENTERs while
> the guest was in the running state".
> 
> All of these should eventually feed into something bigger that collects
> the numbers across your full VM fleet, so that a human can take action
> based on them. However, that means the values no longer go directly to
> a human; they need to feed into machines first. And for that, exact,
> constant identifiers make much more sense.
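
To illustrate the contrast, a constant-identifier interface along the
lines described above might look roughly like the sketch below.  The
names and values here are hypothetical, not the patch's actual ABI.

    #include <linux/types.h>

    /* Hypothetical ABI sketch: stable numeric IDs for machine consumers. */
    enum kvm_stat_id {
            KVM_STAT_INSN_EMULATION_FAIL = 0x10042, /* value made up */
            KVM_STAT_REMOTE_TLB_FLUSH    = 0x10043,
            KVM_STAT_EXITS               = 0x10044,
    };

    struct kvm_stat {
            __u32 id;      /* one of enum kvm_stat_id, fixed once published */
            __u64 value;   /* current counter value */
    };

Once published, the IDs never change, so a fleet-wide collector can
match on them without any string handling; that stability is the
property being argued for above.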



