Re: [PATCH v1 3/3] KVM: arm64: Add histogram stats for handling time of arch specific exit reasons

On Thu, 23 Sep 2021 00:22:12 +0100,
David Matlack <dmatlack@xxxxxxxxxx> wrote:
> 
> On Wed, Sep 22, 2021 at 11:53 AM Marc Zyngier <maz@xxxxxxxxxx> wrote:
> >
> > On Wed, 22 Sep 2021 19:13:40 +0100,
> > Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > > Stepping back a bit, this is one piece of the larger issue of how to
> > > modernize KVM for hyperscale usage.  BPF and tracing are great when
> > > the debugger has root access to the machine and can rerun the
> > > failing workload at will.  They're useless for identifying trends
> > > across large numbers of machines, triaging failures after the fact,
> > > debugging performance issues with workloads that the debugger
> > > doesn't have direct access to, etc...
> >
> > Which is why I suggested the use of trace points as kernel module
> > hooks to perform whatever accounting you require. This would give you
> > the same level of detail this series exposes.
> 
> How would a kernel module (or BPF program) get the data to userspace?
> The KVM stats interface that Jing added requires KVM to know how to
> get the data when handling the read() syscall.

I don't think it'd be that hard to funnel stats generated in a module
through the same read interface.
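
Something like the below is about all the accounting side would take
(rough, untested sketch: the module and function names are made up,
and the probe prototype assumes the current arm64 kvm_exit TP_PROTO,
i.e. (ret, esr_ec, vcpu_pc) -- worth double-checking against
arch/arm64/kvm/trace_arm.h):

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/tracepoint.h>

/* One counter per ESR_ELx exception class (6-bit EC field). */
static atomic64_t ec_count[64];

static struct tracepoint *kvm_exit_tp;

/* Probe prototype: 'void *data' followed by the tracepoint's TP_PROTO. */
static void probe_kvm_exit(void *data, int ret, unsigned int esr_ec,
                           unsigned long vcpu_pc)
{
        if (esr_ec < ARRAY_SIZE(ec_count))
                atomic64_inc(&ec_count[esr_ec]);
}

static void match_tp(struct tracepoint *tp, void *priv)
{
        /* The KVM tracepoints aren't exported, so look them up by name. */
        if (!strcmp(tp->name, "kvm_exit"))
                *(struct tracepoint **)priv = tp;
}

static int __init exit_acct_init(void)
{
        for_each_kernel_tracepoint(match_tp, &kvm_exit_tp);
        if (!kvm_exit_tp)
                return -ENOENT;

        return tracepoint_probe_register(kvm_exit_tp, probe_kvm_exit, NULL);
}

static void __exit exit_acct_exit(void)
{
        tracepoint_probe_unregister(kvm_exit_tp, probe_kvm_exit, NULL);
        tracepoint_synchronize_unregister();
}

module_init(exit_acct_init);
module_exit(exit_acct_exit);
MODULE_LICENSE("GPL");

Exposing the result through the stats file descriptor is then mostly
a plumbing exercise.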

> > And I'm all for adding these hooks where it matters as long as they
> > are not considered ABI and don't appear in /sys/kernel/debug/tracing
> > (in general, no userspace visibility).
> >
> > The scheduler is an interesting example of this, as it exposes all
> > sorts of hooks for people to look under the hood. No user of the
> > hook? No overhead, no additional memory used. I may have heard that
> > Android makes heavy use of this.
> >
> > Because I'm pretty sure that whatever stat we expose, every cloud
> > vendor will want their own variant, so we may just as well put the
> > matter in their own hands.
> 
> I think this can be mitigated by requiring sufficient justification
> when adding a new stat to KVM. There has to be a real use-case and it
> has to be explained in the changelog. If a stat has a use-case for one
> cloud provider, it will likely be useful to other cloud providers as
> well.

My (limited) personal experience is significantly different. The
diversity of setups makes the set of relevant stats pretty hard to
guess (there isn't much in common between using KVM to strictly
partition a system and oversubscribing it).

> 
> >
> > I also wouldn't discount BPF as a possibility. You could perfectly
> > well have permanent BPF programs running from the moment you boot
> > the system, and use them to generate your histograms. This isn't
> > necessarily a one-off, debug-only solution.
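
To make that concrete, here is roughly what such a permanent program
could look like with libbpf (untested sketch; it deliberately doesn't
touch any tracepoint arguments, so it doesn't depend on arch-specific
fields -- it just timestamps kvm_exit and buckets the time until the
next kvm_entry, which is more or less the "handling time" this series
is after):

// SPDX-License-Identifier: GPL-2.0
/* kvm_exit_lat.bpf.c: log2 histogram of exit-to-reentry time */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define MAX_SLOTS 32

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 8192);
        __type(key, __u32);             /* vcpu thread id */
        __type(value, __u64);           /* timestamp taken at kvm_exit */
} exit_ts SEC(".maps");

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __uint(max_entries, MAX_SLOTS);
        __type(key, __u32);
        __type(value, __u64);
} hist SEC(".maps");

SEC("tp/kvm/kvm_exit")
int handle_exit(void *ctx)
{
        __u32 tid = (__u32)bpf_get_current_pid_tgid();
        __u64 ts = bpf_ktime_get_ns();

        bpf_map_update_elem(&exit_ts, &tid, &ts, BPF_ANY);
        return 0;
}

SEC("tp/kvm/kvm_entry")
int handle_entry(void *ctx)
{
        __u32 tid = (__u32)bpf_get_current_pid_tgid();
        __u64 *tsp = bpf_map_lookup_elem(&exit_ts, &tid);
        __u64 delta, *cnt;
        __u32 slot = 0;

        if (!tsp)
                return 0;

        delta = bpf_ktime_get_ns() - *tsp;
        bpf_map_delete_elem(&exit_ts, &tid);

        /* log2 bucketing of the exit handling time, in ns */
        while (delta > 1 && slot < MAX_SLOTS - 1) {
                delta >>= 1;
                slot++;
        }

        cnt = bpf_map_lookup_elem(&hist, &slot);
        if (cnt)
                (*cnt)++;
        return 0;
}

char LICENSE[] SEC("license") = "GPL";

Userspace loads it once at boot and dumps the maps whenever it wants
a snapshot; a per-exit-reason variant is mostly a matter of keying
the histogram map differently.
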
> >
> > > Logging is a similar story, e.g. using _ratelimited() printk to aid
> > > debug works well when there are a very limited number of VMs and
> > > there is a human that can react to arbitrary kernel messages, but
> > > it's basically useless when there are 10s or 100s of VMs and taking
> > > action on a kernel message requires a prior knowledge of the
> > > message.
> >
> > I'm not sure logging is remotely the same. For a start, the kernel
> > should not log anything unless something has oopsed (and yes, I still
> > have some bits to clean on the arm64 side). I'm not even sure what you
> > would want to log. I'd like to understand the need here, because I
> > feel like I'm missing something.
> >
> > > I'm certainly not expecting other people to solve our challenges,
> > > and I fully appreciate that there are many KVM users that don't care
> > > at all about scalability, but I'm hoping we can get the community at
> > > large, and especially maintainers and reviewers, to also consider
> > > at-scale use cases when designing, implementing, reviewing, etc...
> >
> > My take is that scalability has to go with flexibility. Anything that
> > gets hardcoded will quickly become a burden: I definitely regret
> > adding the current KVM trace points, as they don't show what I need,
> > and I can't change them as they are ABI.
> 
> This brings up a good discussion topic: To what extent are the KVM
> stats themselves an ABI? I don't think this is documented anywhere.
> The API itself is completely dynamic and does not hardcode a list of
> stats or metadata about them. Userspace has to read the stats fd to
> see what's there.
> 
> Fwiw we just deleted the lpages stat without any drama.
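
Indeed, and the interface being fully self-describing means userspace
needs no baked-in knowledge of what exists; something like the below
is enough to enumerate whatever the kernel exposes (rough, untested
sketch, going by the layout documented in
Documentation/virt/kvm/api.rst for KVM_GET_STATS_FD):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

/* Dump every stat exposed by a VM (or vcpu) file descriptor. */
static void dump_stats(int kvm_obj_fd)
{
        struct kvm_stats_header hdr;
        struct kvm_stats_desc *d;
        int fd = ioctl(kvm_obj_fd, KVM_GET_STATS_FD, NULL);
        size_t desc_sz;
        char *descs;
        uint32_t i;

        if (fd < 0)
                return;
        if (pread(fd, &hdr, sizeof(hdr), 0) != sizeof(hdr))
                goto out;

        /* Each descriptor is followed by a name of hdr.name_size bytes. */
        desc_sz = sizeof(*d) + hdr.name_size;
        descs = malloc((size_t)hdr.num_desc * desc_sz);
        if (!descs)
                goto out;

        if (pread(fd, descs, hdr.num_desc * desc_sz, hdr.desc_offset) < 0)
                goto out_free;

        for (i = 0; i < hdr.num_desc; i++) {
                uint64_t val;

                d = (void *)(descs + i * desc_sz);

                /* Only the first element; histograms have d->size of them. */
                if (pread(fd, &val, sizeof(val),
                          hdr.data_offset + d->offset) == sizeof(val))
                        printf("%s: %llu\n", d->name,
                               (unsigned long long)val);
        }

out_free:
        free(descs);
out:
        close(fd);
}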

Maybe the new discoverable interface makes dropping some stats
easier. But it still remains that what is useless to one user can be
crucial to another. I wouldn't be surprised if someone asked for this
stat back once they upgrade to a recent host kernel, probably a
couple of years from now.

	M.

-- 
Without deviation from the norm, progress is not possible.


