Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior

On Tuesday 04 Apr 2023 at 21:49:10 (+0100), Marc Zyngier wrote:
> On Tue, 04 Apr 2023 20:43:40 +0100,
> Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
> > 
> > Folks,
> > 
> > On Thu, Mar 30, 2023 at 03:43:35PM -0700, David Dai wrote:
> > 
> > <snip>
> > 
> > > PCMark
> > > Higher is better
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Test Case (score) | Baseline |  Hypercall | %delta |  MMIO | %delta |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Weighted Total    |     6136 |       7274 |   +19% |  6867 |   +12% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Web Browsing      |     5558 |       6273 |   +13% |  6035 |    +9% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Video Editing     |     4921 |       5221 |    +6% |  5167 |    +5% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Writing           |     6864 |       8825 |   +29% |  8529 |   +24% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Photo Editing     |     7983 |      11593 |   +45% | 10812 |   +35% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > | Data Manipulation |     5814 |       6081 |    +5% |  5327 |    -8% |
> > > +-------------------+----------+------------+--------+-------+--------+
> > > 
> > > PCMark Performance/mAh
> > > Higher is better
> > > +-----------+----------+-----------+--------+------+--------+
> > > |           | Baseline | Hypercall | %delta | MMIO | %delta |
> > > +-----------+----------+-----------+--------+------+--------+
> > > | Score/mAh |       79 |        88 |   +11% |   83 |    +7% |
> > > +-----------+----------+-----------+--------+------+--------+
> > > 
> > > Roblox
> > > Higher is better
> > > +-----+----------+------------+--------+-------+--------+
> > > |     | Baseline |  Hypercall | %delta |  MMIO | %delta |
> > > +-----+----------+------------+--------+-------+--------+
> > > | FPS |    18.25 |      28.66 |   +57% | 24.06 |   +32% |
> > > +-----+----------+------------+--------+-------+--------+
> > > 
> > > Roblox Frames/mAh
> > > Higher is better
> > > +------------+----------+------------+--------+--------+--------+
> > > |            | Baseline |  Hypercall | %delta |   MMIO | %delta |
> > > +------------+----------+------------+--------+--------+--------+
> > > | Frames/mAh |    91.25 |     114.64 |   +26% | 103.11 |   +13% |
> > > +------------+----------+------------+--------+--------+--------+
> > 
> > </snip>
> > 
> > > Next steps:
> > > ===========
> > > We are continuing to look into communication mechanisms other than
> > > hypercalls that are just as/more efficient and avoid switching into the VMM
> > > userspace. Any inputs in this regard are greatly appreciated.
> > 
> > We're highly unlikely to entertain such an interface in KVM.
> > 
> > The entire feature is dependent on pinning vCPUs to physical cores, for which
> > userspace is in the driver's seat. That is a well established and documented
> > policy which can be seen in the way we handle heterogeneous systems and
> > vPMU.
> > 
> > Additionally, this bloats the KVM PV ABI with highly VMM-dependent interfaces
> > that I would not expect to benefit the typical user of KVM.
> > 
> > Based on the data above, it would appear that the userspace implementation is
> > in the same neighborhood as a KVM-based implementation, which only further
> > weakens the case for moving this into the kernel.
> > 
> > I certainly can appreciate the motivation for the series, but this feature
> > should be in userspace as some form of a virtual device.
> 
> +1 on all of the above.

And I concur with all of the above as well. Putting this in the kernel is
not an obvious fit at all, as it requires a number of assumptions about
the VMM.

As Oliver pointed out, the guest topology, and how it maps to the host
topology (vCPU pinning etc.), is very much a VMM policy decision and will
be particularly important for handling guest frequency requests correctly.

In addition to that, the VMM's software architecture may have an impact.
Crosvm, for example, does device emulation in separate processes for
security reasons, so adjusting the scheduling parameters ('util_guest',
uclamp, or otherwise) only for the vCPU thread that issues frequency
requests is likely to be sub-optimal for performance; we may want to
adjust those parameters for all the tasks that are on the critical path.
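
Purely to illustrate what I mean by 'adjusting those parameters', a
rough sketch of the host-side plumbing using the existing uclamp
interface of sched_setattr() is below. The helper, the task selection
and the value passed in are all made up for the example; the point is
only that the same knob can be applied to any host task the VMM deems
to be on the critical path, not just the requesting vCPU thread.

#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* struct sched_attr is not exposed by glibc; mirror the uapi layout. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

#ifndef SCHED_FLAG_KEEP_ALL
#define SCHED_FLAG_KEEP_ALL		0x18	/* keep policy and params */
#endif
#ifndef SCHED_FLAG_UTIL_CLAMP_MIN
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
#endif

/*
 * Hypothetical VMM helper: raise the uclamp.min of one host task (a
 * vCPU thread, a device-emulation thread, ...) in response to a guest
 * frequency request. util_min is in the usual [0, 1024] range.
 */
static int set_task_util_min(pid_t tid, uint32_t util_min)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_UTIL_CLAMP_MIN;
	attr.sched_util_min = util_min;

	return syscall(SYS_sched_setattr, tid, &attr, 0);
}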

And at an even higher level, having the kernel assume a certain mapping
of vCPU threads to host threads feels kinda wrong; that too is a host
userspace policy decision, I believe. Not that anybody in their right
mind would want to do this, but I _think_ it would technically be
feasible to serialize the execution of multiple vCPUs on the same host
thread, at which point the util_guest thingy becomes entirely bogus. (I
obviously don't want to conflate this use-case; it's just an example
that shows the proposed abstraction in the series is not a perfect fit
for the KVM userspace delegation model.)

So +1 from me for moving this to a virtual device of some kind. And if
the extra cost of exiting all the way back to userspace is prohibitive
(is it, btw?), then we can try to work on that. Maybe something a la
vhost can be done to optimize it; I'll have a think.

> The one thing I'd like to understand is that the comment seems to
> imply that there is a significant difference in overhead between a
> hypercall and an MMIO. In my experience, both are pretty similar in
> cost for a given handling location (both in userspace or both in the
> kernel). MMIO handling is a tiny bit more expensive due to a
> guaranteed TLB miss followed by a walk of the in-kernel device ranges,
> but that's all. It should hardly register.
> 
> And if you really want some super-low latency, low overhead
> signalling, maybe an exception is the wrong tool for the job. Shared
> memory communication could be more appropriate.

I presume some kind of signalling mechanism will be necessary to
synchronously update host scheduling parameters in response to guest
frequency requests, but if the volume of data requires it, then a shared
buffer + doorbell type of approach should do.
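
To make that a bit more concrete, here is a very rough sketch of the
sort of guest-visible layout I have in mind. Everything here (structure,
field names, doorbell semantics) is made up for illustration; the only
point is the pattern of publishing the request in shared memory and
then ringing a single doorbell, which the VMM could back with an
ioeventfd so the notification stays cheap:

#include <stdint.h>

/* Hypothetical per-vCPU frequency request, in a page shared between
 * the guest and the VMM. */
struct vcpu_freq_request {
	uint32_t seq;		/* bumped by the guest on every update */
	uint32_t freq_khz;	/* requested frequency for this vCPU   */
};

struct freq_shared_page {
	uint32_t nr_vcpus;
	struct vcpu_freq_request req[];	/* one slot per vCPU */
};

/*
 * Guest side: publish the new request, then ring the doorbell with a
 * single MMIO write to notify the host.
 */
static inline void request_freq(struct freq_shared_page *page,
				volatile uint32_t *doorbell,
				uint32_t vcpu, uint32_t freq_khz)
{
	page->req[vcpu].freq_khz = freq_khz;
	__atomic_store_n(&page->req[vcpu].seq, page->req[vcpu].seq + 1,
			 __ATOMIC_RELEASE);
	*doorbell = vcpu;
}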

Thinking about it, using SCMI over virtio would implement exactly that.
Linux-as-a-guest already supports it IIRC, so possibly the problem
being addressed in this series could be 'simply' solved using an SCMI
backend in the VMM...
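
FWIW, on the guest side that would mostly amount to enabling the
existing SCMI cpufreq support over the virtio transport, roughly the
following config fragment (from memory, so the exact option set may
vary with the kernel version), plus an SCMI performance-protocol
backend behind a virtio-scmi device in the VMM:

CONFIG_ARM_SCMI_PROTOCOL=y
CONFIG_ARM_SCMI_TRANSPORT_VIRTIO=y
CONFIG_ARM_SCMI_CPUFREQ=y
CONFIG_VIRTIO=y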

Thanks,
Quentin


