Re: [RFC PATCH 0/6] Improve VM DVFS and task placement behavior

This patch series is a continuation of the talk Saravana gave at LPC 2022
titled "CPUfreq/sched and VM guest workload problems" [1][2][3]. The gist
of the talk is that workloads running in a guest VM get terrible task
placement and DVFS behavior compared to running the same workload in the
host. In effect, there is no EAS for threads inside VMs. Power and
performance suffer just from running the workload in a VM, even if we
assume zero virtualization overhead.

We have been iterating over different options for communicating between
guest and host, and over ways of applying the information coming from the
guest/host, to figure out the best performance and power improvements we
can get.

The patch series in its current state is NOT meant for landing in the
upstream kernel. We are sending it to share our current progress and the
data we have so far. It is meant to be easy to cherry-pick and test on
various devices, so that others can see what performance and power
benefits it might give them.

With this series, a workload running in a VM gets the same task placement
and DVFS treatment as it would when running in the host.

As expected, we see significant performance improvement and a better
performance/power ratio. If anyone else wants to try this out on their VM
workloads and report findings, that would be very much appreciated.

The idea is to improve VM CPUfreq/sched behavior by:
- Having the guest kernel do accurate load tracking by taking the host CPU
   arch/type and frequency into account.
- Sharing vCPU run queue utilization information with the host so that the
   host can do proper frequency scaling and task placement on the host side
   (see the sketch after this list).
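As a rough illustration of the second point, the guest-side hint could be
a single hypercall from the scheduler's utilization update path. The SMCCC
function ID and ABI below are hypothetical (the actual series defines its
own interface); this is just a sketch, assuming an arm64 guest:

#include <linux/arm-smccc.h>

/* Hypothetical SMCCC function ID, not taken from the actual series. */
#define HYP_FUNC_VCPU_UTIL_HINT	0xc6000030

/*
 * Pass the current vCPU's run queue utilization to the host so that it
 * can factor it into host-side frequency scaling and task placement.
 */
static void report_vcpu_util(unsigned long util)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_hvc(HYP_FUNC_VCPU_UTIL_HINT, util, &res);
}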


[...]


Next steps:
===========
We are continuing to look into communication mechanisms other than
hypercalls that are at least as efficient and that avoid switching into the
VMM userspace. Any input in this regard is greatly appreciated.


I am trying to understand why a virtio-based cpufreq driver would not work
here. The VMM on the host can process requests from the guest VM, such as
reading the frequency table and the current frequency, and setting
min_freq. The virtio backend also has acceleration mechanisms (vhost) so
that userspace is not involved in every frequency request from the guest.
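To make that concrete, a single virtqueue carrying fixed-size requests
would cover the operations above. The layout, request types, and names
below are made up for illustration; no virtio-cpufreq device type is
specified today:

#include <linux/types.h>

/* Hypothetical request types for a virtio cpufreq device. */
enum vcpufreq_req_type {
	VCPUFREQ_GET_FREQ_TABLE	= 0,
	VCPUFREQ_GET_CUR_FREQ	= 1,
	VCPUFREQ_SET_MIN_FREQ	= 2,
};

/* One request per descriptor; fields are little-endian, as usual for virtio. */
struct vcpufreq_req {
	__le32 type;		/* enum vcpufreq_req_type */
	__le32 vcpu;		/* vCPU the request applies to */
	__le32 freq_khz;	/* payload for VCPUFREQ_SET_MIN_FREQ */
};

With a vhost backend, the VCPUFREQ_SET_MIN_FREQ path could then be handled
entirely in the host kernel, without bouncing through the VMM.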

It has the advantages that (1) it is hypervisor agnostic (being virtio
based), and (2) the scheduler does not need additional input; the
aggregated min_freq requests from all guests should be sufficient (see the
sketch below).

I also want to add that (3) a virtio-based solution would definitely be better from a performance point of view, as it would avoid the expensive VM exits we have with hypercalls.
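To show what the aggregation in (2) could look like: the effective
frequency floor of a physical CPU would simply be the largest min_freq
requested by any guest running on it. A sketch, with a hypothetical helper
that exists in no tree:

/*
 * Hypothetical host-side aggregation: the effective frequency floor of a
 * physical CPU is the largest min_freq requested by any guest on it.
 */
static unsigned int vcpufreq_aggregate_min(const unsigned int *guest_min_khz,
					   int nr_guests)
{
	unsigned int floor_khz = 0;
	int i;

	for (i = 0; i < nr_guests; i++)
		if (guest_min_khz[i] > floor_khz)
			floor_khz = guest_min_khz[i];

	return floor_khz;
}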

Thanks,
Pankaj




