Actually, we have rejected commit 87c00572ba05aa8c ("kvm: x86: emulate
monitor and mwait instructions as nop"), so when we intercept
MONITOR/MWAIT, we synthesize #UD. Perhaps it is this difference from
vanilla kvm that motivates the following idea...

Since we're still not going to report MONITOR support in CPUID, the
only guests of consequence are paravirtual guests. What if a
paravirtual guest were aware of the fact that sometimes MONITOR/MWAIT
would work as architected, and sometimes they would raise #UD (or do
something else that's guest-visible, to indicate that the hypervisor
is intercepting the instructions)? Such a guest could first try a
MONITOR/MWAIT-based idle loop and then fall back on a HLT-based idle
loop if the hypervisor rejected its use of MONITOR/MWAIT.

We already have the loose concept of "this pCPU has other things to
do," which is encoded in the variable-sized PLE window. With
MONITOR/MWAIT, the choice is binary, but a simple implementation could
tie the two together by allowing the guest to use MONITOR/MWAIT
whenever the PLE window exceeds a certain threshold (rough sketch
below the quoted thread). Or the decision could be left to the
userspace agent.

On Tue, Apr 11, 2017 at 11:23 AM, Alexander Graf <agraf@xxxxxxx> wrote:
>
>
>> Am 11.04.2017 um 19:10 schrieb Jim Mattson <jmattson@xxxxxxxxxx>:
>>
>> This might be more useful if it could be dynamically toggled on and
>> off, depending on system load.
>
> What would trapping mwait (currently) buy you?
>
> As it stands today, before this patch, mwait is simply implemented as a nop, so enabling the trap just means you're wasting as much cpu time, but never send the pCPU idle. With this patch, the CPU at least has the chance to go idle.
>
> Keep in mind that this patch does *not* advertise the mwait cpuid feature bit to the guest.
>
> What you're referring to I guess is actual mwait emulation. That is indeed more useful, but a bigger patch than this and needs some more thought on how to properly cache the monitor'ed pages.
>
>
> Alex
>
>
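To make the PLE-window idea a bit more concrete, here's a rough sketch
in vmx.c terms. This is not a patch: mwait_ple_threshold is a made-up
knob, and the exact shape of the exit handler is only illustrative. The
point is just that the intercept's disposition could follow the same
signal the dynamic PLE window already tracks.

static unsigned int mwait_ple_threshold = 8192; /* hypothetical knob */

static int handle_mwait(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/*
	 * The PLE window has grown large, i.e. this pCPU hasn't had
	 * much else to do lately: let the guest keep using MWAIT (here
	 * just skipped like a NOP; clearing the intercept entirely
	 * would be the next step).
	 */
	if (vmx->ple_window >= mwait_ple_threshold)
		return kvm_skip_emulated_instruction(vcpu);

	/* The pCPU is contended: tell the guest to fall back to HLT. */
	kvm_queue_exception(vcpu, UD_VECTOR);
	return 1;
}

The guest side would only need to treat #UD on MONITOR/MWAIT as "switch
to the HLT idle loop," though a transition from allowed to rejected
after the guest has committed to MWAIT would also have to be handled,
e.g. by re-probing occasionally.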