On Monday, January 13, 2020 11:43:14 AM CET Peter Zijlstra wrote:
>
> Preserved most (+- edits) for the people added to Cc.
>
> On Thu, Jan 09, 2020 at 07:53:51PM +0800, Wanpeng Li wrote:
> > On Thu, 9 Jan 2020 at 01:15, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
> > > On 08/01/20 16:50, Peter Zijlstra wrote:
> > > > On Wed, Jan 08, 2020 at 09:50:01AM +0800, Wanpeng Li wrote:
> > > >> From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > > >>
> > > >> To deliver all of the resources of a server to instances in the cloud,
> > > >> no housekeeping cpus are reserved.  libvirtd, the qemu main loop,
> > > >> kthreads, and other agents/tools that can't be offloaded to other
> > > >> hardware such as a smart NIC will contend with vCPUs even when
> > > >> MWAIT/HLT instructions are executed in the guest.
> > >
> > > ^^ this is the problem statement:
> > >
> > > He has VCPU threads which are being pinned 1:1 to physical CPUs.  He
> > > needs to have various housekeeping threads preempting those vCPU
> > > threads, but he'd rather preempt vCPU threads that are doing HLT/MWAIT
> > > than those that are keeping the CPU busy.
> > >
> > > >> There is no trap, and the pCPU is not yielded, after we expose
> > > >> mwait/hlt to the guest [1][2]; the top command on the host still
> > > >> observes 100% cpu utilization, because the qemu process keeps running
> > > >> even though the guest, which has the power management capability,
> > > >> executes mwait.  With powertop on the host we can actually observe
> > > >> that the physical cpu has already entered a deeper cstate.
> > > >>
> > > >> For virtualization, there is a HLT activity state field in the VMCS
> > > >> which indicates that the logical processor is inactive because it
> > > >> executed the HLT instruction, but SDM 24.4.2 mentions that execution
> > > >> of the MWAIT instruction may also put a logical processor into an
> > > >> inactive state, and this VMCS field never reflects that state.
> > > >
> > > > So far I think I can follow, however it does not explain who consumes
> > > > this VMCS state if it is set and how that helps. Also, this:
> > >
> > > I think what Wanpeng was saying is: "KVM could gather this information
> > > using the activity state field in the VMCS. However, when the guest
> > > does MWAIT the processor can go into an inactive state without updating
> > > the VMCS." Hence looking at the APERF/MPERF ratio.
> > >
> > > >> This patch avoids fine-grained intercepts and rescheduling of the vCPU
> > > >> when MWAIT/HLT instructions are executed, because that can hurt
> > > >> message-passing workloads which switch between idle and running
> > > >> frequently in the guest.  Instead, penalize a vCPU that has been idle
> > > >> for a long time, using tick-based sampling and preemption.
> > > >
> > > > is just complete gibberish. And I have no idea what problem you're
> > > > trying to solve how.
> > >
> > > This is just explaining why MWAIT and HLT are not being trapped in his
> > > setup. (Because a vmexit on HLT or MWAIT is awfully expensive.)
> > >
> > > > Also, I don't think the TSC/MPERF ratio is architected, we can't assume
> > > > this is true for everything that has APERFMPERF.
> > >
> > > Right, you have to look at APERF/MPERF, not TSC/MPERF.
> >
> > Peterz, do you have a nicer solution for this?
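
To make the ratio under discussion concrete, here is a minimal user-space
sketch (illustrative only: the program name, the one-second window and the
/dev/cpu/*/msr access are assumptions of this sketch, not something the
proposed patch contains; the patch would do the same arithmetic from the
scheduler tick instead):

/* mperf_ratio.c -- illustrative sketch, not part of the patch.
 *
 * IA32_MPERF (MSR 0xe7) ticks at the TSC frequency but only while the
 * core is in C0, so delta(MPERF)/delta(TSC) over a window is the
 * fraction of that window the pCPU actually executed anything.  A vCPU
 * pinned 1:1 to that pCPU which sat in HLT/MWAIT shows a ratio close to
 * zero and is the one you would rather preempt.  Needs root and the msr
 * module (modprobe msr).
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MSR_IA32_TSC	0x10
#define MSR_IA32_MPERF	0xe7

static uint64_t rdmsr(int fd, uint32_t msr)
{
	uint64_t val;

	/* the msr character device uses the MSR number as the file offset */
	if (pread(fd, &val, sizeof(val), msr) != sizeof(val)) {
		perror("rdmsr");
		exit(1);
	}
	return val;
}

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;
	uint64_t tsc0, tsc1, mperf0, mperf1;
	char path[64];
	int fd;

	snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror(path);
		return 1;
	}

	mperf0 = rdmsr(fd, MSR_IA32_MPERF);
	tsc0 = rdmsr(fd, MSR_IA32_TSC);
	sleep(1);				/* sampling window */
	mperf1 = rdmsr(fd, MSR_IA32_MPERF);
	tsc1 = rdmsr(fd, MSR_IA32_TSC);

	printf("cpu%d C0 (busy) fraction: %.1f%%\n", cpu,
	       100.0 * (double)(mperf1 - mperf0) / (double)(tsc1 - tsc0));
	return 0;
}

Build with gcc -O2 -o mperf_ratio mperf_ratio.c, modprobe msr, and run it
as root against the pCPU a vCPU is pinned to; a guest sitting in HLT/MWAIT
shows a fraction near zero even while top on the host still reports the
qemu thread as 100% busy.
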
>
> So as you might've seen, we're going to go read the APERF/MPERF thingies
> in the tick anyway:
>
> https://lkml.kernel.org/r/20191002122926.385-1-ggherdovich@xxxxxxx
>
> (your proposed patch even copied some naming off of that, so I'm
> assuming you've actually seen that)
>
> So the very first thing we need to get sorted is that MPERF/TSC ratio
> thing. TurboStat does it, but has 'funny' hacks on like:
>
> b2b34dfe4d9a ("tools/power turbostat: KNL workaround for %Busy and Avg_MHz")
>
> and I imagine that there's going to be more exceptions there. You're
> basically going to have to get both Intel and AMD to commit to this.
>
> IFF we can get consensus on MPERF/TSC, then yes, that is a reasonable
> way to detect a VCPU being idle I suppose. I've added a bunch of people
> who seem to know about this.
>
> Anyone, what will it take to get MPERF/TSC 'working'?

The same thing that intel_pstate does.  Generally speaking, it shifts the
mperf values by a number of positions depending on the CPU model, but that
is 1 except for KNL.  See get_target_pstate().
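
For illustration, a self-contained sketch of that correction;
busy_permille() and its parameters are made-up names, and the
left-shift-of-MPERF form is paraphrased from the intel_pstate driver
rather than quoted from it:

#include <stdint.h>

/*
 * Busy fraction of one sampling interval, in the spirit of
 * get_target_pstate() in drivers/cpufreq/intel_pstate.c: MPERF ticks at
 * the TSC rate but only while the core is in C0, so
 * delta_mperf / delta_tsc is the fraction of the interval spent non-idle.
 * The shift compensates for models whose MPERF runs at a different rate;
 * as far as I recall, intel_pstate only uses a non-zero shift for
 * Knights Landing.
 */
static inline unsigned int busy_permille(uint64_t delta_mperf,
					 uint64_t delta_tsc,
					 unsigned int aperf_mperf_shift)
{
	if (!delta_tsc)
		return 0;
	return (unsigned int)(((delta_mperf << aperf_mperf_shift) * 1000) /
			      delta_tsc);
}

With a shift of 0 this degenerates to the plain delta_mperf/delta_tsc
fraction discussed above.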