Re: [PATCH] smp/hotplug, x86/vmware: Put offline vCPUs in halt instead of mwait

On 23.09.22 09:05, Peter Zijlstra wrote:
On Thu, Jul 21, 2022 at 01:44:33PM -0700, Srivatsa S. Bhat wrote:
From: Srivatsa S. Bhat (VMware) <srivatsa@xxxxxxxxxxxxx>

VMware ESXi allows enabling a passthru mwait CPU-idle state in the
guest using the following VMX option:

monitor_control.mwait_in_guest = "TRUE"

This lets a vCPU in mwait remain in guest context (instead of
yielding to the hypervisor via a VMEXIT), which helps speed up
wakeups from idle.
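[ For context, the guest's idle path arms a monitor on a cacheline and
then executes mwait, roughly along these lines (a simplified sketch
based on the __monitor()/__mwait() helpers in
arch/x86/include/asm/mwait.h, not the exact idle code; the function
name below is made up for illustration). With mwait_in_guest enabled,
the mwait no longer traps, so the vCPU idles entirely in guest
context:

	static inline void mwait_idle_sketch(const void *monitor_addr)
	{
		/* Arm the monitor on the given cacheline. */
		__monitor(monitor_addr, 0, 0);
		/*
		 * Wait until that cacheline is written or an interrupt
		 * arrives.  Without a VMEXIT on mwait, this wait stays
		 * entirely in guest context.
		 */
		if (!need_resched())
			__mwait(0, 0);
	}
]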

However, this runs into problems with CPU hotplug, because the Linux
CPU offline path prefers to put the vCPU-to-be-offlined in mwait
state, whenever mwait is available. As a result, since a vCPU in mwait
remains in guest context and does not yield to the hypervisor, an
offline vCPU *appears* to be 100% busy as viewed from ESXi, which
prevents the hypervisor from running other vCPUs or workloads on the
corresponding pCPU (particularly when vCPU - pCPU mappings are
statically defined by the user).

I would hope vCPU pinning is a mandatory thing when MWAIT passthrough is
set?

[ Note that such a vCPU is not
actually busy spinning though; it remains in mwait idle state in the
guest ].
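[ For reference, the native offline path tries mwait first and only
falls back to halt; roughly (a sketch of native_play_dead() from
arch/x86/kernel/smpboot.c around this time, details may differ between
kernel versions):

	void native_play_dead(void)
	{
		play_dead_common();
		tboot_shutdown(TB_SHUTDOWN_WFS);

		/* Preferred: loops in mwait and never returns if usable. */
		mwait_play_dead();
		/* Fallback: HLT loop, which does exit to the hypervisor. */
		if (cpuidle_play_dead())
			hlt_play_dead();
	}
]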

Fix this by overriding the CPU offline play_dead() callback for the
VMware hypervisor to put the CPU in halt state (which actually yields
to the hypervisor), even if mwait support is available.

Signed-off-by: Srivatsa S. Bhat (VMware) <srivatsa@xxxxxxxxxxxxx>
---

+static void vmware_play_dead(void)
+{
+	play_dead_common();
+	tboot_shutdown(TB_SHUTDOWN_WFS);
+
+	/*
+	 * Put the vCPU going offline in halt instead of mwait (even
+	 * if mwait support is available), to make sure that the
+	 * offline vCPU yields to the hypervisor (which may not happen
+	 * with mwait, for example, if the guest's VMX is configured
+	 * to retain the vCPU in guest context upon mwait).
+	 */
+	hlt_play_dead();
+}
  #endif
static __init int activate_jump_labels(void)
@@ -349,6 +365,7 @@ static void __init vmware_paravirt_ops_setup(void)
  #ifdef CONFIG_SMP
  		smp_ops.smp_prepare_boot_cpu =
  			vmware_smp_prepare_boot_cpu;
+		smp_ops.play_dead = vmware_play_dead;
  		if (cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
  					      "x86/vmware:online",
  					      vmware_cpu_online,

No real objection here; but would not something like the below fix the
problem more generally? I'm thinking MWAIT passthrough for *any*
hypervisor doesn't want play_dead to use it.

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f24227bc3220..166cb3aaca8a 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1759,6 +1759,8 @@ static inline void mwait_play_dead(void)
  		return;
  	if (!this_cpu_has(X86_FEATURE_CLFLUSH))
  		return;
+	if (this_cpu_has(X86_FEATURE_HYPERVISOR))
+		return;
  	if (__this_cpu_read(cpu_info.cpuid_level) < CPUID_MWAIT_LEAF)
  		return;
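
[ With that guard in place, mwait_play_dead() bails out early whenever
the kernel runs under a hypervisor, so native_play_dead() (sketched
above) falls through to cpuidle_play_dead()/hlt_play_dead() and the
offlined vCPU exits to the hypervisor via HLT, which is the same effect
the VMware-specific override achieves for ESXi alone. ]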

With my Xen hat on I agree with this approach.


Juergen
