[PATCH RFC 1/1] KVM: x86: Don't set preempted when vCPU does HLT VMEXIT

Change kvm_arch_vcpu_put() so that it does not set st->preempted to 1
when a vCPU does a HLT VMEXIT. As a result, is_vcpu_preempted() returns
0 for that vCPU, and the vCPU remains a candidate for CFS load balancing
in the guest.
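
For context, the guest-side reader of st->preempted is the paravirt
override of vcpu_is_preempted(). The following is a simplified sketch
based on arch/x86/kernel/kvm.c (the exact form differs across kernel
versions):

	/*
	 * Sketch of the guest-side check: report a vCPU as preempted if
	 * the host set the KVM_VCPU_PREEMPTED bit in that vCPU's
	 * steal-time page when it last ran kvm_arch_vcpu_put().
	 */
	static bool __kvm_vcpu_is_preempted(long cpu)
	{
		struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

		return !!(src->preempted & KVM_VCPU_PREEMPTED);
	}

With this patch, a vCPU that left the host via a HLT VMEXIT no longer
has that bit set, so the guest scheduler does not exclude it when
balancing load.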

Signed-off-by: Masanori Misono <m.misono760@xxxxxxxxx>
---
 arch/x86/kvm/x86.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bbc4e04e67ad..b3f50b9f2e96 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4170,19 +4170,26 @@ static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
 	int idx;
+	bool hlt;
 
 	if (vcpu->preempted && !vcpu->arch.guest_state_protected)
 		vcpu->arch.preempted_in_kernel = !static_call(kvm_x86_get_cpl)(vcpu);
 
+	hlt = lapic_in_kernel(vcpu) ?
+		      vcpu->arch.mp_state == KVM_MP_STATE_HALTED :
+		      vcpu->run->exit_reason == KVM_EXIT_HLT;
+
 	/*
 	 * Take the srcu lock as memslots will be accessed to check the gfn
 	 * cache generation against the memslots generation.
 	 */
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
-	if (kvm_xen_msr_enabled(vcpu->kvm))
-		kvm_xen_runstate_set_preempted(vcpu);
-	else
-		kvm_steal_time_set_preempted(vcpu);
+	if (!hlt) {
+		if (kvm_xen_msr_enabled(vcpu->kvm))
+			kvm_xen_runstate_set_preempted(vcpu);
+		else
+			kvm_steal_time_set_preempted(vcpu);
+	}
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 
 	static_call(kvm_x86_vcpu_put)(vcpu);
-- 
2.31.1
