Re: [PATCH v2 2/9] KVM: x86: remove return value of kvm_vcpu_block

On 8/17/22 18:41, Sean Christopherson wrote:
> On Wed, Aug 17, 2022, Paolo Bonzini wrote:
>> On 8/17/22 01:34, Sean Christopherson wrote:
>>> Isn't freeing up the return from kvm_vcpu_check_block() unnecessary?  Can't we
>>> just do:
>>>
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 9f11b505cbee..ccb9f8bdeb18 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -10633,7 +10633,7 @@ static inline int vcpu_block(struct kvm_vcpu *vcpu)
>>>                   if (hv_timer)
>>>                           kvm_lapic_switch_to_hv_timer(vcpu);
>>>
>>> -                if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
>>> +                if (!kvm_arch_vcpu_runnable(vcpu))
>>>                           return 1;
>>>           }
>>>
>>> which IMO is more intuitive and doesn't require reworking halt-polling (again).

>> This was my first idea indeed.  However, I didn't like calling
>> kvm_arch_vcpu_runnable() again, and "did it schedule()" seemed to be a less
>> interesting result from kvm_vcpu_block() (and in fact kvm_vcpu_halt() does
>> not bother passing it up the return chain).

> The flip side of calling kvm_arch_vcpu_runnable() again is that KVM will immediately
> wake the vCPU if it becomes runnable after kvm_vcpu_check_block().  The edge cases
> where the vCPU becomes runnable late are unlikely to truly matter in practice, but
> on the other hand manually re-checking kvm_arch_vcpu_runnable() means KVM gets both
> cases "right" (waited=true iff the vCPU actually waited, vCPU awakened ASAP), whereas
> squishing the information into the return of kvm_vcpu_check_block() means KVM gets
> both cases "wrong" (waited=true even if schedule() was never called, vCPU left in
> a non-running state even though it's runnable).
>
> My only hesitation with calling kvm_arch_vcpu_runnable() again is that it could be
> problematic if KVM somehow managed to consume the event that caused kvm_vcpu_has_events()
> to return true, but I don't see how that could happen without it being a KVM bug.

No, I agree that it cannot happen, and especially so after getting rid of
the kvm_check_nested_events() call in kvm_arch_vcpu_runnable().
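
For reference, the core wait loop of kvm_vcpu_block() after the halt-polling
split looks roughly like this (a simplified, from-memory sketch of
virt/kvm/kvm_main.c; preemption and the arch blocking hooks are elided, and
the exact upstream code differs):

bool kvm_vcpu_block(struct kvm_vcpu *vcpu)
{
        struct rcuwait *wait = kvm_arch_vcpu_get_wait(vcpu);
        bool waited = false;

        prepare_to_rcuwait(wait);
        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE);

                /* Stop blocking as soon as the vCPU has a reason to run. */
                if (kvm_vcpu_check_block(vcpu) < 0)
                        break;

                waited = true;
                schedule();
        }
        finish_rcuwait(wait);

        return waited;
}

Since waited is set only on the path that actually schedules, re-checking
kvm_arch_vcpu_runnable() in vcpu_block() keeps it truthful instead of
overloading kvm_vcpu_check_block()'s return value.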

I'll reorder the patches and apply your suggestion.

Paolo
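
P.S. To make the "return chain" remark above concrete: kvm_vcpu_halt()
returns void, so kvm_vcpu_block()'s result is consumed on the spot, purely
for halt accounting.  Roughly (again a simplified sketch, not the verbatim
upstream code):

void kvm_vcpu_halt(struct kvm_vcpu *vcpu)
{
        ktime_t start = ktime_get();
        bool waited;

        /* ... halt-polling fast path elided ... */

        waited = kvm_vcpu_block(vcpu);

        /*
         * 'waited' feeds only the halt statistics and tracepoints;
         * "did it schedule()" never travels further up the call chain.
         */
        if (waited)
                vcpu->stat.generic.halt_wait_ns +=
                        ktime_to_ns(ktime_sub(ktime_get(), start));
}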



