Re: [PATCH v3 0/5] Add support for the Idle HLT intercept feature

Hi Sean,

Thank you for reviewing my patches. Sorry for the delay in response.

On 8/13/2024 9:49 PM, Sean Christopherson wrote:
> On Tue, Jun 04, 2024, Manali Shukla wrote:
>> On 5/28/2024 3:52 PM, Paolo Bonzini wrote:
>>> Does this have an effect on the number of vmexits for KVM, unless AVIC
>>> is enabled?
> 
> Ah, I suspect it will (as Manali's trace shows), because KVM will pend a V_INTR
> (V_IRQ in KVM's world) in order to detect the interrupt window.  And while KVM
> will still exit on the V_INTR, it'll avoid an exit on HLT.
> 
> Of course, we could (should?) address that in KVM by clearing the V_INTR (and its
> intercept) when there are no pending, injectable IRQs at the end of
> kvm_check_and_inject_events().  VMX would benefit from that change as well.
> 
> I think it's just this?  Because enabling an IRQ window for userspace happens
> after this.
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index af6c8cf6a37a..373c850cc325 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10556,9 +10556,11 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
>                                 WARN_ON(kvm_x86_call(interrupt_allowed)(vcpu, true) < 0);
>                         }
>                 }
> -               if (kvm_cpu_has_injectable_intr(vcpu))
> -                       kvm_x86_call(enable_irq_window)(vcpu);
>         }
> +       if (kvm_cpu_has_injectable_intr(vcpu))
> +               kvm_x86_call(enable_irq_window)(vcpu);
> +       else
> +               kvm_x86_call(disable_irq_window)(vcpu);
>  
>         if (is_guest_mode(vcpu) &&
>             kvm_x86_ops.nested_ops->has_events &&
> 
> 

IIUC, this is already addressed in [2].

>> Snippet of the Test case:
>> +static void idle_hlt_test(void)
>> +{
>> +       x = 0;
>> +       cli();
>> +       apic_self_ipi(IPI_TEST_VECTOR);
>> +       safe_halt();
>> +       if (x != 1) printf("%d", x);
>> +}
> 
> This isn't very representative of real world behavior.  In practice, the window
> for a wake event to arrive between CLI and STI;HLT is quite small, i.e. having a
> V_INTR (or V_NMI) pending when HLT is executed is fairly uncommon.
> 
> A more compelling benchmark would be something like a netperf latency test.
> 
> I honestly don't know how high of a bar we should set for this feature.  On one
> hand, it's a tiny amount of enabling.  On the other hand, it would be extremely
> unfortunate if this somehow caused latency/throughput regressions, which seems
> highly improbable, but never say never...

I have added netperf data for a normal guest and a nested guest in v4 [1].

[1]: https://lore.kernel.org/kvm/20241022054810.23369-1-manali.shukla@xxxxxxx/T/#m2e755334c327bb1b479fb65e293bfe3f476d2852

[2]: https://lore.kernel.org/all/20240802195120.325560-1-seanjc@xxxxxxxxxx/

- Manali



