Re: [PATCH v3 0/5] Add support for the Idle HLT intercept feature

On Tue, Jun 04, 2024, Manali Shukla wrote:
> On 5/28/2024 3:52 PM, Paolo Bonzini wrote:
> > Does this have an effect on the number of vmexits for KVM, unless AVIC
> > is enabled?

Ah, I suspect it will (as Manali's trace shows), because KVM will pend a V_INTR
(V_IRQ in KVM's world) in order to detect the interrupt window.  And while KVM
will still exit on the V_INTR, it'll avoid an exit on HLT.

Of course, we could (should?) address that in KVM by clearing the V_INTR (and its
intercept) when there are no pending, injectable IRQs at the end of
kvm_check_and_inject_events().  VMX would benefit from that change as well.

I think it's just this?  Because enabling an IRQ window for userspace happens
after this.

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index af6c8cf6a37a..373c850cc325 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10556,9 +10556,11 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,
                                WARN_ON(kvm_x86_call(interrupt_allowed)(vcpu, true) < 0);
                        }
                }
-               if (kvm_cpu_has_injectable_intr(vcpu))
-                       kvm_x86_call(enable_irq_window)(vcpu);
        }
+       if (kvm_cpu_has_injectable_intr(vcpu))
+               kvm_x86_call(enable_irq_window)(vcpu);
+       else
+               kvm_x86_call(disable_irq_window)(vcpu);
 
        if (is_guest_mode(vcpu) &&
            kvm_x86_ops.nested_ops->has_events &&
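
Something like the below for the SVM side, maybe?  disable_irq_window() isn't
an existing kvm_x86_ops hook, so this is purely an untested sketch of what the
above diff assumes, but presumably it boils down to dropping the synthetic
V_INTR:

static void svm_disable_irq_window(struct kvm_vcpu *vcpu)
{
	/*
	 * Drop the synthetic V_INTR (and its intercept) that was pended only
	 * to detect the interrupt window.  Sketch only; assumes it's fine to
	 * call svm_clear_vintr() when no V_INTR is currently pending.
	 */
	svm_clear_vintr(to_svm(vcpu));
}

with a matching .disable_irq_window entry in svm_x86_ops (plus whatever extra
care nested SVM needs), and VMX clearing CPU_BASED_INTR_WINDOW_EXITING in the
same way.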


> Snippet of the Test case:
> +static void idle_hlt_test(void)
> +{
> +       x = 0;
> +       cli();
> +       apic_self_ipi(IPI_TEST_VECTOR);
> +       safe_halt();
> +       if (x != 1) printf("%d", x);
> +}

This isn't very representative of real-world behavior.  In practice, the window
for a wake event to arrive between CLI and STI;HLT is quite small, i.e. having a
V_INTR (or V_NMI) pending when HLT is executed is fairly uncommon.
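
E.g. the guest's idle path typically looks more like the below (rough sketch,
not lifted from any particular kernel; need_resched() stands in for "is there
work pending?"):

	local_irq_disable();
	if (!need_resched())
		safe_halt();		/* STI;HLT */
	else
		local_irq_enable();

An IRQ that arrives after the STI simply wakes the HLT, so the only way to
execute HLT with a V_INTR already pending is for the wake event to arrive in
the handful of cycles between the final "work pending?" check and the HLT
itself.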

A more compelling benchmark would be something like a netperf latency test.
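E.g. something like "netperf -H <host> -t TCP_RR" run in the guest, which
hammers the halt/wake path at low CPU utilization and would show whether
avoiding the extra HLT exit actually moves the needle on round-trip latency.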

I honestly don't know how high of a bar we should set for this feature.  On one
hand, it's a tiny amount of enabling.  On the other hand, it would be extremely
unfortunate if this somehow caused latency/throughput regressions, which seems
highly improbable, but never say never...



