+tglx

On Tue, Jan 05, 2021, Nitesh Narayan Lal wrote:
> This reverts commit d7a08882a0a4b4e176691331ee3f492996579534.
>
> After the introduction of the patch:
>
> 87fa7f3e9: x86/kvm: Move context tracking where it belongs
>
> since we have moved guest_exit_irqoff closer to the VM-Exit, explicit
> enabling of irqs to process pending interrupts should not be required
> within vcpu_enter_guest anymore.

Ugh, except that commit completely broke tick-based accounting, on both Intel
and AMD.  With guest_exit_irqoff() being called immediately after VM-Exit, any
tick that happens after IRQs are disabled will be accounted to the host.  E.g.
on Intel, even an IRQ VM-Exit that has already been acked by the CPU isn't
processed until kvm_x86_ops.handle_exit_irqoff(), well after PF_VCPU has been
cleared.

CONFIG_VIRT_CPU_ACCOUNTING_GEN=y should still work (I didn't bother to verify).

Thomas, any clever ideas?  Handling IRQs in {vmx,svm}_vcpu_enter_exit() isn't
an option as KVM hasn't restored enough state to handle an IRQ, e.g. PKRU and
XCR0 are still guest values.

Is it too heinous to fudge PF_VCPU across KVM's "pending" IRQ handling?  E.g.
this god-awful hack fixes the accounting:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 836912b42030..5a777fd35b4b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9028,6 +9028,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	vcpu->mode = OUTSIDE_GUEST_MODE;
 	smp_wmb();

+	current->flags |= PF_VCPU;
 	kvm_x86_ops.handle_exit_irqoff(vcpu);

 	/*
@@ -9042,6 +9043,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	++vcpu->stat.exits;
 	local_irq_disable();
 	kvm_after_interrupt(vcpu);
+	current->flags &= ~PF_VCPU;

 	if (lapic_in_kernel(vcpu)) {
 		s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;

> Conflicts:
>	arch/x86/kvm/svm.c
>
> Signed-off-by: Nitesh Narayan Lal <nitesh@xxxxxxxxxx>
> ---
>  arch/x86/kvm/svm/svm.c |  9 +++++++++
>  arch/x86/kvm/x86.c     | 11 -----------
>  2 files changed, 9 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index cce0143a6f80..c9b2fbb32484 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4187,6 +4187,15 @@ static int svm_check_intercept(struct kvm_vcpu *vcpu,
>
>  static void svm_handle_exit_irqoff(struct kvm_vcpu *vcpu)
>  {
> +	kvm_before_interrupt(vcpu);
> +	local_irq_enable();
> +	/*
> +	 * We must have an instruction with interrupts enabled, so
> +	 * the timer interrupt isn't delayed by the interrupt shadow.
> +	 */
> +	asm("nop");
> +	local_irq_disable();
> +	kvm_after_interrupt(vcpu);
>  }
>
>  static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3f7c1fc7a3ce..3e17c9ffcad8 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9023,18 +9023,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>
>  	kvm_x86_ops.handle_exit_irqoff(vcpu);
>
> -	/*
> -	 * Consume any pending interrupts, including the possible source of
> -	 * VM-Exit on SVM and any ticks that occur between VM-Exit and now.
> -	 * An instruction is required after local_irq_enable() to fully unblock
> -	 * interrupts on processors that implement an interrupt shadow, the
> -	 * stat.exits increment will do nicely.
> -	 */
> -	kvm_before_interrupt(vcpu);
> -	local_irq_enable();
>  	++vcpu->stat.exits;
> -	local_irq_disable();
> -	kvm_after_interrupt(vcpu);
>
>  	if (lapic_in_kernel(vcpu)) {
>  		s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
> --
> 2.27.0
>
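
For reference, the reason the PF_VCPU window matters: with tick-based
accounting, a tick processed while the task is in kernel context is charged to
guest time only if PF_VCPU is set on that task; otherwise it's charged to the
host.  A minimal sketch of that decision, paraphrased and simplified from the
PF_VCPU check in account_system_time() (kernel/sched/cputime.c), not a
verbatim quote:

	/*
	 * Sketch only, not verbatim kernel code: the tick path charges the
	 * interrupted task's time to the guest only while PF_VCPU is set.
	 * Clearing PF_VCPU before the pending IRQ (and thus the tick) is
	 * handled means the tick falls through to host/system time.
	 */
	if ((p->flags & PF_VCPU) && (irq_count() - hardirq_offset == 0)) {
		account_guest_time(p, cputime);	/* tick charged to the guest */
		return;
	}
	/* otherwise fall through and charge the tick to the host */

which is why the hack above keeps PF_VCPU set until after kvm_after_interrupt().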