On Fri, Feb 14, 2025, Sean Christopherson wrote:
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 7640a84e554a..fa0687711c48 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4189,6 +4189,18 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
> 
>  	guest_state_enter_irqoff();
> 
> +	/*
> +	 * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of
> +	 * VMRUN controls whether or not physical IRQs are masked (KVM always
> +	 * runs with V_INTR_MASKING_MASK).  Toggle RFLAGS.IF here to avoid the
> +	 * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow
> +	 * into guest state if delivery of an event during VMRUN triggers a
> +	 * #VMEXIT, and the guest_state transitions already tell lockdep that
> +	 * IRQs are being enabled/disabled.  Note!  GIF=0 for the entirety of
> +	 * this path, so IRQs aren't actually unmasked while running host code.
> +	 */
> +	local_irq_enable();

Courtesy of the kernel test bot[*], these need to use the raw_ variants to
avoid tracing.  guest_state_{enter,exit}_irqoff() does all of the necessary
tracing updates, so we should be good on that front.

  svm_vcpu_enter_exit+0x39: call to trace_hardirqs_on() leaves .noinstr.text section

[*] https://lore.kernel.org/all/202502170739.2WX98OXk-lkp@xxxxxxxxx

> +
>  	amd_clear_divider();
> 
>  	if (sev_es_guest(vcpu->kvm))
> @@ -4197,6 +4209,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
>  	else
>  		__svm_vcpu_run(svm, spec_ctrl_intercepted);
> 
> +	local_irq_disable();
> +
>  	guest_state_exit_irqoff();
>  }
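
I.e. something like this (untested sketch; raw_local_irq_{enable,disable}()
are the existing non-tracing helpers from <linux/irqflags.h>, everything else
unchanged from the patch above):

	guest_state_enter_irqoff();

	/*
	 * Use the raw_ variants, as this runs in the .noinstr.text section
	 * and so must not call into the hardirq tracepoints.  The lockdep
	 * and tracing bookkeeping is already handled by
	 * guest_state_{enter,exit}_irqoff().
	 */
	raw_local_irq_enable();

	...

	raw_local_irq_disable();

	guest_state_exit_irqoff();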