On Fri, 2023-11-24 at 17:07 +0100, Paolo Bonzini wrote:
> On 9/28/23 12:36, Maxim Levitsky wrote:
> > @@ -4176,6 +4176,9 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> >  	clgi();
> >  	kvm_load_guest_xsave_state(vcpu);
> >  
> > +	if (vcpu->arch.req_immediate_exit)
> > +		smp_send_reschedule(vcpu->cpu);
> > +
> 
> This code is in a non-standard situation where IF=1 but interrupts are
> effectively disabled. Better something like:
> 
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index beea99c8e8e0..3b945de2d880 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4148,8 +4148,11 @@ static __no_kcsan fastpath_t svm_vcpu_run(
>  		 * is enough to force an immediate vmexit.
>  		 */
>  		disable_nmi_singlestep(svm);
> +		vcpu->arch.req_immediate_exit = true;
> +	}
> +
> +	if (vcpu->arch.req_immediate_exit)
>  		smp_send_reschedule(vcpu->cpu);
> -	}
>  
>  	pre_svm_run(vcpu);
> 
> Paolo

Actually, IF=0 at that point: we disable IF in vcpu_enter_guest() before
calling svm_vcpu_run(), then we disable GIF, and we re-enable IF only
right before VMRUN. In fact VMRUN sits in the STI interrupt shadow,
although that doesn't really matter since GIF=0.

However, I don't mind implementing the change you suggested; I don't
think it will affect anything.

Best regards,
	Maxim Levitsky
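
A minimal standalone sketch of the IF/GIF ordering described above, assuming the usual KVM/SVM entry flow; the helper names mirror the kernel's code paths, but the bodies are trivial stand-ins that only track the two flags, so this is an illustration rather than actual kernel code:

```c
/* Illustrative stub program (not kernel code) tracing the IF/GIF ordering. */
#include <stdbool.h>
#include <stdio.h>

static bool IF  = true;  /* host RFLAGS.IF */
static bool GIF = true;  /* SVM global interrupt flag */

static void show(const char *where)
{
	printf("%-30s IF=%d GIF=%d\n", where, IF, GIF);
}

/* Stand-ins for the real helpers, in the order they run on the entry path. */
static void local_irq_disable(void) { IF = false; }  /* vcpu_enter_guest()          */
static void clgi(void)              { GIF = false; } /* svm_vcpu_run()              */
static void sti(void)               { IF = true; }   /* vmenter asm, just before VMRUN */
static void vmrun(void)             { }              /* guest runs; STI shadow covers this */
static void cli(void)               { IF = false; }  /* vmenter asm, after #VMEXIT  */
static void stgi(void)              { GIF = true; }  /* svm_vcpu_run()              */

int main(void)
{
	show("before vcpu_enter_guest()");
	local_irq_disable();
	show("after local_irq_disable()");
	clgi();
	show("after clgi()");
	/* An IPI sent here (e.g. smp_send_reschedule) is held off: IF=0 and GIF=0. */
	sti();   /* VMRUN sits in the STI interrupt shadow */
	vmrun();
	cli();
	show("after VMRUN/#VMEXIT");
	stgi();
	show("after stgi()");
	return 0;
}
```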