On Thu, Aug 10, 2023 at 04:17:41PM +0200, Paolo Bonzini wrote:
> On 8/5/23 02:55, Peter Zijlstra wrote:
> > > +	 * Clobbering BP here is mostly ok since GIF will block NMIs and with
> > > +	 * the exception of #MC and the kvm_rebooting _ASM_EXTABLE()s below
> > > +	 * nothing untoward will happen until BP is restored.
> > > +	 *
> > > +	 * The kvm_rebooting exceptions should not want to unwind stack, and
> > > +	 * while #MV might want to unwind stack, it is ultimately fatal.
> > > +	 */
> > 
> > Aside from me not being able to type #MC, I did realize that the
> > kvm_reboot exception will go outside noinstr code and can hit
> > tracing/instrumentation and do unwinds from there.
> 
> Asynchronously disabling SVM requires an IPI, so kvm_rebooting cannot change
> within CLGI/STGI. We can check it after CLGI instead of waiting for a #GP:

Seems fair; thanks!

> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 956726d867aa..e3755f5eaf81 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4074,7 +4074,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
>  	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>  		x86_spec_ctrl_set_guest(svm->virt_spec_ctrl);
>  
> -	svm_vcpu_enter_exit(vcpu, spec_ctrl_intercepted);
> +	if (unlikely(kvm_rebooting))
> +		svm->vmcb->control.exit_code = SVM_EXIT_PAUSE;
> +	else
> +		svm_vcpu_enter_exit(vcpu, spec_ctrl_intercepted);
>  
>  	if (!static_cpu_has(X86_FEATURE_V_SPEC_CTRL))
>  		x86_spec_ctrl_restore_host(svm->virt_spec_ctrl);
> diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
> index 8e8295e774f0..34641b3a6823 100644
> --- a/arch/x86/kvm/svm/vmenter.S
> +++ b/arch/x86/kvm/svm/vmenter.S
> @@ -270,23 +270,12 @@ SYM_FUNC_START(__svm_vcpu_run)
>  	RESTORE_GUEST_SPEC_CTRL_BODY
>  	RESTORE_HOST_SPEC_CTRL_BODY
>  
> -10:	cmpb $0, kvm_rebooting
> -	jne 2b
> -	ud2
> -30:	cmpb $0, kvm_rebooting
> -	jne 4b
> -	ud2
> -50:	cmpb $0, kvm_rebooting
> -	jne 6b
> -	ud2
> -70:	cmpb $0, kvm_rebooting
> -	jne 8b
> -	ud2
> +10:	ud2
> 
>  	_ASM_EXTABLE(1b, 10b)
> -	_ASM_EXTABLE(3b, 30b)
> -	_ASM_EXTABLE(5b, 50b)
> -	_ASM_EXTABLE(7b, 70b)
> +	_ASM_EXTABLE(3b, 10b)
> +	_ASM_EXTABLE(5b, 10b)
> +	_ASM_EXTABLE(7b, 10b)
> 
> SYM_FUNC_END(__svm_vcpu_run)
> 
> Paolo