On Thu, Aug 11, 2022, Paolo Bonzini wrote:
> Interrupts, NMIs etc. sent while in guest mode are already handled
> properly by the *_interrupt_allowed callbacks, but other events can
> cause a vCPU to be runnable that are specific to guest mode.
>
> In the case of VMX there are two, the preemption timer and the
> monitor trap. The VMX preemption timer is already special cased via
> the hv_timer_pending callback, but the purpose of the callback can be
> easily extended to MTF or in fact any other event that can occur only
> in guest mode.
>
> Rename the callback and add an MTF check; kvm_arch_vcpu_runnable()
> now will return true if an MTF is pending, without relying on
> kvm_vcpu_running()'s call to kvm_check_nested_events(). Until that call
> is removed, however, the patch introduces no functional change.
>
> Reported-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h | 2 +-
>  arch/x86/kvm/vmx/nested.c       | 9 ++++++++-
>  arch/x86/kvm/x86.c              | 8 ++++----
>  3 files changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5ffa578cafe1..293ff678fff5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1636,7 +1636,7 @@ struct kvm_x86_nested_ops {
>  	int (*check_events)(struct kvm_vcpu *vcpu);
>  	bool (*handle_page_fault_workaround)(struct kvm_vcpu *vcpu,
>  					     struct x86_exception *fault);
> -	bool (*hv_timer_pending)(struct kvm_vcpu *vcpu);
> +	bool (*has_events)(struct kvm_vcpu *vcpu);
>  	void (*triple_fault)(struct kvm_vcpu *vcpu);
>  	int (*get_state)(struct kvm_vcpu *vcpu,
>  			 struct kvm_nested_state __user *user_kvm_nested_state,
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index ddd4367d4826..9631cdcdd058 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -3876,6 +3876,13 @@ static bool nested_vmx_preemption_timer_pending(struct kvm_vcpu *vcpu)
>  		to_vmx(vcpu)->nested.preemption_timer_expired;
>  }
>
> +static bool vmx_has_nested_events(struct kvm_vcpu *vcpu)
> +{
> +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> +
> +	return nested_vmx_preemption_timer_pending(vcpu) || vmx->nested.mtf_pending;

How about:

	return nested_vmx_preemption_timer_pending(vcpu) ||
	       to_vmx(vcpu)->nested.mtf_pending;

to use fewer lines and honor the 80 char soft-limit?