On Mon, Nov 26, 2018 at 11:23:31AM -0800, Jim Mattson wrote:
> According to the SDM, "interrupt-window exiting" VM-exits wake a
> logical processor from the same inactive states as would an external
> interrupt. Specifically, they wake a logical processor from the states
> entered using the HLT and MWAIT instructions.
>
> Fixes: 6dfacadd5858 ("KVM: nVMX: Add support for activity state HLT")
> Signed-off-by: Jim Mattson <jmattson@xxxxxxxxxx>
> Reviewed-by: Peter Shier <pshier@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 358e96f0c21d..a29921b216f5 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -13498,6 +13498,16 @@ static bool nested_vmx_nmi_window_exit(struct vmcs12 *vmcs12)
>  		 (GUEST_INTR_STATE_NMI | GUEST_INTR_STATE_MOV_SS));
>  }
>
> +static bool nested_vmx_intr_window_exit(struct vmcs12 *vmcs12)
> +{
> +	return (vmcs12->cpu_based_vm_exec_control &
> +		CPU_BASED_VIRTUAL_INTR_PENDING) &&
> +	       vmcs12->guest_activity_state != GUEST_ACTIVITY_WAIT_SIPI &&

Technically this also needs to weed out GUEST_ACTIVITY_SHUTDOWN as INTR
doesn't break shutdown (a sketch of that check follows the quoted patch
below).

That being said, I wonder if it makes sense to remove the activity
state and guest_interruptibility_info checks altogether from both the
NMI and INTR helpers.  The current use case is only for
guest_activity_state==HLT, which makes both checks redundant since
guest_activity_state obviously can't be two things at once and
STI/MOVSS blocking are only legal when going to ACTIVE state.

Event blocking and prioritization is so complex that it seems unlikely
that a generic kitchen-sink helper will be useful, e.g.
handle_invalid_guest_state() already has this code and more by way of
vmx_interrupt_allowed().  At that point it's probably easiest to just
open code the checks, e.g.:

	exec_controls = vmcs12->cpu_based_vm_exec_control;
	if ((vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) &&
	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) &&
	    !(exec_controls & CPU_BASED_VIRTUAL_NMI_PENDING) &&
	    !((exec_controls & CPU_BASED_VIRTUAL_INTR_PENDING) &&
	      (vmcs12->guest_rflags & X86_EFLAGS_IF))) {
		vmx->nested.nested_run_pending = 0;
		return kvm_vcpu_halt(vcpu);
	}
	return 1;

> +	       (vmcs12->guest_rflags & X86_EFLAGS_IF) &&
> +	       !(vmcs12->guest_interruptibility_info &
> +		 (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
> +}
> +
>  /*
>   * nested_vmx_run() handles a nested entry, i.e., a VMLAUNCH or VMRESUME on L1
>   * for running an L2 nested guest.
> @@ -13590,12 +13600,13 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
>
>  	/*
>  	 * If we're entering a halted L2 vcpu and the L2 vcpu won't be
> -	 * awakened by event injection or by an NMI-window VM-exit,
> -	 * halt the vcpu.
> +	 * awakened by event injection or by an NMI-window VM-exit or
> +	 * by an interrupt-window VM-exit, halt the vcpu.
>  	 */
>  	if ((vmcs12->guest_activity_state == GUEST_ACTIVITY_HLT) &&
>  	    !(vmcs12->vm_entry_intr_info_field & INTR_INFO_VALID_MASK) &&
> -	    !nested_vmx_nmi_window_exit(vmcs12)) {
> +	    !nested_vmx_nmi_window_exit(vmcs12) &&
> +	    !nested_vmx_intr_window_exit(vmcs12)) {
>  		vmx->nested.nested_run_pending = 0;
>  		return kvm_vcpu_halt(vcpu);
>  	}
> --
> 2.20.0.rc0.387.gc7a69e6b6c-goog
>
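
For illustration only, an untested sketch of the GUEST_ACTIVITY_SHUTDOWN
exclusion mentioned above, assuming the helper from the patch were kept
rather than open coded; the extra SHUTDOWN check is the only change from
the patch's version:

static bool nested_vmx_intr_window_exit(struct vmcs12 *vmcs12)
{
	/*
	 * Report an interrupt-window exit as able to wake L2 only when
	 * an INTR would also be deliverable: not in wait-for-SIPI or
	 * shutdown, RFLAGS.IF set, and no STI/MOV SS blocking.
	 */
	return (vmcs12->cpu_based_vm_exec_control &
		CPU_BASED_VIRTUAL_INTR_PENDING) &&
	       vmcs12->guest_activity_state != GUEST_ACTIVITY_WAIT_SIPI &&
	       vmcs12->guest_activity_state != GUEST_ACTIVITY_SHUTDOWN &&
	       (vmcs12->guest_rflags & X86_EFLAGS_IF) &&
	       !(vmcs12->guest_interruptibility_info &
		 (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
}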