vmx_guest_apic_has_interrupt implicitly depends on the virtual APIC
page being present + mapped into the kernel address space. Normally,
upon VMLAUNCH/VMRESUME, we get the vmcs12 pages directly. However, if
a live migration were to occur before reaching vcpu_block, the virtual
APIC will not be restored on the target host. Fix this by getting
vmcs12 pages before inspecting the virtual APIC page.

Cc: kvm@xxxxxxxxxxxxxxx
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Signed-off-by: Oliver Upton <oupton@xxxxxxxxxx>
Reviewed-by: Jim Mattson <jmattson@xxxxxxxxxx>
Reviewed-by: Peter Shier <pshier@xxxxxxxxxx>
---
Parent commit: 7c67f54661fc ("KVM: SVM: do not allow VMRUN inside SMM")

 arch/x86/kvm/x86.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 8c0b77ac8dc6..edd3b75ad578 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8494,6 +8494,16 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 
 static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
 {
+	/*
+	 * We must first get the vmcs12 pages before checking for interrupts
+	 * (done in kvm_arch_vcpu_runnable) in case L1 is using
+	 * virtual-interrupt delivery.
+	 */
+	if (kvm_check_request(KVM_REQ_GET_VMCS12_PAGES, vcpu)) {
+		if (unlikely(!kvm_x86_ops.nested_ops->get_vmcs12_pages(vcpu)))
+			return 0;
+	}
+
 	if (!kvm_arch_vcpu_runnable(vcpu) &&
 	    (!kvm_x86_ops.pre_block || kvm_x86_ops.pre_block(vcpu) == 0)) {
 		srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
-- 
2.26.2.526.g744177e7f7-goog
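
For context, the implicit dependency described in the changelog looks
roughly like the following in vmx.c. This is a simplified sketch of
vmx_guest_apic_has_interrupt() as it stood around this kernel version,
not the verbatim upstream code: the function dereferences the kernel
mapping of the virtual APIC page (virtual_apic_map.hva) to compare the
guest's PPR against RVI, so calling it before get_vmcs12_pages() has
run on the target host reads through a stale or absent mapping.

/*
 * Simplified sketch (not verbatim upstream code) of the consumer that
 * motivates this patch.  vmx_guest_apic_has_interrupt() reads the L2
 * guest's PPR straight out of the kernel mapping of the virtual APIC
 * page, so that mapping must be (re)established before the runnable
 * check in vcpu_block() calls down into it.
 */
static bool vmx_guest_apic_has_interrupt(struct kvm_vcpu *vcpu)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	void *vapic_page;
	u32 vppr;
	int rvi;

	/* Only relevant when L2 runs with virtual-interrupt delivery. */
	if (WARN_ON_ONCE(!is_guest_mode(vcpu)) ||
	    !nested_cpu_has_vid(get_vmcs12(vcpu)) ||
	    WARN_ON_ONCE(!vmx->nested.virtual_apic_map.gfn))
		return false;

	rvi = vmx_get_rvi();

	/*
	 * The implicit dependency: virtual_apic_map.hva is only valid
	 * after get_vmcs12_pages() has run on this (possibly new) host,
	 * which is exactly what the hunk above guarantees.
	 */
	vapic_page = vmx->nested.virtual_apic_map.hva;
	vppr = *((u32 *)(vapic_page + APIC_PROCPRI));

	return ((rvi & 0xf0) > (vppr & 0xf0));
}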