> From: Nadav Har'El
> Sent: Tuesday, May 17, 2011 3:57 AM
>
> This patch adds correct handling of IDT_VECTORING_INFO_FIELD for the nested
> case.
>
> When a guest exits while delivering an interrupt or exception, we get this
> information in IDT_VECTORING_INFO_FIELD in the VMCS. When L2 exits to L1,
> there's nothing we need to do, because L1 will see this field in vmcs12 and
> handle it itself. However, when L2 exits and L0 handles the exit itself and
> plans to return to L2, L0 must inject this event into L2.
>
> In the normal non-nested case, the idt_vectoring_info is discovered after the
> exit, and the decision to inject (though not the injection itself) is made at
> that point. However, in the nested case the decision of whether to return to
> L2 or to L1 also happens during the injection phase (see the previous
> patches), so in the nested case we can only decide what to do about the
> idt_vectoring_info right after the injection, i.e., at the beginning of
> vmx_vcpu_run, which is the first time we know for sure whether we're staying
> in L2.
>
> Therefore, when we exit L2 (is_guest_mode(vcpu)), we disable the regular
> vmx_complete_interrupts() code, which would queue the idt_vectoring_info for
> injection on the next entry - such injection would not be appropriate if we
> later decide to exit to L1. Instead, we just save the idt_vectoring_info and
> related fields in vmcs12 (which is a convenient place to save these fields).
> On the next entry in vmx_vcpu_run (*after* the injection phase, potentially
> exiting to L1 to inject an event requested by user space), if we find
> ourselves in L1 we don't need to do anything with the values we saved (as
> explained above). But if we find that we're in L2 - or rather *still* in L2
> (nested_run_pending is not set, i.e., this is not the first entry into L2
> right after L1 launched it) - we need to inject the event saved in those
> fields, by writing the appropriate VMCS fields.
>
> Signed-off-by: Nadav Har'El <nyh@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx.c |   30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> --- .before/arch/x86/kvm/vmx.c	2011-05-16 22:36:50.000000000 +0300
> +++ .after/arch/x86/kvm/vmx.c	2011-05-16 22:36:50.000000000 +0300
> @@ -5804,6 +5804,8 @@ static void __vmx_complete_interrupts(st
>
>  static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
>  {
> +	if (is_guest_mode(&vmx->vcpu))
> +		return;
>  	__vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
>  				  VM_EXIT_INSTRUCTION_LEN,
>  				  IDT_VECTORING_ERROR_CODE);
> @@ -5811,6 +5813,8 @@ static void vmx_complete_interrupts(stru
>
>  static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
>  {
> +	if (is_guest_mode(vcpu))
> +		return;
>  	__vmx_complete_interrupts(to_vmx(vcpu),
>  				  vmcs_read32(VM_ENTRY_INTR_INFO_FIELD),
>  				  VM_ENTRY_INSTRUCTION_LEN,
> @@ -5831,6 +5835,21 @@ static void __noclone vmx_vcpu_run(struc
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>
> +	if (is_guest_mode(vcpu) && !vmx->nested.nested_run_pending) {
> +		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> +		if (vmcs12->idt_vectoring_info_field &
> +				VECTORING_INFO_VALID_MASK) {
> +			vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
> +				vmcs12->idt_vectoring_info_field);
> +			vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
> +				vmcs12->vm_exit_instruction_len);
> +			if (vmcs12->idt_vectoring_info_field &
> +					VECTORING_INFO_DELIVER_CODE_MASK)
> +				vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
> +					vmcs12->idt_vectoring_error_code);
> +		}
> +	}

One question here: what if L2 has interrupt exiting disabled?
In that case L0 is expected to inject virtual interrupts directly into L2, so
simply overwriting the entry interrupt-info field here looks incorrect. As you
said, a typical hypervisor doesn't turn interrupt exiting off, but doing so is
architecturally valid. I think that when L2 has interrupt exiting disabled,
you should compare the current VM_ENTRY_INTR_INFO_FIELD with the saved IDT
vectoring info and deliver the higher-priority event; a rough sketch of that
check is below.

Thanks,
Kevin
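A minimal sketch of that suggestion, for illustration only: it assumes the
vmcs12 layout introduced earlier in this series, and the helper name
(nested_reinject_idt_vectoring_info), its placement, and the simple
"in-flight event wins over the queued interrupt" rule are placeholders, not
anything the patch set actually defines. A complete version would have to
follow the SDM's event-priority rules.

	/*
	 * Sketch: before re-injecting the saved IDT-vectoring event on
	 * re-entry to L2, notice whether L0 has already queued a virtual
	 * interrupt in VM_ENTRY_INTR_INFO_FIELD (possible when L1 runs L2
	 * with external-interrupt exiting disabled) and do not silently
	 * overwrite it.
	 */
	static void nested_reinject_idt_vectoring_info(struct kvm_vcpu *vcpu)
	{
		struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
		u32 queued = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);

		if (!(vmcs12->idt_vectoring_info_field &
				VECTORING_INFO_VALID_MASK))
			return;

		if ((queued & INTR_INFO_VALID_MASK) &&
		    !(vmcs12->pin_based_vm_exec_control &
				PIN_BASED_EXT_INTR_MASK)) {
			/*
			 * L0 already queued a virtual interrupt for L2.  Let
			 * the event that was in delivery win, and put the
			 * interrupt back on KVM's queue so it is delivered
			 * later instead of being lost.
			 */
			kvm_queue_interrupt(vcpu,
					    queued & INTR_INFO_VECTOR_MASK,
					    false);
		}

		vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
			     vmcs12->idt_vectoring_info_field);
		vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
			     vmcs12->vm_exit_instruction_len);
		if (vmcs12->idt_vectoring_info_field &
				VECTORING_INFO_DELIVER_CODE_MASK)
			vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
				     vmcs12->idt_vectoring_error_code);
	}

Whether the vectored event really always outranks the pending external
interrupt would need to be checked against the architectural priority rules;
the point of the sketch is only that the already-queued event should not be
clobbered.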