On Wed, Feb 20, 2013 at 05:48:40PM +0100, Jan Kiszka wrote:
> On 2013-02-20 17:46, Gleb Natapov wrote:
> > On Wed, Feb 20, 2013 at 02:01:47PM +0100, Jan Kiszka wrote:
> >> This aligns VMX more closely with SVM regarding event injection and
> >> recovery for nested guests. The changes allow injecting interrupts
> >> directly from L0 into L2.
> >>
> >> One difference to SVM is that we always transfer the pending event
> >> injection into the architectural state of the VCPU and then drop it from
> >> there if it turns out that we left L2 to enter L1.
> >>
> >> VMX and SVM are now identical in how they recover event injections from
> >> unperformed vmlaunch/vmresume: we detect that VM_ENTRY_INTR_INFO_FIELD
> >> still contains a valid event and, if so, transfer its content into L1's
> >> idt_vectoring_info_field.
> >>
> >> To avoid incorrectly leaking an event that L1 wants to inject into the
> >> architectural VCPU state, we skip cancellation on nested run.
> >>
> >> Signed-off-by: Jan Kiszka <jan.kiszka@xxxxxxxxxxx>
> >> ---
> >>
> >> This survived moderate testing here and (currently) makes sense to me,
> >> but please review very carefully. I wouldn't be surprised if I'm still
> >> missing some subtle corner case.
> >>
> >>  arch/x86/kvm/vmx.c |   57 +++++++++++++++++++++++----------------------
> >>  1 files changed, 26 insertions(+), 31 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >> index dd3a8a0..7d2fbd2 100644
> >> --- a/arch/x86/kvm/vmx.c
> >> +++ b/arch/x86/kvm/vmx.c
> >> @@ -6489,8 +6489,6 @@ static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
> >>
> >>  static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
> >>  {
> >> -	if (is_guest_mode(&vmx->vcpu))
> >> -		return;
> >>  	__vmx_complete_interrupts(vmx, vmx->idt_vectoring_info,
> >>  				  VM_EXIT_INSTRUCTION_LEN,
> >>  				  IDT_VECTORING_ERROR_CODE);
> >> @@ -6498,7 +6496,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
> >>
> >>  static void vmx_cancel_injection(struct kvm_vcpu *vcpu)
> >>  {
> >> -	if (is_guest_mode(vcpu))
> >> +	if (to_vmx(vcpu)->nested.nested_run_pending)
> >>  		return;
> > Why is this needed here?
>
> Please check if my reply to Nadav explains this sufficiently.
>
Ah, sorry. Will follow up there if it is not.

--
			Gleb.