Re: [PATCH] KVM: nVMX: Rework event injection and recovery

On Wed, Feb 20, 2013 at 06:50:50PM +0100, Jan Kiszka wrote:
> On 2013-02-20 18:24, Jan Kiszka wrote:
> > On 2013-02-20 18:01, Gleb Natapov wrote:
> >> On Wed, Feb 20, 2013 at 03:37:51PM +0100, Jan Kiszka wrote:
> >>> On 2013-02-20 15:14, Nadav Har'El wrote:
> >>>> Hi,
> >>>>
> >>>> By the way, if you haven't seen my description of why the current code
> >>>> did what it did, take a look at
> >>>> http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg54478.html
> >>>> Another description might also come in handy:
> >>>> http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg54476.html
> >>>>
> >>>> On Wed, Feb 20, 2013, Jan Kiszka wrote about "[PATCH] KVM: nVMX: Rework event injection and recovery":
> >>>>> This aligns VMX more closely with SVM regarding event injection and
> >>>>> recovery for nested guests. The changes make it possible to inject
> >>>>> interrupts directly from L0 into L2.
> >>>>>
> >>>>> One difference from SVM is that we always transfer the pending event
> >>>>> injection into the architectural state of the VCPU and then drop it
> >>>>> from there if it turns out that we left L2 to enter L1.
> >>>>
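
If I read this correctly, the intended flow after an exit from L2 is
roughly the following (just a sketch - nested_exit_to_l1() is a made-up
name for whatever check decides that we go back to L1):

	/* the pending event is transferred into the arch queues as usual */
	vmx_complete_interrupts(vmx);
	...
	if (nested_exit_to_l1(vcpu)) {
		/*
		 * The event has to be reported to L1 via vmcs12
		 * (idt_vectoring_info_field), not replayed by L0,
		 * so drop it from the arch queues again.
		 */
		kvm_clear_exception_queue(vcpu);
		kvm_clear_interrupt_queue(vcpu);
	}
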
> >>>> Last time I checked, if I'm remembering correctly, the nested SVM code
> >>>> did something a bit different: after the exit from L2 to L1, it
> >>>> unnecessarily queued the pending interrupt for injection, but then
> >>>> skipped one entry into L1. As usual, the interrupt queue is cleared
> >>>> after that entry, so the next time around, when L1 is really entered,
> >>>> the wrong injection is not attempted.
> >>>>
> >>>>> VMX and SVM are now identical in how they recover event injections
> >>>>> from unperformed vmlaunch/vmresume: we check whether
> >>>>> VM_ENTRY_INTR_INFO_FIELD still contains a valid event and, if so,
> >>>>> transfer its content into L1's idt_vectoring_info_field.
> >>>>
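
In other words, recovery after a canceled vmlaunch/vmresume would look
roughly like this (paraphrased, not the literal patch):

	u32 entry_intr_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);

	if (entry_intr_info & INTR_INFO_VALID_MASK) {
		/* the entry never happened - hand the event back to L1 */
		vmcs12->idt_vectoring_info_field = entry_intr_info;
		vmcs12->idt_vectoring_error_code =
			vmcs_read32(VM_ENTRY_EXCEPTION_ERROR_CODE);
		vmcs12->vm_exit_instruction_len =
			vmcs_read32(VM_ENTRY_INSTRUCTION_LEN);
	}
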
> >>>>> To avoid incorrectly leaking an event that L1 wants to inject into
> >>>>> the architectural VCPU state, we skip cancellation on nested run.
> >>>>
> >>>> I didn't understand this last point.
> >>>
> >>> - prepare_vmcs02 sets the event to be injected into L2
> >>> - while trying to enter L2, a cancel condition is met
> >>> - we call vmx_cancel_interrupts but should now avoid filling L1's event
> >>>   into the arch event queues - it is kept in vmcs12 (see the sketch
> >>>   below)
> >>>
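
So the skip in the last step would amount to something like this at the
top of the cancel path (a sketch, using the function name from the
discussion):

	static void vmx_cancel_interrupts(struct kvm_vcpu *vcpu)
	{
		/*
		 * On a canceled nested entry the pending event is L1's
		 * and still lives in vmcs12 - don't leak it into the
		 * architectural event queues.
		 */
		if (to_vmx(vcpu)->nested.nested_run_pending)
			return;
		...
	}
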
> >> But what if we do put it in the arch event queue? It will be reinjected
> >> during the next entry attempt, so nothing bad happens and we have one
> >> less if() to explain - or do I miss something terrible that will happen?
> > 
> > I started without that if() but ran into trouble with KVM-on-KVM (L1
> > locks up). Let me dig out the instrumentation and check the event flow
> > again.
> 
> OK, got it again: if we transfer an IRQ that L1 wants to send to L2 into
> the architectural VCPU state, we will also trigger enable_irq_window.
> And that raises KVM_REQ_IMMEDIATE_EXIT again, as it thinks L0 wants to
> inject. That sends us into an endless loop.
> 
Why would we trigger enable_irq_window()? enable_irq_window() triggers
only if an interrupt is pending in one of the irq chips, not in the
architectural VCPU state.
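
The relevant logic in vcpu_enter_guest() is, from memory (so the details
may be off), roughly:

	/* enable NMI/IRQ window open exits if needed */
	if (vcpu->arch.nmi_pending)
		kvm_x86_ops->enable_nmi_window(vcpu);
	else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
		kvm_x86_ops->enable_irq_window(vcpu);

and kvm_cpu_has_interrupt() only looks at the irq chips, not at an
already queued vcpu->arch.interrupt.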

> Not sure if we can and should handle this scenario in enable_irq_window
> in a nicer way. Open for suggestions.
> 
> Jan
> 
> -- 
> Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
> Corporate Competence Center Embedded Linux

--
			Gleb.