RE: [PATCH 23/31] nVMX: Correct handling of interrupt injection

> From: Nadav Har'El
> Sent: Tuesday, May 17, 2011 3:56 AM
> 
> The code in this patch correctly emulates external-interrupt injection
> while a nested guest L2 is running.
> 
> Because of this code's relative non-obviousness, I include here a
> longer-than-usual justification for what it does - much longer than the
> code itself ;-)
> 
> To understand how to correctly emulate interrupt injection while L2 is
> running, let's look first at what we need to emulate: How would things
> look if the extra L0 hypervisor layer were removed, and instead of L0
> injecting an interrupt, we had hardware delivering an interrupt?
> 
> Now we have L1 running on bare metal with a guest L2, and the hardware
> generates an interrupt. Assuming that L1 set PIN_BASED_EXT_INTR_MASK to 1,
> and VM_EXIT_ACK_INTR_ON_EXIT to 0 (we'll revisit these assumptions below),
> what happens now is this: The processor exits from L2 to L1, with an
> external-interrupt exit reason but without an interrupt vector. L1 runs,
> with interrupts disabled, and it doesn't yet know what the interrupt was.
> Soon after, it enables interrupts, and only at that moment does it get the
> interrupt from the processor. When L1 is KVM, Linux handles this interrupt.
> 
> Now we need exactly the same thing to happen when that L1->L2 system runs
> on top of L0, instead of real hardware. Here is how we do it:
> 
> When L0 wants to inject an interrupt, it needs to exit from L2 to L1, with
> an external-interrupt exit reason (with an invalid interrupt vector), and
> run L1. Just like in the bare-metal case, it likely can't deliver the
> interrupt to L1 right away, because L1 is running with interrupts disabled;
> in that case it turns on the interrupt window when running L1 after the
> exit. L1 will soon enable interrupts, and at that point L0 will regain
> control and inject the interrupt to L1.
> 
> Finally, there is an extra complication in the code: when nested_run_pending
> is set, we cannot return to L1 now, and must launch L2. We need to remember
> the interrupt we wanted to inject (and not clear it now), and inject it on
> the next exit.
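
For reference, here is roughly how the exit logic described above might
look in the patch's vmx_interrupt_allowed() change - a sketch only,
assuming the is_guest_mode(), get_vmcs12() and nested_vmx_vmexit() helpers
from earlier in this series, plus the nested_exit_on_intr() helper quoted
further down:

static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
{
	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu)) {
		struct vmcs12 *vmcs12;

		/* Must finish the pending L2 entry first; keep the
		 * interrupt pending and retry on the next exit. */
		if (to_vmx(vcpu)->nested.nested_run_pending)
			return 0;

		/* Exit from L2 to L1 with an external-interrupt exit
		 * reason and an invalid (zero) vector, as on bare metal. */
		nested_vmx_vmexit(vcpu);
		vmcs12 = get_vmcs12(vcpu);
		vmcs12->vm_exit_reason = EXIT_REASON_EXTERNAL_INTERRUPT;
		vmcs12->vm_exit_intr_info = 0;
		/* Fall through: can L1 take the interrupt right now? */
	}
	return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
		!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
		  (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
}

If L1 cleared PIN_BASED_EXT_INTR_MASK, the nested branch is skipped
entirely and the normal injection path delivers the interrupt directly to
L2, matching the case discussed below.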
> 
> The above explanation shows that the relative strangeness of the nested
> interrupt injection code in this patch, and the extra interrupt-window
> exit incurred, are in fact necessary for accurate emulation, and are not
> just an unoptimized implementation.
> 
> Let's revisit now the two assumptions made above:
> 
> If L1 turns off PIN_BASED_EXT_INTR_MASK (no hypervisor that I know
> does, by the way), things are simple: L0 may inject the interrupt directly
> to the L2 guest - using the normal code path that injects to any guest.
> We support this case in the code below.
> 
> If L1 turns on VM_EXIT_ACK_INTR_ON_EXIT (again, no hypervisor that I know
> does), things look very different from the description above: L1 expects

A type-1 bare-metal hypervisor such as Xen may enable this bit. A hosted
hypervisor, on the other hand, normally finds it tricky to reach into the
host kernel's generic interrupt logic, and so is better off not acking the
interrupt at exit time: once interrupts are enabled, the hardware will
vector to the kernel interrupt handler automatically.

> to see an exit from L2 with the interrupt vector already filled in the exit
> information, and does not expect to be interrupted again with this interrupt.
> The current code does not (yet) support this case, so we do not allow the
> VM_EXIT_ACK_INTR_ON_EXIT exit-control to be turned on by L1.

Supporting it would then just be a matter of filling the interrupt-vector
field with the highest unmasked vector pending in the vIRR.
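
In code, that might look something like this sketch, placed in the path
that synthesizes the L2->L1 exit (vmcs12 field names as in this series;
kvm_cpu_get_interrupt() is the existing helper that returns, and acks, the
highest-priority unmasked pending vector):

	if (vmcs12->vm_exit_controls & VM_EXIT_ACK_INTR_ON_EXIT) {
		/* Ack on L1's behalf and record the vector, marked
		 * valid, in the exit information field. */
		int irq = kvm_cpu_get_interrupt(vcpu);

		vmcs12->vm_exit_intr_info = irq |
			INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR;
	}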

> 
> Signed-off-by: Nadav Har'El <nyh@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx.c |   36 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
> 
> --- .before/arch/x86/kvm/vmx.c	2011-05-16 22:36:49.000000000 +0300
> +++ .after/arch/x86/kvm/vmx.c	2011-05-16 22:36:49.000000000 +0300
> @@ -1788,6 +1788,7 @@ static __init void nested_vmx_setup_ctls
> 
>  	/* exit controls */
>  	nested_vmx_exit_ctls_low = 0;
> +	/* Note that guest use of VM_EXIT_ACK_INTR_ON_EXIT is not supported. */
>  #ifdef CONFIG_X86_64
>  	nested_vmx_exit_ctls_high = VM_EXIT_HOST_ADDR_SPACE_SIZE;
>  #else
> @@ -3733,9 +3734,25 @@ out:
>  	return ret;
>  }
> 
> +/*
> + * In nested virtualization, check if L1 asked to exit on external interrupts.
> + * For most existing hypervisors, this will always return true.
> + */
> +static bool nested_exit_on_intr(struct kvm_vcpu *vcpu)
> +{
> +	return get_vmcs12(vcpu)->pin_based_vm_exec_control &
> +		PIN_BASED_EXT_INTR_MASK;
> +}
> +

This could be a common wrapper, similar to nested_cpu_has...
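
For instance (the wrapper name here is just illustrative, following the
nested_cpu_has() pattern):

static inline bool nested_cpu_has_pin(struct vmcs12 *vmcs12, u32 bit)
{
	return vmcs12->pin_based_vm_exec_control & bit;
}

static bool nested_exit_on_intr(struct kvm_vcpu *vcpu)
{
	return nested_cpu_has_pin(get_vmcs12(vcpu), PIN_BASED_EXT_INTR_MASK);
}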

Thanks,
Kevin