Re: [PATCH 22/24] Correct handling of idt vectoring info

On Sun, Jun 13, 2010 at 03:33:50PM +0300, Nadav Har'El wrote:
> This patch adds correct handling of IDT_VECTORING_INFO_FIELD for the nested
> case.
> 
> When a guest exits while handling an interrupt or exception, we get this
> information in IDT_VECTORING_INFO_FIELD in the VMCS. When L2 exits to L1,
> there's nothing we need to do, because L1 will see this field in vmcs12, and
> handle it itself. However, when L2 exits and L0 handles the exit itself and
> plans to return to L2, L0 must inject this event to L2.
> 
> In the normal non-nested case, the idt_vectoring_info case is treated after
> the exit. However, in the nested case a decision of whether to return to L2
This is not correct. In the normal non-nested case the idt_vectoring_info is
parsed after exit into a vmx/svm-independent data structure (which is
saved/restored during VM migration). The reinjection happens on the vmentry path.
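To make the decode/re-encode split concrete, here is a minimal standalone
sketch. The mask values match the VMX field layout documented in the Intel
SDM (and the VECTORING_INFO_* names used in the patch), but the helper itself
is illustrative, not actual KVM code:

```c
#include <assert.h>
#include <stdint.h>

/* Bit layout shared by IDT_VECTORING_INFO_FIELD and
 * VM_ENTRY_INTR_INFO_FIELD (Intel SDM): bits 7:0 vector,
 * bits 10:8 type, bit 11 error-code-valid, bit 31 valid. */
#define VECTORING_INFO_VECTOR_MASK        0x000000ffu
#define VECTORING_INFO_TYPE_MASK          0x00000700u
#define VECTORING_INFO_DELIVER_CODE_MASK  0x00000800u
#define VECTORING_INFO_VALID_MASK         0x80000000u

/* Re-encode saved vectoring info for injection on the next entry;
 * this mirrors what the patch writes into VM_ENTRY_INTR_INFO_FIELD. */
static uint32_t reinject_encode(uint32_t idt_vectoring_info)
{
	uint32_t vector  = idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
	uint32_t type    = idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
	uint32_t errcode = idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK;

	return vector | type | errcode | VECTORING_INFO_VALID_MASK;
}
```

Note that the entry-side encoding is field-for-field the exit-side decoding
run in reverse, which is why duplicating it per-vendor is unattractive.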

> or L1 also happens during the injection phase (see the previous patches), so
> in the nested case we have to treat the idt_vectoring_info right after the
> injection, i.e., in the beginning of vmx_vcpu_run, which is the first time
> we know for sure if we're staying in L2 (i.e., nested_mode is true).
> 
> Signed-off-by: Nadav Har'El <nyh@xxxxxxxxxx>
> ---
> --- .before/arch/x86/kvm/vmx.c	2010-06-13 15:01:30.000000000 +0300
> +++ .after/arch/x86/kvm/vmx.c	2010-06-13 15:01:30.000000000 +0300
> @@ -320,6 +320,10 @@ struct nested_vmx {
>  	struct vmcs *l1_vmcs;
>  	/* L2 must run next, and mustn't decide to exit to L1. */
>  	bool nested_run_pending;
> +	/* true if last exit was of L2, and had a valid idt_vectoring_info */
> +	bool valid_idt_vectoring_info;
> +	/* These are saved if valid_idt_vectoring_info */
> +	u32 vm_exit_instruction_len, idt_vectoring_error_code;
>  };
>  
>  enum vmcs_field_type {
> @@ -5460,6 +5464,22 @@ static void fixup_rmode_irq(struct vcpu_
>  		| vmx->rmode.irq.vector;
>  }
>  
> +static void nested_handle_valid_idt_vectoring_info(struct vcpu_vmx *vmx)
> +{
> +	int irq  = vmx->idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
> +	int type = vmx->idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
> +	int errCodeValid = vmx->idt_vectoring_info &
> +		VECTORING_INFO_DELIVER_CODE_MASK;
> +	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD,
> +		irq | type | INTR_INFO_VALID_MASK | errCodeValid);
> +
> +	vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
> +		vmx->nested.vm_exit_instruction_len);
> +	if (errCodeValid)
> +		vmcs_write32(VM_ENTRY_EXCEPTION_ERROR_CODE,
> +			vmx->nested.idt_vectoring_error_code);
> +}
> +
Why can't you do this using the existing exception/nmi/interrupt queues that
we already have? Instead, you effectively disable vmx_complete_interrupts()
in nested mode (patch 18) and add logically the same code in this patch:
after exit you save the info about the idt event into nested_vmx and
reinject it on vm entry.
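For comparison, the flow suggested above, save into an arch-independent
pending-event structure on exit and reinject from it on entry, could look
roughly like this. The struct and function names are made up for
illustration and are not the actual KVM ones:

```c
#include <stdbool.h>
#include <stdint.h>

#define INFO_VALID_MASK         0x80000000u
#define INFO_DELIVER_CODE_MASK  0x00000800u

/* Toy model of an arch-independent pending-event slot, in the spirit
 * of what vmx_complete_interrupts() fills in after an exit. */
struct pending_event {
	bool     valid;
	uint8_t  vector;
	uint32_t type;            /* bits 10:8 of the vectoring info */
	bool     has_error_code;
	uint32_t error_code;
};

/* Exit path: parse IDT_VECTORING_INFO into the generic structure. */
static void save_pending_event(struct pending_event *ev,
			       uint32_t idt_info, uint32_t err_code)
{
	ev->valid = (idt_info & INFO_VALID_MASK) != 0;
	if (!ev->valid)
		return;
	ev->vector         = idt_info & 0xffu;
	ev->type           = idt_info & 0x700u;
	ev->has_error_code = (idt_info & INFO_DELIVER_CODE_MASK) != 0;
	ev->error_code     = ev->has_error_code ? err_code : 0;
}

/* Entry path: rebuild the VM_ENTRY_INTR_INFO_FIELD value and consume
 * the pending event, so it is injected exactly once. */
static uint32_t inject_pending_event(struct pending_event *ev)
{
	uint32_t info;

	if (!ev->valid)
		return 0;
	info = ev->vector | ev->type | INFO_VALID_MASK;
	if (ev->has_error_code)
		info |= INFO_DELIVER_CODE_MASK;
	ev->valid = false;
	return info;
}
```

The advantage of routing through such a structure is that the saved event
survives migration and is visible to common code, rather than living in
vendor-private state.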

>  static inline void sync_cached_regs_to_vmcs(struct kvm_vcpu *vcpu)
>  {
>  	if (test_bit(VCPU_REGS_RSP, (unsigned long *)&vcpu->arch.regs_dirty))
> @@ -5481,6 +5501,9 @@ static void vmx_vcpu_run(struct kvm_vcpu
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  
> +	if (vmx->nested.nested_mode && vmx->nested.valid_idt_vectoring_info)
> +		nested_handle_valid_idt_vectoring_info(vmx);
> +
>  	/* Record the guest's net vcpu time for enforced NMI injections. */
>  	if (unlikely(!cpu_has_virtual_nmis() && vmx->soft_vnmi_blocked))
>  		vmx->entry_time = ktime_get();
> @@ -5600,6 +5623,16 @@ static void vmx_vcpu_run(struct kvm_vcpu
>  				  | (1 << VCPU_EXREG_PDPTR));
>  
>  	vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
> +
> +	vmx->nested.valid_idt_vectoring_info = vmx->nested.nested_mode &&
> +		(vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK);
> +	if (vmx->nested.valid_idt_vectoring_info) {
> +		vmx->nested.vm_exit_instruction_len =
> +			vmcs_read32(VM_EXIT_INSTRUCTION_LEN);
> +		vmx->nested.idt_vectoring_error_code =
> +			vmcs_read32(IDT_VECTORING_ERROR_CODE);
> +	}
> +
>  	if (vmx->rmode.irq.pending)
>  		fixup_rmode_irq(vmx);
>  
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

--
			Gleb.