RE: [PATCH v3 1/2] KVM: VMX: enable acknowledge interrupt on vmexit

Gleb Natapov wrote on 2013-02-20:
> On Wed, Feb 20, 2013 at 02:46:05AM +0000, Zhang, Yang Z wrote:
>> Avi Kivity wrote on 2013-02-20:
>>> On Tue, Feb 19, 2013 at 3:39 PM, Yang Zhang <yang.z.zhang@xxxxxxxxx>
> wrote:
>>>> From: Yang Zhang <yang.z.zhang@xxxxxxxxx>
>>>> 
>>>> The "acknowledge interrupt on exit" feature controls processor behavior
>>>> for external interrupt acknowledgement. When this control is set, the
>>>> processor acknowledges the interrupt controller to acquire the
>>>> interrupt vector on VM exit.
>>>> 
>>>> After enabling this feature, an interrupt that arrives while the
>>>> target cpu is running in vmx non-root mode will be handled by the
>>>> vmx handler instead of the handler in the idt. Currently, the vmx
>>>> handler only fakes an interrupt stack and jumps to the idt table to
>>>> let the real handler handle it. Further, we will recognize the
>>>> interrupt and deliver through the idt table only those interrupts
>>>> that do not belong to the current vcpu; an interrupt that belongs
>>>> to the current vcpu will be handled inside the vmx handler. This
>>>> will reduce KVM's interrupt handling cost.
>>>> 
>>>> Also, the interrupt enable logic changes when this feature is
>>>> turned on: before this patch, the hypervisor called
>>>> local_irq_enable() to enable interrupts directly. Now the IF bit is
>>>> set on the interrupt stack frame, and interrupts will be enabled on
>>>> return from the interrupt handler if an external interrupt exists.
>>>> If there is no external interrupt, local_irq_enable() is still
>>>> called.
>>>> 
>>>> Refer to Intel SDM volume 3, chapter 33.2.
>>>> 
>>>> 
>>>> +static void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
>>>> +{
>>>> +	u32 exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
>>>> +
>>>> +	/*
>>>> +	 * If external interrupt exists, IF bit is set in rflags/eflags on the
>>>> +	 * interrupt stack frame, and interrupt will be enabled on a return
>>>> +	 * from interrupt handler.
>>>> +	 */
>>>> +	if ((exit_intr_info & (INTR_INFO_VALID_MASK | INTR_INFO_INTR_TYPE_MASK))
>>>> +			== (INTR_INFO_VALID_MASK | INTR_TYPE_EXT_INTR)) {
>>>> +		unsigned int vector;
>>>> +		unsigned long entry;
>>>> +		gate_desc *desc;
>>>> +		struct vcpu_vmx *vmx = to_vmx(vcpu);
>>>> +
>>>> +		vector = exit_intr_info & INTR_INFO_VECTOR_MASK;
>>>> +#ifdef CONFIG_X86_64
>>>> +		desc = (void *)vmx->host_idt_base + vector * 16;
>>>> +#else
>>>> +		desc = (void *)vmx->host_idt_base + vector * 8;
>>>> +#endif
>>>> +
>>>> +		entry = gate_offset(*desc);
>>>> +		asm(
>>>> +			"mov %0, %%" _ASM_DX " \n\t"
>>>> +#ifdef CONFIG_X86_64
>>>> +			"mov %%" _ASM_SP ", %%" _ASM_BX " \n\t"
>>>> +			"and $0xfffffffffffffff0, %%" _ASM_SP " \n\t"
>>>> +			"mov %%ss, %%" _ASM_AX " \n\t"
>>>> +			"push %%" _ASM_AX " \n\t"
>>>> +			"push %%" _ASM_BX " \n\t"
>>>> +#endif
>>> 
>>> Are we sure no interrupts are using the IST feature?  I guess it's unlikely.
>> Linux uses IST for the NMI, stack fault, machine-check, double fault
>> and debug interrupts. No external interrupt uses it, and this feature
>> applies only to external interrupts, so we don't need to check for
>> IST here.
>> 
>>> 
>>>> +                       "pushf \n\t"
>>>> +                       "pop %%" _ASM_AX " \n\t"
>>>> +                       "or $0x200, %%" _ASM_AX " \n\t"
>>>> +                       "push %%" _ASM_AX " \n\t"
>>> 
>>> Can simplify to pushf; orl $0x200, (%%" _ASM_SP ").
>> Sure.
>> 
>>>> +                       "mov %%cs, %%" _ASM_AX " \n\t"
>>>> +                       "push %%" _ASM_AX " \n\t"
>>> 
>>> push %%cs
>> "push %%cs" is invalid in x86_64.
>> 
>>>> +                       "push intr_return \n\t"
>>> 
>>> push $1f.  Or even combine with the next instruction, and call %rdx.
>> Which is faster? jmp or call?
>> 
> Wrong question. You need to compare push+jmp with call. I do not see why
Sorry, I didn't express it clearly.  Yes, I want to compare push+jmp with call.

> latter will be slower.
I think so. If push+jmp is not faster than call, I will use the latter.

>>>> +                       "jmp *%%" _ASM_DX " \n\t"
>>>> +                       "1: \n\t"
>>>> +                       ".pushsection .rodata \n\t"
>>>> +                       ".global intr_return \n\t"
>>>> +                       "intr_return: " _ASM_PTR " 1b \n\t"
>>>> +                       ".popsection \n\t"
>>>> +                       : : "m"(entry) :
>>>> +#ifdef CONFIG_X86_64
>>>> +                       "rax", "rbx", "rdx"
>>>> +#else
>>>> +                       "eax", "edx"
>>>> +#endif
>>>> +                       );
>>>> +       } else
>>>> +               local_irq_enable();
>>>> +}
>>>> +
>> 
>> 
>> Best regards,
>> Yang
>> 
> 
> --
> 			Gleb.


Best regards,
Yang


