Re: [PATCH v2 2/5] KVM: nVMX: Rework event injection and recovery

On 2013-03-17 16:14, Gleb Natapov wrote:
> On Sun, Mar 17, 2013 at 04:02:07PM +0100, Jan Kiszka wrote:
>> On 2013-03-17 14:45, Gleb Natapov wrote:
>>> On Sat, Mar 16, 2013 at 11:23:16AM +0100, Jan Kiszka wrote:
>>>> From: Jan Kiszka <jan.kiszka@xxxxxxxxxxx>
>>>>
>>>> The basic idea is to always transfer the pending event injection on
>>>> vmexit into the architectural state of the VCPU and then drop it from
>>>> there if it turns out that we left L2 to enter L1.
>>>>
>>>> VMX and SVM are now identical in how they recover event injections from
>>>> an unperformed vmlaunch/vmresume: We check whether VM_ENTRY_INTR_INFO_FIELD
>>>> still contains a valid event and, if so, transfer its content into L1's
>>>> idt_vectoring_info_field.
>>>>
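
To illustrate the recovery path described above - a simplified sketch, not
the patch itself; the helper name is made up, and the error-code and
instruction-length handling is copied unconditionally here for brevity:

	/*
	 * If VM_ENTRY_INTR_INFO_FIELD still contains a valid event, the
	 * vmlaunch/vmresume it was prepared for never completed. Hand the
	 * event back to L1 via idt_vectoring_info_field.
	 */
	static void nested_vmx_recover_canceled_injection(struct vmcs12 *vmcs12)
	{
		u32 entry_intr_info = vmcs_read32(VM_ENTRY_INTR_INFO_FIELD);

		if (entry_intr_info & INTR_INFO_VALID_MASK) {
			vmcs12->idt_vectoring_info_field = entry_intr_info;
			vmcs12->idt_vectoring_error_code =
				vmcs_read32(VM_ENTRY_EXCEPTION_ERROR_CODE);
			vmcs12->vm_exit_instruction_len =
				vmcs_read32(VM_ENTRY_INSTRUCTION_LEN);
		}
	}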
>>> But how can this happen with the VMX code? VMX has this nested_run_pending
>>> thing that prevents #vmexit emulation from happening without vmlaunch.
>>> This means that VM_ENTRY_INTR_INFO_FIELD should never be valid during
>>> #vmexit emulation since it is marked invalid during vmlaunch.
>>
>> Now that nmi/interrupt_allowed is strict w.r.t. nested_run_pending again,
>> it may indeed no longer happen. It was definitely a problem before, also
>> with the direct vmexit on pending INIT. This requires a second thought;
>> maybe also a WARN_ON(vmx->nested.nested_run_pending) in nested_vmx_vmexit.
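
Roughly like this - untested, and the exact placement at the top of
nested_vmx_vmexit is an assumption:

	static void nested_vmx_vmexit(struct kvm_vcpu *vcpu)
	{
		struct vcpu_vmx *vmx = to_vmx(vcpu);

		/* Emulating a nested vmexit while the vmlaunch/vmresume
		 * that precedes it is still pending would be a bug. */
		WARN_ON(vmx->nested.nested_run_pending);

		/* ... regular vmexit emulation continues here ... */
	}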
>>
>>>
>>>> However, we differ on how to deal with events that L0 wanted to inject
>>>> into L2. Likely, this case is still broken in SVM. For VMX, the function
>>>> vmcs12_save_pending_events deals with transferring pending L0 events
>>>> into the queue of L1. That is mandatory as L1 may decide to switch the
>>>> guest state completely, invalidating or preserving the pending events
>>>> for later injection (including on a different node, once we support
>>>> migration).
>>>>
>>>> Note that we treat directly injected NMIs differently as they can hit
>>>> both L1 and L2. In this case, we let L0 retry the injection, also over
>>>> L1, after leaving L2.
>>>>
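
For reference, the gist of vmcs12_save_pending_events in this version -
heavily simplified, with error codes, soft events and the exact field
encoding omitted, and the queue clearing folded in for brevity:

	static void vmcs12_save_pending_events(struct kvm_vcpu *vcpu,
					       struct vmcs12 *vmcs12)
	{
		/* Move a pending L0-injected event from the architectural
		 * queues into L1's idt_vectoring_info_field. */
		if (vcpu->arch.exception.pending) {
			vmcs12->idt_vectoring_info_field =
				vcpu->arch.exception.nr |
				VECTORING_INFO_VALID_MASK |
				INTR_TYPE_HARD_EXCEPTION;
			kvm_clear_exception_queue(vcpu);
		} else if (vcpu->arch.interrupt.pending) {
			vmcs12->idt_vectoring_info_field =
				vcpu->arch.interrupt.nr |
				VECTORING_INFO_VALID_MASK |
				INTR_TYPE_EXT_INTR;
			kvm_clear_interrupt_queue(vcpu);
		}
		/* NMIs are deliberately not transferred here in this
		 * version - see below. */
	}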
>>> Hmm, where does the SDM say that NMIs behave this way?
>>
>> NMIs are only blocked in root mode if we took an NMI-related vmexit (or,
>> of course, while an NMI is being processed). Thus, every arriving NMI
>> can hit either the guest or the host - pure luck.
>>
>> However, I missed the fact that an NMI may have been injected by L1 as
>> well. If the injection triggers a vmexit, that NMI could now leak into
>> L1. So we have to process such NMIs in vmcs12_save_pending_events as well.
>>
> You mean "should not leak into L0", not L1?

No, L1. If we keep the NMI in the architectural queue, L0 will try to
reinject it over L1 after the vmexit to it.
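
Concretely, vmcs12_save_pending_events would have to gain something along
these lines (untested sketch; how to distinguish an NMI that L0 wanted to
inject from one injected on behalf of L1 is exactly the open question):

	if (vcpu->arch.nmi_injected) {
		vmcs12->idt_vectoring_info_field = INTR_INFO_VALID_MASK |
			INTR_TYPE_NMI_INTR | NMI_VECTOR;
		vcpu->arch.nmi_injected = false;
	}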

Jan

