Re: [kvm-unit-tests PATCH] x86: Remove test_multiple_nmi()

> On Apr 24, 2019, at 1:55 PM, Sean Christopherson <sean.j.christopherson@xxxxxxxxx> wrote:
> 
> On Tue, Apr 23, 2019 at 09:50:59PM -0700, nadav.amit@xxxxxxxxx wrote:
>> From: Nadav Amit <nadav.amit@xxxxxxxxx>
>> 
>> According to the discussion in [1], expecting nested NMIs never to be
> 
> s/nested/multiple pending
> 
>> collapsed is wrong.
> 
> It'd also be helpful to quote the SDM or APM, although it's admittedly
> difficult to find a relevant blurb in the SDM.  The only explicit statement
> regarding the number of latched/pending NMIs I could find was for SMM:
> 
> 34.8 NMI HANDLING WHILE IN SMM
>  NMI interrupts are blocked upon entry to the SMI handler. If an NMI request
>  occurs during the SMI handler, it is latched and serviced after the processor
>  exits SMM. Only one NMI request will be latched during the SMI handler.  If an
>  NMI request is pending when the processor executes the RSM instruction, the NMI
>  is serviced before the next instruction of the interrupted code sequence. This
>  assumes that NMIs were not blocked before the SMI occurred. If NMIs were
>  blocked before the SMI occurred, they are blocked after execution of RSM.
> 
> All that being said, removing the test is correct as it's blatantly
> subject to a race condition between vCPUs.
> 
> It probably makes sense to add a single-threaded test that pends an NMI from
> inside the NMI handler to ensure that KVM pends NMIs correctly.  I'll send
> a patch.

Thanks, Sean. I thought quoting you would be enough. ;-)

Paolo, please let me know if you have any further feedback, so I know what to
include in v2.




