On Wed, Sep 07, 2011 at 08:09:37PM +0300, Avi Kivity wrote:
> On 09/07/2011 07:52 PM, Don Zickus wrote:
> >>
> >> May I ask how?  Detecting a back-to-back NMI?
> >
> >Pretty boring actually.  Currently we execute an NMI handler until one of
> >them returns handled.  Then we stop.  This may cause us to miss an NMI in
> >the case of multiple NMIs at once.  Now we are changing it to execute
> >_all_ the handlers to make sure we didn't miss one.
>
> That's going to be pretty bad for kvm - those handlers become a lot
> more expensive since they involve reading MSRs.  Even worse if we
> start using NMIs as a wakeup for pv spinlocks as provided by this
> patchset.

Oh.

> >But then the downside
> >here is we accidentally handle an NMI that was latched.  This would cause
> >a 'Dazed and confused' message as that NMI was already handled by the
> >previous NMI.
> >
> >We are working on an algorithm to detect this condition and flag it
> >(nothing complicated).  But it may never be perfect.
> >
> >On the other hand, what else are we going to do with an edge-triggered
> >shared interrupt line?
>
> How about, during NMI, save %rip to a per-cpu variable.  Handle just
> one cause.  If, on the next NMI, we hit the same %rip, assume
> back-to-back NMI has occurred and now handle all causes.

I had a similar idea a couple of months ago while debugging a continuous
flow of back-to-back NMIs from a stress-test perf application, and I
couldn't get it to work.  But let me try it again, because it does make
sense as an optimization.

Thanks,
Don

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html