On Tue, Oct 03, 2023 at 11:21:46AM -0700, Jim Mattson wrote:
> On Tue, Oct 3, 2023 at 8:23 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > Since you steal the whole PMU, can't you re-route the PMI to something
> > > that's virt friendly too?
> >
> > Hmm, actually, we probably could.  It would require modifying the host's APIC_LVTPC
> > entry when context switching the PMU, e.g. to replace the NMI with a dedicated IRQ
> > vector.  As gross as that sounds, it might actually be cleaner overall than
> > deciphering whether an NMI belongs to the host or guest, and it would almost
> > certainly yield lower latency for guest PMIs.
>
> Ugh. Can't KVM just install its own NMI handler? Either way, it's
> possible for late PMIs to arrive in the wrong context.

I don't think you realize what a horrible trainwreck the NMI handler is.
Every handler has to be able to determine whether the NMI is theirs to
handle.

If we go do this whole swizzle thing, we must find a sequence of PMU
'instructions' that syncs against the PMI, because otherwise we're going
to lose PMIs, and that's going to be a *TON* of pain.

I'll put it on the agenda for the next time I talk with the hardware
folks. But IIRC the AMD thing is *much* worse in this regard than the
Intel one.
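
For the sake of discussion, a rough sketch of the LVTPC swizzle being
proposed might look like the below. KVM_GUEST_PMI_VECTOR is a made-up
name with a placeholder value; a real vector would still have to be
reserved and wired up to a handler that forwards the PMI into the guest,
and none of this addresses the PMI ordering problem above:

/*
 * Illustrative only: KVM_GUEST_PMI_VECTOR is hypothetical; a real
 * implementation would need to reserve an IDT vector and add a sysvec
 * handler for it.
 */
#define KVM_GUEST_PMI_VECTOR	0xf4	/* placeholder value */

static void kvm_pmi_route_to_guest(void)
{
	/*
	 * Deliver PMIs as a normal fixed-vector interrupt while the
	 * guest owns the PMU.
	 */
	apic_write(APIC_LVTPC, KVM_GUEST_PMI_VECTOR);
}

static void kvm_pmi_route_to_host(void)
{
	/* Restore NMI delivery so host perf gets its PMIs back. */
	apic_write(APIC_LVTPC, APIC_DM_NMI);
}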