On Tue, May 14, 2019 at 02:55:18PM -0700, Andy Lutomirski wrote:
>
> > On May 14, 2019, at 2:06 PM, Sean Christopherson <sean.j.christopherson@xxxxxxxxx> wrote:
> >
> >> On Tue, May 14, 2019 at 01:33:21PM -0700, Andy Lutomirski wrote:
> >> I suspect that the context switch is a bit of a red herring.  A
> >> PCID-don't-flush CR3 write is IIRC under 300 cycles.  Sure, it's slow,
> >> but it's probably minor compared to the full cost of the vm exit.  The
> >> pain point is kicking the sibling thread.
> >
> > Speaking of PCIDs, a separate mm for KVM would mean consuming another
> > ASID, which isn't good.
>
> I’m not sure we care.  We have many logical address spaces (two per mm
> plus a few more).  We have 4096 PCIDs, but we only use ten or so.  And we
> have some undocumented number of *physical* ASIDs with some undocumented
> mechanism by which PCID maps to a physical ASID.

Yeah, I was referring to physical ASIDs.

> I don’t suppose you know how many physical ASIDs we have?

Limited number of physical ASIDs.  I'll leave it at that so as not to
disclose something I shouldn't.

> And how it interacts with the VPID stuff?

VPID and PCID get factored into the final ASID, i.e. changing either one
results in a new ASID.  The SDM's oblique way of saying that:

  VPIDs and PCIDs (see Section 4.10.1) can be used concurrently.  When
  this is done, the processor associates cached information with both a
  VPID and a PCID.  Such information is used only if the current VPID and
  PCID both match those associated with the cached information.

E.g. enabling PTI in both the host and guest consumes four ASIDs just to
run a single task in the guest:

  - VPID=0, PCID=kernel
  - VPID=0, PCID=user
  - VPID=1, PCID=kernel
  - VPID=1, PCID=user

The impact of consuming another ASID for KVM would likely depend on both
the guest and host configurations/workloads, e.g. if the guest is using a
lot of PCIDs then it's probably a moot point.  It's something to keep in
mind though if we go down this path.
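
To make the matching rule concrete, here's a toy userspace C model of the
tagging described in the SDM quote above.  To be clear, this is purely
illustrative: the real (VPID, PCID) -> physical ASID mapping is
undocumented, and every name and size in here (tlb_entry, tlb_lookup,
TLB_SIZE, etc.) is made up for the example.

/*
 * Toy model of VPID+PCID TLB tagging.  Illustrative only; the actual
 * number of physical ASIDs and the hardware's internal organization
 * are undocumented.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tlb_entry {
	uint16_t vpid;		/* 0 == host, non-zero == guest */
	uint16_t pcid;		/* with PTI: kernel vs. user PCID */
	uint64_t vaddr;
	uint64_t paddr;
	bool     valid;
};

#define TLB_SIZE 16
static struct tlb_entry tlb[TLB_SIZE];
static int next_slot;

static void tlb_fill(uint16_t vpid, uint16_t pcid,
		     uint64_t vaddr, uint64_t paddr)
{
	tlb[next_slot % TLB_SIZE] = (struct tlb_entry){
		.vpid = vpid, .pcid = pcid,
		.vaddr = vaddr, .paddr = paddr, .valid = true,
	};
	next_slot++;
}

/*
 * Per the SDM quote above, a cached translation is used only if the
 * current VPID *and* PCID both match the tags on the entry.
 */
static bool tlb_lookup(uint16_t vpid, uint16_t pcid,
		       uint64_t vaddr, uint64_t *paddr)
{
	for (int i = 0; i < TLB_SIZE; i++) {
		if (tlb[i].valid && tlb[i].vpid == vpid &&
		    tlb[i].pcid == pcid && tlb[i].vaddr == vaddr) {
			*paddr = tlb[i].paddr;
			return true;
		}
	}
	return false;
}

int main(void)
{
	uint64_t pa;

	/* The four (VPID, PCID) combos from the PTI example above. */
	tlb_fill(0, 1, 0x1000, 0xa000);		/* host kernel */
	tlb_fill(0, 2, 0x1000, 0xa000);		/* host user */
	tlb_fill(1, 1, 0x1000, 0xb000);		/* guest kernel */
	tlb_fill(1, 2, 0x1000, 0xb000);		/* guest user */

	/* Hit: both tags match. */
	printf("VPID=1 PCID=1: %s\n",
	       tlb_lookup(1, 1, 0x1000, &pa) ? "hit" : "miss");
	/* Miss: same PCID, different VPID -- i.e. a distinct ASID. */
	printf("VPID=2 PCID=1: %s\n",
	       tlb_lookup(2, 1, 0x1000, &pa) ? "hit" : "miss");
	return 0;
}

The second lookup missing despite the identical PCID is the whole point:
each (VPID, PCID) pair behaves as its own ASID, which is why the PTI
example above burns four of them for a single guest task.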