Re: [RFC PATCH 01/13] KVM: nSVM: Track the ASID per-VMCB

On Fri, Feb 28, 2025 at 04:03:08PM -0800, Sean Christopherson wrote:
> +Jim, for his input on VPIDs.
> 
> On Wed, Feb 05, 2025, Yosry Ahmed wrote:
> > The ASID is currently tracked per-vCPU, because the same ASID is used by
> > L1 and L2. That ASID is flushed on every transition between L1 and L2.
> > 
> > Track the ASID separately for each VMCB (similar to the
> > asid_generation), giving L2 a separate ASID. This is in preparation for
> > doing fine-grained TLB flushes on nested transitions instead of
> > unconditional full flushes.
> 
> After having some time to think about this, rather than track ASIDs per VMCB, I
> think we should converge on a single approach for nVMX (VPID) and nSVM (ASID).
> 
> Per **VM**, one VPID/ASID for L1, and one VPID/ASID for L2.
> 
> For SVM, the dynamic ASID crud is a holdover from KVM's support for CPUs that
> don't support FLUSHBYASID, i.e. needed to purge the entire TLB in order to flush
> guest mappings.  FLUSHBYASID was added in 2010, and AFAIK has been supported by
> all AMD CPUs since.

This means that on those old CPUs, every TLB flush done on behalf of a
guest will also flush the TLB entries of all other guests and the host,
IIUC. I am not sure which CPUs still in use lack FLUSHBYASID, but this
sounds like a big regression for them.

I am all for simplifying the code and converging nVMX and nSVM, but I am
a bit worried about this. It sounds like you are not, though, so maybe I
am missing something :P

I initially thought that the ASID space is too small, but it turns out I
was confused by the ASID messages from the SEV code. The maximum number
of ASIDs seems to be (1 << 15) on Rome, Milan, and Genoa CPUs. That's
half of VMX_NR_VPIDS, and probably good enough.

> 
> KVM already mostly keeps the same ASID, except for when a vCPU is migrated, in
> which case KVM assigns a new ASID.  I suspect that following VMX's lead and
> simply doing a TLB flush in this situation would be an improvement for modern
> CPUs, as it would flush the entries that need to be flushed, and not pollute the
> TLBs with stale, unused entries.
> 
> Using a static per-VM ASID would also allow using broadcast invalidations[*],
> would simplify the SVM code base, and I think/hope would allow us to move much
> of the TLB flushing logic, e.g. for task migration, to common code.
> 
> For VPIDs, maybe it's because it's Friday afternoon, but for the life of me I
> can't think of any reason why KVM needs to assign VPIDs per vCPU.  Especially
> since KVM is ridiculously conservative and flushes _all_ EPT/VPID contexts when
> running a different vCPU on a pCPU (which I suspect we can trim down?).

I think for the purpose of this series we can switch SVM to use one ASID
per vCPU to match the current nVMX behavior and simplify things. Moving
both nSVM and nVMX to use a single ASID per VM instead of per vCPU, and
potentially moving some of the logic to common code, could be a separate
follow-up effort (maybe something that I can work on later this year if
no one picks it up :) ).

WDYT?

> 
> Am I forgetting something?
> 
> [*] https://lore.kernel.org/all/Z8HdBg3wj8M7a4ts@xxxxxxxxxx



