On Fri, 2022-04-29 at 17:00 +0000, Sean Christopherson wrote:
> On Tue, Apr 26, 2022, Maxim Levitsky wrote:
> > On Tue, 2022-04-26 at 10:06 +0300, Maxim Levitsky wrote:
> > BTW, can I ask you to check something on the AMD side of things of AVIC?
> >
> > I noticed that AMD's manual states that:
> >
> > "Multiprocessor VM requirements. When running a VM which has multiple virtual CPUs, and the
> > VMM runs a virtual CPU on a core which had last run a different virtual CPU from the same VM,
> > regardless of the respective ASID values, care must be taken to flush the TLB on the VMRUN using a
> > TLB_CONTROL value of 3h. Failure to do so may result in stale mappings misdirecting virtual APIC
> > accesses to the previous virtual CPU's APIC backing page."
> >
> > Is it relevant to KVM? I don't fully understand why it was mentioned that ASID doesn't matter,
> > or what is special about a 'virtual CPU from the same VM' if ASID doesn't matter.
>
> I believe it's calling out that, because vCPUs from the same VM likely share an ASID,
> the magic TLB entry for the APIC-access page, which redirects to the virtual APIC page,
> will be preserved. And so if the hypervisor doesn't flush the ASID/TLB, accelerated
> xAPIC accesses for the new vCPU will go to the previous vCPU's virtual APIC page.

This is what I want to think as well, but the manual explicitly says
"regardless of the respective ASID values".

Taken at face value, the only logical way to read this, IMHO, is that every time
the APIC backing page is changed, we need to issue a TLB flush.

Best regards,
	Maxim Levitsky

>
> Intel has the same requirement, though this specific scenario isn't as well documented.
> E.g. even if using EPT and VPID, the EPT still needs to be invalidated because the
> TLB can cache guest-physical mappings, which are not associated with a VPID.
>
> Huh. I was going to say that KVM does the necessary flushes in vmx_vcpu_load_vmcs()
> and pre_svm_run(), but I don't think that's true. KVM flushes if the _new_ VMCS/VMCB
> is being migrated to a different pCPU, but neither VMX nor SVM flushes when switching
> between vCPUs that are both "loaded" on the current pCPU.
>
> Switching between vmcs01 and vmcs02 is ok, because KVM always forces a different
> EPTP, even if L1 is using shadow paging (the guest_mode bit in the role prevents
> reusing a root). nSVM is "ok" because it flushes on every transition anyways.
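To make the manual's requirement concrete, below is a minimal, self-contained sketch of the
per-pCPU bookkeeping it seems to call for: remember which vCPU (of which VM) last did VMRUN
on a core, and force TLB_CONTROL = 3h when a *different* vCPU of the *same* VM runs next,
regardless of ASID. This is not KVM's actual code; `struct pcpu_state` and
`tlb_ctl_for_vmrun()` are hypothetical names, and only the TLB_CONTROL encodings (0h = do
nothing, 3h = flush this guest's TLB) come from the quoted manual text.

```c
#include <stdint.h>

/* TLB_CONTROL encodings per the quoted manual text. */
#define TLB_CONTROL_DO_NOTHING   0x0
#define TLB_CONTROL_FLUSH_ASID   0x3

/*
 * Hypothetical per-physical-CPU state: which vCPU last did VMRUN on
 * this core.  -1 means no vCPU has run here yet.
 */
struct pcpu_state {
	int last_vm_id;		/* VM whose vCPU last ran on this pCPU */
	int last_vcpu_id;	/* that vCPU's id within its VM */
};

/*
 * Pick the TLB_CONTROL value for the next VMRUN on this pCPU, then
 * record the new "last run" vCPU.  Switching to a different vCPU of
 * the same VM must flush, regardless of ASID, so a stale APIC-access
 * mapping cannot redirect to the previous vCPU's APIC backing page.
 * (Cross-VM switches are assumed to be covered by the ASID change and
 * are not flushed here.)
 */
static uint8_t tlb_ctl_for_vmrun(struct pcpu_state *pcpu,
				 int vm_id, int vcpu_id)
{
	uint8_t tlb_ctl = TLB_CONTROL_DO_NOTHING;

	if (pcpu->last_vm_id == vm_id && pcpu->last_vcpu_id != vcpu_id)
		tlb_ctl = TLB_CONTROL_FLUSH_ASID;

	pcpu->last_vm_id = vm_id;
	pcpu->last_vcpu_id = vcpu_id;
	return tlb_ctl;
}
```

Note this sketch deliberately does nothing when the same vCPU runs back-to-back, which is
exactly the case the thread worries about being missed today: two vCPUs of one VM both
"loaded" on the same pCPU alternating without any flush in between.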