On Tue, Oct 10, 2023, David Woodhouse wrote:
> If I understand things correctly, the point of the TDP MMU is to use
> page tables such as EPT for GPA → HPA translations, but let the
> virtualization support in the CPU handle all of the *virtual*
> addressing and page tables, including the non-root mode %cr3/%cr4.
>
> I have a guest which loves to flip the SMEP bit on and off in %cr4 all
> the time. The guest is actually Xen, in its 'PV shim' mode which
> enables it to support a single PV guest, while running in a true
> hardware virtual machine:
> https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg00497.html
>
> The performance is *awful*, since as far as I can tell, on every flip
> KVM flushes the entire EPT. I understand why that might be necessary
> for the mode where KVM is building up a set of shadow page tables to
> directly map GVA → HPA and be loaded into %cr3 of a CPU that doesn't
> support native EPT translations. But I don't understand why the TDP MMU
> would need to do it. Surely we don't have to change anything in the EPT
> just because the stuff in the non-root-mode %cr3/%cr4 changes?
>
> So I tried this, and it went faster and nothing appears to have blown
> up.
>
> Am I missing something? Is this stupidly wrong?

Heh, you're in luck, because regardless of what your darn pronoun "this"
refers to, the answer is yes, "this" is stupidly wrong.

The below is stupidly wrong.  KVM needs to at least reconfigure the
guest's paging metadata that is used to translate GVAs to GPAs during
emulation.

But the TDP MMU behavior *was* also stupidly wrong.  The reason that two
vCPUs suck less is because KVM would zap SPTEs (EPT roots) if and only
if *both* vCPUs unloaded their roots at the same time.

Commit edbdb43fc96b ("KVM: x86: Preserve TDP MMU roots until they are
explicitly invalidated") should fix the behavior you're seeing.

And if we want to try and make SMEP blazing fast on Intel, we can
probably let the guest write it directly, i.e.
give SMEP the same treatment as CR0.WP.  See commits cf9f4c0eb169
("KVM: x86/mmu: Refresh CR0.WP prior to checking for emulated permission
faults") and fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit").

Oh, and if your userspace is doing something silly like constantly
creating and deleting memslots, see commit 0df9dab891ff ("KVM: x86/mmu:
Stop zapping invalidated TDP MMU roots asynchronously").

> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1072,7 +1074,8 @@ int kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
>  	if (kvm_x86_ops.set_cr4(vcpu, cr4))
>  		return 1;
>  
> -	kvm_post_set_cr4(vcpu, old_cr4, cr4);
> +	if (!vcpu->kvm->arch.tdp_mmu_enabled)
> +		kvm_post_set_cr4(vcpu, old_cr4, cr4);
>  
>  	if ((cr4 ^ old_cr4) & (X86_CR4_OSXSAVE | X86_CR4_PKE))
>  		kvm_update_cpuid_runtime(vcpu);
>
>
> Also... if I have *two* vCPUs it doesn't go quite as slowly while Xen
> starts Grub and then Grub boots a Linux kernel. Until the kernel brings
> up its second vCPU and *then* it starts going really slowly again. Is
> that because the TDP roots are refcounted, and that idle vCPU holds
> onto the unused one and prevents it from being completely thrown away?
> Until the vCPU stops being idle and starts flipping SMEP on/off on
> Linux←→Xen transitions too?
>
> In practice, there's not a lot of point in Xen using SMEP when it's
> purely acting as a library for its *one* guest, living in an HVM
> container. The above patch speeds things up but telling Xen not to use
> SMEP at all makes things go *much* faster. But if I'm not being
> *entirely* stupid, there may be some generic improvements for
> KVM+TDPMMU here somewhere so it's worth making a fool of myself by
> asking...?