On Tue, May 21, 2024 at 1:32 AM Isaku Yamahata <isaku.yamahata@xxxxxxxxx> wrote:
> +static void vt_adjust_max_pa(void)
> +{
> +	u64 tme_activate;
> +
> +	mmu_max_gfn = __kvm_mmu_max_gfn();
> +	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);
> +	if (!(tme_activate & TME_ACTIVATE_LOCKED) ||
> +	    !(tme_activate & TME_ACTIVATE_ENABLED))
> +		return;
> +
> +	mmu_max_gfn -= (gfn_t)TDX_RESERVED_KEYID_BITS(tme_activate);

This would be ">>=", not "-=". But I think this should not look at TME MSRs
directly; instead it can use boot_cpu_data.x86_phys_bits. You can use it
instead of shadow_phys_bits in __kvm_mmu_max_gfn(), and then VMX does not
need any adjustment.

That said, this is not a bugfix, it's just an optimization.

Paolo

> +	}
>
> out:
> 	/* kfree() accepts NULL. */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 7f89405c8bc4..c519bb9c9559 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12693,6 +12693,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> 	if (ret)
> 		goto out;
>
> +	kvm->arch.mmu_max_gfn = __kvm_mmu_max_gfn();
> 	kvm_mmu_init_vm(kvm);
>
> 	ret = static_call(kvm_x86_vm_init)(kvm);
> @@ -13030,7 +13031,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> 		return -EINVAL;
>
> 	if (change == KVM_MR_CREATE || change == KVM_MR_MOVE) {
> -		if ((new->base_gfn + new->npages - 1) > kvm_mmu_max_gfn())
> +		if ((new->base_gfn + new->npages - 1) > kvm_mmu_max_gfn(kvm))
> 			return -EINVAL;
>
> #if 0

> --
> Isaku Yamahata <isaku.yamahata@xxxxxxxxx>