Sean Christopherson <sean.j.christopherson@xxxxxxxxx> writes:

> Stop propagating MMU large page support into a memslot's disallow_lpage
> now that the MMU's max_page_level handles the scenario where VMX's EPT is
> enabled and EPT doesn't support 2M pages.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/vmx.c | 3 ---
>  arch/x86/kvm/x86.c     | 6 ++----
>  2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3ad24ca692a6..e349689ac0cf 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -7633,9 +7633,6 @@ static __init int hardware_setup(void)
>  	if (!cpu_has_vmx_tpr_shadow())
>  		kvm_x86_ops->update_cr8_intercept = NULL;
>
> -	if (enable_ept && !cpu_has_vmx_ept_2m_page())
> -		kvm_disable_largepages();
> -
>  #if IS_ENABLED(CONFIG_HYPERV)
>  	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH
>  	    && enable_ept) {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 144143a57d0b..b40488fd2969 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -9884,11 +9884,9 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
>  		ugfn = slot->userspace_addr >> PAGE_SHIFT;
>  		/*
>  		 * If the gfn and userspace address are not aligned wrt each
> -		 * other, or if explicitly asked to, disable large page
> -		 * support for this slot
> +		 * other, disable large page support for this slot.
>  		 */
> -		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
> -		    !kvm_largepages_enabled()) {
> +		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1)) {
>  			unsigned long j;
>
>  			for (j = 0; j < lpages; ++j)

MMU code always explodes my brain; this left me wondering why the original
vmx_get_lpage_level() wasn't adjusted before... FWIW,

Reviewed-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>

-- 
Vitaly
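
For context, a minimal standalone sketch of the alignment check that remains
in kvm_arch_create_memslot() after this patch; this is not KVM code, and the
constants and addresses are illustrative stand-ins (512 pages per huge page
stands in for KVM_PAGES_PER_HPAGE(level) at the 2M level):

/*
 * A memslot can only be mapped with huge pages if the guest frame number
 * and the host userspace page offset agree modulo the huge page size.
 */
#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGES_PER_HPAGE		512UL	/* 2M / 4K */

static int hugepage_misaligned(unsigned long base_gfn, unsigned long userspace_addr)
{
	unsigned long ugfn = userspace_addr >> PAGE_SHIFT;

	/* Non-zero iff base_gfn and ugfn differ within a huge-page-sized region. */
	return !!((base_gfn ^ ugfn) & (PAGES_PER_HPAGE - 1));
}

int main(void)
{
	/* Both gfn and userspace address are 2M aligned: huge pages allowed. */
	printf("%d\n", hugepage_misaligned(0x100000, 0x7f0000000000UL));
	/* Userspace address is only 4K aligned: huge pages must be disabled. */
	printf("%d\n", hugepage_misaligned(0x100000, 0x7f0000001000UL));
	return 0;
}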