On Fri, 2022-03-04 at 11:48 -0800, isaku.yamahata@xxxxxxxxx wrote:
> From: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> 
> In the existing x86 KVM MMU code, there is already max_level member in
> struct kvm_page_fault with KVM_MAX_HUGEPAGE_LEVEL initial value. The KVM
> page fault handler denies page size larger than max_level.
> 
> Add per-VM member to indicate the allowed maximum page size with
> KVM_MAX_HUGEPAGE_LEVEL as default value and initialize max_level in struct
> kvm_page_fault with it.
> 
> For the guest TD, the set per-VM value for allows maximum page size to 4K
> page size. Then only allowed page size is 4K. It means large page is
> disabled.

Not supporting large pages for a TD is the reason you want this change, not
the result of it. Please refine the changelog a little bit.

> 
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h | 1 +
>  arch/x86/kvm/mmu.h              | 2 +-
>  arch/x86/kvm/mmu/mmu.c          | 2 ++
>  3 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index d8b78d6abc10..d33d79f2af2d 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1053,6 +1053,7 @@ struct kvm_arch {
>  	unsigned long n_requested_mmu_pages;
>  	unsigned long n_max_mmu_pages;
>  	unsigned int indirect_shadow_pages;
> +	int tdp_max_page_level;
>  	u8 mmu_valid_gen;
>  	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
>  	struct list_head active_mmu_pages;
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index 0ae91b8b25df..650989c37f2e 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -192,7 +192,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
>  		.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
>  		.nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
> 
> -		.max_level = KVM_MAX_HUGEPAGE_LEVEL,
> +		.max_level = vcpu->kvm->arch.tdp_max_page_level,
>  		.req_level = PG_LEVEL_4K,
>  		.goal_level = PG_LEVEL_4K,
>  	};
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index a474f2e76d78..e9212394a530 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5782,6 +5782,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
>  	node->track_write = kvm_mmu_pte_write;
>  	node->track_flush_slot = kvm_mmu_invalidate_zap_pages_in_memslot;
>  	kvm_page_track_register_notifier(kvm, node);
> +
> +	kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
>  }
> 
>  void kvm_mmu_uninit_vm(struct kvm *kvm)
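
(For reference, the changelog says the TD side clamps the per-VM value to 4K.
A minimal sketch of what that could look like; the function name and placement
below are hypothetical and not part of this patch:

	/*
	 * Hypothetical TD creation hook (not in this series): restrict the
	 * per-VM TDP page size so the fault handler never installs a page
	 * larger than 4K for a guest TD.
	 */
	static void tdx_td_init_mmu(struct kvm *kvm)
	{
		/* Override the KVM_MAX_HUGEPAGE_LEVEL default set in kvm_mmu_init_vm(). */
		kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
	}

With something like that in place, kvm_mmu_do_page_fault() starts every TD
fault with .max_level == PG_LEVEL_4K, which is what effectively disables
large pages for the TD.)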