On 1/23/2024 7:53 AM, isaku.yamahata@xxxxxxxxx wrote:
From: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
TDX requires special handling to support large private pages. For
simplicity, only support 4K pages for TD guests for now. Add per-VM
maximum page level support so that the maximum page size can differ
between TD guests and conventional VMX guests.
Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
Acked-by: Kai Huang <kai.huang@xxxxxxxxx>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu/mmu.c | 2 ++
arch/x86/kvm/mmu/mmu_internal.h | 2 +-
3 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 430d7bd7c37c..313519edd79e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1283,6 +1283,7 @@ struct kvm_arch {
unsigned long n_requested_mmu_pages;
unsigned long n_max_mmu_pages;
unsigned int indirect_shadow_pages;
+ int tdp_max_page_level;
Although only TDX needs special handling for now, and TDX always uses
TDP, this doesn't necessarily have to be TDP-only, right?
When the value is assigned to kvm_page_fault.max_level, it is also used
on the non-TDP code path.
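For illustration, a minimal sketch of how the TDX side would presumably
consume the new field; the tdx_vm_init() hook name is my assumption and
is not part of this patch:

static int tdx_vm_init(struct kvm *kvm)
{
	/*
	 * Hypothetical: TD guests only support 4K private pages for
	 * now, so lower the cap below the KVM_MAX_HUGEPAGE_LEVEL
	 * default set in kvm_mmu_init_vm().
	 */
	kvm->arch.tdp_max_page_level = PG_LEVEL_4K;
	return 0;
}

Since kvm_mmu_do_page_fault() builds the kvm_page_fault for all MMU
flavors, such a cap would apply to shadow-MMU faults as well, which is
why the "tdp_" prefix in the name reads as misleading.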
u8 mmu_valid_gen;
struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
struct list_head active_mmu_pages;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 54d4c8f1ba68..e93bc16a5e9b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6307,6 +6307,8 @@ void kvm_mmu_init_vm(struct kvm *kvm)
kvm->arch.split_desc_cache.kmem_cache = pte_list_desc_cache;
kvm->arch.split_desc_cache.gfp_zero = __GFP_ZERO;
+
+ kvm->arch.tdp_max_page_level = KVM_MAX_HUGEPAGE_LEVEL;
}
static void mmu_free_vm_memory_caches(struct kvm *kvm)
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 0443bfcf5d9c..2b9377442927 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -296,7 +296,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
.nx_huge_page_workaround_enabled =
is_nx_huge_page_enabled(vcpu->kvm),
- .max_level = KVM_MAX_HUGEPAGE_LEVEL,
+ .max_level = vcpu->kvm->arch.tdp_max_page_level,
.req_level = PG_LEVEL_4K,
.goal_level = PG_LEVEL_4K,
};
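To make the effect concrete, the end result is equivalent to clamping
the mapping level against the per-VM cap. A hypothetical helper
illustrating the idea (KVM does this implicitly via
kvm_page_fault.max_level rather than through an explicit function):

static int effective_map_level(struct kvm *kvm, int host_level)
{
	/* Never map above the per-VM cap, e.g. PG_LEVEL_4K for a TD. */
	return min(host_level, kvm->arch.tdp_max_page_level);
}

With the KVM_MAX_HUGEPAGE_LEVEL default this is a no-op for
conventional VMX guests.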