On 7/26/2023 6:23 AM, isaku.yamahata@xxxxxxxxx wrote:
From: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
A private page cannot be mapped as a large page if any smaller mapping
already exists. It has to wait until all of the smaller pages are mapped
and then promote them to a larger mapping.
Signed-off-by: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
---
arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 95ba78944712..a9f0f4ade2d0 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1293,7 +1293,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
 		int r;

-		if (fault->nx_huge_page_workaround_enabled)
+		if (fault->nx_huge_page_workaround_enabled ||
+		    kvm_gfn_shared_mask(vcpu->kvm))
 			disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);

 		/*
The implementation of disallowed_hugepage_adjust() is as follows:
void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte,
				int cur_level)
{
	if (cur_level > PG_LEVEL_4K &&
	    cur_level == fault->goal_level &&
	    is_shadow_present_pte(spte) &&
	    !is_large_pte(spte) &&
	    spte_to_child_sp(spte)->nx_huge_page_disallowed) {
		...
	}
}
One of the conditions is that spte_to_child_sp(spte)->nx_huge_page_disallowed
must be true for the fault's goal level to be decreased.

Doesn't this condition make the change in this patch ineffective?
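
To make the concern concrete, here is a sketch using only the names from the
quoted code; the negated check below is purely illustrative and not something
in the patch:

	/*
	 * Illustration of the case in question: with the patch applied, a
	 * fault on a private GPA (kvm_gfn_shared_mask() non-zero) reaches
	 * disallowed_hugepage_adjust(), but the child shadow page does not
	 * have nx_huge_page_disallowed set.
	 */
	if (cur_level > PG_LEVEL_4K &&
	    cur_level == fault->goal_level &&
	    is_shadow_present_pte(spte) &&
	    !is_large_pte(spte) &&
	    !spte_to_child_sp(spte)->nx_huge_page_disallowed) {
		/*
		 * disallowed_hugepage_adjust() leaves fault->goal_level
		 * untouched here, even though a lower-level page table
		 * (i.e. smaller mappings) already exists under this SPTE,
		 * so the fault can still try to install the large mapping
		 * that the commit message says must be avoided.
		 */
	}

Or does the nx_huge_page_disallowed check also need to be adjusted for the
kvm_gfn_shared_mask() case?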