On 11/7/2023 11:00 PM, isaku.yamahata@xxxxxxxxx wrote:
From: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
A private page cannot be mapped as a large page if any smaller mapping
exists. It has to wait for all the not-yet-mapped smaller pages to be
mapped, and then be promoted to a larger mapping.
Signed-off-by: Xiaoyao Li <xiaoyao.li@xxxxxxxxx>
---
arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 2c5257628881..d806574f7f2d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1287,7 +1287,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
int r;
- if (fault->nx_huge_page_workaround_enabled)
+ if (fault->nx_huge_page_workaround_enabled ||
+ kvm_gfn_shared_mask(vcpu->kvm))
As I mentioned in
https://lore.kernel.org/kvm/fef75d54-e319-5170-5f76-f5abc4856315@xxxxxxxxxxxxxxx/,
the change in this patch will not take effect.
If "fault->nx_huge_page_workaround_enabled" is false, the condition
"spte_to_child_sp(spte)->nx_huge_page_disallowed" will not be true.
IIUC, disallowed_hugepage_adjust() currently exists only to handle the
NX huge page workaround; it seems no special handling is needed for TD.
disallowed_hugepage_adjust(fault, iter.old_spte, iter.level);
/*
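
FWIW, below is a sketch of the check in question, paraphrased from my
reading of the upstream code (simplified, not verbatim). Taking the
demotion branch is gated on spte_to_child_sp(spte)->nx_huge_page_disallowed,
and that flag is only ever set while the NX huge page workaround is
enabled, so reaching the function via the new kvm_gfn_shared_mask()
condition alone changes nothing:

/*
 * Paraphrased sketch of disallowed_hugepage_adjust() from
 * arch/x86/kvm/mmu/mmu.c (simplified, not the verbatim kernel code).
 * Every condition below must hold for the fault to be demoted to a
 * smaller level; the last one depends on nx_huge_page_disallowed,
 * which is only set while the NX huge page workaround is enabled.
 */
void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte,
				int cur_level)
{
	if (cur_level > PG_LEVEL_4K &&
	    cur_level == fault->goal_level &&
	    is_shadow_present_pte(spte) &&
	    !is_large_pte(spte) &&
	    spte_to_child_sp(spte)->nx_huge_page_disallowed) {
		/*
		 * A small SPTE already exists: force the fault down one
		 * level by folding the next address bits into the pfn.
		 */
		u64 page_mask = KVM_PAGES_PER_HPAGE(cur_level) -
				KVM_PAGES_PER_HPAGE(cur_level - 1);

		fault->pfn |= fault->gfn & page_mask;
		fault->goal_level--;
	}
}

So with the workaround disabled for a TD, the extra call is a nop,
which is consistent with the point above that the patch as-is changes
nothing.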