[PATCH 2/3] KVM: x86/mmu: Pass account_nx to tdp_mmu_split_huge_page()

In preparation for splitting huge pages during fault handling, pass account_nx
to tdp_mmu_split_huge_page(). The eager page splitting path continues to
hard-code account_nx to false because its splits are done for dirty logging
rather than in response to vCPU execution faults.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
---
 arch/x86/kvm/mmu/tdp_mmu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
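
For illustration only (not part of this patch; the caller below is
hypothetical): a fault-path user introduced later in the series could derive
account_nx from the fault instead of hard-coding false the way the eager
splitting path does, e.g. along the lines of how the existing TDP MMU fault
path is assumed to compute it:

/*
 * Hypothetical sketch, not from this series: pass account_nx based on
 * whether NX huge pages disallowed a huge mapping at this level.
 */
static int example_split_during_fault(struct kvm *kvm,
				      struct tdp_iter *iter,
				      struct kvm_mmu_page *sp,
				      bool shared,
				      struct kvm_page_fault *fault)
{
	/* Assumed derivation, analogous to the tdp_mmu_map() fault path. */
	bool account_nx = fault->huge_page_disallowed &&
			  fault->req_level >= iter->level;

	return tdp_mmu_split_huge_page(kvm, iter, sp, shared, account_nx);
}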

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d71d177ae6b8..9263765c8068 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1456,7 +1456,8 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 }
 
 static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
-				   struct kvm_mmu_page *sp, bool shared)
+				   struct kvm_mmu_page *sp, bool shared,
+				   bool account_nx)
 {
 	const u64 huge_spte = iter->old_spte;
 	const int level = iter->level;
@@ -1479,7 +1480,7 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
 	 * correctness standpoint since the translation will be the same either
 	 * way.
 	 */
-	ret = tdp_mmu_link_sp(kvm, iter, sp, false, shared);
+	ret = tdp_mmu_link_sp(kvm, iter, sp, account_nx, shared);
 	if (ret)
 		goto out;
 
@@ -1539,7 +1540,7 @@ static int tdp_mmu_split_huge_pages_root(struct kvm *kvm,
 				continue;
 		}
 
-		if (tdp_mmu_split_huge_page(kvm, &iter, sp, shared))
+		if (tdp_mmu_split_huge_page(kvm, &iter, sp, shared, false))
 			goto retry;
 
 		sp = NULL;
-- 
2.35.1.1094.g7c7d902a7c-goog



