[PATCH] KVM: MMU: Use ptep_user for cmpxchg_gpte()

From: Takuya Yoshikawa <yoshikawa.takuya@xxxxxxxxxxxxx>

The address of the gpte was already calculated and stored in ptep_user
before entering cmpxchg_gpte().

This patch makes cmpxchg_gpte() use that address, making it clear that
we operate on the same gpte address throughout walk_addr_generic().

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@xxxxxxxxxxxxx>
---
 Note: I tested this but could not see cmpxchg_gpte() being called.
   Is there any good test case?
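 
   For reference, ptep_user is computed earlier in FNAME(walk_addr_generic)
   roughly like this (a sketch from memory, not necessarily the exact code
   in paging_tmpl.h; host_addr and offset are the walker's locals):

	host_addr = gfn_to_hva(vcpu->kvm, real_gfn);
	if (unlikely(kvm_is_error_hva(host_addr)))
		break;				/* table not mapped */

	ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
	if (unlikely(__copy_from_user(&pte, ptep_user, sizeof(pte))))
		break;				/* could not read gpte */

   so the __user pointer passed to cmpxchg_gpte() refers to the same gpte
   that the walker has just read.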

 arch/x86/kvm/paging_tmpl.h |   32 +++++++++++---------------------
 1 files changed, 11 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 52450a6..73cdf98 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -79,20 +79,16 @@ static gfn_t gpte_to_gfn_lvl(pt_element_t gpte, int lvl)
 }
 
 static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
-			 gfn_t table_gfn, unsigned index,
-			 pt_element_t orig_pte, pt_element_t new_pte)
+			       pt_element_t __user *ptep_user, unsigned index,
+			       pt_element_t orig_pte, pt_element_t new_pte)
 {
+	int npages;
 	pt_element_t ret;
 	pt_element_t *table;
 	struct page *page;
-	gpa_t gpa;
 
-	gpa = mmu->translate_gpa(vcpu, table_gfn << PAGE_SHIFT,
-				 PFERR_USER_MASK|PFERR_WRITE_MASK);
-	if (gpa == UNMAPPED_GVA)
-		return -EFAULT;
-
-	page = gfn_to_page(vcpu->kvm, gpa_to_gfn(gpa));
+	npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page);
+	BUG_ON(npages != 1);
 
 	table = kmap_atomic(page, KM_USER0);
 	ret = CMPXCHG(&table[index], orig_pte, new_pte);
@@ -234,12 +230,9 @@ walk:
 			int ret;
 			trace_kvm_mmu_set_accessed_bit(table_gfn, index,
 						       sizeof(pte));
-			ret = FNAME(cmpxchg_gpte)(vcpu, mmu, table_gfn,
-					index, pte, pte|PT_ACCESSED_MASK);
-			if (ret < 0) {
-				present = false;
-				break;
-			} else if (ret)
+			ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+						  pte, pte|PT_ACCESSED_MASK);
+			if (ret)
 				goto walk;
 
 			mark_page_dirty(vcpu->kvm, table_gfn);
@@ -293,12 +286,9 @@ walk:
 		int ret;
 
 		trace_kvm_mmu_set_dirty_bit(table_gfn, index, sizeof(pte));
-		ret = FNAME(cmpxchg_gpte)(vcpu, mmu, table_gfn, index, pte,
-			    pte|PT_DIRTY_MASK);
-		if (ret < 0) {
-			present = false;
-			goto error;
-		} else if (ret)
+		ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+					  pte, pte|PT_DIRTY_MASK);
+		if (ret)
 			goto walk;
 
 		mark_page_dirty(vcpu->kvm, table_gfn);
-- 
1.7.1
