Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.

On 07/12/2016 01:46, Junaid Shahid wrote:
> This change implements lockless access tracking for Intel CPUs without EPT
> A bits. This is achieved by marking the PTEs as not-present (but not
> completely clearing them) when clear_flush_young() is called after marking
> the pages as accessed. When an EPT Violation is generated as a result of
> the VM accessing those pages, the PTEs are restored to their original values.
> 
> Signed-off-by: Junaid Shahid <junaids@xxxxxxxxxx>
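
To recap the trick for anyone reading along: the R/W/X permission bits
are cleared so that the SPTE no longer looks present to the hardware,
but their old values are parked in bits that the hardware ignores, and
the EPT violation handler moves them back.  A minimal standalone sketch
of that round trip (the bit positions and the sketch_* names below are
illustrative, not the ones the patch uses):

#include <stdint.h>

typedef uint64_t u64;

/*
 * Illustrative layout: EPT R/W/X live in bits 0-2; bits 54-56 are
 * ignored by the hardware and are used to park the saved permissions.
 */
#define SPTE_RWX_MASK		0x7ull
#define SPTE_SAVED_SHIFT	54

/* Make the SPTE not-present without losing the permission bits. */
static u64 sketch_mark_for_access_track(u64 spte)
{
	spte |= (spte & SPTE_RWX_MASK) << SPTE_SAVED_SHIFT;
	spte &= ~SPTE_RWX_MASK;
	return spte;
}

/* On the resulting EPT violation, put the saved bits back. */
static u64 sketch_restore_access_track(u64 spte)
{
	spte |= (spte >> SPTE_SAVED_SHIFT) & SPTE_RWX_MASK;
	spte &= ~(SPTE_RWX_MASK << SPTE_SAVED_SHIFT);
	return spte;
}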

Just a few changes bordering on aesthetics...

Please review and let me know if you agree:

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 6ba62200530a..6b5d8ff66026 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -213,8 +213,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
 
 static inline bool is_access_track_spte(u64 spte)
 {
-	return shadow_acc_track_mask != 0 &&
-	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
+	/* Always false if shadow_acc_track_mask is zero.  */
+	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
 }
 
 /*
@@ -526,23 +526,24 @@ static bool spte_can_locklessly_be_made_writable(u64 spte)
 
 static bool spte_has_volatile_bits(u64 spte)
 {
+	if (!is_shadow_present_pte(spte))
+		return false;
+
 	/*
 	 * Always atomically update spte if it can be updated
 	 * out of mmu-lock, it can ensure dirty bit is not lost,
 	 * also, it can help us to get a stable is_writable_pte()
 	 * to ensure tlb flush is not missed.
 	 */
-	if (spte_can_locklessly_be_made_writable(spte))
+	if (spte_can_locklessly_be_made_writable(spte) ||
+	    is_access_track_spte(spte))
 		return true;
 
-	if (!is_shadow_present_pte(spte))
-		return false;
-
-	if (!shadow_accessed_mask)
-		return is_access_track_spte(spte);
+	if ((spte & shadow_accessed_mask) == 0 ||
+	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
+		return true;
 
-	return (spte & shadow_accessed_mask) == 0 ||
-		(is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0);
+	return false;
 }
 
 static bool is_accessed_spte(u64 spte)
@@ -726,16 +727,13 @@ static bool mmu_spte_age(u64 *sptep)
 {
 	u64 spte = mmu_spte_get_lockless(sptep);
 
-	if (spte & shadow_accessed_mask) {
+	if (!is_accessed_spte(spte))
+		return false;
+
+	if (shadow_accessed_mask) {
 		clear_bit((ffs(shadow_accessed_mask) - 1),
 			  (unsigned long *)sptep);
-		return true;
-	}
-
-	if (shadow_accessed_mask == 0) {
-		if (is_access_track_spte(spte))
-			return false;
-
+	} else {
 		/*
 		 * Capture the dirty status of the page, so that it doesn't get
 		 * lost when the SPTE is marked for access tracking.
@@ -745,10 +743,9 @@ static bool mmu_spte_age(u64 *sptep)
 
 		spte = mark_spte_for_access_track(spte);
 		mmu_spte_update_no_track(sptep, spte);
-		return true;
 	}
 
-	return false;
+	return true;
 }
 
 static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)

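Spelled out, spte_has_volatile_bits would then read as follows
(reassembled from the hunk above):

static bool spte_has_volatile_bits(u64 spte)
{
	if (!is_shadow_present_pte(spte))
		return false;

	/*
	 * Always atomically update spte if it can be updated
	 * out of mmu-lock, it can ensure dirty bit is not lost,
	 * also, it can help us to get a stable is_writable_pte()
	 * to ensure tlb flush is not missed.
	 */
	if (spte_can_locklessly_be_made_writable(spte) ||
	    is_access_track_spte(spte))
		return true;

	if ((spte & shadow_accessed_mask) == 0 ||
	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
		return true;

	return false;
}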

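and mmu_spte_age would become (the lines that fall between the two
hunks are elided):

static bool mmu_spte_age(u64 *sptep)
{
	u64 spte = mmu_spte_get_lockless(sptep);

	if (!is_accessed_spte(spte))
		return false;

	if (shadow_accessed_mask) {
		clear_bit((ffs(shadow_accessed_mask) - 1),
			  (unsigned long *)sptep);
	} else {
		/*
		 * Capture the dirty status of the page, so that it doesn't get
		 * lost when the SPTE is marked for access tracking.
		 */
		/* ... elided between the two hunks ... */

		spte = mark_spte_for_access_track(spte);
		mmu_spte_update_no_track(sptep, spte);
	}

	return true;
}
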
Thanks,

Paolo