Re: [PATCH v3 7/8] kvm: x86: mmu: Lockless access tracking for Intel CPUs without EPT A bits.

On Wednesday, December 14, 2016 05:28:48 PM Paolo Bonzini wrote:
> 
> Just a few changes bordering on aesthetics...
> 
> Please review and let me know if you agree:
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 6ba62200530a..6b5d8ff66026 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -213,8 +213,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask)
>  
>  static inline bool is_access_track_spte(u64 spte)
>  {
> -	return shadow_acc_track_mask != 0 &&
> -	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
> +	/* Always false if shadow_acc_track_mask is zero.  */
> +	return (spte & shadow_acc_track_mask) == shadow_acc_track_value;
>  }

This looks good.

>  
>  /*
> @@ -526,23 +526,24 @@ static bool spte_can_locklessly_be_made_writable(u64 spte)
>  
>  static bool spte_has_volatile_bits(u64 spte)
>  {
> +	if (is_shadow_present_pte(spte))
> +		return false;
> +

This should be !is_shadow_present_pte; as written, the function bails out early for present SPTEs instead of non-present ones.

>  	/*
>  	 * Always atomically update spte if it can be updated
>  	 * out of mmu-lock, it can ensure dirty bit is not lost,
>  	 * also, it can help us to get a stable is_writable_pte()
>  	 * to ensure tlb flush is not missed.
>  	 */
> -	if (spte_can_locklessly_be_made_writable(spte))
> +	if (spte_can_locklessly_be_made_writable(spte) ||
> +	    is_access_track_spte(spte))
>  		return true;
>  
> -	if (!is_shadow_present_pte(spte))
> -		return false;
> -
> -	if (!shadow_accessed_mask)
> -		return is_access_track_spte(spte);
> +	if ((spte & shadow_accessed_mask) == 0 ||
> +     	    (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0))
> +		return true;

We also need a shadow_accessed_mask != 0 check here; otherwise, when shadow_accessed_mask is 0, the (spte & shadow_accessed_mask) == 0 test always holds and the function always returns true.
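Something along these lines, modeled outside the kernel with simplified stand-ins for the mask globals and helpers (the bit positions below are placeholders for the sketch, not the real SPTE layout):

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;

/* Simplified stand-ins for the real globals in arch/x86/kvm/mmu.c. */
static u64 shadow_accessed_mask;
static u64 shadow_dirty_mask;
static u64 shadow_acc_track_mask;
static u64 shadow_acc_track_value;

/* Placeholder bit positions, chosen only for this sketch. */
#define PT_PRESENT_MASK    (1ULL << 0)
#define PT_WRITABLE_MASK   (1ULL << 1)
#define SPTE_MMU_WRITEABLE (1ULL << 61)

static bool is_shadow_present_pte(u64 spte)
{
	return spte & PT_PRESENT_MASK;
}

static bool is_writable_pte(u64 spte)
{
	return spte & PT_WRITABLE_MASK;
}

static bool spte_can_locklessly_be_made_writable(u64 spte)
{
	return spte & SPTE_MMU_WRITEABLE;
}

static bool is_access_track_spte(u64 spte)
{
	return shadow_acc_track_mask != 0 &&
	       (spte & shadow_acc_track_mask) == shadow_acc_track_value;
}

/*
 * Both fixes applied: early-return on !present, and only consult the
 * A/D bits when shadow_accessed_mask is nonzero.
 */
static bool spte_has_volatile_bits(u64 spte)
{
	if (!is_shadow_present_pte(spte))
		return false;

	if (spte_can_locklessly_be_made_writable(spte) ||
	    is_access_track_spte(spte))
		return true;

	if (shadow_accessed_mask &&
	    ((spte & shadow_accessed_mask) == 0 ||
	     (is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0)))
		return true;

	return false;
}
```

With shadow_accessed_mask == 0 (EPT without A/D bits), a present SPTE that is neither lockless-writable nor access-tracked now correctly comes out non-volatile instead of always returning true.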

>  
> -	return (spte & shadow_accessed_mask) == 0 ||
> -		(is_writable_pte(spte) && (spte & shadow_dirty_mask) == 0);
> +	return false;
>  }
>  
>  static bool is_accessed_spte(u64 spte)
> @@ -726,16 +727,13 @@ static bool mmu_spte_age(u64 *sptep)
>  {
>  	u64 spte = mmu_spte_get_lockless(sptep);
>  
> -	if (spte & shadow_accessed_mask) {
> +	if (!is_accessed_spte(spte))
> +		return false;
> +
> +	if (shadow_accessed_mask) {
>  		clear_bit((ffs(shadow_accessed_mask) - 1),
>  			  (unsigned long *)sptep);
> -		return true;
> -	}
> -
> -	if (shadow_accessed_mask == 0) {
> -		if (is_access_track_spte(spte))
> -			return false;
> -
> +	} else {
>  		/*
>  		 * Capture the dirty status of the page, so that it doesn't get
>  		 * lost when the SPTE is marked for access tracking.
> @@ -745,10 +743,9 @@ static bool mmu_spte_age(u64 *sptep)
>  
>  		spte = mark_spte_for_access_track(spte);
>  		mmu_spte_update_no_track(sptep, spte);
> -		return true;
>  	}
>  
> -	return false;
> +	return true;
>  }
>  

This looks good as well.
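For what it's worth, the restructured flow can be modeled in a standalone sketch (the access-track encoding and the helpers here are simplified placeholders, and the dirty-status capture via kvm_set_pfn_dirty() is omitted):

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;

static u64 shadow_accessed_mask;

/* Placeholder access-track encoding, for the sketch only. */
#define ACC_TRACK_FLAG (1ULL << 62)

static bool is_accessed_spte(u64 spte)
{
	/* With A bits: bit set; without: not yet marked for tracking. */
	return shadow_accessed_mask ? (spte & shadow_accessed_mask) != 0
				    : (spte & ACC_TRACK_FLAG) == 0;
}

static u64 mark_spte_for_access_track(u64 spte)
{
	return spte | ACC_TRACK_FLAG;
}

/*
 * Restructured flow: one early exit for not-accessed SPTEs, then a
 * single branch on shadow_accessed_mask, then one shared return true.
 */
static bool mmu_spte_age(u64 *sptep)
{
	u64 spte = *sptep;

	if (!is_accessed_spte(spte))
		return false;

	if (shadow_accessed_mask)
		*sptep = spte & ~shadow_accessed_mask; /* clear_bit() */
	else
		*sptep = mark_spte_for_access_track(spte);

	return true;
}
```

Both paths converge on return true, and a second call on the same SPTE returns false, matching the old behavior.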

Thanks,
Junaid

