Re: [PATCH 2/3] KVM: x86: fix nested guest live migration with PML

On 9/26/19 10:18 AM, Paolo Bonzini wrote:
> @@ -1597,8 +1615,11 @@ static bool spte_clear_dirty(u64 *sptep)
>  
>  	rmap_printk("rmap_clear_dirty: spte %p %llx\n", sptep, *sptep);
>  
> -	spte &= ~shadow_dirty_mask;
> +	WARN_ON(!spte_ad_enabled(spte));
> +	if (spte_ad_need_write_protect(spte))
> +		return spte_write_protect(sptep, false);
>  
> +	spte &= ~shadow_dirty_mask;
>  	return mmu_spte_update(sptep, spte);
>  }
>  

I think it would be a bit cleaner to move the spte_ad_need_write_protect() check into the if statement inside __rmap_clear_dirty() instead, since it already checks for spte_ad_enabled() to decide between write protection and dirty clearing. Something like the sketch below.
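
A rough sketch of that shape (untested; the surrounding loop is recalled from memory of mmu.c, so anything outside the quoted hunk may not match the tree exactly):

static bool __rmap_clear_dirty(struct kvm *kvm, struct kvm_rmap_head *rmap_head)
{
	u64 *sptep;
	struct rmap_iterator iter;
	bool flush = false;

	for_each_rmap_spte(rmap_head, &iter, sptep)
		if (spte_ad_need_write_protect(*sptep))
			/* PML cannot rely on the D bit here, so write-protect instead */
			flush |= spte_write_protect(sptep, false);
		else
			/* A/D bits are usable, just clear the dirty bit */
			flush |= spte_clear_dirty(sptep);

	return flush;
}

spte_clear_dirty() would then keep only the WARN_ON(!spte_ad_enabled(spte)) and the shadow_dirty_mask clearing.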


Reviewed-by: Junaid Shahid <junaids@xxxxxxxxxx>


