Re: [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault

Looks good. Thanks.

On Wednesday, January 25, 2017 12:49:06 PM Paolo Bonzini wrote:
> 
> > -		remove_acc_track = is_access_track_spte(spte);
> > -
> > -		/* Verify that the fault can be handled in the fast path */
> > -		if (!remove_acc_track && !remove_write_prot)
> > -			break;
> > +		new_spte = is_access_track_spte(spte)
> > +			   ? restore_acc_track_spte(spte)
> > +			   : spte;
> 
> Just a tiny stylistic change to match the "new_spte |= PT_WRITABLE_MASK"
> in the block below:
> 
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index aa3b0d14c019..2fd7586aad4d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3135,9 +3135,10 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>  			break;
>  		}
>  
> -		new_spte = is_access_track_spte(spte)
> -			   ? restore_acc_track_spte(spte)
> -			   : spte;
> +		new_spte = spte;
> +
> +		if (is_access_track_spte(spte))
> +			new_spte = restore_acc_track_spte(new_spte);
>  
>  		/*
>  		 * Currently, to simplify the code, write-protection can
> 
> Paolo
> 
> >  		/*
> > -		 * Do not fix write-permission on the large spte since we only
> > -		 * dirty the first page into the dirty-bitmap in
> > -		 * fast_pf_fix_direct_spte() that means other pages are missed
> > -		 * if its slot is dirty-logged.
> > -		 *
> > -		 * Instead, we let the slow page fault path create a normal spte
> > -		 * to fix the access.
> > -		 *
> > -		 * See the comments in kvm_arch_commit_memory_region().
> > +		 * Currently, to simplify the code, write-protection can
> > +		 * be removed in the fast path only if the SPTE was
> > +		 * write-protected for dirty-logging or access tracking.
> >  		 */
> > -		if (sp->role.level > PT_PAGE_TABLE_LEVEL && remove_write_prot)
> > +		if ((error_code & PFERR_WRITE_MASK) &&
> > +		    spte_can_locklessly_be_made_writable(spte))
> > +		{
> > +			new_spte |= PT_WRITABLE_MASK;
> > +
> > +			/*
> > +			 * Do not fix write-permission on the large spte since
> > +			 * we only dirty the first page into the dirty-bitmap in
> > +			 * fast_pf_fix_direct_spte() that means other pages are
> > +			 * missed if its slot is dirty-logged.
> > +			 *
> > +			 * Instead, we let the slow page fault path create a
> > +			 * normal spte to fix the access.
> > +			 *
> > +			 * See the comments in kvm_arch_commit_memory_region().
> > +			 */
> > +			if (sp->role.level > PT_PAGE_TABLE_LEVEL)
> > +				break;
> > +		}
> > +
> > +		/* Verify that the fault can be handled in the fast path */
> > +		if (new_spte == spte ||
> > +		    !is_access_allowed(error_code, new_spte))
> >  			break;
> >  
> >  		/*
> > @@ -3166,8 +3175,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
> >  		 */
> >  		fault_handled = fast_pf_fix_direct_spte(vcpu, sp,
> >  							iterator.sptep, spte,
> > -							remove_write_prot,
> > -							remove_acc_track);
> > +							new_spte);
> >  		if (fault_handled)
> >  			break;
> >  
> > 


