Re: KVM: MMU: remove prefault from invlpg handler

On Sat, Dec 05, 2009 at 06:57:44PM +0200, Avi Kivity wrote:
> On 12/05/2009 04:34 PM, Marcelo Tosatti wrote:
>> The invlpg prefault optimization breaks Windows 2008 R2 occasionally.
>>
>> The visible effect is that the invlpg handler instantiates a pte which
>> is, microseconds later, written with a different gfn by another vcpu.
>>
>> The OS could have other mechanisms, which the hypervisor is unaware of,
>> to prevent a present translation from being used.
>>
>> Fix by making invlpg emulation follow documented behaviour.
>>
>>    
>
> Good catch.  How did you track it down?

Lots of tracing.

> I don't think the OS has "other mechanisms", though - the processor can  
> speculatively load TLB entries, so that would be an OS bug.

Can it? I figured the OS relied on the fact that no access (and
therefore no TLB entry instantiation) means there is no need to invlpg
before updating a particular pte, since there is nothing in the TLB to
invalidate.

The documentation states that invlpg invalidates any TLB entries for
the linear address.
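
Something like the sketch below is the kind of guest-side update path I
have in mind (purely illustrative; the names are made up and this is
not from any real OS):

/*
 * Hypothetical guest-side pte update: skip the invlpg when the old pte
 * was never marked accessed, on the assumption that no TLB entry was
 * ever instantiated for that linear address.
 */
#include <stdint.h>

#define PTE_PRESENT	(1ull << 0)
#define PTE_ACCESSED	(1ull << 5)

static inline void guest_invlpg(void *vaddr)
{
	/* the real instruction; ring 0 only */
	asm volatile("invlpg (%0)" :: "r" (vaddr) : "memory");
}

static void guest_set_pte(volatile uint64_t *ptep, uint64_t new_pte,
			  void *vaddr)
{
	uint64_t old = *ptep;

	*ptep = new_pte;

	/* Flush only if the old translation could have been cached. */
	if ((old & PTE_PRESENT) && (old & PTE_ACCESSED))
		guest_invlpg(vaddr);
}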

> It looks like a race:
>
>> Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
>>
>>
>> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
>> index a601713..58a0f1e 100644
>> --- a/arch/x86/kvm/paging_tmpl.h
>> +++ b/arch/x86/kvm/paging_tmpl.h
>> @@ -455,8 +455,6 @@ out_unlock:
>>   static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>   {
>>   	struct kvm_shadow_walk_iterator iterator;
>> -	pt_element_t gpte;
>> -	gpa_t pte_gpa = -1;
>>   	int level;
>>   	u64 *sptep;
>>   	int need_flush = 0;
>> @@ -470,10 +468,6 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>   		if (level == PT_PAGE_TABLE_LEVEL  ||
>>   		    ((level == PT_DIRECTORY_LEVEL && is_large_pte(*sptep))) ||
>>   		    ((level == PT_PDPE_LEVEL && is_large_pte(*sptep)))) {
>> -			struct kvm_mmu_page *sp = page_header(__pa(sptep));
>> -
>> -			pte_gpa = (sp->gfn << PAGE_SHIFT);
>> -			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
>>
>>   			if (is_shadow_present_pte(*sptep)) {
>>   				rmap_remove(vcpu->kvm, sptep);
>> @@ -492,18 +486,6 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
>>   	if (need_flush)
>>   		kvm_flush_remote_tlbs(vcpu->kvm);
>>   	spin_unlock(&vcpu->kvm->mmu_lock);
>> -
>> -	if (pte_gpa == -1)
>> -		return;
>> -	if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
>> -				  sizeof(pt_element_t)))
>> -		return;
>>    
>
>
> Here, another vcpu updates the gpte and issues a new invlpg.
>
>
>> -	if (is_present_gpte(gpte) && (gpte & PT_ACCESSED_MASK)) {
>> -		if (mmu_topup_memory_caches(vcpu))
>> -			return;
>> -		kvm_mmu_pte_write(vcpu, pte_gpa, (const u8 *)&gpte,
>> -				  sizeof(pt_element_t), 0);
>> -	}
>>    
>
>
> And here we undo the correct invlpg with the outdated gpte.
>
> Looks like we considered this, since kvm_read_guest_atomic() is only
> needed if it runs inside the spinlock, but some other change moved the
> spin_unlock() upwards.  Will investigate history.

Isn't it the OS's responsibility to serialize pte updates + invlpg
between CPUs?
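
For reference, here is a minimal single-threaded model of the
interleaving you describe above (all names and values are made up; it
just replays the problematic ordering of the two vcpus step by step):

#include <stdint.h>
#include <stdio.h>

/* Toy stand-ins for one guest pte and the corresponding shadow pte. */
static uint64_t guest_pte  = 0x1000 | 1;	/* old gfn, present */
static uint64_t shadow_pte = 0x1000 | 1;	/* mirrors guest_pte */

int main(void)
{
	/* vcpu0: invlpg emulation zaps the spte under mmu_lock. */
	shadow_pte = 0;

	/* vcpu0: the prefault re-reads the gpte after dropping the lock. */
	uint64_t stale_gpte = guest_pte;

	/*
	 * vcpu1: the guest writes a new gfn into the pte and issues its
	 * own invlpg; that invlpg finds nothing left to zap.
	 */
	guest_pte = 0x2000 | 1;
	shadow_pte = 0;

	/*
	 * vcpu0: kvm_mmu_pte_write() instantiates an spte from the
	 * stale gpte, undoing vcpu1's (correct) invalidation.
	 */
	shadow_pte = stale_gpte;

	printf("guest_pte=%#llx shadow_pte=%#llx\n",
	       (unsigned long long)guest_pte,
	       (unsigned long long)shadow_pte);
	return 0;
}

With the prefault removed, the last step simply does not happen and the
shadow pte stays zapped until the next fault.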
