Re: KVM: MMU: remove prefault from invlpg handler

On 12/05/2009 09:42 PM, Marcelo Tosatti wrote:

I don't think the OS has "other mechanisms", though - the processor can
speculate the tlb so that would be an OS bug.

Can it? I figured it relied on the fact that no access (therefore no TLB
entry instantiation) meant there is no need to invlpg (since there is
nothing in the TLB to invalidate), before updating a particular pte.

The documentation states that invlpg invalidates any entries for the
linear address.


4.10.1.3 says, "The processor may cache translations required for prefetches and for accesses that are a result of speculative execution that would never actually occur in the executed code path.", so there is no way for the OS to ensure no access has occurred. If you change a present pte, you must execute invlpg afterwards to ensure speculation hasn't instantiated the old pte.
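As a non-runnable sketch (the `pte_t`, `set_pte()` names and calling context are assumed kernel-style conventions, not code from this thread), the rule this implies for a guest OS is that a write to a present pte must always be followed by invlpg:

```c
/* Illustrative only: pte_t and set_pte() are hypothetical stand-ins
 * for the guest kernel's own page-table primitives. */
static inline void invlpg(unsigned long vaddr)
{
	asm volatile("invlpg (%0)" : : "r" (vaddr) : "memory");
}

static void update_present_pte(pte_t *ptep, pte_t new, unsigned long vaddr)
{
	set_pte(ptep, new);
	/* Even if the OS believes the page was never accessed, the CPU
	 * may have instantiated the old translation speculatively, so
	 * the flush cannot be skipped. */
	invlpg(vaddr);
}
```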


It looks like a race:

Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>


diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index a601713..58a0f1e 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -455,8 +455,6 @@ out_unlock:
 static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 {
 	struct kvm_shadow_walk_iterator iterator;
-	pt_element_t gpte;
-	gpa_t pte_gpa = -1;
 	int level;
 	u64 *sptep;
 	int need_flush = 0;
@@ -470,10 +468,6 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 		if (level == PT_PAGE_TABLE_LEVEL ||
 		    ((level == PT_DIRECTORY_LEVEL && is_large_pte(*sptep))) ||
 		    ((level == PT_PDPE_LEVEL && is_large_pte(*sptep)))) {
-			struct kvm_mmu_page *sp = page_header(__pa(sptep));
-
-			pte_gpa = (sp->gfn << PAGE_SHIFT);
-			pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
 			if (is_shadow_present_pte(*sptep)) {
 				rmap_remove(vcpu->kvm, sptep);
@@ -492,18 +486,6 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 	if (need_flush)
 		kvm_flush_remote_tlbs(vcpu->kvm);
 	spin_unlock(&vcpu->kvm->mmu_lock);
-
-	if (pte_gpa == -1)
-		return;
-	if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
-				  sizeof(pt_element_t)))
-		return;



Here, with mmu_lock already dropped, another vcpu updates the gpte and issues a new invlpg.


-	if (is_present_gpte(gpte) && (gpte & PT_ACCESSED_MASK)) {
-		if (mmu_topup_memory_caches(vcpu))
-			return;
-		kvm_mmu_pte_write(vcpu, pte_gpa, (const u8 *)&gpte,
-				  sizeof(pt_element_t), 0);
-	}



And here we undo the correct invlpg with the outdated gpte.

Looks like we considered this: kvm_read_guest_atomic() is only needed if
the read happens inside the spinlock, but some later change moved the
spin_unlock() upwards.  Will investigate the history.

Isn't it the OS's responsibility to serialize pte updates + invlpg between
CPUs?

It is. Do you still have a trace of the error? Maybe we can understand what the guest thought it was doing.

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
