Re: [PATCH] KVM: Defer remote tlb flushes on invlpg (v2)

Andrea Arcangeli wrote:
 static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
+	bool need_flush;
+
 	if (sp->role.glevels != vcpu->arch.mmu.root_level) {
 		kvm_mmu_zap_page(vcpu->kvm, sp);
 		return 1;
 	}
-	if (rmap_write_protect(vcpu->kvm, sp->gfn))
-		kvm_flush_remote_tlbs(vcpu->kvm);
+	need_flush = rmap_write_protect(vcpu->kvm, sp->gfn);
+	kvm_flush_remote_tlbs_cond(vcpu->kvm, need_flush);

Why exactly do we need to flush remote and local tlbs here (in the cr3
overwrite path, which is a local flush) if no change happened to sptes
related to the current process? How is it relevant that a previous
invlpg has run and that we did only a local flush at the time the invlpg ran?

An invlpg on a remote cpu may not have done a local_tlb_flush() because the spte was not present. So we have a poisoned tlb on that remote cpu, which needs to be flushed when write protecting pages.

However, a better fix for this is to make the local tlb flush on invlpg unconditional.
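
For concreteness, a sketch of what the unconditional local flush could look like in the invlpg tail (my reading of "unconditional", not the actual patch):

	if (need_flush)
		vcpu->kvm->remote_tlbs_dirty = true;
	/* flush the local tlb even when no spte was present: the
	 * guest tlb may still cache a stale translation for gva */
	kvm_x86_ops->tlb_flush(vcpu);
	spin_unlock(&vcpu->kvm->mmu_lock);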

(historical note - the kvm mmu used to depend on the shadow page tables and guest page tables being consistent, so we were pretty paranoid about guest tlb management bugs (or exploits) killing the host. but with the gfn tables maintained in shadow pages, that's no longer the case)

Same here.


Same explanations apply.

@@ -475,8 +475,10 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
 			break;
 	}
-	if (need_flush)
-		kvm_flush_remote_tlbs(vcpu->kvm);
+	if (need_flush) {
+		vcpu->kvm->remote_tlbs_dirty = true;
+		kvm_x86_ops->tlb_flush(vcpu);
+	}
 	spin_unlock(&vcpu->kvm->mmu_lock);
 	if (pte_gpa == -1)

I think it'd be more consistent to set_bit(KVM_REQ_TLB_FLUSH,
&vcpu->requests) instead of invoking tlb_flush from a different place.

Agreed; it can also fold an IPI, as the remote cpu will notice the request bit is already set.
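
I.e. something like this in the invlpg tail (sketch, using the existing KVM_REQ_TLB_FLUSH machinery, which is serviced on the next guest entry):

	if (need_flush) {
		vcpu->kvm->remote_tlbs_dirty = true;
		/* let the generic request path do the local flush; a
		 * flusher that finds the bit already set need not IPI
		 * this vcpu */
		set_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests);
	}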

@@ -758,10 +758,22 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
+	kvm->remote_tlbs_dirty = false;
+	wmb();
 	if (make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.remote_tlb_flush;
 }
+void kvm_flush_remote_tlbs_cond(struct kvm *kvm, bool cond)
+{
+	if (!cond) {
+		rmb();
+		cond = kvm->remote_tlbs_dirty;
+	}
+	if (cond)
+		kvm_flush_remote_tlbs(kvm);
+}
+
 void kvm_reload_remote_mmus(struct kvm *kvm)
 {
 	make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
@@ -840,8 +852,9 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
 	need_tlb_flush = kvm_unmap_hva(kvm, address);
 	spin_unlock(&kvm->mmu_lock);
+	rmb();
 	/* we've to flush the tlb before the pages can be freed */
-	if (need_tlb_flush)
+	if (need_tlb_flush || kvm->remote_tlbs_dirty)
 		kvm_flush_remote_tlbs(kvm);
}
@@ -865,8 +878,9 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 		need_tlb_flush |= kvm_unmap_hva(kvm, start);
 	spin_unlock(&kvm->mmu_lock);
+	rmb();
 	/* we've to flush the tlb before the pages can be freed */
-	if (need_tlb_flush)
+	if (need_tlb_flush || kvm->remote_tlbs_dirty)
 		kvm_flush_remote_tlbs(kvm);
 }

I think the only memory barrier required is a smp_mb() between setting
remote_tlbs_dirty to true and make_all_cpus_request.

All the other memory barriers seem unnecessary. It can't be an invlpg
relative to the page the mmu notifier is working on that sets
remote_tlbs_dirty after mmu_lock is released in the mmu notifier
method, so as long as the remote_tlbs_dirty bit isn't lost we're
ok. We basically have remote_tlbs_dirty being raised at random times
from a mmu_lock protected section, and the mmu notifier only has to
make sure that after clearing it, it always flushes (so the only race
is if we flush before clearing the flag).
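
As a sketch of the pairing (the placement is my reading of the above, not final code):

	/* invlpg side, inside mmu_lock */
	kvm->remote_tlbs_dirty = true;
	smp_mb();	/* store must be visible before anyone samples it */

	/* kvm_flush_remote_tlbs() side */
	kvm->remote_tlbs_dirty = false;
	smp_mb();	/* clear before the flush request goes out, so a
			 * racing setter either gets flushed or re-dirties */
	make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH);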

However, the setting must be done with set_bit and the clearing with
test_and_clear_bit, otherwise when one cpu sets it inside the mmu_lock
protected section and another cpu clears it without any lock held from
the mmu notifier, the result is undefined. AFAIK, without a lock prefix
on the mov instruction, something like this can happen:

cpu0		     	 cpu1
--------	     	 ---------
remote_tlbs_dirty = 0
			 remote_tlbs_dirty = 1
remote_tlbs_dirty == 0

I don't think this only applies to bitops; it would apply more
obviously to bitops, but I think it also applies to a plain mov.
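
Concretely, with remote_tlbs_dirty turned into an unsigned long used as a one-bit flag (hypothetical sketch; both sides become atomic read-modify-write ops):

	/* setter, e.g. invlpg, inside mmu_lock */
	set_bit(0, &kvm->remote_tlbs_dirty);

	/* mmu notifier side, no lock held */
	if (need_tlb_flush || test_and_clear_bit(0, &kvm->remote_tlbs_dirty))
		kvm_flush_remote_tlbs(kvm);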

I think we no longer need remote_tlbs_dirty. I'll send a new version.

--
error compiling committee.c: too many arguments to function
