Re: cr3 OOS optimisation breaks 32-bit GNU/kFreeBSD guest

Marcelo Tosatti wrote:
On Tue, Mar 24, 2009 at 11:47:33AM +0200, Avi Kivity wrote:
index 2ea8262..48169d7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3109,6 +3109,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 			kvm_write_guest_time(vcpu);
 		if (test_and_clear_bit(KVM_REQ_MMU_SYNC, &vcpu->requests))
 			kvm_mmu_sync_roots(vcpu);
+		if (test_and_clear_bit(KVM_REQ_MMU_GLOBAL_SYNC, &vcpu->requests))
+			kvm_mmu_sync_global(vcpu);
 		if (test_and_clear_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
 			kvm_x86_ops->tlb_flush(vcpu);
 		if (test_and_clear_bit(KVM_REQ_REPORT_TPR_ACCESS,
Windows will (I think) write a PDE on every context switch, so this effectively disables global unsync for that guest.

What about recursively syncing the newly linked page in FNAME(fetch)()? If the page isn't global, this becomes a no-op, so no new overhead. The only question is the expense when linking a populated top-level page, especially in long mode.
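
For concreteness, a rough sketch of what that recursive sync could look like (illustration only: mmu_sync_linked_children is a made-up name, and the field/macro details are as I recall them for this code base; kvm_sync_page, page_header and the pte predicates are the existing mmu helpers):

/*
 * Illustration only, not a real patch: resync any unsync global
 * leaf pages reachable below a newly linked (non-leaf) shadow page.
 * Recursion depth is bounded by the number of paging levels, which
 * is exactly the cost concern for a populated top-level page in
 * long mode.
 */
static void mmu_sync_linked_children(struct kvm_vcpu *vcpu,
				     struct kvm_mmu_page *sp)
{
	int i;

	for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
		u64 ent = sp->spt[i];
		struct kvm_mmu_page *child;

		if (!is_shadow_present_pte(ent) || is_large_pte(ent))
			continue;
		child = page_header(ent & PT64_BASE_ADDR_MASK);
		if (child->role.level == PT_PAGE_TABLE_LEVEL) {
			/* the no-op case: not global, or already in sync */
			if (child->unsync && child->global)
				kvm_sync_page(vcpu, child);
		} else
			mmu_sync_linked_children(vcpu, child);
	}
}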

How about this?

KVM: MMU: sync global pages on fetch()

If an unsync global page becomes unreachable via the shadow tree, which
can happen if one of its parent pages is zapped, invlpg will fail to
invalidate translations for gvas contained in such unreachable pages.

So sync global pages in fetch().

Signed-off-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 09782a9..728be72 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -308,8 +308,14 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			break;
 		}
 
-		if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
+		if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
+			if (level-1 == PT_PAGE_TABLE_LEVEL) {
+				shadow_page = page_header(__pa(sptep));
+				if (shadow_page->unsync && shadow_page->global)
+					kvm_sync_page(vcpu, shadow_page);
+			}
 			continue;
+		}
 
 		if (is_large_pte(*sptep)) {
 			rmap_remove(vcpu->kvm, sptep);

But here the shadow page is already linked? Isn't the root cause that an invlpg was executed while the page wasn't linked, so the invlpg never saw it?

So I thought the best place would be in fetch(), after kvm_mmu_get_page(). If we're linking a page which contains global ptes, they might be unsynced due to invlpgs that we've missed.

Or am I missing something about the root cause?
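
To make that placement concrete, a hedged sketch (not a tested patch; the kvm_mmu_get_page() argument list below is as I recall it for this code base and may differ in detail):

	shadow_page = kvm_mmu_get_page(vcpu, table_gfn, addr, level - 1,
				       direct, access, sptep);
	/*
	 * The page we are about to link may contain global ptes that
	 * went unsync via invlpg while it was unreachable; resync it
	 * here, before linking, so no stale translation survives.
	 */
	if (shadow_page->unsync && shadow_page->global)
		kvm_sync_page(vcpu, shadow_page);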

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

