Marcelo Tosatti wrote:
> Maybe it's best to resync when relinking a global page?
>
> How about this? It will shorten the unsync period of global pages,
> unfortunately.
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2a36f7f..bccdcc7 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1238,6 +1238,10 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  				set_bit(KVM_REQ_MMU_SYNC, &vcpu->requests);
>  				kvm_mmu_mark_parents_unsync(vcpu, sp);
>  			}
> +			if (role.level != PT_PAGE_TABLE_LEVEL &&
> +			    !list_empty(&vcpu->kvm->arch.oos_global_pages))
> +				set_bit(KVM_REQ_MMU_GLOBAL_SYNC, &vcpu->requests);
> +
>  			pgprintk("%s: found\n", __func__);
>  			return sp;
>  		}
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2ea8262..48169d7 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3109,6 +3109,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
>  			kvm_write_guest_time(vcpu);
>  		if (test_and_clear_bit(KVM_REQ_MMU_SYNC, &vcpu->requests))
>  			kvm_mmu_sync_roots(vcpu);
> +		if (test_and_clear_bit(KVM_REQ_MMU_GLOBAL_SYNC, &vcpu->requests))
> +			kvm_mmu_sync_global(vcpu);
>  		if (test_and_clear_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
>  			kvm_x86_ops->tlb_flush(vcpu);
>  		if (test_and_clear_bit(KVM_REQ_REPORT_TPR_ACCESS,
Windows will (I think) write a PDE on every context switch; each such write
ends up relinking a page table page through kvm_mmu_get_page() on the next
fault, so the new check would fire at context-switch frequency. That
effectively disables global unsync for that guest.
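
For reference, here is roughly what each of those syncs costs. This is a
sketch of the list walk I'd expect kvm_mmu_sync_global() to be, not code
quoted from the series (the oos_link member name in particular is a guess):

/*
 * Sketch only, not quoted from the series: drain the list of
 * out-of-sync global pages.  kvm_sync_page() is the existing
 * resync helper; the oos_link member name is a guess.
 */
void kvm_mmu_sync_global(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;
	struct kvm_mmu_page *sp, *n;

	spin_lock(&kvm->mmu_lock);
	list_for_each_entry_safe(sp, n, &kvm->arch.oos_global_pages, oos_link)
		kvm_sync_page(vcpu, sp);	/* unlinks sp from the list */
	spin_unlock(&kvm->mmu_lock);
}

If the request bit is set on more or less every context switch, this walk
runs constantly and the list never stays populated, which was the whole
benefit we were after.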
What about recursively syncing the newly linked page in FNAME(fetch)()?
If the page isn't global, this becomes a no-op, so no new overhead. The
only question is the expense when linking a populated top-level page,
especially in long mode.
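
Roughly like this - a sketch, not a tested patch, assuming
mmu_sync_children() from the resync code is usable here; the
FNAME(link_and_sync) helper and the exact spte bits are illustrative only:

/*
 * Sketch: resync a page (and, recursively, its unsync children)
 * at the point where FNAME(fetch) links it in.
 */
static void FNAME(link_and_sync)(struct kvm_vcpu *vcpu, u64 *sptep,
				 struct kvm_mmu_page *sp)
{
	/*
	 * Cheap test first: a page with no unsync descendants (the
	 * common, non-global case) pays only for this check.
	 */
	if (sp->unsync_children)
		mmu_sync_children(vcpu, sp);

	/* Make the link visible only once the subtree is clean. */
	set_shadow_pte(sptep, __pa(sp->spt)
			      | PT_PRESENT_MASK | PT_ACCESSED_MASK
			      | PT_WRITABLE_MASK | PT_USER_MASK);
}

The unsync_children test is what keeps the common case free; the cost that
worries me is mmu_sync_children() on a freshly relinked, well-populated
top-level page.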
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.