On 05/12/2010 10:31 AM, Sheng Yang wrote:
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b59fc67..971a295 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -416,6 +416,10 @@ out:
static int __kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
{
+ unsigned long old_cr0 = kvm_read_cr0(vcpu);
+ unsigned long update_bits = X86_CR0_PG | X86_CR0_PE |
+ X86_CR0_CD | X86_CR0_NW;
PE doesn't affect paging, and CD/NW don't either?
Yes, PE alone can't affect paging.
Marcelo commented on CD/NW: if they change we need to reload the pdptrs,
and therefore reload the MMU.
Ah, correct.
What about WP?
How would WP affect it?
If cr0.wp=0 then we can have a pte with gpte.rw=0 but spte.rw=1 (since
the guest always runs with cr0.wp=1). So we need to reload the mmu to
switch page tables.
This won't work now, I'll post a patch adding cr0.wp to sp->role. But
please add cr0.wp to the set of bits requiring reload so we won't have a
regression.
@@ -722,6 +730,9 @@ static int set_efer(struct kvm_vcpu *vcpu, u64 efer)
vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
+ if ((efer ^ old_efer) & EFER_NX)
+ update_rsvd_bits_mask(vcpu);
+
return 0;
}
I think it's fine to reset the entire mmu context here, most guests
won't toggle nx all the time. But it needs to be in patch 3, otherwise
we have a regression between 3 and 4.
OK. I'd drop patch 3 and keep the mmu reset, if you like...
Yes please.