On Mon, May 17, 2021, Reiji Watanabe wrote:
> > void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> > {
> > +	unsigned long old_cr0 = kvm_read_cr0(vcpu);
> > +	unsigned long old_cr4 = kvm_read_cr4(vcpu);
> > +
> > 	kvm_lapic_reset(vcpu, init_event);
> >
> > 	vcpu->arch.hflags = 0;
> > @@ -10483,6 +10485,10 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> > 	vcpu->arch.ia32_xss = 0;
> >
> > 	static_call(kvm_x86_vcpu_reset)(vcpu, init_event);
> > +
> > +	if (kvm_cr0_mmu_role_changed(old_cr0, kvm_read_cr0(vcpu)) ||
> > +	    kvm_cr4_mmu_role_changed(old_cr4, kvm_read_cr4(vcpu)))
> > +		kvm_mmu_reset_context(vcpu);
> > }
>
> I'm wondering if kvm_vcpu_reset() should call kvm_mmu_reset_context()
> for a change in EFER.NX as well.

Oooh.  So there _should_ be no need.  Paging has to be enabled for EFER.NX to be relevant, and INIT toggles CR0.PG 1=>0 if paging was enabled, and so is guaranteed to trigger a context reset.  And we do want to skip the context reset, e.g. an INIT-SIPI-SIPI sequence when the vCPU has paging disabled should continue using the same MMU.

But kvm_calc_mmu_role_common() neglects to ignore NX if CR0.PG=0, and so the MMU role will be stale if INIT clears EFER.NX without forcing a context reset.  However, that's benign from a functionality perspective, because the context itself correctly incorporates CR0.PG; it's only the role that's borked.  I.e. KVM will fail to reuse a page/context due to the spurious role.nxe, but the permission checks are always correct.

I'll add a comment here and send a patch to fix the role calculation.