On Thu, Mar 30, 2023, Sean Christopherson wrote:
> On Thu, Mar 30, 2023, Mathias Krause wrote:
> > On 22.03.23 02:37, Mathias Krause wrote:
> > I'm leaning to make CR0.WP guest owned only iff we're running on bare
> > metal or the VMM is KVM to not play whack-a-mole for all the VMMs that
> > might have similar bugs. (Will try to test a few others here as well.)
> > However, that would prevent them from getting the performance gain, so
> > I'd rather have this fixed / worked around in KVM instead.
> 
> Before we start putting bandaids on this, let's (a) confirm this appears to be
> an issue with ESXi and (b) pull in VMware folks to get their input.
> 
> > Any ideas how to investigate this further?
> 
> Does the host in question support UMIP?
> 
> Can you capture a tracepoint log from L1 to verify that L1 KVM is _not_ injecting
> any kind of exception?  E.g. to get the KVM kitchen sink:
> 
>   echo 1 > /sys/kernel/debug/tracing/tracing_on
>   echo 1 > /sys/kernel/debug/tracing/events/kvm/enable
> 
>   cat /sys/kernel/debug/tracing/trace > log
> 
> Or if that's too noisy, a more targeted trace (exception injection + emulation)
> would be:
> 
>   echo 1 > /sys/kernel/debug/tracing/tracing_on
> 
>   echo 1 > /sys/kernel/debug/tracing/events/kvm/kvm_emulate_insn/enable
>   echo 1 > /sys/kernel/debug/tracing/events/kvm/kvm_inj_exception/enable
>   echo 1 > /sys/kernel/debug/tracing/events/kvm/kvm_entry/enable
>   echo 1 > /sys/kernel/debug/tracing/events/kvm/kvm_exit/enable

Duh, this is likely a KVM bug.  I expect the issue will go away if you revert

  fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit")

KVM doesn't consume CR0.WP for _its_ MMU, but it does consume CR0.WP for the
guest walker.  By passing through CR0.WP, toggling only CR0.WP will not trap
(obviously) and thus won't run through kvm_post_set_cr0(), thus resulting in
stale information due to not invoking kvm_init_mmu().
I'm pretty sure I even called out that we needed to refresh the permissions,
but then obviously forgot to actually make that happen.

I believe this will remedy the issue.  If it does, I'll post a proper patch
(likely won't be until next week).  Compile tested only.

---
 arch/x86/kvm/mmu.h     |  8 +++++++-
 arch/x86/kvm/mmu/mmu.c | 14 ++++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 89f532516a45..4a303aa735dd 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -113,6 +113,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
 bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
 int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 			u64 fault_address, char *insn, int insn_len);
+void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu);
 
 int kvm_mmu_load(struct kvm_vcpu *vcpu);
 void kvm_mmu_unload(struct kvm_vcpu *vcpu);
@@ -184,8 +185,13 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
 	u64 implicit_access = access & PFERR_IMPLICIT_ACCESS;
 	bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC;
 	int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1;
-	bool fault = (mmu->permissions[index] >> pte_access) & 1;
 	u32 errcode = PFERR_PRESENT_MASK;
+	bool fault;
+
+	if (tdp_enabled)
+		kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
+
+	fault = (mmu->permissions[index] >> pte_access) & 1;
 
 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
 	if (unlikely(mmu->pkru_mask)) {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4c874d4ec68f..2a63b5725f36 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5186,6 +5186,20 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
 	return role;
 }
 
+void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
+{
+	const bool cr0_wp = kvm_is_cr0_bit_set(vcpu, X86_CR0_WP);
+
+	BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP);
+	BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS));
+
+	if (is_cr0_wp(mmu) == cr0_wp)
+		return;
+
+	mmu->cpu_role.base.cr0_wp = cr0_wp;
+	reset_guest_paging_metadata(vcpu, mmu);
+}
+
 static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
 {
 	/* tdp_root_level is architecture forced level, use it if nonzero */

base-commit: 27d6845d258b67f4eb3debe062b7dacc67e0c393
--