On 30.03.23 22:55, Mathias Krause wrote:
> On 30.03.23 22:33, Sean Christopherson wrote:
>> I believe this will remedy the issue. If it does, I'll post a proper patch
>> (likely won't be until next week). Compile tested only.
>>
>> ---
>>  arch/x86/kvm/mmu.h     |  8 +++++++-
>>  arch/x86/kvm/mmu/mmu.c | 14 ++++++++++++++
>>  2 files changed, 21 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>> index 89f532516a45..4a303aa735dd 100644
>> --- a/arch/x86/kvm/mmu.h
>> +++ b/arch/x86/kvm/mmu.h
>> @@ -113,6 +113,7 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>>  bool kvm_can_do_async_pf(struct kvm_vcpu *vcpu);
>>  int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
>>  				u64 fault_address, char *insn, int insn_len);
>> +void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu);
>>
>>  int kvm_mmu_load(struct kvm_vcpu *vcpu);
>>  void kvm_mmu_unload(struct kvm_vcpu *vcpu);
>> @@ -184,8 +185,13 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
>>  	u64 implicit_access = access & PFERR_IMPLICIT_ACCESS;
>>  	bool not_smap = ((rflags & X86_EFLAGS_AC) | implicit_access) == X86_EFLAGS_AC;
>>  	int index = (pfec + (not_smap << PFERR_RSVD_BIT)) >> 1;
>> -	bool fault = (mmu->permissions[index] >> pte_access) & 1;
>>  	u32 errcode = PFERR_PRESENT_MASK;
>> +	bool fault;
>> +
>> +	if (tdp_enabled)
>> +		kvm_mmu_refresh_passthrough_bits(vcpu, mmu);
>> +
>> +	fault = (mmu->permissions[index] >> pte_access) & 1;
>>
>>  	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
>>  	if (unlikely(mmu->pkru_mask)) {
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index 4c874d4ec68f..2a63b5725f36 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -5186,6 +5186,20 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
>>  	return role;
>>  }
>>
>> +void kvm_mmu_refresh_passthrough_bits(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
>> +{
>> +	const bool cr0_wp = kvm_is_cr0_bit_set(vcpu, X86_CR0_WP);
>> +
>> +	BUILD_BUG_ON((KVM_MMU_CR0_ROLE_BITS & KVM_POSSIBLE_CR0_GUEST_BITS) != X86_CR0_WP);
>> +	BUILD_BUG_ON((KVM_MMU_CR4_ROLE_BITS & KVM_POSSIBLE_CR4_GUEST_BITS));
>> +
>> +	if (is_cr0_wp(mmu) == cr0_wp)
>> +		return;
>> +
>> +	mmu->cpu_role.base.cr0_wp = cr0_wp;
>> +	reset_guest_paging_metadata(vcpu, mmu);
>> +}
>> +
>>  static inline int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
>>  {
>>  	/* tdp_root_level is architecture forced level, use it if nonzero */
>>
>> base-commit: 27d6845d258b67f4eb3debe062b7dacc67e0c393
>
> I tested a backported version of this patch on v6.1 as that's what I was
> testing with and it worked fine. :)
>
> I'll do more thorough tests tomorrow and actually on kvm-x86/next's HEAD.

I extended the KUT (kvm-unit-tests) CR0.WP tests and could confirm that
(a) commit fb509f76acc8 ("KVM: VMX: Make CR0.WP a guest owned bit")
without the above change triggers errors both ways in the emulator (it
denies access that should be allowed when CR0.WP=0 and allows access
that should trigger a page fault when CR0.WP=1) and (b) the above patch
fixes these.

The tests are available here:
https://lore.kernel.org/kvm/20230331135709.132713-1-minipli@xxxxxxxxxxxxxx/

Thanks,
Mathias
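
[To make the failure mode above concrete, the following is a minimal,
hypothetical sketch of the kind of guest-side check such a CR0.WP test
performs; it is not the test from the series linked above. It assumes
kvm-unit-tests style helpers (read_cr0()/write_cr0(), report(),
ASM_TRY()/exception_vector()), a supervisor-read-only mapping passed in
via ro_page, and that KVM runs with force_emulation_prefix enabled so
the write is routed through the emulator.]

#include "libcflat.h"
#include "processor.h"
#include "desc.h"

/* Forced emulation prefix (assumed); needs kvm.force_emulation_prefix. */
#define FEP "ud2; .byte 'k', 'v', 'm';"

/*
 * 'ro_page' is assumed to be mapped read-only for supervisor accesses.
 * Each write is forced through the emulator via the FEP prefix, so the
 * emulator's permission check, not the hardware, decides whether the
 * access is allowed.
 */
static void test_emulated_cr0_wp(volatile char *ro_page)
{
	unsigned long cr0 = read_cr0();

	/* CR0.WP=0: a supervisor write to a read-only PTE must succeed. */
	write_cr0(cr0 & ~X86_CR0_WP);
	asm volatile(ASM_TRY("1f") FEP "movb $1, %0\n\t" "1:" : "+m"(*ro_page));
	report(exception_vector() != PF_VECTOR,
	       "emulated write allowed with CR0.WP=0");

	/* CR0.WP=1: the very same write must now take a #PF. */
	write_cr0(cr0 | X86_CR0_WP);
	asm volatile(ASM_TRY("1f") FEP "movb $2, %0\n\t" "1:" : "+m"(*ro_page));
	report(exception_vector() == PF_VECTOR,
	       "emulated write faults with CR0.WP=1");

	write_cr0(cr0);
}

[With a stale CR0.WP in the MMU role, the emulator gets these two checks
wrong in exactly the two directions described in the mail; with the
refresh in permission_fault(), both should pass.]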