On Fri, Aug 05, 2022, Sean Christopherson wrote:
> Account and track NX huge pages for nonpaging MMUs so that a future
> enhancement to precisely check if a shadow page can't be replaced by an NX
> huge page doesn't get false positives. Without correct tracking, KVM can
> get stuck in a loop if an instruction is fetching and writing data on the
> same huge page, e.g. KVM installs a small executable page on the fetch
> fault, replaces it with an NX huge page on the write fault, and faults
> again on the fetch.
>
> Alternatively, and perhaps ideally, KVM would simply not enforce the
> workaround for nonpaging MMUs. The guest has no page tables to abuse
> and KVM is guaranteed to switch to a different MMU on CR0.PG being
> toggled so there are no security or performance concerns. However, getting
> make_spte() to play nice now and in the future is unnecessarily complex.
>
> In the current code base, make_spte() can enforce the mitigation if TDP
> is enabled or the MMU is indirect, but make_spte() may not always have a
> vCPU/MMU to work with, e.g. if KVM were to support in-line huge page
> promotion when disabling dirty logging.
>
> Without a vCPU/MMU, KVM could either pass in the correct information
> and/or derive it from the shadow page, but the former is ugly and the
> latter subtly non-trivial due to the possibility of direct shadow pages
> in indirect MMUs. Given that using shadow paging with an unpaged guest
> is far from top priority _and_ has been subjected to the workaround since
> its inception, keep it simple and just fix the accounting glitch.
>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Reviewed-by: David Matlack <dmatlack@xxxxxxxxxx>

Reviewed-by: Mingwei Zhang <mizhang@xxxxxxxxxx>

> ---
>  arch/x86/kvm/mmu/mmu.c  |  2 +-
>  arch/x86/kvm/mmu/spte.c | 12 ++++++++++++
>  2 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 53d0dafa68ff..345b6b22ab68 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3123,7 +3123,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  			continue;
>
>  		link_shadow_page(vcpu, it.sptep, sp);
> -		if (fault->is_tdp && fault->huge_page_disallowed)
> +		if (fault->huge_page_disallowed)
>  			account_nx_huge_page(vcpu->kvm, sp,
>  					     fault->req_level >= it.level);
>  	}
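
Nothing to change here, but to make the accounting fix above concrete: below is a
minimal, self-contained userspace sketch, not kernel code -- fake_fault and the
accounted_*() helpers are made-up stand-ins for kvm_page_fault and the condition
in __direct_map(). It just shows that a fault on a nonpaging MMU that disallowed
a huge page is now accounted, whereas the old fault->is_tdp check skipped it.

  /* Illustrative sketch only; names are stand-ins, not kernel definitions. */
  #include <stdbool.h>
  #include <stdio.h>

  struct fake_fault {
          bool is_tdp;
          bool huge_page_disallowed;
  };

  /* Old gate: faults on nonpaging/shadow MMUs (!is_tdp) were never accounted. */
  static bool accounted_old(const struct fake_fault *f)
  {
          return f->is_tdp && f->huge_page_disallowed;
  }

  /* New gate: any fault that disallowed an NX huge page is accounted. */
  static bool accounted_new(const struct fake_fault *f)
  {
          return f->huge_page_disallowed;
  }

  int main(void)
  {
          struct fake_fault f = { .is_tdp = false, .huge_page_disallowed = true };

          /* Prints old=0 new=1: the nonpaging case is now tracked. */
          printf("nonpaging MMU, huge page disallowed: old=%d new=%d\n",
                 accounted_old(&f), accounted_new(&f));
          return 0;
  }
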
> diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
> index 7314d27d57a4..52186b795bce 100644
> --- a/arch/x86/kvm/mmu/spte.c
> +++ b/arch/x86/kvm/mmu/spte.c
> @@ -147,6 +147,18 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>  	if (!prefetch)
>  		spte |= spte_shadow_accessed_mask(spte);
>
> +	/*
> +	 * For simplicity, enforce the NX huge page mitigation even if not
> +	 * strictly necessary. KVM could ignore the mitigation if paging is
> +	 * disabled in the guest, as the guest doesn't have any page tables to
> +	 * abuse. But to safely ignore the mitigation, KVM would have to
> +	 * ensure a new MMU is loaded (or all shadow pages zapped) when CR0.PG
> +	 * is toggled on, and that's a net negative for performance when TDP is
> +	 * enabled. When TDP is disabled, KVM will always switch to a new MMU
> +	 * when CR0.PG is toggled, but leveraging that to ignore the mitigation
> +	 * would tie make_spte() further to vCPU/MMU state, and add complexity
> +	 * just to optimize a mode that is anything but performance critical.
> +	 */
>  	if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
>  	    is_nx_huge_page_enabled(vcpu->kvm)) {
>  		pte_access &= ~ACC_EXEC_MASK;
> --
> 2.37.1.559.g78731f0fdb-goog
>
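
One more illustrative note, since the new comment documents why the mitigation is
enforced unconditionally: the effect of the existing check below the comment is
simply that a large (> 4K) executable mapping loses its execute permission whenever
the NX huge page mitigation is enabled, which is what makes the guest's instruction
fetch fault and get a small executable page instead (the loop scenario from the
changelog). A minimal userspace sketch of just that decision -- ACC_EXEC_MASK,
PG_LEVEL_4K and mitigate_exec() below are simplified stand-ins, not the kernel
definitions:

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in constants; the real ones live in KVM's MMU headers. */
  #define ACC_EXEC_MASK   0x1u
  #define PG_LEVEL_4K     1

  /* Mirrors the shape of the quoted check in make_spte(). */
  static unsigned int mitigate_exec(unsigned int pte_access, int level,
                                    bool nx_huge_pages_enabled)
  {
          if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
              nx_huge_pages_enabled)
                  pte_access &= ~ACC_EXEC_MASK;

          return pte_access;
  }

  int main(void)
  {
          /* A 2M (level 2) executable mapping with the mitigation on: exec is stripped. */
          printf("access after mitigation: %#x\n",
                 mitigate_exec(ACC_EXEC_MASK, 2, true));
          return 0;
  }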