This is a note to let you know that I've just added the patch titled

    KVM: x86/mmu: Fully re-evaluate MMIO caching when SPTE masks change

to the 5.19-stable tree which can be found at:
	http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-x86-mmu-fully-re-evaluate-mmio-caching-when-spte-masks-change.patch
and it can be found in the queue-5.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From c3e0c8c2e8b17bae30d5978bc2decdd4098f0f99 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Wed, 3 Aug 2022 22:49:56 +0000
Subject: KVM: x86/mmu: Fully re-evaluate MMIO caching when SPTE masks change

From: Sean Christopherson <seanjc@xxxxxxxxxx>

commit c3e0c8c2e8b17bae30d5978bc2decdd4098f0f99 upstream.

Fully re-evaluate whether or not MMIO caching can be enabled when SPTE
masks change; simply clearing enable_mmio_caching when a configuration
isn't compatible with caching fails to handle the scenario where the
masks are updated, e.g. by VMX for EPT or by SVM to account for the C-bit
location, and toggle compatibility from false=>true.

Snapshot the original module param so that re-evaluating MMIO caching
preserves userspace's desire to allow caching.  Use a snapshot approach
so that enable_mmio_caching still reflects KVM's actual behavior.

Fixes: 8b9e74bfbf8c ("KVM: x86/mmu: Use enable_mmio_caching to track if MMIO caching is enabled")
Reported-by: Michael Roth <michael.roth@xxxxxxx>
Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Tested-by: Michael Roth <michael.roth@xxxxxxx>
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Reviewed-by: Kai Huang <kai.huang@xxxxxxxxx>
Message-Id: <20220803224957.1285926-3-seanjc@xxxxxxxxxx>
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/kvm/mmu/mmu.c  |    4 ++++
 arch/x86/kvm/mmu/spte.c |   19 +++++++++++++++++++
 arch/x86/kvm/mmu/spte.h |    1 +
 3 files changed, 24 insertions(+)

--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6274,11 +6274,15 @@ static int set_nx_huge_pages(const char
 /*
  * nx_huge_pages needs to be resolved to true/false when kvm.ko is loaded, as
  * its default value of -1 is technically undefined behavior for a boolean.
+ * Forward the module init call to SPTE code so that it too can handle module
+ * params that need to be resolved/snapshot.
  */
 void __init kvm_mmu_x86_module_init(void)
 {
 	if (nx_huge_pages == -1)
 		__set_nx_huge_pages(get_nx_auto_mode());
+
+	kvm_mmu_spte_module_init();
 }
 
 /*
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -20,6 +20,7 @@
 #include <asm/vmx.h>
 
 bool __read_mostly enable_mmio_caching = true;
+static bool __ro_after_init allow_mmio_caching;
 module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);
 EXPORT_SYMBOL_GPL(enable_mmio_caching);
 
@@ -43,6 +44,18 @@ u64 __read_mostly shadow_nonpresent_or_r
 
 u8 __read_mostly shadow_phys_bits;
 
+void __init kvm_mmu_spte_module_init(void)
+{
+	/*
+	 * Snapshot userspace's desire to allow MMIO caching.  Whether or not
+	 * KVM can actually enable MMIO caching depends on vendor-specific
+	 * hardware capabilities and other module params that can't be resolved
+	 * until the vendor module is loaded, i.e. enable_mmio_caching can and
+	 * will change when the vendor module is (re)loaded.
+	 */
+	allow_mmio_caching = enable_mmio_caching;
+}
+
 static u64 generation_mmio_spte_mask(u64 gen)
 {
 	u64 mask;
@@ -338,6 +351,12 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio
 	BUG_ON((u64)(unsigned)access_mask != access_mask);
 	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
 
+	/*
+	 * Reset to the original module param value to honor userspace's desire
+	 * to (dis)allow MMIO caching.  Update the param itself so that
+	 * userspace can see whether or not KVM is actually using MMIO caching.
+	 */
+	enable_mmio_caching = allow_mmio_caching;
 	if (!enable_mmio_caching)
 		mmio_value = 0;
 
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -444,6 +444,7 @@ static inline u64 restore_acc_track_spte
 
 u64 kvm_mmu_changed_pte_notifier_make_spte(u64 old_spte, kvm_pfn_t new_pfn);
 
+void __init kvm_mmu_spte_module_init(void);
 void kvm_mmu_reset_all_pte_masks(void);
 
 #endif


Patches currently in stable-queue which might be from seanjc@xxxxxxxxxx are

queue-5.19/kvm-x86-mmu-treat-nx-as-a-valid-spte-bit-for-npt.patch
queue-5.19/kvm-x86-xen-initialize-xen-timer-only-once.patch
queue-5.19/kvm-x86-tag-kvm_mmu_x86_module_init-with-__init.patch
queue-5.19/kvm-x86-xen-stop-xen-timer-before-changing-irq.patch
queue-5.19/kvm-put-the-extra-pfn-reference-when-reusing-a-pfn-in-the-gpc-cache.patch
queue-5.19/kvm-drop-unused-gpa-param-from-gfn-pfn-cache-s-__release_gpc-helper.patch
queue-5.19/kvm-nvmx-let-userspace-set-nvmx-msr-to-any-_host_-supported-value.patch
queue-5.19/kvm-svm-disable-sev-es-support-if-mmio-caching-is-disable.patch
queue-5.19/kvm-x86-mmu-fully-re-evaluate-mmio-caching-when-spte-masks-change.patch
queue-5.19/kvm-x86-set-error-code-to-segment-selector-on-lldt-ltr-non-canonical-gp.patch
queue-5.19/kvm-nvmx-inject-ud-if-vmxon-is-attempted-with-incompatible-cr0-cr4.patch
queue-5.19/kvm-do-not-incorporate-page-offset-into-gfn-pfn-cache-user-address.patch
queue-5.19/kvm-svm-don-t-bug-if-userspace-injects-an-interrupt-with-gif-0.patch
queue-5.19/kvm-nvmx-snapshot-pre-vm-enter-bndcfgs-for-nested_run_pending-case.patch
queue-5.19/kvm-x86-split-kvm_is_valid_cr4-and-export-only-the-non-vendor-bits.patch
queue-5.19/kvm-fix-multiple-races-in-gfn-pfn-cache-refresh.patch
queue-5.19/kvm-fully-serialize-gfn-pfn-cache-refresh-via-mutex.patch
queue-5.19/kvm-x86-mark-tss-busy-during-ltr-emulation-_after_-all-fault-checks.patch
queue-5.19/kvm-nvmx-account-for-kvm-reserved-cr4-bits-in-consistency-checks.patch
queue-5.19/kvm-nvmx-snapshot-pre-vm-enter-debugctl-for-nested_run_pending-case.patch
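---

Aside (not part of the queued patch): the fix boils down to a snapshot pattern,
i.e. capture the user's requested module-param value once at init and restore it
every time the derived setting is re-evaluated, so a forced "off" never sticks
after the constraints change.  Below is a minimal standalone C sketch of that
pattern; all names (requested_caching, allow_caching, reevaluate_caching,
hw_compatible) are hypothetical and this is an illustration only, not KVM code.

/*
 * Standalone sketch of the "snapshot the module param" pattern.
 * Hypothetical names; not taken from the kernel tree.
 */
#include <stdbool.h>
#include <stdio.h>

/* What the user asked for (stand-in for the mmio_caching module param). */
static bool requested_caching = true;
/* Snapshot taken exactly once at "module init". */
static bool allow_caching;
/* What is actually in effect (stand-in for enable_mmio_caching). */
static bool effective_caching;

static void module_init_snapshot(void)
{
	/* Remember the user's wish before any re-evaluation can clear it. */
	allow_caching = requested_caching;
}

static void reevaluate_caching(bool hw_compatible)
{
	/*
	 * Always restart from the snapshot, not from the possibly-cleared
	 * effective value, then apply the current hardware constraints.
	 */
	effective_caching = allow_caching && hw_compatible;
}

int main(void)
{
	module_init_snapshot();

	reevaluate_caching(false);                  /* first config: incompatible  */
	printf("caching: %d\n", effective_caching); /* prints 0                    */

	reevaluate_caching(true);                   /* masks updated: compatible   */
	printf("caching: %d\n", effective_caching); /* prints 1, wish preserved    */
	return 0;
}

Without the snapshot, re-deriving from the already-cleared effective value would
stay false even after the masks become compatible, which is the false=>true
toggle the commit message describes.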