This is a note to let you know that I've just added the patch titled

    KVM: RISC-V: Retry fault if vma_lookup() results become invalid

to the 6.3-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-risc-v-retry-fault-if-vma_lookup-results-become-invalid.patch
and it can be found in the queue-6.3 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 2ed90cb0938a45b12eb947af062d12c7af0067b3 Mon Sep 17 00:00:00 2001
From: David Matlack <dmatlack@xxxxxxxxxx>
Date: Fri, 17 Mar 2023 14:11:06 -0700
Subject: KVM: RISC-V: Retry fault if vma_lookup() results become invalid

From: David Matlack <dmatlack@xxxxxxxxxx>

commit 2ed90cb0938a45b12eb947af062d12c7af0067b3 upstream.

Read mmu_invalidate_seq before dropping the mmap_lock so that KVM can
detect if the results of vma_lookup() (e.g. vma_shift) become stale
before it acquires kvm->mmu_lock. This fixes a theoretical bug where a
VMA could be changed by userspace after vma_lookup() and before KVM
reads the mmu_invalidate_seq, causing KVM to install page table entries
based on a (possibly) no-longer-valid vma_shift.

Re-order the MMU cache top-up to earlier in user_mem_abort() so that it
is not done after KVM has read mmu_invalidate_seq (i.e. so as to avoid
inducing spurious fault retries).

It's unlikely that any sane userspace currently modifies VMAs in such a
way as to trigger this race. And even with directed testing I was unable
to reproduce it. But a sufficiently motivated host userspace might be
able to exploit this race.

Note KVM/ARM had the same bug and was fixed in a separate, near
identical patch (see Link).

Link: https://lore.kernel.org/kvm/20230313235454.2964067-1-dmatlack@xxxxxxxxxx/
Fixes: 9955371cc014 ("RISC-V: KVM: Implement MMU notifiers")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: David Matlack <dmatlack@xxxxxxxxxx>
Tested-by: Anup Patel <anup@xxxxxxxxxxxxxx>
Signed-off-by: Anup Patel <anup@xxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/riscv/kvm/mmu.c |   25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -628,6 +628,13 @@ int kvm_riscv_gstage_map(struct kvm_vcpu
 		!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
 	unsigned long vma_pagesize, mmu_seq;
 
+	/* We need minimum second+third level pages */
+	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
+	if (ret) {
+		kvm_err("Failed to topup G-stage cache\n");
+		return ret;
+	}
+
 	mmap_read_lock(current->mm);
 
 	vma = vma_lookup(current->mm, hva);
@@ -648,6 +655,15 @@ int kvm_riscv_gstage_map(struct kvm_vcpu
 	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
 		gfn = (gpa & huge_page_mask(hstate_vma(vma))) >> PAGE_SHIFT;
 
+	/*
+	 * Read mmu_invalidate_seq so that KVM can detect if the results of
+	 * vma_lookup() or gfn_to_pfn_prot() become stale priort to acquiring
+	 * kvm->mmu_lock.
+	 *
+	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
+	 * with the smp_wmb() in kvm_mmu_invalidate_end().
+	 */
+	mmu_seq = kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
 	if (vma_pagesize != PUD_SIZE &&
@@ -657,15 +673,6 @@ int kvm_riscv_gstage_map(struct kvm_vcpu
 		return -EFAULT;
 	}
 
-	/* We need minimum second+third level pages */
-	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
-	if (ret) {
-		kvm_err("Failed to topup G-stage cache\n");
-		return ret;
-	}
-
-	mmu_seq = kvm->mmu_invalidate_seq;
-
 	hfn = gfn_to_pfn_prot(kvm, gfn, is_write, &writable);
 	if (hfn == KVM_PFN_ERR_HWPOISON) {
 		send_sig_mceerr(BUS_MCEERR_AR, (void __user *)hva,


Patches currently in stable-queue which might be from dmatlack@xxxxxxxxxx are

queue-6.3/kvm-x86-preserve-tdp-mmu-roots-until-they-are-explicitly-invalidated.patch
queue-6.3/kvm-risc-v-retry-fault-if-vma_lookup-results-become-invalid.patch
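
For readers less familiar with the pattern the patch relies on, below is a
minimal userspace sketch (not kernel code, not part of the patch) of the
ordering it enforces: sample an invalidation sequence count before dropping
the lock that protected the lookup, then re-check it under the lock that
protects installation and retry on a mismatch. All names here
(invalidate_range, handle_fault, addr_lock, map_lock) are hypothetical
stand-ins for mmu_invalidate_seq, mmap_lock, and kvm->mmu_lock; the real
code uses memory barriers (the implicit smp_rmb() in mmap_read_unlock()
paired with the smp_wmb() in kvm_mmu_invalidate_end()) where this sketch
uses C11 atomics and pthread locks.

	/* Build with: cc -pthread sketch.c */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_rwlock_t addr_lock = PTHREAD_RWLOCK_INITIALIZER; /* ~ mmap_lock */
	static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;    /* ~ kvm->mmu_lock */
	static _Atomic unsigned long invalidate_seq;                    /* ~ mmu_invalidate_seq */

	/* Invalidation side: bump the sequence count under map_lock. */
	static void invalidate_range(void)
	{
		pthread_mutex_lock(&map_lock);
		invalidate_seq++;
		pthread_mutex_unlock(&map_lock);
	}

	/* Fault side: mirrors the flow of kvm_riscv_gstage_map() after the patch. */
	static bool handle_fault(void)
	{
		unsigned long seq;

		pthread_rwlock_rdlock(&addr_lock);
		/* ... vma_lookup()-style work would happen here ... */
		seq = invalidate_seq;          /* sample BEFORE dropping addr_lock */
		pthread_rwlock_unlock(&addr_lock);

		/* ... slow, unlocked work (gfn_to_pfn_prot()-style) happens here ... */

		pthread_mutex_lock(&map_lock);
		if (seq != invalidate_seq) {
			/* Lookup results may be stale: bail out and retry the fault. */
			pthread_mutex_unlock(&map_lock);
			return false;
		}
		/* ... install the mapping based on the still-valid lookup ... */
		pthread_mutex_unlock(&map_lock);
		return true;
	}

	int main(void)
	{
		invalidate_range();
		printf("fault handled: %s\n", handle_fault() ? "yes" : "retry");
		return 0;
	}

The pre-patch bug corresponds to taking the sample only after the lookup
lock is dropped: an invalidation can then slip in between the lookup and the
sample without changing the observed count, so the stale lookup results get
installed. That is also why the patch moves the cache top-up earlier, so no
extra work sits between the sample and the final re-check that could widen
the window for spurious retries.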