On Mon, Feb 22, 2021, David Stevens wrote:
> ---
> v3 -> v4:
>  - Skip prefetch while invalidations are in progress

Oof, nice catch.

...

> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 9ac0a727015d..f6aaac729667 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2758,6 +2758,13 @@ static void direct_pte_prefetch(struct kvm_vcpu *vcpu, u64 *sptep)
>         if (sp->role.level > PG_LEVEL_4K)
>                 return;
>
> +       /*
> +        * If addresses are being invalidated, skip prefetching to avoid
> +        * accidentally prefetching those addresses.
> +        */
> +       if (unlikely(vcpu->kvm->mmu_notifier_count))
> +               return;

FNAME(pte_prefetch) needs the same check (untested sketch at the bottom of
this mail).

Paolo, this brings up a good addition for the work to integrate the mmu
notifier into the rest of KVM, e.g. for vmcs12 pages.  Ideally,
gfn_to_page_many_atomic() and __gfn_to_pfn_memslot() would WARN if
mmu_notifier_count is non-zero (also sketched below), but that will fire
all over the place until the nested code properly integrates the notifier.
There are a few use cases where racing with the notifier is acceptable,
e.g. reexecute_instruction(), but hopefully we can address those flows
without things getting too ugly.

> +
>         __direct_pte_prefetch(vcpu, sp, sptep);
> }
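
For the FNAME(pte_prefetch) side, a minimal, untested sketch of the
equivalent bail-out in arch/x86/kvm/mmu/paging_tmpl.h (the exact context
lines around the insertion point may differ from what's shown here):

@@ ... @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
        if (sp->role.level > PG_LEVEL_4K)
                return;

+       /*
+        * Mirror direct_pte_prefetch(): if an mmu_notifier invalidation
+        * is in progress, the PTEs being prefetched could be zapped or
+        * changed before the prefetch completes, so don't prefetch at all.
+        */
+       if (unlikely(vcpu->kvm->mmu_notifier_count))
+               return;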
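
And for the WARN idea, neither gfn_to_page_many_atomic() nor
__gfn_to_pfn_memslot() takes a struct kvm today (both operate on a
memslot), so a kvm pointer would need to be plumbed in somehow.  A
hypothetical helper (name and placement invented purely for illustration,
nothing like this exists in the tree) could look like:

/*
 * Hypothetical, not in the tree: assert that no mmu_notifier invalidation
 * is in flight when grabbing a pfn outside of a path that rechecks via
 * mmu_notifier_retry().  Callers that knowingly race with the notifier,
 * e.g. reexecute_instruction(), would need an explicit opt-out.
 */
static inline void kvm_warn_on_inflight_invalidation(struct kvm *kvm)
{
        WARN_ON_ONCE(READ_ONCE(kvm->mmu_notifier_count));
}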