On Tue, Jul 27, 2021, Paolo Bonzini wrote:
> @@ -605,8 +597,13 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>
> 	/*
> 	 * .change_pte() must be surrounded by .invalidate_range_{start,end}(),
> +	 * If mmu_notifier_count is zero, then start() didn't find a relevant
> +	 * memslot and wasn't forced down the slow path; rechecking here is
> +	 * unnecessary.

Critiquing my own comment... Maybe elaborate on what's (not) being
rechecked?  And also clarify that rechecking the memslots on a false
positive (due to a second invalidation) is not problematic?

	 * If mmu_notifier_count is zero, then no in-progress invalidations,
	 * including this one, found a relevant memslot at start(); rechecking
	 * memslots here is unnecessary.  Note, a false positive (count elevated
	 * by a different invalidation) is sub-optimal but functionally ok.
	 */

Thanks for doing the heavy lifting!

> 	 */
> 	WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));
> +	if (!kvm->mmu_notifier_count)
> +		return;
>
> 	kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> }