On Wed, Sep 20, 2023, Xu Yilun wrote:
> On 2023-09-13 at 18:55:00 -0700, Sean Christopherson wrote:
> > +void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end)
> > +{
> > +	lockdep_assert_held_write(&kvm->mmu_lock);
> > +
> > +	WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress);
> > +
> >  	if (likely(kvm->mmu_invalidate_in_progress == 1)) {
> >  		kvm->mmu_invalidate_range_start = start;
> >  		kvm->mmu_invalidate_range_end = end;
>
> IIUC, now we only add or override a part of the invalidate range in
> these fields, IOW only the range in the last slot is stored when we
> unlock.

Ouch.  Good catch!

> That may break mmu_invalidate_retry_gfn() because it can never know the
> whole invalidate range.
>
> How about we extend mmu_invalidate_range_start/end every time so that
> it records the whole invalidate range:
>
> 	if (kvm->mmu_invalidate_range_start == INVALID_GPA) {
> 		kvm->mmu_invalidate_range_start = start;
> 		kvm->mmu_invalidate_range_end = end;
> 	} else {
> 		kvm->mmu_invalidate_range_start =
> 			min(kvm->mmu_invalidate_range_start, start);
> 		kvm->mmu_invalidate_range_end =
> 			max(kvm->mmu_invalidate_range_end, end);
> 	}

Yeah, that does seem to be the easiest solution.  I'll post a fixup patch,
unless you want the honors.