On Fri, Dec 23, 2022 at 03:51:55PM +0900, Daisuke Matsuda wrote:
> +static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
> +				     const struct mmu_notifier_range *range,
> +				     unsigned long cur_seq)
> +{
> +	struct ib_umem_odp *umem_odp =
> +		container_of(mni, struct ib_umem_odp, notifier);
> +	unsigned long start;
> +	unsigned long end;
> +
> +	if (!mmu_notifier_range_blockable(range))
> +		return false;
> +
> +	mutex_lock(&umem_odp->umem_mutex);
> +	mmu_interval_set_seq(mni, cur_seq);
> +
> +	start = max_t(u64, ib_umem_start(umem_odp), range->start);
> +	end = min_t(u64, ib_umem_end(umem_odp), range->end);
> +
> +	ib_umem_odp_unmap_dma_pages(umem_odp, start, end);

After Bob's xarray conversion this can be done a lot faster; it is just
an xa_for_each_range() that makes the xarray items non-present.
Non-present is probably just a NULL struct page in the xarray.
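Roughly the shape below. To be clear, this is only a sketch: the "pg_xa"
field and the function name are made up here to show the loop, they are
not from Bob's series.

#include <linux/xarray.h>
#include <rdma/ib_umem_odp.h>

/*
 * Sketch only: assumes the conversion gives ib_umem_odp a hypothetical
 * xarray "pg_xa" mapping umem page index to struct page *. Making an
 * item non-present is then just erasing it (storing NULL), one xa op
 * per present page instead of walking every slot in the range.
 */
static void rxe_invalidate_pages_sketch(struct ib_umem_odp *umem_odp,
					unsigned long start,
					unsigned long end)
{
	unsigned long first = (start - ib_umem_start(umem_odp)) >>
			      umem_odp->page_shift;
	unsigned long last = (end - 1 - ib_umem_start(umem_odp)) >>
			     umem_odp->page_shift;
	struct page *page;
	unsigned long idx;

	/*
	 * xa_for_each_range() skips absent entries, so only pages that
	 * are actually present in [first, last] are visited; erasing
	 * during the iteration is allowed.
	 */
	xa_for_each_range(&umem_odp->pg_xa, idx, page, first, last)
		xa_erase(&umem_odp->pg_xa, idx);
}

Jason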