On Tue, Dec 10, 2024 4:31 AM Jason Gunthorpe wrote:
> On Wed, Oct 09, 2024 at 10:59:00AM +0900, Daisuke Matsuda wrote:
> 
> > +static bool rxe_ib_invalidate_range(struct mmu_interval_notifier *mni,
> > +				     const struct mmu_notifier_range *range,
> > +				     unsigned long cur_seq)
> > +{
> > +	struct ib_umem_odp *umem_odp =
> > +		container_of(mni, struct ib_umem_odp, notifier);
> > +	struct rxe_mr *mr = umem_odp->private;
> > +	unsigned long start, end;
> > +
> > +	if (!mmu_notifier_range_blockable(range))
> > +		return false;
> > +
> > +	mutex_lock(&umem_odp->umem_mutex);
> > +	mmu_interval_set_seq(mni, cur_seq);
> > +
> > +	start = max_t(u64, ib_umem_start(umem_odp), range->start);
> > +	end = min_t(u64, ib_umem_end(umem_odp), range->end);
> > +
> > +	rxe_mr_unset_xarray(mr, start, end);
> > +
> > +	/* update umem_odp->dma_list */
> > +	ib_umem_odp_unmap_dma_pages(umem_odp, start, end);
> 
> This seems like a strange thing to do, rxe has the xarray so why does
> it use the odp->dma_list?

I tried to reuse the existing rxe code for RDMA operations, and that required updating the xarray for ODP cases as well. I think using only the pfn_list is technically feasible.

> I think what you want is to have rxe disable the odp->dma_list and use
> its xarray instead
> 
> Or use the odp lists as-is and don't include the xarray?

As you pointed out in your reply to the next patch, the current implementation introduces redundant copying overhead. We cannot avoid that with the xarray, so I would rather use the odp lists only.

Regards,
Daisuke

> 
> Jason