On Wed, Aug 21, 2019 at 02:15:02PM -0300, Jason Gunthorpe wrote:
> On Mon, Aug 19, 2019 at 02:17:00PM +0300, Leon Romanovsky wrote:
> > From: Jason Gunthorpe <jgg@xxxxxxxxxxxx>
> >
> > Instead of intersecting a full interval, just iterate over every
> > element directly. This is faster and clearer.
> >
> > Signed-off-by: Jason Gunthorpe <jgg@xxxxxxxxxxxx>
> > Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxxxx>
> > ---
> >  drivers/infiniband/core/umem_odp.c | 51 ++++++++++++++++--------------
> >  drivers/infiniband/hw/mlx5/odp.c   | 41 +++++++++++-------------
> >  2 files changed, 47 insertions(+), 45 deletions(-)
> >
> > diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
> > index 8358eb8e3a26..b9bebef00a33 100644
> > --- a/drivers/infiniband/core/umem_odp.c
> > +++ b/drivers/infiniband/core/umem_odp.c
> > @@ -72,35 +72,41 @@ static void ib_umem_notifier_end_account(struct ib_umem_odp *umem_odp)
> >  	mutex_unlock(&umem_odp->umem_mutex);
> >  }
> >
> > -static int ib_umem_notifier_release_trampoline(struct ib_umem_odp *umem_odp,
> > -					       u64 start, u64 end, void *cookie)
> > -{
> > -	/*
> > -	 * Increase the number of notifiers running, to
> > -	 * prevent any further fault handling on this MR.
> > -	 */
> > -	ib_umem_notifier_start_account(umem_odp);
> > -	umem_odp->dying = 1;
>
> This patch was not applied on top of the commit noted in the cover
> letter

Strange, git log --oneline on my submission queue:

....
39c10977a728 RDMA/odp: Iterate over the whole rbtree directly
779c1205d0e0 RDMA/odp: Use the common interval tree library instead of generic
25705cc22617 RDMA/mlx5: Fix MR npages calculation for IB_ACCESS_HUGETLB

---

> > -	/* Make sure that the fact the umem is dying is out before we release
> > -	 * all pending page faults.
> > -	 */
> > -	smp_wmb();
> > -	complete_all(&umem_odp->notifier_completion);
> > -	umem_odp->umem.context->invalidate_range(
> > -		umem_odp, ib_umem_start(umem_odp), ib_umem_end(umem_odp));
> > -	return 0;
> > -}
> > -
> >  static void ib_umem_notifier_release(struct mmu_notifier *mn,
> >  				     struct mm_struct *mm)
> >  {
> >  	struct ib_ucontext_per_mm *per_mm =
> >  		container_of(mn, struct ib_ucontext_per_mm, mn);
> > +	struct rb_node *node;
> >
> >  	down_read(&per_mm->umem_rwsem);
> > -	if (per_mm->active)
> > -		rbt_ib_umem_for_each_in_range(
> > -			&per_mm->umem_tree, 0, ULLONG_MAX,
> > -			ib_umem_notifier_release_trampoline, true, NULL);
> > +	if (!per_mm->active)
> > +		goto out;
> > +
> > +	for (node = rb_first_cached(&per_mm->umem_tree); node;
> > +	     node = rb_next(node)) {
> > +		struct ib_umem_odp *umem_odp =
> > +			rb_entry(node, struct ib_umem_odp, interval_tree.rb);
> > +
> > +		/*
> > +		 * Increase the number of notifiers running, to prevent any
> > +		 * further fault handling on this MR.
> > +		 */
> > +		ib_umem_notifier_start_account(umem_odp);
> > +
> > +		umem_odp->dying = 1;
>
> So this ends up as a 'rebasing error'
>
> I fixed it
>
> Jason