On Thu, Aug 22, 2019 at 4:24 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Aug 22, 2019 at 10:42:39AM +0200, Daniel Vetter wrote:
>
> > > RDMA has a mutex:
> > >
> > >   ib_umem_notifier_invalidate_range_end
> > >     rbt_ib_umem_for_each_in_range
> > >       invalidate_range_start_trampoline
> > >         ib_umem_notifier_end_account
> > >           mutex_lock(&umem_odp->umem_mutex);
> > >
> > > I'm working to delete this path though!
> > >
> > > nonblocking or not follows the start, the same flag gets placed into
> > > the mmu_notifier_range struct passed to end.
> >
> > Ok, makes sense.
> >
> > I guess that also means the might_sleep (I started on that) in
> > invalidate_range_end also needs to be conditional? Or not bother with
> > a might_sleep in invalidate_range_end since you're working on removing
> > the last sleep in there?
>
> I might suggest the same pattern as used for locked, the might_sleep
> unconditionally on the start, and a 2nd might_sleep after the IF in
> __mmu_notifier_invalidate_range_end()
>
> Observing that by audit all the callers already have the same locking
> context for start/end

My question was more about enforcing that going forward, since you're
working to remove all the sleeps from invalidate_range_end. I don't want
to add debug annotations that are stricter than what the other side
actually expects. But since there are currently still sleeping locks in
invalidate_range_end, I think I'll just stick the annotations in both
places. You can then (re)move them when the cleanup lands.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
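
For context, a minimal sketch of the annotation pattern discussed above,
modelled on the 5.3-era static inline wrappers in
include/linux/mmu_notifier.h. The placement of the checks is an
illustration of the thread's reasoning under those assumptions, not the
patch that eventually landed:

    /*
     * Sketch only, based on the 5.3-era wrappers in
     * include/linux/mmu_notifier.h; not the actual patch.
     */

    static inline void
    mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
    {
            /* Blockable start paths must always be allowed to sleep. */
            might_sleep();

            if (mm_has_notifiers(range->mm)) {
                    range->flags |= MMU_NOTIFIER_RANGE_BLOCKABLE;
                    __mmu_notifier_invalidate_range_start(range);
            }
    }

    static inline void
    mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
    {
            /*
             * By audit, all callers use the same locking context for
             * start and end, so mirror the annotation here, conditional
             * on the blockable flag that start stored in the range.
             * This can be (re)moved once invalidate_range_end no longer
             * takes any sleeping locks.
             */
            if (mmu_notifier_range_blockable(range))
                    might_sleep();

            if (mm_has_notifiers(range->mm))
                    __mmu_notifier_invalidate_range_end(range, false);
    }

The end-side check has to be conditional because the _nonblock start
variant never sets MMU_NOTIFIER_RANGE_BLOCKABLE, and might_sleep() in an
atomic context would otherwise trip a false positive.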