On Wed, Aug 21, 2019 at 9:33 AM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Tue, Aug 20, 2019 at 05:18:10PM +0200, Daniel Vetter wrote:
> > > > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > > > index 538d3bb87f9b..856636d06ee0 100644
> > > > +++ b/mm/mmu_notifier.c
> > > > @@ -181,7 +181,13 @@ int __mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
> > > >  	id = srcu_read_lock(&srcu);
> > > >  	hlist_for_each_entry_rcu(mn, &range->mm->mmu_notifier_mm->list, hlist) {
> > > >  		if (mn->ops->invalidate_range_start) {
> > > > -			int _ret = mn->ops->invalidate_range_start(mn, range);
> > > > +			int _ret;
> > > > +
> > > > +			if (!mmu_notifier_range_blockable(range))
> > > > +				non_block_start();
> > > > +			_ret = mn->ops->invalidate_range_start(mn, range);
> > > > +			if (!mmu_notifier_range_blockable(range))
> > > > +				non_block_end();
> > >
> > > If someone Acks all the sched changes then I can pick this for
> > > hmm.git, but I still think the existing pre-emption debugging is fine
> > > for this use case.
> >
> > Ok, I'll ping Peter Z. for an ack, iirc he was involved.
> >
> > > Also, same comment as for the lockdep map, this needs to apply to the
> > > non-blocking range_end also.
> >
> > Hm, I thought the page table locks we're holding there already prevent any
> > sleeping, so would be redundant?
>
> AFAIK no. All callers of invalidate_range_start/end pairs do so a few
> lines apart and don't change their locking in between - thus since
> start can block so can end.
>
> Would love to know if that is not true??

Yeah I reviewed them, I think I mixed up a discussion I had a while ago
with Jerome. It's a bit tricky to follow in the code, since in some places
->invalidate_range and ->invalidate_range_end seem to be called from the
same place, and in others not at all.

> Similarly I've also been idly wondering if we should add a
> 'might_sleep()' to invalidate_range_start/end() to make this constraint
> clear & tested to the mm side?
Hm, sounds like a useful idea. Since in general you won't test with mmu
notifiers, but they could happen, and then they will block on at least
some mutex, usually. I'll throw that in as an idea on top for the next
round.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx