Since mmu notifiers don't exist for most processes, but could block in
interesting places, add some annotations. This should help make sure
core mm keeps up its end of the mmu notifier contract.

Because of that, the checks here sit outside of all the notifier
checks, so they are exercised even when no notifiers are registered.
They compile away without CONFIG_DEBUG_ATOMIC_SLEEP.

Suggested by Jason.

Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: "Christian König" <christian.koenig@xxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Cc: "Jérôme Glisse" <jglisse@xxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
---
 include/linux/mmu_notifier.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 3f9829a1f32e..8b71813417e7 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -345,6 +345,8 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
 static inline void
 mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
+	might_sleep();
+
 	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(range->mm)) {
 		range->flags |= MMU_NOTIFIER_RANGE_BLOCKABLE;
@@ -368,6 +370,9 @@ mmu_notifier_invalidate_range_start_nonblock(struct mmu_notifier_range *range)
 static inline void
 mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
 {
+	if (mmu_notifier_range_blockable(range))
+		might_sleep();
+
 	if (mm_has_notifiers(range->mm))
 		__mmu_notifier_invalidate_range_end(range, false);
 }
-- 
2.23.0
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
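
For context, the driver-side half of the contract these annotations help
enforce might look roughly like the sketch below: a notifier's
invalidate_range_start callback may only sleep when
mmu_notifier_range_blockable() says so, and has to bail out with -EAGAIN
otherwise. This is not part of the patch; the structure and names
(my_data, my_invalidate_range_start, my_notifier_ops, the mutex) are made
up for illustration, only the mmu_notifier API calls are real.

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

/* Hypothetical driver state, for illustration only. */
struct my_data {
	struct mmu_notifier notifier;
	struct mutex lock;		/* protects device mappings, can sleep */
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct my_data *data = container_of(mn, struct my_data, notifier);

	if (!mmu_notifier_range_blockable(range)) {
		/* Nonblocking context (e.g. oom reaper): must not sleep. */
		if (!mutex_trylock(&data->lock))
			return -EAGAIN;
	} else {
		/* Blockable context: sleeping locks are fine here. */
		mutex_lock(&data->lock);
	}

	/* ... tear down device mappings for [range->start, range->end) ... */

	mutex_unlock(&data->lock);
	return 0;
}

static const struct mmu_notifier_ops my_notifier_ops = {
	.invalidate_range_start	= my_invalidate_range_start,
};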