The invalidate page callback used to happen outside the page table
spinlock, and thus the callback used to be allowed to sleep. This is
no longer the case. However, all calls to mmu_notifier_invalidate_page()
are now bracketed by calls to mmu_notifier_invalidate_range_start()/
mmu_notifier_invalidate_range_end().

Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Bernhard Held <berny156@xxxxxx>
Cc: Adam Borowski <kilobyte@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
Cc: Wanpeng Li <kernellwp@xxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Takashi Iwai <tiwai@xxxxxxx>
Cc: Nadav Amit <nadav.amit@xxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: axie <axie@xxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
 include/linux/mmu_notifier.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index c91b3bcd158f..acc72167b9cb 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -100,6 +100,12 @@ struct mmu_notifier_ops {
 	 * pte because the page hasn't been freed yet and it won't be
 	 * freed until this returns. If required set_page_dirty has to
	 * be called internally to this method.
+	 *
+	 * Note that previously this callback wasn't called from under
+	 * a spinlock and thus you were allowed to sleep inside it. This
+	 * is no longer the case. However, now every call to this callback
+	 * is either bracketed by calls to range_start()/range_end() or
+	 * followed by a call to invalidate_range().
 	 */
	void (*invalidate_page)(struct mmu_notifier *mn,
				struct mm_struct *mm,
--
2.13.5
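
As an illustration of the new rule, a notifier implementation that
used to sleep in ->invalidate_page() might adapt along these lines.
This is a minimal sketch, not part of this patch; the example_* names
are hypothetical and stand in for a driver's real callbacks:

	#include <linux/mmu_notifier.h>
	#include <linux/printk.h>

	/* May sleep: still invoked outside the page table spinlock. */
	static void example_invalidate_range_start(struct mmu_notifier *mn,
						   struct mm_struct *mm,
						   unsigned long start,
						   unsigned long end)
	{
		/* Heavyweight or sleeping teardown of secondary
		 * mappings belongs here, before the PTEs change. */
		pr_debug("example: invalidate range %lx-%lx\n", start, end);
	}

	/* Must not sleep: may now run under the page table spinlock. */
	static void example_invalidate_page(struct mmu_notifier *mn,
					    struct mm_struct *mm,
					    unsigned long address)
	{
		/* Only atomic bookkeeping here, e.g. marking a
		 * secondary TLB entry stale. */
		pr_debug("example: invalidate page at %lx\n", address);
	}

	static const struct mmu_notifier_ops example_mmu_notifier_ops = {
		.invalidate_page	= example_invalidate_page,
		.invalidate_range_start	= example_invalidate_range_start,
	};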