On Tue, Dec 04, 2018 at 10:17:48AM +0200, Mike Rapoport wrote:
> On Mon, Dec 03, 2018 at 03:18:17PM -0500, jglisse@xxxxxxxxxx wrote:
> > From: Jérôme Glisse <jglisse@xxxxxxxxxx>

[...]

> > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> > index cbeece8e47d4..3077d487be8b 100644
> > --- a/include/linux/mmu_notifier.h
> > +++ b/include/linux/mmu_notifier.h
> > @@ -25,10 +25,43 @@ struct mmu_notifier_mm {
> >  	spinlock_t lock;
> >  };
> >
> > +/*
> > + * What event is triggering the invalidation:
>
> Can you please make it kernel-doc comment?

Sorry, I should have done that in the first place. Andrew, I will post a
v2 with that and a fix for my one stupid bug.

> > + *
> > + * MMU_NOTIFY_UNMAP
> > + *     either munmap() that unmap the range or a mremap() that move the range
> > + *
> > + * MMU_NOTIFY_CLEAR
> > + *     clear page table entry (many reasons for this like madvise() or replacing
> > + *     a page by another one, ...).
> > + *
> > + * MMU_NOTIFY_PROTECTION_VMA
> > + *     update is due to protection change for the range ie using the vma access
> > + *     permission (vm_page_prot) to update the whole range is enough no need to
> > + *     inspect changes to the CPU page table (mprotect() syscall)
> > + *
> > + * MMU_NOTIFY_PROTECTION_PAGE
> > + *     update is due to change in read/write flag for pages in the range so to
> > + *     mirror those changes the user must inspect the CPU page table (from the
> > + *     end callback).
> > + *
> > + * MMU_NOTIFY_SOFT_DIRTY
> > + *     soft dirty accounting (still same page and same access flags)
> > + */
> > +enum mmu_notifier_event {
> > +	MMU_NOTIFY_UNMAP = 0,
> > +	MMU_NOTIFY_CLEAR,
> > +	MMU_NOTIFY_PROTECTION_VMA,
> > +	MMU_NOTIFY_PROTECTION_PAGE,
> > +	MMU_NOTIFY_SOFT_DIRTY,
> > +};