Hi Jason,

> On Tue, Jul 18, 2023 at 01:28:56AM -0700, Vivek Kasireddy wrote:
> > Currently, there does not appear to be any mechanism for letting
> > drivers or other kernel entities know about updates made in a
> > mapping, particularly when a new page is faulted in. Providing
> > notifications for such situations is really useful when using
> > memfds backed by ram-based filesystems such as shmem or hugetlbfs
> > that also allow FALLOC_FL_PUNCH_HOLE.
>
> Huh? You get an invalidate when this happens and the address becomes
> non-present.
Yes, we do get an invalidate (range), but it is not going to help in my
use-case. This is because the invalidate only indicates that the old
pages are gone, not that new pages have become available. IIUC, after a
hole is punched, new pages are faulted in only when writes are made to
the region where the hole was punched. So, what would really help is a
notification when a new page becomes part of the mapping at a given
offset.

> > More specifically, when a hole is punched in a memfd (that is
> > backed by shmem or hugetlbfs), a driver can register for
> > notifications associated with range invalidations. However, it
> > would also be useful to have notifications when new pages are
> > faulted in as a result of writes made to the mapping region that
> > overlaps with a previously punched hole.
>
> If there is no change to the PTEs then it is hard to see why this
> would be part of a mmu_notifier.
IIUC, the PTEs do get changed, but only when a new page is faulted in.
For shmem, the PTEs are updated in handle_pte_fault() after
shmem_fault() is called, and for hugetlbfs this happens in
hugetlb_fault().

Instead of introducing a new notifier, I did think about reusing (or
overloading) .change_pte(), but I did not fully understand the impact
it would have on KVM, the only user of .change_pte().

Thanks,
Vivek

> Jason