On Thu, Nov 26, 2020 at 05:23:59PM -0500, Peter Xu wrote:
> For missing mode uffds, fault around does not help because if the page
> cache existed, then the page should be there already.  If the page cache
> is not there, there is nothing else we can do either.  If the fault-around
> code is destined to be helpless for userfault-missing vmas, then ideally
> we can skip it.

But it might have been faulted into the cache by another task, so skipping
it is bad.

> For wr-protected mode uffds, erroneously faulting in those pages around
> could lead to threads accessing the pages without the uffd server's
> awareness.  For example, when punching holes on uffd-wp registered shmem
> regions, we'll first try to unmap all the pages before evicting the page
> cache, but without locking the page (please refer to shmem_fallocate(),
> where unmap_mapping_range() is called before shmem_truncate_range()).
> When fault-around happens near a hole being punched, we might erroneously
> fault in the "holes" right before they are punched.  Then there's a small
> window after the pages become writable again and before the page cache is
> finally dropped (NOTE: the uffd-wp protect information is totally lost due
> to the pre-unmap in shmem_fallocate(), so the page can be writable within
> that small window).  That's severe data loss.

Sounds like you have a missing page_mkwrite implementation.

> This patch comes from debugging a data loss issue when working on the
> uffd-wp support on shmem/hugetlbfs.  I posted this out for early review
> and comments, but also because it should already start to benefit missing
> mode userfaultfd by avoiding fault around on reads.

A measurable difference?
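
For the missing-mode part, I assume the gate you have in mind is roughly
this in do_fault_around() (a sketch of the idea only, not your actual
patch):

	static vm_fault_t do_fault_around(struct vm_fault *vmf)
	{
		...
		/*
		 * Sketch: skip the speculative mapping of neighbouring
		 * pages when the vma is registered with userfaultfd, so
		 * that the first access to each page is still reported
		 * to the uffd server.  userfaultfd_armed() covers both
		 * VM_UFFD_MISSING and VM_UFFD_WP.
		 */
		if (unlikely(userfaultfd_armed(vmf->vma)))
			return 0;
		...
	}

Returning 0 there just makes do_read_fault() fall back to __do_fault() for
the single faulting page, so all that's lost is the batched map_pages()
call; hence the question about whether the win is measurable.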