On Tue, Dec 01, 2020 at 05:30:33PM -0500, Peter Xu wrote:
> On Tue, Dec 01, 2020 at 12:59:27PM +0000, Matthew Wilcox wrote:
> > On Mon, Nov 30, 2020 at 06:06:03PM -0500, Peter Xu wrote:
> > > Faulting around for reads is in most cases helpful for performance, so
> > > that continuous memory accesses may avoid another trip through the page
> > > fault path.  However it may not always work as expected.
> > >
> > > For example, userfaultfd registered regions may not be the best
> > > candidate for pre-faults around the reads.
> > >
> > > For missing mode uffds, fault around does not help because if the page
> > > cache existed, then the page should be there already.  If the page cache
> > > is not there, there's nothing else we can do, either.  If the
> > > fault-around code is destined to be helpless for userfault-missing vmas,
> > > then ideally we can skip it.
> >
> > This sounds like you're thinking of a file which has exactly one user.
> > If there are multiple processes mapping the same file, then no, there's
> > no reason to expect a page to be already present in the page table,
> > just because it's present in the page cache.
> >
> > > For wr-protected mode uffds, erroneously faulting in the pages around
> > > could lead to threads accessing the pages without the uffd server's
> > > awareness.  For example, when punching holes on uffd-wp registered shmem
> > > regions, we'll first try to unmap all the pages before evicting the page
> > > cache, but without locking the page (please refer to shmem_fallocate(),
> > > where unmap_mapping_range() is called before shmem_truncate_range()).
> > > When fault-around happens near a hole being punched, we might
> > > erroneously fault in the "holes" right before they are punched.  Then
> > > there's a small window after the page becomes writable again and before
> > > the page cache is finally dropped (NOTE: the uffd-wp protect information
> > > is totally lost due to the pre-unmap in shmem_fallocate(), so the page
> > > can be writable within the small window).  That's severe data loss.
> >
> > This still doesn't make sense.  If the page is Uptodate in the page
> > cache, then userspace gets to access it.  If you don't want the page to
> > be accessible, ClearPageUptodate().  read() can also access it if it's
> > marked Uptodate.  A write fault on a page will call the filesystem's
> > page_mkwrite() and you can block it there.
>
> I still don't think the page_mkwrite() could help here... Though Andrea pointed

I tend to agree page_mkwrite won't help: there's no I/O, no dirty
memory pressure, and not even VM_FAULT_MISSING can work on real
filesystems yet.

The uffd context is associated with certain virtual addresses in the
"mm"; read/write syscalls shouldn't notice any difference, and all
other mm's shouldn't notice anything either.

It should be enough to check the bit in shmem_fault, invoked through
->fault, for this purpose; the problem is that we need the bit to
survive the invalidate.
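As a rough illustration of that last point (nothing below exists today;
shmem_uffd_wp_test() is a made-up helper standing in for whatever
per-mapping state would have to record the wr-protection), the check
from ->fault could look something like:

	/*
	 * Sketch only: consult a hypothetical per-mapping record of
	 * wr-protected offsets from shmem_fault(), before the pte is
	 * installed, so the decision no longer depends on a pte bit
	 * that unmap_mapping_range() has already thrown away.
	 */
	if (userfaultfd_wp(vma) && (vmf->flags & FAULT_FLAG_WRITE) &&
	    shmem_uffd_wp_test(inode->i_mapping, vmf->pgoff))
		return handle_userfault(vmf, VM_UFFD_WP);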
> out a more important issue against swap cache (in the v1 thread [1]).
> Indeed if we have those figured out maybe we'll also rethink this patch;
> then it could become optional, while it seems to be required to allow
> shmem swap in/out with uffd-wp, which I haven't yet tested.  As Hugh
> pointed out, purely reusing _PAGE_SWP_UFFD_WP in the swap cache may not
> work trivially, since uffd-wp is per-pte rather than per-page, so I
> probably need to think a bit more on how to do that...
>
> I don't know whether a patch like this could still be good in the future.
> For now, let's drop this patch until we solve all the rest of the puzzle.
>
> My thanks to all the reviewers, and sorry for the noise!

Thanks to you Peter!  No noise here, it's great progress to have found
the next piece of the puzzle.

Any suggestions on how to have the per-vaddr per-mm _PAGE_UFFD_WP bit
survive the pte invalidates, in a way that remains associated with a
certain vaddr in a single mm (so it can shoot itself in the foot if it
wants, but it can't interfere with all the other mm's sharing the shmem
file), would be welcome...

Andrea
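P.S. One purely illustrative direction for such a bit, with every new
identifier below made up: instead of clearing the pte when the hole
punch zaps a wr-protected pte in a uffd-wp vma, install a non-present
marker entry in that mm's page table, so the information stays per-pte
and per-mm and the next fault on that vaddr can find it.

	/*
	 * Hypothetical sketch of the idea, not the actual zap path:
	 * pte_marker_uffd_wp() does not exist; it stands for a special
	 * non-present entry carried only by this mm's page table, so
	 * the other mm's mapping the same shmem file are unaffected.
	 */
	if (userfaultfd_wp(vma) && pte_uffd_wp(ptent))
		set_pte_at(mm, addr, pte, pte_marker_uffd_wp());
	else
		pte_clear(mm, addr, pte);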