Now, calling page_mkwrite() by itself is not enough, since the moment you make the page dirty, the page cleaner could go ahead and call writepage() behind your back and clean it. In actual practice, with a Direct I/O read request racing with writeback, this race was quite hard to hit, because it would require the background writepage() call to complete ahead of the synchronous read request, and the block layer generally prioritizes synchronous reads ahead of background write requests. So in practice, this race was ***very*** hard to hit. Jan may have reported it in 2018, but I don't think I've ever seen it happen myself. For process_vm_writev(), user pages are pinned and then released in short order, so I suspect that race with the page cleaner would also be very hard to hit.

But we could completely remove the potential for the race, and also make things kinder for f2fs's and btrfs's compressed file write support, by making things work much like the write(2) system call. Imagine if we had a "pin_user_pages_local()" which calls write_begin(), and an "unpin_user_pages_local()" which calls write_end(), with the presumption for the "[un]pin_user_pages_local" API that you don't hold the pinned pages for very long --- say, not across a system call boundary. Then it would work the same way the write(2) system call does, except that in the case of process_vm_writev(2) the pages are identified via another process's address space, where they happen to be mapped.

This obviously doesn't work when pinning pages for remote DMA, because in that case the time between pin_user_pages_remote() and unpin_user_pages_remote() could be a long, long time, which means we can't use write_begin/write_end; we'd need to call page_mkwrite() when the pages are first pinned and then somehow prevent the page cleaner from touching a dirty page which is pinned for use by remote DMA.

Does that make sense?

					- Ted
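
P.S. To make the idea a bit more concrete, here's a rough sketch of what I have in mind. To be clear, none of this exists today: the function names and the struct are made up, the ->write_begin/->write_end prototypes are the classic ones, everything is simplified to a single page lying entirely within the file, and I'm glossing over how process_vm_writev() would translate a remote user address into the backing file and offset (it would have to look at the target's VMA). The point is just that the filesystem sees the same write_begin / modify page / write_end bracket that it sees from write(2):

#include <linux/fs.h>
#include <linux/pagemap.h>

/*
 * Sketch only: nothing below exists in the tree.  The aops calls
 * use the classic write_begin/write_end prototypes, and everything
 * is simplified to a single page within i_size.
 */
struct local_pin {
	struct address_space	*mapping;
	struct page		*page;
	void			*fsdata;
	loff_t			pos;
	unsigned int		len;
};

/*
 * Pin one page of a file-backed mapping for a short-lived write.
 * write_begin() gives the filesystem the same chance it gets from
 * write(2) to allocate blocks, reserve space, or refuse (e.g., a
 * compressed file on btrfs or f2fs).
 */
static int pin_user_pages_local(struct file *file, loff_t pos,
				unsigned int len, struct local_pin *pin)
{
	struct address_space *mapping = file->f_mapping;
	int ret;

	ret = mapping->a_ops->write_begin(file, mapping, pos, len, 0,
					  &pin->page, &pin->fsdata);
	if (ret)
		return ret;

	pin->mapping = mapping;
	pin->pos = pos;
	pin->len = len;
	return 0;
}

/*
 * Drop the pin once the caller has copied its data into the page.
 * write_end() dirties the page and tells the filesystem how much
 * was actually written; from here on the page cleaner can write it
 * back whenever it likes, just as after a buffered write(2).
 */
static void unpin_user_pages_local(struct file *file,
				   struct local_pin *pin,
				   unsigned int copied)
{
	pin->mapping->a_ops->write_end(file, pin->mapping, pin->pos,
				       pin->len, copied, pin->page,
				       pin->fsdata);
}

The caller would kmap the pinned page, copy the user data in, and then call unpin_user_pages_local() right away --- never holding the pin across a system call boundary.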