On Mon, Oct 5, 2015 at 10:18 PM, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> Your ext4 patch may well fix the issue, and be the right thing to do
> (_regardless_ of the revert, in fact - while it might make the revert
> unnecessary, it might also be a good idea even if we do revert).

Thinking a bit more about your patch, I actually am getting more and
more convinced that it's the wrong thing to do.

Why? Because the whole "Setting copied=0 will tell the upper layers to
repeat the write" just seems like a nasty layering violation, where the
low-level filesystem uses a magic return code to introduce a special
case at the upper layers.

But the upper layers are actually already *aware* of the special case,
and in fact have a comment about it.

So I think that the whole "setting copied to 0" would actually make a
lot more sense in the *caller*. Just do it in generic_perform_write()
instead. Then all the special cases and the restarting are together in
one place.

What do you guys think? This basically simplifies the low-level
filesystem rules, and says:

 - the filesystem will only ever see a partial "->write_end()" for the
   case where the page was up-to-date, so that there is no issue with
   "oops, we now have part of the page that may not have been written
   at all"

 - if the page wasn't up-to-date before, "->write_end()" will either be
   everything we said we'd do in "->write_begin()", or it will be
   nothing at all.

Hmm? This would seem to keep the special cases at the right layer, and
actually allow low-level filesystems to simplify things (ie the
"copied = 0" special case in ext4 goes away).

The ext4 side still worries me, though. You made that
"page_zero_new_buffers()" call conditional on "copied" being non-zero,
but I'm not convinced it can be conditional. Even if we retry, that
retry may end up failing (for example, because the source isn't mapped,
so we return -EFAULT rather than re-doing the write), but we have those
new buffers that got allocated in write_begin(), and now nobody has
ever written any data to them at all, so they have random stale
contents.

So I do think this needs more thought. Or at least somebody should
explain to me better why it's all ok.

I'm attaching the "copied = 0" special-case handling at the
generic_perform_write() level as a patch for comments. But for now I
still think that reverting would seem to be the safer thing (which
still possibly leaves things buggy with a racy unmap, but at least it's
the old bug that we've never hit in practice).

Dave? Ted? Comments?

                     Linus
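For context, the write loop in generic_perform_write() that the patch
below hooks into looks roughly like this. This is a hand-trimmed sketch
from memory, not the verbatim mm/filemap.c code of that kernel, but it
shows the existing "copied == 0" handling after ->write_end() - the
special case the upper layer already knows about - and the restart path
that the new check would share:

	do {
		/* offset and bytes for this page are computed here ... */
again:
		/*
		 * Fault in the user page we will copy from _first_;
		 * copying from the same page we are writing to could
		 * otherwise deadlock while it is not yet up-to-date.
		 */
		if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
			status = -EFAULT;
			break;
		}

		status = a_ops->write_begin(file, mapping, pos, bytes, flags,
						&page, &fsdata);
		if (unlikely(status < 0))
			break;

		pagefault_disable();
		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
		pagefault_enable();
		flush_dcache_page(page);

		/* <-- the patch below adds its "copied = 0" check here */

		status = a_ops->write_end(file, mapping, pos, bytes, copied,
						page, fsdata);
		if (unlikely(status < 0))
			break;
		copied = status;

		iov_iter_advance(i, copied);
		if (unlikely(copied == 0)) {
			/*
			 * Nothing was copied: fall back to a single-segment
			 * write and retry (after faulting in the source at
			 * the top of the loop), rather than livelocking.
			 */
			bytes = min_t(unsigned long, PAGE_CACHE_SIZE - offset,
					iov_iter_single_seg_count(i));
			goto again;
		}
		pos += copied;
		written += copied;
	} while (iov_iter_count(i));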
 mm/filemap.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index 72940fb38666..e8d01936817a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2493,6 +2493,20 @@ again:
 		pagefault_enable();
 		flush_dcache_page(page);
 
+		/*
+		 * If we didn't successfully copy all the data from user space,
+		 * and the target page is not up-to-date, we will have to prefault
+		 * the source. And if the page wasn't up-to-date before the write,
+		 * the "write_end()" may need to *make* it up-to-date, and thus
+		 * overwrite our partial copy.
+		 *
+		 * So for that case, throw away the whole thing and force a full
+		 * restart (see comment above, and iov_iter_fault_in_readable()
+		 * below).
+		 */
+		if (copied < bytes && !PageUptodate(page))
+			copied = 0;
+
 		status = a_ops->write_end(file, mapping, pos, bytes, copied,
 						page, fsdata);
 		if (unlikely(status < 0))
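To illustrate what the simplified rule would buy the filesystem side, a
purely hypothetical ->write_end() (example_write_end() is made up here
for illustration, it is not ext4's actual code) could then just pass the
honest "copied" value on to generic_write_end() without any "return 0 to
force a retry" trick of its own:

static int example_write_end(struct file *file, struct address_space *mapping,
			     loff_t pos, unsigned len, unsigned copied,
			     struct page *page, void *fsdata)
{
	/*
	 * Under the proposed rule, a partial "copied" is only ever passed
	 * in when the page was already up-to-date, so there is no window
	 * where buffers allocated in ->write_begin() are left holding
	 * stale, never-written data.
	 */
	return generic_write_end(file, mapping, pos, len, copied, page, fsdata);
}

Whether ext4 can really drop its own handling is exactly the
page_zero_new_buffers() question raised above, since a failed retry
still leaves the newly allocated buffers unwritten.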