On Tue, 2023-12-19 at 16:51 +0000, David Howells wrote:
> Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>
> > > This can't be used with content encryption as that may require
> > > expansion of the write RPC beyond the write being made.
> > >
> > > This doesn't affect writes via mmap - those are written back in the
> > > normal way; similarly failed writethrough writes are marked dirty and
> > > left to writeback to retry. Another option would be to simply
> > > invalidate them, but the contents can be simultaneously accessed by
> > > read() and through mmap.
> >
> > I do wish Linux were less of a mess in this regard. Different
> > filesystems behave differently when writeback fails.
>
> Cifs is particularly, um, entertaining in this regard as it allows the
> write to fail on the server due to a checksum failure if the source data
> changes during the write and then just retries it later.

Should they be using bounce pages here? Maybe skipping them is more
efficient in the common case, though, and worth the extra hit if failures
happen seldom enough.

> > That said, the modern consensus with local filesystems is to just leave
> > the pages clean when buffered writeback fails, but set a writeback error
> > on the inode. That at least keeps dirty pages from stacking up in the
> > cache. In the case of something like a netfs, we usually invalidate the
> > inode and the pages -- netfs's usually have to spontaneously deal with
> > that anyway, so we might as well.
> >
> > Marking the pages dirty here should mean that they'll effectively get a
> > second try at writeback, which is a change in behavior from most
> > filesystems. I'm not sure it's a bad one, but writeback can take a long
> > time if you have a laggy network.
>
> I'm not sure what the best thing to do is. If everything is doing
> O_DSYNC/writethrough I/O on an inode and there is no mmap, then
> invalidating the pages is probably not a bad way to deal with failure
> here.

That's a big if ;)

> > When a write has already failed once, why do you think it'll succeed on
> > a second attempt (and probably with page-aligned I/O, I guess)?
>
> See above with cifs. I wonder if the pages being written to should be
> made RO and page_mkwrite() forced to lock against DSYNC writethrough.

That sounds pretty heavy-handed, particularly if the server goes offline
for a bit. Now you're stuck in some locking call in page_mkwrite...

> > Another question: when the writeback is (re)attempted, will it end up
> > just doing page-aligned I/O, or is the byte range still going to be
> > limited to the written range?
>
> At the moment, it then happens exactly as it would if it wasn't doing
> writethrough - so it will write partial folios if it's doing a streaming
> write and will do full folios otherwise.
>
> > The more I consider it, I think it might be a lot simpler to just "fail
> > fast" here rather than re-marking the write dirty.
>
> You may be right - but, again, mmap :-/

There's nothing we can do about mmap -- we're stuck with page-sized I/Os
there. With normal buffered I/O I still think just leaving the pages clean
is probably the least bad option. I think it's also sort of the Linux
"standard" behavior (for better or worse).

Willy, do you have any thoughts here?
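To be concrete, the "leave clean, latch the error" pattern I have in mind
looks roughly like the sketch below. This is purely illustrative (the
function name and shape are made up, not what netfslib actually does); the
point is just that on a failed write we record the error on the mapping and
end writeback without redirtying anything:

#include <linux/pagemap.h>

/*
 * Illustrative only: on writeback completion, record any failure on the
 * mapping and leave the folio clean instead of redirtying it.  The error
 * is then reported to userland by the next fsync()/msync().
 */
static void example_writeback_done(struct address_space *mapping,
				   struct folio *folio, int error)
{
	if (error < 0)
		mapping_set_error(mapping, error);

	/* No folio_mark_dirty() on failure: the data stays clean. */
	folio_end_writeback(folio);
}

-- 
Jeff Layton <jlayton@xxxxxxxxxx>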