On Tue, Sep 4, 2018 at 7:09 PM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
>
> On Tue, 2018-09-04 at 16:58 +0800, Trol wrote:
> > On Tue, Sep 4, 2018 at 3:53 PM Rogier Wolff <R.E.Wolff@xxxxxxxxxxxx> wrote:
> > >
> > > ...
> > > >
> > > > Jlayton's patch is a simple but wonderful idea towards correct error
> > > > reporting. It seems one crucial thing is still to be fixed. Does
> > > > anyone have some idea?
> > > >
> > > > The crucial thing may be that a read() after a successful
> > > > open()-write()-close() may return old data.
> > > >
> > > > That may happen when an async writeback error occurs after close()
> > > > and the inode/mapping get evicted before read().
> > >
> > > Suppose I have 1Gb of RAM. Suppose I open a file, write 0.5Gb to it
> > > and then close it. Then I repeat this 9 times.
> > >
> > > Now, when writing those files to storage fails, there is 5Gb of data
> > > to remember and only 1Gb of RAM.
> > >
> > > I can choose any part of that 5Gb and try to read it.
> > >
> > > Please make a suggestion about where we should store that data?
> >
> > That is certainly not possible to do. But at least, shall we report an
> > error on read()? Silently returning wrong data may cause further damage,
> > such as removing the wrong files because they were marked as garbage in
> > the stale list file.
>
> Is the data wrong though? You tried to write and then that failed.
> Eventually we want to be able to get at the data that's actually in the
> file -- what is that point?

The point is that silent data corruption is dangerous. I would prefer
getting an error back to receiving wrong data.

A practical and concrete example: a disk-cleaner program first searches
for garbage files that won't be used anymore, saves the list in a file
(open()-write()-close()), and waits for the user to confirm the list of
files to be removed. A writeback error occurs, and the related
page/inode/address_space gets evicted while the user is taking a long
time to think it over.
Finally, the user hits enter and the cleaner open()s and read()s the
list again. But what gets removed is the old list of files that was
generated several months ago...

Another example: an email editor and a busy mail sender. A carefully
written mail to my boss is composed in the editor and saved to a file
(open()-write()-close()). The mail sender is notified with the path of
the mail file so it can queue the mail and send it later. A writeback
error occurs, and the related page/inode/address_space gets evicted
while the mail is still waiting in the sender's queue. Finally, the
mail file is open()ed and read() by the sender, but what actually goes
out is the mail to my girlfriend that was composed yesterday...

In both cases, the files are not meant to be persisted onto the disk,
so fsync() is not likely to be called.

> If I get an error back on a read, why should I think that it has
> anything at all to do with writes that previously failed? It may even
> have been written by a completely separate process that I had nothing at
> all to do with.
>
> > As I can see, that is all about error reporting.
> >
> > As for a suggestion: maybe the error flag of the inode/mapping, or the
> > entire inode, should not be evicted if there was an error. That
> > hopefully won't take much memory. Under extreme conditions, where too
> > many error-marked inodes would have to stay in memory, maybe we should
> > panic rather than spread the error.
> >
> > > In the easy case, where the data easily fits in RAM, you COULD write a
> > > solution. But when the hardware fails, the SYSTEM will not be able to
> > > follow the posix rules.
> >
> > Nope, we are able to follow the rules. The above is one way that follows
> > the POSIX rules.
>
> This is something we discussed at LSF this year.
>
> We could attempt to keep dirty data around for a little while, at least
> long enough to ensure that reads reflect earlier writes until the errors
> can be scraped out by fsync.
> That would sort of redefine fsync from being "ensure that my writes are
> flushed" to "synchronize my cache with the current state of the file".
>
> The problem of course is that applications are not required to do fsync
> at all. At what point do we give up on it, and toss out the pages that
> can't be cleaned?
>
> We could allow for a tunable that does a kernel panic if writebacks fail
> and the errors are never fetched via fsync, and we run out of memory. I
> don't think that is something most users would want though.
>
> Another thought: maybe we could OOM kill any process that has the file
> open and then toss out the page data in that situation?
>
> I'm wide open to (good) ideas here.

As I said above, silent data corruption is dangerous, and maybe we
really should report errors to user space even in desperate cases.

One possible approach may be:

- When a writeback error occurs, mark the page clean and remember the
  error in the inode/address_space of the file. I think that is what
  the kernel is doing currently.

- If a following read() can be served by a page in memory, just return
  the data. If a following read() cannot be served by a page in memory
  and the inode/address_space carries a writeback error mark, return
  EIO. If there is a writeback error on the file and the requested data
  cannot be served by a page in memory, we are reading a (partially)
  corrupted (out-of-date) file, so receiving an EIO is expected.

- We refuse to evict inodes/address_spaces that are marked with a
  writeback error. If the number of error-marked inodes reaches a
  limit, we just refuse to open new files (or at least to open new
  files for writing). That would NOT take as much memory as retaining
  the pages themselves, as the cost is per file/inode rather than per
  byte of the file. Limiting the number of error-marked inodes is just
  like the limit we currently place on the number of open files.

- Finally, after the system reboots, programs may see (partially)
  corrupted (out-of-date) files. Since user space programs didn't mean
  to persist these files (they didn't call fsync()), that is fairly
  reasonable.

> --
> Jeff Layton <jlayton@xxxxxxxxxx>