On Thu, 2018-04-12 at 13:24 -0700, Andres Freund wrote:
> On 2018-04-12 07:09:14 -0400, Jeff Layton wrote:
> > On Wed, 2018-04-11 at 20:02 -0700, Matthew Wilcox wrote:
> > > On Wed, Apr 11, 2018 at 07:17:52PM -0700, Andres Freund wrote:
> > > > > > While there are some differing opinions on the referenced
> > > > > > postgres thread, the fundamental problem isn't so much that
> > > > > > a retry won't fix the problem, it's that we might NEVER see
> > > > > > the failure. If writeback happens in the background,
> > > > > > encounters an error, and undirties the buffer, we will
> > > > > > happily carry on because we've never seen that. That's when
> > > > > > we're majorly screwed.
> > > > >
> > > > > I think there are two issues here - "fsync() on an fd that
> > > > > was just opened" and "persistent error state (without keeping
> > > > > dirty pages in memory)".
> > > > >
> > > > > If there is background data writeback *without an open file
> > > > > descriptor*, there is no mechanism for the kernel to return
> > > > > an error to any application which may exist, or may not ever
> > > > > come back.
> > > >
> > > > And that's *horrible*. If I cp a file, and writeback fails in
> > > > the background, and I then cat that file before restarting, I
> > > > should be able to see that it failed, instead of getting
> > > > something bogus back.
> >
> > What are you expecting to happen in this case? Are you expecting a
> > read error due to a writeback failure? Or are you just saying that
> > we should be invalidating pages that failed to be written back, so
> > that they can be re-read?
>
> Yes, I'd hope for a read error after a writeback failure. I think
> that's sane behaviour. But I don't really care *that* much.

I'll have to respectfully disagree. Why should I interpret an error on
a read() syscall to mean that an earlier writeback failed? Note that
the data is still potentially intact.

What _might_ make sense, IMO, is to just invalidate the pages that
failed to be written back. Then you could do a read to fault them in
again (i.e. sync the pagecache with the backing store) and possibly
redirty them for another try.

Note that you can detect this situation by checking the return code
from fsync: it should report the latest error once per file
description.
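To make that concrete, here's a rough (untested) userland sketch of
that check; the helper name is made up, and it assumes a kernel with
the errseq_t-based error reporting:

/*
 * Detect a prior writeback failure by (re)opening the file and
 * calling fsync(). The fd doesn't have to be the one the data was
 * originally written through.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int check_writeback_error(const char *path)
{
	int fd, ret = 0;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -errno;

	/*
	 * fsync should fail (typically with EIO) if a writeback error
	 * that this file description hasn't yet seen is recorded
	 * against the inode.
	 */
	if (fsync(fd) < 0) {
		ret = -errno;
		fprintf(stderr, "%s: writeback error: %s\n",
			path, strerror(errno));
	}

	close(fd);
	return ret;
}

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 2;
	}
	return check_writeback_error(argv[1]) ? 1 : 0;
}

Once the error has been reported to a given file description, a second
fsync on the same fd should return 0; a newly opened fd gets its own
chance to see it.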
> At the very least *some* way to *know* that such a failure occurred
> from userland, without having to parse the kernel log. As far as I
> understand, neither sync(2) (and thus sync(1)) nor syncfs(2) is
> guaranteed to report an error if it was encountered by writeback in
> the background.
>
> If that's indeed true for syncfs(2), even if the fd has been opened
> before (which I can see happening from an implementation POV, since
> nothing would associate a random FD with failures on different
> files), it's really impossible to detect this stuff from userland
> without text parsing.

syncfs could use some work. I'm warming to willy's idea to add a
per-sb errseq_t. I think that might be a simple way to get better
semantics here. Not sure how we want to handle the reporting end yet,
though...

We probably also need to consider how to better track metadata
writeback errors (on e.g. ext2). We don't really do that properly
quite yet either.

> Even if it were just a per-fs /sys/$something file that'd return the
> current count of unreported errors in a filesystem-independent way,
> it'd be better than what we have right now:
>
> 1) figure out which /sys/$whatnot $directory belongs to
> 2) oldcount=$(cat /sys/$whatnot/unreported_errors)
> 3) filesystem operations in $directory
> 4) sync;sync;
> 5) newcount=$(cat /sys/$whatnot/unreported_errors)
> 6) test "$oldcount" -eq "$newcount" || die-with-horrible-message
>
> It isn't beautiful to script, but it's also not absolutely terrible.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>