Re: [Lsf-pc] [LSF/MM TOPIC] I/O error handling and fsync()

On Mon, 2017-01-23 at 11:09 +0100, Kevin Wolf wrote:
> Am 23.01.2017 um 01:21 hat Theodore Ts'o geschrieben:
> > On Sun, Jan 22, 2017 at 06:31:57PM -0500, Jeff Layton wrote:
> > > 
> > > Ahh, sorry if I wasn't clear.
> > > 
> > > I know Kevin posed this topic in the context of QEMU/KVM, and I figure
> > > that running virt guests (themselves doing all sorts of workloads) is a
> > > pretty common setup these days. That was what I meant by "use case"
> > > here. Obviously there are many other workloads that could benefit from
> > > (or be harmed by) changes in this area.
> > > 
> > > Still, I think that looking at QEMU/KVM as a "application" and
> > > considering what we can do to help optimize that case could be helpful
> > > here (and might also be helpful for other workloads).
> > 
> > Well, except for QEMU/KVM, Kevin has already confirmed that using
> > Direct I/O is a completely viable solution.  (And I'll add it solves a
> > bunch of other problems, including page cache efficiency....)
> 

Sure, O_DIRECT does make this simpler (though it's not always the most
efficient way to do I/O). I'm more interested in whether we can improve
the error handling with buffered I/O.
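For comparison, here's a minimal sketch (file name and sizes made up)
of why the O_DIRECT path is easier to reason about: the error comes
back on the write() itself, attributed to one specific request,
instead of at some later fsync() of who-knows-what:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            void *buf;
            int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* O_DIRECT requires aligned buffers; 4096 covers most
             * devices, though the real requirement is the logical
             * block size of the underlying device. */
            if (posix_memalign(&buf, 4096, 4096))
                    return 1;
            memset(buf, 0, 4096);

            /* The I/O bypasses the page cache, so a failure is
             * reported right here, for this exact range. */
            if (write(fd, buf, 4096) < 0)
                    perror("write");

            free(buf);
            close(fd);
            return 0;
    }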

Maybe it's possible to add new flags/behaviors to sync_file_range()
such that you could more easily determine the writeback error status
of a range? Or maybe a new syscall would be needed?
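
Just to make the idea concrete, something along these lines (note the
SYNC_FILE_RANGE_REPORT_ERROR flag below is purely hypothetical, not an
existing interface):

    #define _GNU_SOURCE
    #include <fcntl.h>

    /*
     * What you can express today: start writeback for a range and
     * wait for it to complete.  The catch is that the error return
     * from sync_file_range() is not a reliable data-integrity
     * indicator, which is exactly the gap under discussion.
     */
    static int flush_range(int fd, off_t offset, off_t nbytes)
    {
            return sync_file_range(fd, offset, nbytes,
                                   SYNC_FILE_RANGE_WAIT_BEFORE |
                                   SYNC_FILE_RANGE_WRITE |
                                   SYNC_FILE_RANGE_WAIT_AFTER);
    }

    /*
     * Purely hypothetical: a flag that reports any writeback error
     * recorded against the range, without the "reported once, then
     * cleared" behavior of fsync().  No such flag exists today; it
     * just illustrates the shape of the interface.
     *
     * #define SYNC_FILE_RANGE_REPORT_ERROR 0x8
     */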

Either way, getting a good feel for how QEMU would like to handle this
situation would be informative. It might be possible to make things
better with small tweaks to existing interfaces.

> Yes, "don't ever use non-O_DIRECT in production" is probably workable as
> a solution to the "state after failed fsync()" problem, as long as it is
> consistently implemented throughout the stack. That is, if we use a
> network protocol in QEMU (NFS, gluster, etc.), the server needs to use
> O_DIRECT, too, if we don't want to get the same problem one level down
> the stack. I'm not sure if that's possible with all of them, but if it
> is, it's mostly just a matter of configuring them correctly.
> 
> However, if we look at the greater problem of hanging requests that came
> up in the more recent emails of this thread, it is only moved rather
> than solved. Chances are that already write() would hang now instead of
> only fsync(), but we still have a hard time dealing with this.
> 

Yeah, not much you can do there currently (at least when it comes to
NFS). If you had your choice, what would you like to have happen in
this situation where there is a loss of communication between client
and server?

-- 
Jeff Layton <jlayton@xxxxxxxxxxxxxxx>
