Re: [Lsf-pc] [LSF/MM TOPIC] I/O error handling and fsync()

On Mon, Jan 23, 2017 at 07:10:00AM -0500, Jeff Layton wrote:
> > > Well, except for QEMU/KVM, Kevin has already confirmed that using
> > > Direct I/O is a completely viable solution.  (And I'll add it solves a
> > > bunch of other problems, including page cache efficiency....)
> 
> Sure, O_DIRECT does make this simpler (though it's not always the most
> efficient way to do I/O). I'm more interested in whether we can improve
> the error handling with buffered I/O.
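
As a rough illustration of the difference under discussion (my own
sketch, not something from the thread; the file name, buffer size and
alignment are arbitrary assumptions), the buffered path only reports a
writeback error back to the application at fsync() time, if at all,
while the O_DIRECT path sees the error on the write() itself:

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        int fd;

        /*
         * Buffered path: write() normally succeeds into the page cache
         * even if the eventual writeback will fail; the application
         * only learns about the failure (at best) from fsync().
         */
        fd = open("data.img", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        memset(buf, 0, sizeof(buf));
        if (write(fd, buf, sizeof(buf)) < 0)
                perror("write");
        if (fsync(fd) < 0)
                perror("fsync");        /* writeback errors surface here */
        close(fd);

        /*
         * O_DIRECT path: the write goes straight to storage, so an I/O
         * error comes back on the write() call itself.  Buffer, offset
         * and length must be suitably aligned (4096 bytes here).
         */
        void *dbuf;
        if (posix_memalign(&dbuf, 4096, 4096) != 0) {
                fprintf(stderr, "posix_memalign failed\n");
                return 1;
        }
        memset(dbuf, 0, 4096);
        fd = open("data.img", O_WRONLY | O_DIRECT);
        if (fd < 0) {
                perror("open O_DIRECT");
                return 1;
        }
        if (write(fd, dbuf, 4096) < 0)
                perror("write O_DIRECT");       /* error reported directly */
        close(fd);
        free(dbuf);
        return 0;
}

Whether a writeback error that happens between the write() and the
fsync() is reliably reported back to the application is, as I
understand it, the buffered-I/O question here.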

I just want to make sure we're designing a solution that will actually
be _used_ because it is a good fit for at least one real-world use
case.

Is QEMU/KVM with volumes stored over NFS really used in the real
world?  Especially in a deployment where you want a huge amount of
reliability and recovery after some kind of network failure?  If we are
talking about customers who are going to suspend the VM and restart it
on another server, that presumes a fairly large installation size and
enough servers, so would they *really* want to use a single point of
failure such as an NFS filer?  Even if it were a proprietary,
purpose-built NFS filer?  Why wouldn't they be using RADOS and Ceph
instead, for example?

					- Ted