Re: fsync() errors is unsafe and risks data loss

On Thu, 2018-04-12 at 15:45 +1000, Dave Chinner wrote:
> On Wed, Apr 11, 2018 at 07:32:21PM -0700, Andres Freund wrote:
> > Hi,
> > 
> > On 2018-04-12 10:09:16 +1000, Dave Chinner wrote:
> > > To pound the broken record: there are many good reasons why Linux
> > > filesystem developers have said "you should use direct IO" to the PG
> > > devs each time we have this "the kernel doesn't do <complex things
> > > PG needs>" discussion.
> > 
> > I personally am on board with doing that. But you also gotta recognize
> > that an efficient DIO usage is a metric ton of work, and you need a
> > large amount of differing logic for different platforms. It's just not
> > realistic to do so for every platform.  Postgres is developed by a small
> > number of people, isn't VC backed etc. The amount of resources we can
> > throw at something is fairly limited.  I'm hoping to work on adding
> > linux DIO support to pg, but I'm sure as hell not going to be able to
> > do the same on windows (solaris, hpux, aix, ...) etc.
> > 
> > And there's cases where that just doesn't help at all. Being able to
> > untar a database from backup / archive / timetravel / whatnot, and then
> > fsyncing the directory tree to make sure it's actually safe, is really
> > not an insane idea.
> 
> Yes it is. 
> 
> This is what syncfs() is for - making sure a large amount of data
> and metadata spread across many files and subdirectories in a single
> filesystem is pushed to stable storage in the most efficient manner
> possible.
> 

Just note that the error return from syncfs is somewhat iffy. It doesn't
necessarily return an error when one inode fails to be written back. I
think it mainly returns errors when you get a metadata writeback error.


> > Or even just cp -r ing it, and then starting up a
> > copy of the database.  What you're saying is that none of that is doable
> > in a safe way, unless you use special-case DIO-using tooling for the
> > whole operation (or at least tools that fsync carefully without ever
> > closing a fd, which certainly isn't the case for cp et al).
> 
> No, just saying that fsyncing individual files and directories is about
> the most inefficient way you could possibly go about doing this.
> 

You can still use syncfs(), but what you'd probably have to do is call
syncfs() while you still hold all of the fds open, and then fsync() each
one afterward to ensure that they all got written back properly. That
should work as you'd expect.
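
Roughly something like the sketch below (untested, glibc-flavoured; the
file list is obviously just a placeholder):

#define _GNU_SOURCE		/* for syncfs() in glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* hypothetical file list -- in practice, every file you copied in */
	const char *paths[] = { "base/1/1259", "base/1/1249" };
	int nfiles = sizeof(paths) / sizeof(paths[0]);
	int fds[2];
	int i;

	/* keep every fd open across the syncfs() call */
	for (i = 0; i < nfiles; i++) {
		fds[i] = open(paths[i], O_RDONLY);
		if (fds[i] < 0) {
			perror("open");
			exit(1);
		}
	}

	/* one syncfs() on any fd in the filesystem pushes the bulk of the
	 * data out efficiently, but its return value may not reflect
	 * per-inode writeback failures */
	if (syncfs(fds[0]) < 0)
		perror("syncfs");

	/* fsync() each still-open fd to pick up errors syncfs() didn't report */
	for (i = 0; i < nfiles; i++) {
		if (fsync(fds[i]) < 0)
			fprintf(stderr, "fsync failed on %s\n", paths[i]);
		close(fds[i]);
	}
	return 0;
}

The syncfs() does the heavy lifting in one pass; the per-fd fsync()
calls should then be cheap and are there mostly for error reporting.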

-- 
Jeff Layton <jlayton@xxxxxxxxxx>


