Re: GNU 'tar', Schilling's 'tar', write-cache/barrier

[ ... ]

>> #  (cd /tmp/ext4; rm -rf linux-2.6.32; sync; time star -no-fsync -x -f /tmp/linux-2.6.32.tar; egrep 'Dirty|Writeback' /proc/meminfo; time sync)
>> real    0m1.204s
>> Dirty:          419456 kB
>> real    0m5.012s

>> #  (cd /tmp/ext4; rm -rf linux-2.6.32; sync; time star -x -f /tmp/linux-2.6.32.tar; egrep 'Dirty|Writeback' /proc/meminfo; time sync)
>> real    23m29.346s
>> Dirty:             108 kB
>> real    0m0.236s

> But as a user, what guarantees do I *want* from tar?

Ahhhh, but that depends *a lot* on the application, which may or
may not be 'tar', and on what you are using 'tar' for. Consider
for example restoring a backup using 'rsync' instead of 'tar'.

> I think the only meaningful guarantee I might want is: "if the
> tar returns successfully, I want to know that all the files
> are persisted to disk".

Perhaps in some cases, but perhaps not in others. For example, if
you are restoring 20TB, having to redo the whole 20TB or a
significant fraction of it may be undesirable, and you would like
to change the guarantee, as you write later:

  > On the flip side, does fsync()ing each individual file [
  > ... ] you could safely restart an aborted untar [ ... ] the
  > last file which was unpacked may only have been partially
  > written to disk [ ... ]

to add "if the tar does not return successfully, I want to know
that most or or all the files are persisted, except the last one
that was only partially written, which I want to disappear, so I
can rerun 'tar -x -k' and only restore the rest of the files".
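
For concreteness, a minimal sketch of that rerun (assuming GNU
'tar', whose '-k'/'--keep-old-files' does not overwrite files that
are already on disk; the archive path is just the one from the
benchmark above):

  # first attempt, interrupted part-way through
  tar -x -f /tmp/linux-2.6.32.tar

  # after removing the one partially-written file, if it can be
  # identified, rerun keeping whatever already made it to disk
  tar -x -k -f /tmp/linux-2.6.32.tar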

> And of course that's what your final "sync" does, although
> with the unfortunate side-effect of syncing all other dirty
> blocks in the system too.

Just to be sure: that was on a quiescent system, so in the
particular case of my tests the 'sync' flushed only what the 'tar'
had dirtied.

[ ... ]

> I think what's needed is a group fsync which says "please
> ensure this set of files is all persisted to disk", which is
> done at the end, or after every N files.  If such an API
> exists I don't know of it.

That's in part what was mentioned here:

[ ... ]
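
There is no widely available "sync exactly this set of files"
primitive, so from the command line one can only approximate it.
A sketch of the approximations, reusing the paths from the
benchmark above ('sync -f', which uses syncfs() to flush just the
one filesystem, needs a reasonably recent coreutils):

  # everything through the cache, one global flush at the end
  (cd /tmp/ext4; star -no-fsync -x -f /tmp/linux-2.6.32.tar; sync)

  # durability requested file by file (star's default)
  (cd /tmp/ext4; star -x -f /tmp/linux-2.6.32.tar)

  # flush only the filesystem holding the restore target, which is
  # close to a "group" flush if that filesystem is otherwise idle
  (cd /tmp/ext4; star -no-fsync -x -f /tmp/linux-2.6.32.tar; sync -f .)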

> If the above benchmark is typical, it suggests that fsyncing
> after every file is 4 times slower than untar followed by
> sync.

That depends on how often and how aggressively the flusher runs,
and on how much memory you have. In the comparison quoted above,
the GNU-'tar'-like run ('star -no-fsync') on 'ext4' dumps 410MB of
dirty pages into RAM in just over 1 second, plus 5 seconds for the
'sync', while Schilling's 'tar', fsync()ing each file, persists
the lot to disk, incrementally, in 1409 seconds. The ratio is
about 227 times.

That is because a typical disk drive can do either around 100MB/s
of bulk sequential IO (hence the roughly 5 seconds of 'sync' for
about 410MB) or around 0.5-4MB/s of small random IO (hence the
1409 seconds when durability is requested one file at a time).
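
As a rough check, using only the numbers quoted above:

  # 419456 kB of dirty pages ~= 410 MB
  # 410 MB / (1.2 s + 5.0 s) ~=  66 MB/s  effective for untar-then-sync
  # 410 MB / 1409 s          ~= 0.3 MB/s  effective for fsync-per-file

which is where the factor of roughly 227 comes from.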

> So I reckon you would be better off using the fast/unsafe
> version, and simply restarting it from the beginning if the
> system crashed while you were running it. [ ... ]

That holds in one very specific example, with one application, in
one context. For a better discussion of this, let's go back to
your original and very appropriate question:

  > But as a user, what guarantees do I *want* from tar?

The question is very sensible as far as it goes, but it does not
go far enough, because «from tar», and with small 'tar' archives
at that, is just happenstance: what you should ask yourself is:

  But as a user, what guarantees do I *want* from filesystems
  and the applications that use them?

That's in essence the O_PONIES question.

That question can have many answers, each addressing a different
aspect of the normative and positive situation, and I'll try to
list some.

The first answer is that you want to be able to choose different
guarantees and costs, and to know which they are. In this respect
the 'delaylog' option, properly described as an improvement in
both unsafety and speed, is a good thing to have, because it is
often a useful option. So are 'sync', 'nobarrier', and 'eatmydata'.
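
For illustration, a few ways of explicitly picking a different
point on the guarantee/cost curve (the device names are made up,
and option availability depends on kernel and filesystem version):

  mount -o nobarrier /dev/sdb1 /tmp/ext4     # no write barriers: faster, less safe
  mount -o delaylog  /dev/sdc1 /xfs          # XFS delayed logging (2.6.35+)
  mount -o sync      /dev/sdb1 /tmp/ext4     # synchronous writes: slower, safer
  eatmydata tar -x -f /tmp/linux-2.6.32.tar  # LD_PRELOAD wrapper that turns
                                             # fsync() and friends into no-ops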

The second answer is that as a rule users have neither the
knowledge nor the desire to understand the tradeoffs offered by
filesystems and how they relate to the behavior of the programs
(including 'tar') that they use. So there needs to be a default
guarantee, the one most users would have chosen if they could, and
it should lean towards more safety rather than more speed; that is
what «XFS @ 2009-2010» was doing.

More to follow...

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


