Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)

> As to 'ext4' and doing (euphemism) insipid tests involving
> peculiar setups, there is an interesting story in this post:
>
>  http://oss.sgi.com/archives/xfs/2012-03/msg00465.html

I really don't see the connection to this thread. You seem mostly to be
advocating that tar call fsync on every file it extracts, which to me
seems absurd. If the system goes down halfway through a tar extraction,
I would simply delete the tree and untar again. What do I care whether
some files are corrupt, when the entire tree is incomplete anyway?

Despite the somewhat inflammatory thread subject, I don't want to bash
anyone. It's just that untarring large source trees is a very typical
workload for me. And I just don't want to accept that XFS cannot do
better than being several orders of magnitude slower than ext4
(binary orders of magnitude, i.e. factors of two). As I see it, both file
systems give the same guarantees:

1) That upon completion of sync, all data is durably on permanent
storage.
2) That the file system metadata doesn't suffer corruption, should the
system lose power during the operation.

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
