Re: LWN.net article: creating 1 billion files -> XFS loses


 



On Thu, Aug 19, 2010 at 01:12:45PM +0200, Michael Monnerie wrote:
> The subject is a bit harsh, but overall the article says:
> XFS is slowest on creating and deleting a billion files
> XFS fsck needs 30GB RAM to fsck that 100TB filesystem.
> 
> http://lwn.net/SubscriberLink/400629/3fb4bc34d6223b32/

The creation and deletion performance is a known issue, and to a large
extent fixed by the new delaylog code.  We're not quite as fast as ext4
yet, but it's getting close.
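For anyone who wants to try the delayed logging code, it is exposed as a
mount option on recent kernels (2.6.35 and later).  A minimal sketch; the
device and mount point below are placeholders, not anything from the
benchmark in the article:

```shell
# Mount an XFS filesystem with delayed logging enabled.
# /dev/sdb1 and /mnt/test are placeholder names for this example.
mount -o delaylog /dev/sdb1 /mnt/test

# Confirm the option took effect by checking the mount table.
grep /mnt/test /proc/mounts
```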

The repair result looks a lot like the pre-3.1.0 xfsprogs repair.
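On the memory-usage side, newer xfsprogs also let you cap how much RAM
repair will try to use.  A hedged sketch, assuming the -m option from
recent xfs_repair (it takes an approximate limit in megabytes); the device
is again a placeholder:

```shell
# Run repair with memory usage capped at roughly 2 GB (xfsprogs 3.x+).
# /dev/sdb1 is a placeholder device name.
xfs_repair -m 2048 /dev/sdb1
```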

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

