Re: LWN.net article: creating 1 billion files -> XFS loses

On Thu, 19 Aug 2010 13:12:45 +0200,
Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx> wrote:

> The subject is a bit harsh, but overall the article says:
> XFS is slowest at creating and deleting a billion files.
> XFS fsck needs 30 GB of RAM to check that 100 TB filesystem.

To follow up on this subject: a colleague (at my suggestion :)
tried to create 1 billion files in a single XFS directory.
Unfortunately the directories themselves don't scale that far:
after 1 million files in the first 30 minutes, file creation slowed
down gradually, so after 100 hours we had about 230 million files. The
directory size at that point was 5.3 GB.
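
For anyone who wants to reproduce it, the test boils down to a trivial
create loop along these lines (only a sketch, not the exact program we
used; the mount point, file name pattern and count are placeholders):

/* sketch: create N empty files in one directory;
   path and count are placeholders, error handling minimal */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        char name[64];
        long i;

        for (i = 0; i < 1000000000L; i++) {
                int fd;

                snprintf(name, sizeof(name), "/mnt/test/bigdir/f%09ld", i);
                fd = open(name, O_CREAT | O_WRONLY | O_EXCL, 0644);
                if (fd < 0) {
                        perror(name);
                        exit(1);
                }
                close(fd);
        }
        return 0;
}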

Now we're starting afresh with 1000 directories of 1 million files
each :)
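
The per-directory split only changes how the path is built, something
like this (again just a sketch; the directory layout and names are
assumptions):

/* sketch: spread the same creates over 1000 subdirectories,
   1 million files each; mount point and naming are assumptions */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
        char path[80];
        long i;

        for (i = 0; i < 1000000000L; i++) {
                long dir = i / 1000000L;        /* 0 .. 999 */
                int fd;

                if (i % 1000000L == 0) {        /* start a new subdirectory */
                        snprintf(path, sizeof(path), "/mnt/test/d%03ld", dir);
                        if (mkdir(path, 0755) < 0) {
                                perror(path);
                                exit(1);
                        }
                }
                snprintf(path, sizeof(path), "/mnt/test/d%03ld/f%09ld",
                         dir, i);
                fd = open(path, O_CREAT | O_WRONLY | O_EXCL, 0644);
                if (fd < 0) {
                        perror(path);
                        exit(1);
                }
                close(fd);
        }
        return 0;
}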

(Kernel version used: vanilla 2.6.32.11, x86_64 SMP)

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


