Re: LWN.net article: creating 1 billion files -> XFS loses

On Thu, 19 Aug 2010 13:12:45 +0200,
Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx> wrote:

> The subject is a bit harsh, but overall the article says:
> XFS is slowest on creating and deleting a billion files
> XFS fsck needs 30GB RAM to fsck that 100TB filesystem.

Too bad I don't have a 100 TB machine at hand. However, I have a 24 TB
system dedicated to tests. I'm pretty sure we can do much better with
XFS and the proper mount options :)
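
For the record, here is roughly the kind of tuning I mean. Nothing
exotic, just the usual knobs for metadata-heavy workloads; the device
name and the values are only placeholders, to be adjusted on the
actual box:

    # larger inodes, more allocation groups, bigger internal log
    mkfs.xfs -f -i size=512 -d agcount=32 -l size=128m /dev/sdX

    # 64-bit inode allocation, bigger log buffers, no atime updates
    mount -o inode64,logbsize=256k,logbufs=8,noatime /dev/sdX /mnt/test
    # (on 2.6.35+ kernels the experimental delaylog option is worth a try too)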

In fact, I have an unused 40 TB array too. More on that one later...

Stay tuned :)

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

Attachment: signature.asc
Description: PGP signature

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
