Re: LWN.net article: creating 1 billion files -> XFS loses

On Tue, 7 Sep 2010 08:04:10 +1000, you wrote:

> Oh, that's larger than I've ever run before ;)

Excellent :) It still works fine afterwards; mount, umount, etc. all
work flawlessly. Memory consumption, though, is huge :)
> 
> Try using:
> 
> # mkfs.xfs -n size=64k
> 
> Will speed up large directory operations by at least an order of
> magnitude.

OK, we'll try that too :)
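
For the record, a minimal sketch of what we'll run; /dev/sdb1 and
/mnt/test are placeholders for our test rig:

# mkfs.xfs -f -n size=64k /dev/sdb1
# mount /dev/sdb1 /mnt/test
# xfs_info /mnt/test | grep naming

xfs_info should then report something like "naming = version 2
bsize=65536", confirming that the 64k directory block size took effect.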
 
> > Now we're starting afresh with 1000 directories with 1 million files
> > each :)
> 
> Which is exactly the test that was used to generate the numbers that
> were published.
> 
> > (Kernel version used: vanilla 2.6.32.11 x86_64 smp)
> 
> Not much point in testing that kernel - delayed logging is where the
> future is for this sort of workload, which is what I'm testing.

I'll compile a 2.6.36-rc kernel for comparison.
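
For anyone wanting to reproduce the run, the creation phase is
essentially the loop below. Paths are placeholders, and a dedicated
tool such as fs_mark will be much faster than a plain shell loop;
this just shows the shape of the workload:

#!/bin/sh
# 1000 directories x 1,000,000 empty files each = 1 billion files.
for d in $(seq 0 999); do
    mkdir -p /mnt/test/dir.$d
    for f in $(seq 0 999999); do
        : > /mnt/test/dir.$d/file.$f    # create an empty file
    done
done

For the 2.6.36-rc runs we'll also mount with delayed logging enabled,
since that's the code path being tested:

# mount -o delaylog /dev/sdb1 /mnt/test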

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs


