Re: Anyone using XFS in production on > 20TiB volumes?

On Wed, Dec 22, 2010 at 12:10:06PM -0500, Justin Piszcz wrote:

> Do you have an example of what you found?

i don't have the numbers anymore; they stayed with a previous employer.

basically, using dbench (these were cifs NAS machines, so dbench seemed
as good or bad as anything to test with), throughput was about 3x
better going from the 'old' setup to the 'new' one with a small number
of workers, and about 10x better with a large number
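
to give a concrete idea of the kind of runs i mean, a dbench invocation
looks something like this (mount point and client counts here are just
placeholders, not the exact ones i used):

  dbench -D /mnt/test -t 60 4     # small number of workers
  dbench -D /mnt/test -t 60 128   # large number of workers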

i don't know how much difference inode64 and getting the stripe
geometry right each made on their own, but both were quite measurable
in the graphs i made at the time
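
if anyone wants to try the same thing, the relevant knobs are the
inode64 mount option and telling mkfs.xfs about the array's stripe
geometry; roughly along these lines (device, chunk size and data-disk
count are made up for illustration, match them to your array):

  mkfs.xfs -d su=64k,sw=5 /dev/sdb1
  mount -o inode64,noatime /dev/sdb1 /data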


from memory, the machines were raid50 (4x (5+1)) with 2TB drives, so
about 38TB usable on each one
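
(the rough arithmetic: 4 groups x 5 data disks per group x 2TB = 40TB
of raw data capacity, which lands around the 38TB usable figure once
filesystem overhead is taken out)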

initially these machines used 3ware controllers and later on LSI ones
(the two product lines have since merged, so it's not clear how much
difference that makes now)

in testing, 16GB wasn't enough for xfs_repair, so the machines were
upped to 64GB; that's likely largely because there were hundreds of
millions of small files (as well as some large ones)
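
if you're sizing a box for xfs_repair, a no-modify run is a cheap way
to see whether it fits in a given memory budget; something like the
following, though check the xfs_repair man page for your xfsprogs
version (device name is a placeholder):

  xfs_repair -n -m 16384 /dev/sdb1    # dry run, capped at roughly 16GB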

> Is it dependent on the RAID card?

perhaps. do you have a BBU and have the write cache enabled?  certainly
we found the LSI cards to be faster in most cases than the (now old)
3ware ones


where i am now i use larger chassis and no hw raid cards; sw raid on
these works spectacularly well, with the exception of bursts of small
seeky writes (which a BBU + write cache soaks up quite well)
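
for anyone wanting a concrete starting point, assuming linux md for the
sw raid, a comparable setup would look roughly like this (level, chunk,
device names and counts are purely illustrative, not my actual layout):

  mdadm --create /dev/md0 --level=6 --raid-devices=12 --chunk=64 /dev/sd[b-m]
  mkfs.xfs -d su=64k,sw=10 /dev/md0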

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs

