Justin Piszcz wrote:
>
> On Sun, 28 Feb 2010, tytso@xxxxxxx wrote:
>
>> On Sat, Feb 27, 2010 at 06:36:37AM -0500, Justin Piszcz wrote:
>>>
>>> I still would like to know, however, why 350MiB/s seems to be the
>>> maximum performance I can get from two different md raids (that
>>> easily do 600MiB/s with XFS).
>>
>> Can you run "filefrag -v <filename>" on the large file you created
>> using dd? Part of the problem may be the block allocator simply not
>> being well optimized for super large writes. To be honest, that's not
>> something we've tried (at all) to optimize, mainly because most ext4
>> users are interested in much more reasonably sized files, and we only
>> have so many hours in a day to hack on ext4. :-) XFS, in contrast,
>> has in the past had plenty of paying customers interested in writing
>> really large scientific data sets, so this is something XFS *has*
>> spent time optimizing.
>
> Yes, this is shown at the bottom of the e-mail, both with -o data=ordered
> and with data=writeback.

...

> === SHOW FILEFRAG OUTPUT (NOBARRIER,ORDERED)
>
> p63:/r1# filefrag -v /r1/bigfile
> Filesystem type is: ef53
> File size of /r1/bigfile is 10737418240 (2621440 blocks, blocksize 4096)
>  ext  logical  physical  expected  length  flags
>    0        0     34816             32768
>    1    32768     67584             30720
>    2    63488    100352     98303   32768
>    3    96256    133120             30720
>    4   126976    165888    163839   32768
>    5   159744    198656             30720

...

That looks pretty good. I think Dave's suggestion of seeing what CPU
usage looks like is a good one. Running blktrace on xfs vs. ext4 could
possibly also shed some light.

-Eric
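
P.S. If it helps, the sort of comparison I have in mind looks roughly
like this; the md device name and trace prefix below are only examples,
so adjust them for your arrays:

    # in a second terminal, watch CPU utilization while the write runs
    vmstat 1

    # trace block-layer activity on the ext4 array during the dd run
    blktrace -d /dev/md3 -o ext4-run &
    dd if=/dev/zero of=/r1/bigfile bs=1M count=10240 conv=fdatasync
    kill %1

    # summarize the trace; then repeat the same steps on the xfs array
    # and compare request sizes, merge counts, and queue behaviour
    blkparse -i ext4-run > ext4-run.txt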