On 7/17/2011 7:22 AM, Iustin Pop wrote:
> On Sun, Jul 17, 2011 at 01:11:19PM +0100, John Robinson wrote:
>> On 17/07/2011 09:12, Pol Hallen wrote:
>>> hello and thanks for the reply :-)
>>>
>>> dd if=/dev/zero of=test bs=4096 count=262144
>>> 262144+0 records in
>>> 262144+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 31.3475 s, 34.3 MB/s
>>
>> Pretty poor. CentOS 5, Intel ICH10, md RAID 6 over 5 7200rpm 1TB
>> drives, then LVM, then ext3:
>> # dd if=/dev/zero of=test bs=4096 count=262144
>> 262144+0 records in
>> 262144+0 records out
>> 1073741824 bytes (1.1 GB) copied, 2.5253 seconds, 425 MB/s
>>
>> And there's a badblocks running on another drive also on the ICH10.
>>
>> Having said that, I think mine's wrong too; I don't think my array
>> can really manage that much throughput. We should both be using more
>> realistic benchmarking tools like bonnie++:
>
> Or simply pass the correct flags to dd, like oflag=direct, which will
> make it do non-buffered writes.

I'm not sure of the reasons, but O_DIRECT doesn't work with dd quite the
way one would think, at least not from a performance perspective. On my
test rig it yields an almost 10x decrease, much like using an insane
block size does. It may have something to do with write barriers being
enabled in XFS on my test rig, or something similar.

This system runs a vanilla 2.6.38.6 kernel with Debian Squeeze on top.
Using O_DIRECT with dd under 2.6.26 and 2.6.34 yielded the same
behavior in the past.

$ dd if=/dev/zero of=./test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 15.0542 s, 71.3 MB/s

$ dd oflag=direct if=/dev/zero of=./test bs=4096 count=262144
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 133.888 s, 8.0 MB/s

--
Stan
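
If the slowdown is mostly the loss of write-behind caching rather than
the barriers, repeating the direct run with a much larger block size
should show it: with oflag=direct every write bypasses the page cache
and has to complete on the storage before dd issues the next one, so
small 4 KiB requests can leave the array largely idle between writes.
The commands below are only a sketch along those lines; the block size,
count and file path are illustrative, and the bonnie++ mount point,
file size and user are placeholders, not values from this thread.

$ # 1 MiB direct writes, same 1 GiB total as the runs above
$ dd oflag=direct if=/dev/zero of=./test bs=1M count=1024

$ # bonnie++ as suggested above; -n 0 skips the small-file tests and
$ # -s should be well above RAM size so the cache cannot hide the disks
$ bonnie++ -d /mnt/test -s 8g -n 0 -u nobody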