Hi Chris... we tried the direct dd as requested and the problem is still
there: 1.3 GB/s drops to 325 MB/s (even more dramatic)... hopefully this
helps narrow it down?

Write > MD

linux-poly:~ # dd if=/dev/zero of=/dev/md0 oflag=direct bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 15.7671 s, 1.3 GB/s

Write > XFS > MD

linux-poly:~ # dd if=/dev/zero of=/mnt/md0/test oflag=direct bs=1M count=20000
20000+0 records in
20000+0 records out
20971520000 bytes (21 GB) copied, 64.616 s, 325 MB/s

On Tue, Oct 13, 2009 at 11:52 PM, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Tue, Oct 13, 2009 at 12:06:24PM +0100, mark delfman wrote:
>> A little more information which I "think" seems to point at MD...
>>
>> Creating an EXT3 FS on an MD RAID also shows a circa 50% performance drop.
>> We have tried a multitude of RAID options (raid6/raid0, various chunk
>> sizes, etc.).
>>
>> Using a hardware-based RAID, XFS/EXT3 shows no performance drop
>> (although the hardware RAID is significantly slower than MD in the
>> first place).
>>
>> We are happy to keep testing and offering anything that could be
>> useful; we are just a little stuck thinking of anything else to do...
>
> Can you test with conv=direct added to the dd command lines? If that
> shows the problems too, it's probably writeback-related. If not, the
> problem must be somewhere lower in the stack.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
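
[Editor's note: the direct-I/O comparison above can be scripted for repeat runs. The sketch below is not from the thread; the target path and size are placeholders, and the defaults (a throwaway temp file, 8 MB) exist only so the script is safe to run anywhere. In real use you would point it at the raw md device and then at a file on the XFS mount, as in the emails above. Note that O_DIRECT fails on some filesystems (e.g. tmpfs), in which case dd's error line is shown instead of a throughput figure.]

```shell
#!/bin/sh
# Sketch: write the same amount of data buffered and with O_DIRECT,
# and print dd's summary line (which includes throughput) for each.
# TARGET and SIZE_MB are assumptions, not values from the thread.
TARGET="${1:-$(mktemp)}"   # e.g. /dev/md0, or /mnt/md0/test for XFS-on-MD
SIZE_MB="${2:-8}"

results=""
for flag in "" "oflag=direct"; do
    label="${flag:-buffered}"
    # dd reports its throughput summary on stderr; keep the last line
    line=$(dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" $flag 2>&1 | tail -n 1)
    results="${results}${label}: ${line}
"
done
printf '%s' "$results"
```

Running it first against the block device and then against a file on the mounted filesystem gives the same pairwise comparison the thread is using to decide whether the slowdown sits in writeback or lower in the stack.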