I was helping somebody else diagnose some issues and decided to run comparative tests on my own RAID (raid10,f2). The raid10,f2 array (md0) is the only physical volume backing a volume group, which is then carved into a bunch of (primarily) ext4 filesystems. The kernel is 2.6.31.12 (openSUSE) on a quad-core AMD Phenom 9150e system, and the array consists of two Western Digital Caviar Blue drives (WDC WD5000AAKS-00V1A0).

The problem: really, really bad I/O performance under certain circumstances.

- With an internal bitmap and *synchronous* I/O, dd reports 700-800 kB/s.
- With no bitmap and synchronous I/O, dd reports 2.5 MB/s (but dstat shows 14 MB/s?).
- With no bitmap and async I/O (but with fdatasync), I get 65 MB/s.
- *With* a bitmap and async I/O (but with fdatasync), I also get more like 65 MB/s.

The system has 3 GB of memory and I'm testing with:

  dd if=/dev/zero of=somefile bs=4k count=524288

I'm mostly trying to understand why the synchronous I/O is so bad, though even in the other cases I was hoping for more. That said, 65 MB/s seems *reasonable* given the raid10,f2 layout and all of the seeking that writes to such a layout involve.

The other odd thing is the I/O pattern itself. dstat reports 14 MB/s very consistently (14 MB/s for each of sda, sdb, and md0) for 10-15 seconds, then throughput drops, sometimes to just 3 or 4 MB/s, for another 10 seconds or so, and then the pattern repeats.

What's going on here? With absolutely no other load on the system, I would have expected something much more consistent.

-- Jon
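
P.S. In case anyone wants to reproduce this, the runs boil down to something like the commands below. This is only a sketch: "synchronous" is shown here as dd's oflag=dsync and the bitmap toggling as mdadm --grow, so the exact flags may differ slightly from what I actually typed.

  # async write, flushed once at the end (the ~65 MB/s cases)
  dd if=/dev/zero of=somefile bs=4k count=524288 conv=fdatasync

  # synchronous write, each 4k block flushed before the next
  # (the 700-800 kB/s with-bitmap and 2.5 MB/s no-bitmap cases)
  dd if=/dev/zero of=somefile bs=4k count=524288 oflag=dsync

  # switching the write-intent bitmap off and back on between runs
  mdadm --grow /dev/md0 --bitmap=none
  mdadm --grow /dev/md0 --bitmap=internal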