I expect RAID1 write performance to be, at best, the performance of the
slowest drive. I'm seeing twice the performance, as if it were a RAID0.
The read performance is 2x also, which is what I would expect.

I'm using the incantation:

  mdadm --create /dev/md0 --chunk=256 --level=1 --assume-clean --raid-devices=2 /dev/sd[bc]

I use --assume-clean on the fresh create, as there is no reason to sync
the new drives.

My fio test uses O_DIRECT with 64 threads, each with a queue depth of 64,
running for 10 minutes. All caching is disabled, and the NOOP scheduler
is being used.

I run this test all the time, and can't imagine why it's getting such
repeatably good write performance. Any ideas?

Thanks,
Chris
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
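For reference, a fio job file matching the test as described might look
like the sketch below. The exact job file was not posted, so the
ioengine, rw pattern, and block size are assumptions; only direct=1,
numjobs=64, iodepth=64, and the 10-minute runtime come from the
description above.

```
; Hypothetical reconstruction of the described test -- not the
; poster's actual job file.
[md0-write-test]
filename=/dev/md0
direct=1           ; O_DIRECT, bypassing the page cache
ioengine=libaio    ; assumed: an async engine so iodepth takes effect
iodepth=64         ; queue depth of 64 per thread
numjobs=64         ; 64 concurrent threads
rw=write           ; assumed: sequential writes
bs=256k            ; assumed: matches the 256 KiB --chunk value
runtime=600        ; 10 minutes
time_based=1
group_reporting=1
```

Note that with a synchronous ioengine such as the default psync,
iodepth is effectively 1 regardless of the setting, so an async engine
is assumed here to make the stated queue depth meaningful.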