Hi,

I have a server with a fast device (a SATA SSD) and a very fast device (NVMe). I was experimenting with different Linux RAID configurations to see which worked best.

While doing so I discovered that in this situation, RAID-1 and RAID-10 can perform VERY differently. A RAID-1 of these devices will parallelise reads, resulting in ~84% of the read IOs hitting the NVMe and an average IOPS close to that of the NVMe. By contrast, RAID-10 seems to split the IOs much more evenly: 53% hit the NVMe, and the average IOPS was only 35% that of RAID-1.

Is this expected? I suppose so, since it is documented that RAID-1 can parallelise reads but RAID-10 will stripe them. That is normally presented as a *benefit* of RAID-10 though; I'm not sure it is obvious that, if your devices have dramatically different performance characteristics, RAID-10 could hobble you.

I did try out --write-mostly, by the way, in an attempt to force ~100% of the reads to go to the NVMe, but this actually made performance worse. I think that --write-mostly may only make sense when the performance gap is much bigger (e.g. between rotational and fast flash), where any read to the slow half will kill performance.

I wrote up my tests here:

http://strugglers.net/~andy/blog/2019/05/29/linux-raid-10-may-not-always-be-the-best-performer-but-i-dont-know-why/

There are still a bunch of open questions ("Summary of open questions" section) and some results I could not explain. I included some tests against slow HDDs and couldn't explain why I achieved 256 read IOPS there, for example. I don't believe that was the page cache.

If you have any ideas about that, can see any problems with my testing methodology, or have suggestions for other tests, then please do let me know.

Thanks,
Andy
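
P.S. In case it helps to see the arithmetic I have in mind when I say an even split can hobble you, here is a minimal sketch (Python) of a toy model. It assumes the slower device is the bottleneck and the NVMe never saturates, and it uses a made-up 50,000 IOPS figure for the SATA SSD rather than anything I measured; only the read fractions (~84% to the NVMe for RAID-1, ~53% for RAID-10) come from my tests.

# Toy model: if the slow mirror half receives a fraction (1 - p) of all
# reads and can sustain at most slow_iops, then the aggregate read rate
# is capped at roughly slow_iops / (1 - p).

def total_iops(slow_iops, fast_fraction):
    """Aggregate read IOPS when the slow device is the bottleneck."""
    return slow_iops / (1.0 - fast_fraction)

slow_iops = 50_000  # placeholder figure for the SATA SSD, not a measurement

raid1 = total_iops(slow_iops, 0.84)   # ~84% of reads hit the NVMe (RAID-1)
raid10 = total_iops(slow_iops, 0.53)  # ~53% of reads hit the NVMe (RAID-10)

print(f"RAID-1  model: {raid1:,.0f} IOPS")
print(f"RAID-10 model: {raid10:,.0f} IOPS")
print(f"RAID-10 / RAID-1 = {raid10 / raid1:.0%}")  # ~34%

The model obviously ignores queueing effects and the NVMe's own ceiling, but the ratio it gives (~34%) is close to the ~35% difference I actually measured, which is why I suspect the even split itself is the culprit.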