On 5/30/19 3:41 AM, Andy Smith wrote:
Hi,
I have a server with a fast device (a SATA SSD) and a very fast
device (NVMe). I was experimenting with different Linux RAID
configurations to see which worked best. While doing so I discovered
that in this situation, RAID-1 and RAID-10 can perform VERY
differently.
A RAID-1 of these devices will parallelise reads, resulting in ~84% of
the read IOs hitting the NVMe and an average IOPS close to that of the
NVMe alone.
By contrast, RAID-10 seems to split the IOs much more evenly: 53% hit
the NVMe, and the average IOPS was only 35% of the RAID-1 figure.
Is this expected?
I suppose so, since it is documented that RAID-1 can parallelise
reads whereas RAID-10 will stripe them. Striping is normally presented
as a *benefit* of RAID-10, though; it is not obvious that, when your
devices have dramatically different performance characteristics,
RAID-10 could hobble you.
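[A back-of-the-envelope model makes the penalty plausible. The
per-device IOPS figures below are invented for illustration (the real
numbers for these devices are not given in the thread), but they show
how a fixed even split is gated by the slower device, whereas
load-aware balancing lets each device run at its own rate:

#include <stdio.h>

int main(void)
{
    /* Hypothetical per-device random-read IOPS; not measured values. */
    double sata = 90000.0, nvme = 400000.0;
    double slow = sata < nvme ? sata : nvme;

    /*
     * Fixed 50/50 split: each device receives half of the IOs, so
     * completion is gated by the slower one and the aggregate rate
     * is at most twice the slow device's rate.
     */
    double even_split = 2.0 * slow;

    /* Load-aware balancing: each device stays busy at its own rate. */
    double balanced = sata + nvme;

    printf("even split: %.0f IOPS\n", even_split);
    printf("balanced  : %.0f IOPS\n", balanced);
    printf("ratio     : %.0f%%\n", 100.0 * even_split / balanced);
    return 0;
}

With these made-up numbers the even split lands at roughly 37% of the
balanced figure, the same ballpark as the 35% observed above.]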
There are some optimizations for SSDs in raid1's read_balance();
unfortunately, raid10 doesn't have similar code. I guess the commits
below are related:
commit 9dedf60313fa4dddfd5b9b226a0ef12a512bf9dc ("md/raid1: read balance
chooses idlest disk for SSD")
commit 12cee5a8a29e7263e39953f1d941f723c617ca5f ("md/raid1: prevent
merging too large request")
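Roughly, the first commit makes raid1's read_balance() prefer the
mirror with the fewest in-flight requests when every device in the
array is non-rotational, instead of the mirror with the shortest seek
distance. A simplified user-space sketch of that idea (the struct and
field names here are illustrative, not the kernel's own):

#include <limits.h>
#include <stdbool.h>

struct mirror {
    bool nonrot;      /* device is non-rotational (SSD/NVMe) */
    int  nr_pending;  /* I/Os currently in flight on this device */
    long head_sector; /* last sector serviced (rotational case) */
};

static long dist(long a, long b) { return a > b ? a - b : b - a; }

/* Return the index of the mirror to read 'sector' from. */
int choose_read_mirror(const struct mirror *m, int nmirrors, long sector)
{
    int best = 0, min_pending = INT_MAX;
    long min_dist = LONG_MAX;
    bool all_nonrot = true;
    int i;

    for (i = 0; i < nmirrors; i++)
        if (!m[i].nonrot)
            all_nonrot = false;

    for (i = 0; i < nmirrors; i++) {
        if (all_nonrot) {
            /* SSD path: no seek penalty, so the idlest disk wins. */
            if (m[i].nr_pending < min_pending) {
                min_pending = m[i].nr_pending;
                best = i;
            }
        } else {
            /* Rotational path: shortest seek distance wins. */
            long d = dist(m[i].head_sector, sector);
            if (d < min_dist) {
                min_dist = d;
                best = i;
            }
        }
    }
    return best;
}

The second commit additionally tunes how sequential reads are spread
across mirrors so merged requests don't grow too large; the sketch
above ignores that detail.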
Thanks,
Guoqing