On 6/7/19 4:22 AM, Andy Smith wrote:
> On Sat, Jun 01, 2019 at 05:39:25AM +0000, Andy Smith wrote:
>> On Fri, May 31, 2019 at 09:43:35AM +0800, Guoqing Jiang wrote:
>>> There are some optimizations in raid1's read_balance for ssd, unfortunately,
>>> raid10 didn't have similar code.
>
> […]
>
>> Is it just that no one has tried to apply the same optimizations to
>> RAID-10, or is it technically difficult/impossible to do this in
>> RAID-10?
>
> Guoqing sent me a patch off-list that implements these same device
> selection optimizations to RAID-10, and it seems to work. RAID-10
> random read performance in this setup is now the same as RAID-1
> (both very near to fastest device) and sequential read is even
> better than RAID-1.
>
> http://strugglers.net/~andy/blog/2019/06/06/linux-raid-10-fixed-on-imbalanced-devices/

We've been seriously considering switching from RAID-10 to LVM stripes
across RAID-1 pairs for a different reason: Crucial/Micron SSDs, even
the enterprise ones, do not always finish their SMART tests under some
read loads. With RAID-1 we can temporarily set those drives to
write-mostly so that their SMART and vendor tests can complete.

It would be really nice if RAID-10 also let us set drives to
write-mostly.

--Sarah
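
P.S. For anyone curious, the write-mostly flag can be toggled through
the md sysfs interface: writing "writemostly" to a member device's
state file sets it, and "-writemostly" clears it. Below is a rough
Python sketch of that (not our exact tooling; md0 and dev-sda1 are just
example names, adjust for your array and member, and run it as root):

    #!/usr/bin/env python3
    # Toggle the write-mostly flag on an md array member via sysfs.
    # The array and member names used on the command line (e.g. md0,
    # dev-sda1) are examples only.
    import sys
    from pathlib import Path

    def set_write_mostly(array: str, member: str, enable: bool) -> None:
        # Writing "writemostly" sets the flag; "-writemostly" clears it.
        state = Path(f"/sys/block/{array}/md/{member}/state")
        state.write_text("writemostly\n" if enable else "-writemostly\n")

    if __name__ == "__main__":
        # Usage: writemostly.py md0 dev-sda1 on|off
        array, member, action = sys.argv[1:4]
        set_write_mostly(array, member, action == "on")

Once the SMART tests have finished, writing "-writemostly" back puts
the drive into normal read rotation again.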