On 01.06.19 at 10:50, keld@xxxxxxxxxx wrote:
> On Sat, Jun 01, 2019 at 05:39:25AM +0000, Andy Smith wrote:
>> Hi,
>>
>> On Fri, May 31, 2019 at 09:43:35AM +0800, Guoqing Jiang wrote:
>>> On 5/30/19 3:41 AM, Andy Smith wrote:
>>>> By contrast RAID-10 seems to split the IOs much more evenly: 53% hit
>>>> the NVMe, and the average IOPS was only 35% that of RAID-1.
>>>>
>>>> Is this expected?
>>
>> [???]
>>
>>> There are some optimizations in raid1's read_balance for ssd, unfortunately,
>>> raid10 didn't have similar code.
>>
>> Thanks Guoqing, that certainly seems to explain it.
>>
>> Would it be worth mentioning in the man page and/or wiki that when
>> there are devices that are very mismatched, performance-wise, RAID-1
>> is likely to be able to direct more reads to the faster device(s),
>> whereas RAID-10 can't do that?
>>
>> Is it just that no one has tried to apply the same optimizations to
>> RAID-10, or is it technically difficult/impossible to do this in
>> RAID-10?
>
> Still, Andy, you need to cover all layouts of md raid10.
>
> I know that for the far layout we actually had something that meant choosing the faster drives,
> and thus it violated the striping on HDs, degrading read performance severely. A patch fixed that.
>
> This patch did not apply to the offset layout, so maybe that layout could satisfy your needs.
>
> It seems that there may be special code for SSDs in the md drivers.
>
> I would like it if we could use more precise terminology. RAID-10 could easily be understood
> as normal RAID where you need 4 drives. The name "md raid10" is actually a bit misleading,
> as for the 4-drive version it is actually a RAID-01 layout, which has poorer redundancy properties.

Well, it would be nice to just skip the rotating-disk optimizations entirely when the whole 4-disk RAID-10 is built of SSDs.