Re: RAID-1 can (sometimes) be 3x faster than RAID-10

You need to clarify which layout you use with md raid10.
The layouts are near, far and offset, and they have very different performance characteristics.
Far and offset are designed to be faster than near, which I understand is the layout you used.
So why use the slowest md raid10 layout without mentioning this fact?

Maybe you could run your tests for all three layouts?
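
For reference, the layout is chosen at array creation time. A sketch with
placeholder device names (yours will differ):

  # n2/f2/o2 = two copies in the near, far and offset arrangements
  mdadm --create /dev/md0 --level=10 --raid-devices=2 --layout=n2 /dev/nvme0n1 /dev/sda
  mdadm --create /dev/md0 --level=10 --raid-devices=2 --layout=f2 /dev/nvme0n1 /dev/sda
  mdadm --create /dev/md0 --level=10 --raid-devices=2 --layout=o2 /dev/nvme0n1 /dev/sda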


keld
 

On Wed, May 29, 2019 at 07:41:36PM +0000, Andy Smith wrote:
> Hi,
> 
> I have a server with a fast device (a SATA SSD) and a very fast
> device (NVMe). I was experimenting with different Linux RAID
> configurations to see which worked best. While doing so I discovered
> that in this situation, RAID-1 and RAID-10 can perform VERY
> differently.
> 
> A RAID-1 of these devices will parallelise reads, resulting in ~84%
> of the read IOs hitting the NVMe and average IOPS close to that of
> the NVMe alone.
> 
> By contrast, RAID-10 splits the IOs much more evenly: 53% hit the
> NVMe, and the average IOPS was only 35% of the RAID-1 figure.
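> 
> (The per-device split can be watched with iostat while fio drives
> random reads at the array; for example, with illustrative device
> names and parameters rather than the exact ones from my tests:
> 
>   fio --name=randread --filename=/dev/md0 --direct=1 --rw=randread \
>       --bs=4k --ioengine=libaio --iodepth=32 --runtime=60 --time_based
>   iostat -x 5 /dev/nvme0n1 /dev/sda
> 
> The r/s column for each member device shows where the reads land.)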
> 
> Is this expected?
> 
> I suppose so, since it is documented that RAID-1 can parallelise
> reads while RAID-10 will stripe them. That is normally presented as
> a *benefit* of RAID-10, though; I'm not sure it is obvious that
> RAID-10 could hobble you when your devices have dramatically
> different performance characteristics.
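> 
> To put made-up round numbers on it: if the NVMe sustains 400k random
> read IOPS and the SATA SSD 80k, a 50/50 split makes the SSD the
> bottleneck, so the array tops out around 2 x 80k = 160k IOPS, while
> a RAID-1 that steers most reads to the NVMe could in principle
> approach 480k.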
> 
> I did try out --write-mostly, by the way, in an attempt to force
> ~100% of the reads to go to the NVMe, but this actually made
> performance worse. I think that --write-mostly may only make sense
> when the performance gap is much bigger (e.g. between rotational and
> fast flash), where any read to the slow half will kill performance.
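> 
> For reference, --write-mostly is a per-device flag applying to the
> devices listed after it on the command line; as I understand it, it
> can also be toggled on a running array through sysfs. Device names
> here are placeholders:
> 
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 \
>       /dev/nvme0n1 --write-mostly /dev/sda
> 
>   # set or clear the flag later via sysfs
>   echo writemostly > /sys/block/md0/md/dev-sda/state
>   echo -writemostly > /sys/block/md0/md/dev-sda/state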
> 
> I wrote up my tests here:
> 
> http://strugglers.net/~andy/blog/2019/05/29/linux-raid-10-may-not-always-be-the-best-performer-but-i-dont-know-why/
> 
> There are still a bunch of open questions (see the "Summary of open
> questions" section) and some results I could not explain. For
> example, I included some tests against slow HDDs and don't know why
> I achieved 256 read IOPS there; I don't believe that was the page
> cache.
> 
> If you have any ideas about that, can see any problems with my
> testing methodology, or have suggestions for other tests, then
> please do let me know.
> 
> Thanks,
> Andy


