Re: RAID-1 can (sometimes) be 3x faster than RAID-10

On 01.06.19 at 17:43, keld@xxxxxxxxxx wrote:
> On Sat, Jun 01, 2019 at 04:03:05PM +0200, Reindl Harald wrote:
>> well, it would be nice to just skip optimizations for rotating disks
>> entirely when the whole 4-disk RAID10 is built of SSDs
> 
> yes, possibly, if there is a gain in this.
> 
> as I wrote, layout "offset" may already do the right thing,
> and raid1 probably also does it.
> I think raid10 offers extra functionality over raid1 that could be useful
> for raids of drives with different speeds.
> Even layout "far" may be advantageous to use with SSDs; this was reported to the list some time ago.

the problem is that you can't switch layouts after creation, and since
"near" is the default when you set up a distribution with RAID10 in the
installer, that is what you get
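for reference, the layout has to be picked when the array is created; a hedged sketch (device and array names are placeholders):

```shell
# Layout must be chosen at creation time; "near" (n2) is the default
# when --layout is not given. Device names below are placeholders.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      --layout=f2 /dev/sd[abcd]1

# Check which layout an existing array uses:
mdadm --detail /dev/md0 | grep -i layout
```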

had I known that in 2011, at least md2, where the VMs are running, would
have been created with a better layout after the initial setup; now that
ship has sailed, because the whole point of RAID10 was "never install
from scratch, just replace dead disks, and in case of a hardware switch
create a non-hostonly initrd and move the drives"

now it's more about reducing overhead in general, because for an SSD all
the optimizations trying to take head movement into account are useless;
just read from as many drives as possible

for drives with different speeds "writemostly" would be perfect, and two
years ago that was my plan, not realizing that Linux RAID10 is not the
same as an ordinary RAID10; on a testing VM the parameter was accepted
for RAID10 too, but it turned out not to work in reality after replacing
two drives with SSDs
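for a RAID1 pair the flag can be set when the member is added, or later via sysfs; a hedged sketch (device and array names are placeholders, and md honors write-mostly only on RAID1 members):

```shell
# Add the slow HDD as write-mostly so reads prefer the other (SSD) mirror.
mdadm /dev/md0 --add --write-mostly /dev/sdb1

# Or toggle it on an already-active member through sysfs:
echo writemostly > /sys/block/md0/md/dev-sdb1/state

# Verify: write-mostly members are marked with (W) in /proc/mdstat.
cat /proc/mdstat
```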

for my primary machine it no longer matters, as it's already a 4x2 TB
SSD RAID10, but for example I have a HP Microserver Gen8 with a 4x6 TB
RAID10 in "far" layout running NFS storage for a VMware
Backup-Appliance

May '19      2.47 TiB (reads) |   68.09 TiB (writes)

as you can see there is a 1/28 reads/writes ratio, so replacing two of
those disks with SSDs would dramatically improve real-workload
performance at half the cost, since reads would not go to the HDDs at
all as long as all 4 drives are up
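the stated ratio checks out against the numbers above:

```shell
# 2.47 TiB read vs. 68.09 TiB written over the month:
awk 'BEGIN { printf "%.1f\n", 68.09 / 2.47 }'   # prints 27.6, i.e. roughly 1/28
```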
