Re: raid10 redundancy

Hello,

On Sat, May 08, 2021 at 09:54:03AM +0800, d tbsky wrote:
> Andy Smith <andy@xxxxxxxxxxxxxx>
> > If you're referring to this, which I wrote:
> >
> >     http://strugglers.net/~andy/blog/2019/06/01/why-linux-raid-10-sometimes-performs-worse-than-raid-1/

[…]

> Sorry, I didn't find that comprehensive report before.

Okay, so that wasn't what you were thinking of then.

> What I saw is that raid10 and raid1 performance are similar, and
> raid1 is a little faster.

I haven't got anything published to back up the assertion, but I
haven't really noticed much of a performance difference between
RAID-10 and RAID-1 on non-rotational storage since the above fix.
Most of my storage is non-rotational these days.

That does assume a load that isn't single-threaded, since a single
thread will only ever read from one half of an md RAID-1; it
doesn't stripe reads.
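
If you want to check that yourself, something like the following
fio runs should show it. The device name and job parameters are
just placeholder assumptions; adjust them for your own setup, and
note --readonly so nothing gets written:

    # One reader: md RAID-1 serves this from a single mirror half
    fio --name=single --filename=/dev/md0 --readonly --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
        --runtime=60 --time_based --group_reporting

    # Four readers: md can spread these across both halves
    fio --name=multi --filename=/dev/md0 --readonly --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting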

> So I have just used raid1 in two-disk setups these years, like
> the discussion here:
> https://www.reddit.com/r/homelab/comments/4pfonh/2_disk_ssd_raid_raid_1_or_10/

I must admit that as most of my storage has shifted from HDD to
SSD, I've moved away from md RAID-10, which I used to use even when
there were only 2 devices. With HDDs I felt (and measured) the
increased performance.
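
For the record, a 2-device md RAID-10 is created like any other
array; device names here are only placeholders. The far layout (f2)
is the one that stripes reads across both drives, which is where
the HDD gain came from:

    # 2-device RAID-10 with the "far 2" layout
    mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=2 /dev/sda1 /dev/sdb1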

But with SSDs these days I tend to just use RAID-1 pairs and
concatenate them afterwards in LVM (which I am using anyway).
Mainly just because it's much simpler and the performance is good
enough.
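
Roughly this sort of thing, sketched with placeholder device and
volume group names:

    # Two RAID-1 pairs
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

    # Concatenate them in LVM (linear allocation is the default)
    pvcreate /dev/md0 /dev/md1
    vgcreate vg0 /dev/md0 /dev/md1
    lvcreate -l 100%FREE -n data vg0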

If you need to eke out the most performance, this is maybe not the
way to go. It's certainly not the way if you need better redundancy
(being able to lose any two devices, etc.). There are many
concerns, and performance is only one of them…

> I don't know if the situation is the same now. I will try to do
> my own testing, but I think in theory they are similar under
> multiple processes.

I think so, but it's always good to see a recent test with numbers!

Cheers,
Andy


