Re: misunderstanding of spare and raid devices? - and one question more

On 06/30/2011 08:52 AM, Karsten Römke wrote:

[...]

>> This will end up with four drives' capacity, with parity interspersed, on five drives.  No spare.
>>
>>> That's what I want, but I reached it more or less by accident.
>>> Where is my "Denkfehler" (German for "error in my thinking")?
> No - that's not what I want, but at first it seemed to be the right way.
> After my earlier posting I put the raid back into lvm, ran mdadm --detail,
> and saw that the capacity can't be right: I have around 16 GB where I
> expected 12 GB - so I decided to stop my experiments until I got a hint,
> which came very fast.

So the first layout is the one you wanted.  Each drive is ~4 GB?  Or is this just a test setup?
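
For anyone following along, the arithmetic (assuming five ~4 GB members,
which is what your numbers suggest) works out like this:

    raid5 on 5 members, no spare:    (5 - 1) x 4 GB = 16 GB usable
    raid5 on 4 members + 1 spare:    (4 - 1) x 4 GB = 12 GB usable
    raid6 on 5 members:              (5 - 2) x 4 GB = 12 GB usable

Seeing 16 GB in mdadm --detail therefore means all five devices became
active raid5 members, rather than one being held back as a spare.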

>> I hope this helps you decide which layout is the one you really want.
>> If you think you want the first layout, you should also consider raid6 (dual redundancy).
>> There's a performance penalty, but your data would be significantly safer.
> I have to say, I have only looked at raid6 at a glance.
> Is there any experience of what percentage performance penalty to expect?

I don't have percentages to share, no.  They would vary a lot based on the number of disks and the type of CPU.  As an estimate, though, you can expect raid6 to be about as fast as raid5 when reading from a non-degraded array.  Certain read workloads could even be faster, as the data is spread over more spindles.  It will be slower to write in all cases; the extra "Q" parity for raid6 is quite complex to calculate.  In a single-disk-failure situation, both raid5 and raid6 will use the "P" parity to reconstruct the missing information, so their single-degraded read performance will be comparable.  With two disk failures, raid6 performance plummets, as every read requires a complete inverse "Q" solution.  Of course, two disk failures in raid5 stop your array entirely.  Running at a crawl, with data intact, beats having no data at all.
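
If you want a rough sense of your own CPU's raw "Q" syndrome speed
(just the parity math, not a whole-array benchmark), the md driver
times its raid6 algorithms when the module loads and logs the results,
so something like

    dmesg | grep raid6

should show the per-algorithm gen() throughput in MB/s and which
implementation was picked.  Exact output varies by kernel and CPU.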

You should also consider the odds of failure during rebuild, which is a serious concern for large raid5 arrays.  This was discussed recently on this list:

http://marc.info/?l=linux-raid&m=130754284831666&w=2

If your CPU has free cycles, I suggest you run raid6 instead of raid5+spare.
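
In case it helps, a sketch of the two competing layouts with mdadm
(device names are placeholders - adjust to your setup):

    # raid5 on four members plus one hot spare (~12 GB with 4 GB members)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          --spare-devices=1 /dev/sd[bcdef]1

    # raid6 on all five members (same ~12 GB, but dual redundancy)
    mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[bcdef]1

Both give the same usable capacity here, but the raid6 array survives
any two simultaneous failures instead of depending on a rebuild window
after the first one.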

Phil