Re: layout of far blocks in raid10

On Tue, 11 May 2010 18:22:58 -0400
Aryeh Gregor <Simetrical+list@xxxxxxxxx> wrote:

> On Tue, May 11, 2010 at 5:56 PM, Neil Brown <neilb@xxxxxxx> wrote:
> > I'm not quite sure how to respond to this...  As a mathematician I would
> > expect you to understand the importance of precision in choosing words, yet
> > you use the word "know" for something that is exactly wrong.  Either you mean
> > "guess" or you have been seriously misinformed.  If it is the latter, then
> > please let me know where this misinformation came from so I can see about
> > getting it corrected.
> >
> > md/raid10 uses a simple cyclic layout in all cases.  It does so because this
> > layout is completely general and works for all numbers of devices and copies.
> >
> > So you can only survive multiple device failures when at most N-1 of the
> > failed devices are adjacent, where N is the number of copies, and the first
> > and last devices are treated as adjacent.
> 
> Mathematicians are sometimes wrong too, sadly.  :)  (And I'm only a
> grad student!)  I believe this is where I got my info:

A grad student!  You must be over-educated:
         http://www.maa.org/devlin/devlin_02_10.html
:-)

> 
> http://git.debian.org/?p=pkg-mdadm/mdadm.git;a=blob_plain;f=debian/FAQ;hb=HEAD

Thanks... I guess I should read through that and report errors...

> 
> The answer to question 20 of that suggests that if you have four
> disks, 0 1 2 3, then 0 and 1 form one pair and 2 and 3 form the other.
>  If 2 fails, then 0 or 1 could still fail without data loss, but a
> failure of 3 will cause data loss.  Obviously, you know what you're
> talking about better than a Debian FAQ, so unless I'm misunderstanding
> the FAQ or you or both, maybe you should talk to the author of that.

The conclusion stated in question 20 is correct if you are considering the
'near' layout, though the reasoning is foggy and doesn't generalise to the
'far' or 'offset' layout.

With a 'near 2' layout on 4 drives, the blocks are:

  0  0  1  1
  2  2  3  3

which looks like striping across mirrored pairs, but that is really just a
coincidence.
On 5 drives it would look like:

  0  0  1  1  2
  2  3  3  4  4

The rule "OK as long as no two adjacent devices fail" still holds, though
with an even number of devices there are some cases where the array survives
even when two adjacent devices do fail.
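[Editor's illustration, not part of the original mail: the cyclic 'near' layout
and the adjacency rule above can be sketched in a few lines of Python. The
function names are invented for illustration; this is not mdadm's actual code,
just a model of the block placement Neil describes.]

```python
def near_layout(ndrives, ncopies, nblocks):
    """Model of md/raid10's cyclic 'near' layout.

    The copies of logical block b occupy consecutive slots in one cyclic
    sequence: slot b*ncopies + k lands on drive (b*ncopies + k) % ndrives.
    Returns {block: set of drives holding a copy}.
    """
    return {b: {(b * ncopies + k) % ndrives for k in range(ncopies)}
            for b in range(nblocks)}

def survives(failed, ndrives, ncopies, nblocks=100):
    """True if every block still has a copy on a drive outside `failed`."""
    layout = near_layout(ndrives, ncopies, nblocks)
    return all(drives - set(failed) for drives in layout.values())

# 4 drives, near 2: blocks fall as  0 0 1 1 / 2 2 3 3, so drives 0 and 1
# mirror each other and losing both loses data.
print(survives({0, 1}, 4, 2))  # False: adjacent pair holding the same data
print(survives({0, 2}, 4, 2))  # True: non-adjacent drives
print(survives({1, 2}, 4, 2))  # True: adjacent, but an even-count special case
# 5 drives, near 2: adjacency wraps around, so the first and last drives
# are adjacent (block 2 lives on drives 4 and 0).
print(survives({0, 4}, 5, 2))  # False
```

Running this reproduces both layouts shown above and the wrap-around rule:
with 5 drives the {0, 4} pair fails exactly because the cyclic sequence
treats the first and last devices as adjacent.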

NeilBrown

> 
> Testing with loopback files does seem to show that failing the second
> and third drives in a four-drive RAID will cause the RAID to fail, as
> I would predict from what you say and contrary to what I interpreted
> that FAQ to mean, so hopefully now I understand correctly.
> 
> Thanks for the correction.  Next time I'll be more cautious.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
