Re: Re-adding disks to RAID6 in a Fujitsu NAS: old mdadm?

On Thu, 28 Jun 2012 17:56:39 +0200 "Stefan G. Weichinger" <lists@xxxxxxxx>
wrote:

> On 28.06.2012 13:22, NeilBrown wrote:
> 
> >> Do I have to fear read-errors as with RAID5 now?
> > 
> > If you get a read error, then the corresponding block on the new
> > devices cannot be recovered, so the recovery will abort.  But you
> > have nothing to fear except fear itself :-)
> 
> Ah, yes. Not exactly RAID-specific, but I agree ;-) (we have a poem by
> Mischa Kaleko in German reflecting this, btw ...)
> 
> So if there is one non-readable block on the 2 disks I started with
> (the degraded array), the recovery will fail?
> 
> As sd[ab]3 were part of the array earlier, would that mean they might
> supply the missing data, just in case?
> 
> 
> >> I still don't fully understand whether there are also two pieces of
> >> parity information available in a degraded RAID6 array on only 2
> >> disks.
> > 
> > In a 4-drive RAID6 like yours, each stripe contains 2 data blocks
> > and 2 parity blocks (called 'P' and 'Q'). When two devices are
> > failed/missing, some stripes will have 2 data blocks and no parity,
> > some will have both parity blocks and no data (but can of course
> > compute the data blocks from the parity blocks), and some will have
> > one of each.
> > 
> > Does that answer the question?
> 
> Yes, it does.
> 
> But ... I still don't fully understand it :-P
> 
> What I want to understand and know:
> 
> There is this issue with RAID5: resyncing the array after swapping a
> failed disk for a new one stresses the old drives, and a single read
> error on them blows up the whole array.
> 
> As far as I have read, RAID6 protects me against this because of the
> 2 parity blocks (instead of one): it is much less likely that I can't
> read both of them, right?

Right.

> 
> Does this apply only to an N-1 degraded RAID6 or also to an N-2
> degraded array? As far as I understand, it is correct for both cases.

Only an N-1 degraded array.
An N-2 degraded RAID6 is much like an N-1 degraded RAID5 and would suffer the
same fate on a read error during recovery.
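To make that concrete, here is a minimal Python sketch (purely illustrative;
the P/Q rotation below is an assumption for the example, not md's actual
"algorithm 2" layout) that counts how many blocks of each 4-block stripe
survive a given set of failed drives:

    # Illustrative only: assumes P and Q rotate one position per stripe
    # across a 4-drive RAID6; not mdadm's real layout code.
    NDRIVES = 4

    def stripe_layout(stripe):
        """Return the block type stored on each drive for this stripe."""
        q = (NDRIVES - 1 - stripe) % NDRIVES   # assumed rotation
        p = (q - 1) % NDRIVES
        return ['Q' if d == q else 'P' if d == p else 'D'
                for d in range(NDRIVES)]

    for failed in [{1}, {0, 1}]:               # 1 missing drive, then 2
        print("failed drives:", sorted(failed))
        for s in range(4):
            left = [t for d, t in enumerate(stripe_layout(s))
                    if d not in failed]
            # RAID6 can rebuild a stripe from any 2 of its 4 blocks,
            # so anything beyond 2 surviving blocks is spare redundancy.
            print("  stripe %d: surviving %s, spare redundancy %d"
                  % (s, left, len(left) - 2))

With one drive missing, every stripe still has three of its four blocks, so a
single read error elsewhere in the stripe can be tolerated; with two drives
missing, only the bare minimum of two blocks remains, which is exactly the
N-1 degraded RAID5 situation described above.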


> 
> -
> 
> I have faced this RAID5-related problem twice already (breaking the
> array ...) and therefore started to use RAID6 for the servers I deploy,
> mostly with 4 disks, sometimes 6 or 8.
> 
> If this doesn't really protect things better, I should rethink that,
> maybe.

Your current array had lost 2 drives.  If it had been a RAID5 you would be
wishing you had better backups right now.  So I think RAID6 really does
provide better protection :-)  However, it isn't perfect: it cannot protect
against concurrent failures on 3 drives...

NeilBrown



> 
> -
> 
> Right now my recovery still needs around 80 minutes to go:
> 
> md0 : active raid6 sdb3[4](S) sda3[5] sdc3[2] sdd3[3]
>       3903891200 blocks level 6, 64k chunk, algorithm 2 [4/2] [__UU]
>       [================>....]  recovery = 83.0% (1621636224/1951945600) finish=81.5min speed=67477K/sec
> 
> I assume it is OK at this stage that sdb3 is marked as (S)pare ...
> 
> Thanks, greetings, Stefan
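As an aside, the finish estimate in that /proc/mdstat line is just the
remaining blocks divided by the current speed; a quick check in Python,
using only the figures quoted above:

    # Sanity check of the mdstat estimate: remaining 1K blocks / speed.
    done_k, total_k = 1621636224, 1951945600  # (done/total) from the recovery line
    speed_k_per_s = 67477                     # speed=67477K/sec

    remaining_s = (total_k - done_k) / speed_k_per_s
    print("progress:  %.1f%%" % (100.0 * done_k / total_k))  # ~83.1%
    print("remaining: %.1f min" % (remaining_s / 60))        # ~81.6 min

which lines up with the 83.0% / finish=81.5min reported by the kernel.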

Attachment: signature.asc
Description: PGP signature

