Re: raid6 rebuild

Lennert Buytenhek wrote:
> On Wed, Apr 04, 2007 at 08:22:00PM -0700, Dan Williams wrote:
> > > While my RAID6 array was rebuilding after one disk had failed (which
> > > I replaced), a second disk failed[*], and this caused the rebuild
> > > process to start over from the beginning.
> > >
> > > Why would the rebuild need to start over from the beginning in this
> > > case?  Why couldn't it just continue from where it was?
> >
> > I believe it is because raid5 and raid6 share the same error handler,
> > which sets MD_RECOVERY_ERR after losing any disk.  It should probably
> > not set this flag in the one-disk-lost raid6 case, but I might be
> > overlooking something else.
>
> Right, so you're saying that it's probably a bug rather than an
> intentional 'feature'?

No, I would say it's a "reviewable design decision." To be pedantic (as I often am), a bug really means that unintended results are generated. In this case I think the code functions as intended, but it might be able to safely take some other action.

I confess I would feel safer with my data if the rebuild started over; I would like to be sure that when it (finally) finishes, the data are valid. And if you replaced the second drive as well, a full rebuild would be required in any case to get ALL drives valid.
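The decision Dan describes can be sketched as a toy model. To be clear, this is purely illustrative: max_degraded, current_aborts, and proposed_aborts are made-up names for this sketch, not the kernel's actual md code, which shares one error handler between raid5 and raid6 and sets MD_RECOVERY_ERR on any disk failure.

```c
/* Toy model of the decision discussed above: should losing a disk
 * during recovery abort the resync and force it to restart?
 * Names and structure are illustrative only, not the md driver's. */
#include <stdbool.h>

struct array_state {
    int level;          /* 5 or 6 */
    int failed_disks;   /* disks lost, including the one whose
                           failure just triggered the error handler */
};

/* Parity count determines how many simultaneous failures the
 * array can tolerate: one for raid5, two for raid6. */
static int max_degraded(int level)
{
    return level == 6 ? 2 : 1;
}

/* Behavior described in the thread: any failure during recovery
 * aborts it, so the rebuild restarts from the beginning. */
static bool current_aborts(const struct array_state *a)
{
    return a->failed_disks > 0;
}

/* The alternative Dan hints at: abort only when the failures
 * exceed what the parity scheme can tolerate, so a raid6 that is
 * rebuilding one disk could survive a second failure and keep
 * resyncing from where it was. */
static bool proposed_aborts(const struct array_state *a)
{
    return a->failed_disks > max_degraded(a->level);
}
```

Under this model, a raid6 with two lost disks would continue recovery (two failures is exactly its tolerance), while a raid5 in the same state is simply dead; the current behavior aborts in both cases.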

--
bill davidsen <davidsen@xxxxxxx>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979

