Re: A few questions regarding RAID5/RAID6 recovery

2011/4/25 Kővári Péter <peter@xxxxxxxxxxxxxx>:
> Hi all,
>
> Since this is my first post here, let me first thank all developers for their great tool. It really is a wonderful piece of software. ;)
>
> I heard a lot of horror stories about the event when a member of a RAID5/6 array gets kicked off due to I/O errors, and then, after the replacement and during the reconstruction, another drive fails and the array becomes unusable. (For RAID6, add another drive to the story and the problem is the same, so let's just talk about RAID5 now.) I want to prepare myself for this kind of unlucky event, and build up a strategy that I can follow once it happens. (I hope never, but...)

From what I understand, if you run weekly RAID scrubs you will limit
the possibility of this happening. CentOS / RedHat already have this
scheduled. If not, you can add a cron job to trigger a check or
repair. Make sure you replace DEV with your md device name.

echo check > /sys/block/DEV/md/sync_action
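For example, a minimal weekly scrub job could look like the following
(just a sketch: /dev/md0 and the /etc/cron.d location are assumptions,
adjust them to your setup):

# /etc/cron.d/raid-scrub -- kick off a scrub every Sunday at 01:00
# (md0 is a placeholder; point it at your actual array)
0 1 * * Sun  root  echo check > /sys/block/md0/md/sync_action

After the check completes, /sys/block/md0/md/mismatch_cnt reports how
many mismatches the scrub found, so you can see whether a repair pass
is worth running.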

I have had 3 x 1TB drives in RAID 5 for the past 2.5 years, and I have
not had a drive kicked out or an error found. If an error is ever
found, since it will have been caught early, I should have a good
chance of replacing the failing drive without hitting another error.
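For reference, the usual replacement sequence looks roughly like this
(a sketch only; /dev/md0, /dev/sdb1 and /dev/sdc1 are placeholder
names, substitute your own devices):

# mark the failing disk as faulty and pull it out of the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# add the replacement; the rebuild starts automatically
mdadm /dev/md0 --add /dev/sdc1
# watch the rebuild progress
cat /proc/mdstat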

Ryan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

