Re: raid 5 crashed

On Thu, Jun 02, 2016 at 03:01:35PM +0100, Wols Lists wrote:
> If you have 3 x 4TB desktop drives in an array, then the spec
> says you should expect, and be able to deal with, an error EVERY time
> you scan the array.
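
(For reference, a 1-in-10^14-bits URE spec works out to roughly one
unreadable sector per 12.5 TB read, and a full pass over 3 x 4 TB is
about 12 TB, so on paper that's about one error per scan.)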

It doesn't happen in practice, though. (Thank god.)

There was a paper on disk failures ("Empirical Measurements of Disk
Failure Rates") which concluded that the URE spec is simply not useful
in practice.

There's that ZDNet article that declared RAID5 dead in 2009, but it still
works fine for me.

I just ignore the URE spec entirely.
(Until someone can prove that it actually matters.)

IMHO the main reason people notice disk failures during rebuilds is that
they never tested their disks for read errors before. You should do so
regularly.
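
For example, a minimal sketch (/dev/sda and md0 are placeholders, adjust
for your devices):

  # Read-test the whole drive via SMART (runs in the drive's background):
  smartctl -t long /dev/sda
  # ... and look at the result once it has finished:
  smartctl -l selftest /dev/sda

  # Or let md read every sector of the array
  # ('check' only counts mismatches, it does not rewrite them):
  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt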

A long SMART self-test takes ages on a large disk; on a busy server with
today's disk sizes it can take days, which is why people avoid running
them (the other reason is laziness).

However, SMART also supports selective self-tests, so you can run a
relatively short test every day and cover the entire disk over time.
You can schedule these partial tests at night, when server load is lowest.
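
A sketch of what that can look like with smartctl (the device name and
LBA range are just examples):

  # Test one explicit LBA range:
  smartctl -t select,0-124999999 /dev/sda

  # On later runs, test the span following the last one, wrapping around
  # at the end of the disk, so repeated runs cover the whole drive:
  smartctl -t select,next /dev/sda

  # Show which spans have been tested so far:
  smartctl -l selective /dev/sda

Run the 'select,next' variant from a nightly cron job and every sector
gets read over the course of a few weeks, depending on the range size
you pick.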

I think md can also do a selective RAID check by fiddling with the
sync_min/sync_max variables in /sys, but there is no obvious way of
doing it via the mdadm userspace program.
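
Something along these lines (md0 is a placeholder; sync_min/sync_max are
in 512-byte sectors, and the range here is just an example):

  # Restrict the next check to the first ~100 GiB of the array:
  echo 0         > /sys/block/md0/md/sync_min
  echo 209715200 > /sys/block/md0/md/sync_max

  # Start the check; it stops when it reaches sync_max:
  echo check > /sys/block/md0/md/sync_action

  # Watch progress, and note where it stopped so the next run can
  # continue from there by raising sync_min/sync_max:
  cat /sys/block/md0/md/sync_completed

  # When done, restore the default of checking to the end of the array:
  echo max > /sys/block/md0/md/sync_max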

Regards
Andreas Klauer
--


