Re: When do you replace old hard drives in a raid6?

On Wed, 9 Mar 2016 07:43:16 +0100 (CET)
Mikael Abrahamsson <swmike@xxxxxxxxx> wrote:

> I had very high failure rates of the early 2TB WD Greens, but I still have 
> some WD20EARS and WD20EADS that are alive after 58k hours.

In my experience, a major cause of these WD failures is rust that develops on
the PCB contacts connecting the controller board to the drive internals; see e.g.:
http://ods.com.ua/win/rus/other/hdd/2/wd_cont2.jpg
http://www.chipmaker.ru/uploads/monthly_12_2013/post/image/post-7336-020632100%201388236036.jpg
and: https://www.youtube.com/watch?v=tDTt_yjYYQ8

If such a WD drive has only just developed a few unreadable sectors, this can
often be fixed by checking and cleaning those contacts and then overwriting the
whole drive with zeroes (or with anything else, really; the point is to rewrite
all the "badly written" areas). After that it will likely keep working fine for
years.
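
For reference, here is a rough Python sketch of that zero-fill step (the
/dev/sdX path is only a placeholder; plain dd if=/dev/zero or badblocks -w
do the same job, and either way everything on the device is destroyed):

#!/usr/bin/env python3
# Rough sketch: overwrite a whole block device with zeroes so that any
# "badly written" sectors get rewritten (and remapped by the drive if needed).
# DESTRUCTIVE - everything on the device is lost.  /dev/sdX is a placeholder;
# run as root, with the drive not mounted and not part of a running array.
import os
import sys

CHUNK = 4 * 1024 * 1024                   # write 4 MiB at a time
ZEROS = bytes(CHUNK)

def zero_device(path):
    with open(path, "r+b", buffering=0) as dev:
        size = dev.seek(0, os.SEEK_END)   # block device size in bytes
        dev.seek(0)
        written = 0
        while written < size:
            step = min(CHUNK, size - written)
            written += dev.write(ZEROS[:step])
        os.fsync(dev.fileno())            # make sure it all reached the disk
    print("wrote %d MiB of zeroes to %s" % (written // (1024 * 1024), path))

if __name__ == "__main__":
    zero_device(sys.argv[1])              # e.g. ./zerofill.py /dev/sdX

After the overwrite, a long SMART self-test (smartctl -t long) is a cheap
sanity check before putting the drive back into an array.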

> One of the slightly lower power on time ones has a scary load cycle count 
> though:
> 
> Device Model:     WDC WD20EARS-00S8B1
>    9 Power_On_Hours          0x0032   033   033   000    Old_age   Always       -       49255
> 193 Load_Cycle_Count        0x0032   001   001   000    Old_age   Always       -       1317839

That head-parking (idle3) timer is what drives the counter up; it can be
disabled or lengthened:
http://www.storagereview.com/how_to_stop_excessive_load_cycles_on_the_western_digital_2tb_caviar_green_wd20ears_with_wdidle3
http://idle3-tools.sourceforge.net/
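
To see how quickly that counter is actually growing before changing the
timer, something like the sketch below (assuming smartctl is installed and
it is run as root; /dev/sdX is a placeholder) pulls the two raw values and
prints the average rate. On the drive quoted above that works out to roughly
27 load cycles per power-on hour.

#!/usr/bin/env python3
# Rough sketch: read Power_On_Hours (attr 9) and Load_Cycle_Count (attr 193)
# from "smartctl -A" output and print the average load cycles per hour.
# /dev/sdX is a placeholder; raw-value formats can vary between drives.
import subprocess
import sys

def smart_raw_values(device):
    # smartctl uses bitmask exit codes, so a non-zero status is not
    # necessarily fatal -- just take whatever it printed.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    raw = {}
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0].isdigit():
            raw[int(fields[0])] = fields[-1]   # last column is RAW_VALUE
    return raw

if __name__ == "__main__":
    attrs = smart_raw_values(sys.argv[1])      # e.g. /dev/sdX
    hours = int(attrs[9])
    cycles = int(attrs[193])
    print("%d power-on hours, %d load cycles (~%.1f cycles/hour)"
          % (hours, cycles, cycles / max(hours, 1)))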

-- 
With respect,
Roman


