-----Original Message-----
From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-owner@xxxxxxxxxxxxxxx] On Behalf Of David Greaves
Sent: Monday, July 14, 2008 11:15 AM
To: Matthias Urlichs
Cc: linux-raid@xxxxxxxxxxxxxxx
Subject: Re: How to avoid complete rebuild of RAID 6 array (6/8 active devices)

>I've found that once a disk starts to go bad there is a very strong tendency for
>it to continue to deteriorate.
>So I don't replace disks because they have a bad sector; I replace them because
>I suspect they will fail more as time goes by.
>Sure, some don't - I don't want to take that chance.
>David

Different disk technologies (SAS, SATA, FC, etc.) and the root causes of a "bad" sector vary widely. It is unwise to assume that a given disk has a "strong" tendency to fail after a "bad" sector. Without analyzing the sense code or the S.M.A.R.T. logs returned by the failed read, you can't make any assumptions about how the error affects the health of the drive, or even just of the media.

Now, if your experience is limited to consumer-class ATA/SATA disks, then I agree there is a higher probability of a second failure than on enterprise-class Fibre Channel disks, but it is certainly not a "strong tendency" to fail. It isn't even a likelihood of failure (though it is cause for concern, especially if the disk was just put into service). It could be an ECC error caused by an improper power-off or a bad cable.

David @ santools.com
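For ATA/SATA disks, the counters worth checking before condemning a drive are the reallocated, pending, and offline-uncorrectable sector attributes that `smartctl -A` (from smartmontools) reports. A minimal sketch of that check follows; the sample output is illustrative, not from a real drive, and in practice you would pipe the live output of `smartctl -A /dev/sdX` instead:

```shell
#!/bin/sh
# Illustrative excerpt of `smartctl -A` output for an ATA/SATA disk.
# Attribute IDs 5, 197, and 198 track media health on most such drives.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'

# Pull out just those three counters (attribute name = raw value).
# A single soft error with all three at 0 is a weak signal; a rising
# reallocated or pending count is the pattern that argues for replacement.
echo "$sample" | awk '$1 == 5 || $1 == 197 || $1 == 198 { print $2 "=" $NF }'
```

A sector that reads bad once but never shows up in these counters is more consistent with a transient cause (cabling, power loss mid-write) than with failing media.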