Re: Question about raid robustness when disk fails

Tim Bock <jtbock@xxxxxxxxxxxx> writes:

> Thank you for the response.  Through the smartctl tests, I noticed that
> the "seek error rate" value for the misbehaving disk was at 42, with the
> threshold at 30.  For other disks in the same array, the "seek error
> rate" values were up around 75 (same threshold of 30).  As it seems the
> values decrement to the threshold, I took that as a further sign that
> the disk was in trouble and replaced it.  Any likely correlation between
> the described problem and the "seek error rate" value?

Always keep in mind that SMART values are often random, fictional, or
plain garbage. I have disks that report an airflow temperature (outside)
of 80+ and a temperature (inside) of 50+, both of which go down as the
disk heats up from use.
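
If you want to look at the raw numbers yourself, something along these
lines (the device name is only a placeholder, and the exact attribute
names vary by vendor) dumps the attribute table so you can see the
normalized VALUE, the THRESH and the RAW_VALUE side by side:

    # show the seek-error and temperature attributes for one disk
    smartctl -A /dev/sda | grep -Ei 'seek_error|temperature'

The normalized VALUE column starts at some vendor-defined number (often
100 or 200) and counts down towards THRESH, which is why a reading of 42
against a threshold of 30 looks alarming next to the other disks at 75.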

The only values I would keep a close eye on are remapped (reallocated)
sectors and pending sectors. Everything else gives nice graphs but I
always feel is totally useless. And even the pending-sector count is
!= 0 on one of my drives while badblocks reports no errors on repeated
passes. The drive just doesn't seem to reduce the count when it
successfully remaps a sector.
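
For what it's worth, something like this (sdb is only an example device)
is how I'd watch those two counters and double-check the surface:

    # the sector counters worth watching
    smartctl -A /dev/sdb | grep -Ei 'reallocated_sector|current_pending|offline_uncorrectable'
    # non-destructive read-only surface scan (-s progress, -v verbose)
    badblocks -sv /dev/sdb

With no write options badblocks does a read-only pass, so it is safe on
a disk that is still part of the array, although it takes a long time
and adds I/O load.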

MfG
        Goswin
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
