Re: Fault tolerance with badblocks

On 05/08/2017 03:52 PM, Nix wrote:
> On 8 May 2017, Phil Turmel verbalised:
> 
>> On 05/08/2017 10:50 AM, Nix wrote:

> And... then what do you do? On RAID-6, it appears the answer is "live
> with a high probability of inevitable corruption".

No, you investigate the quality of your data and the integrity of the
rest of the system, as something *other* than a drive problem caused the
mismatch.  (Swap is a known exception, though.)

> That's not very good.
> (AIUI, if a check scrub finds a URE, it'll rewrite it, and when in the
> common case the drive spares it out and the write succeeds, this will
> not be reported as a mismatch: is this right?)

This is also wrong, because you are assuming sparing-out is the common
case.  A read error does not automatically trigger relocation.  It
triggers *verification* of the next *write*.  In young drives,
successful rewrite in place is the common case.  As the drive ages, more
rewrites will end in relocation, because there really is a new problem
at that spot, not simple thermal/magnetic decay.

But keep in mind that the firmware of the drive will start verification
of a sector only if it gets a *read* error.  Such sectors get marked as
"pending" relocations until they are written again.  If that write
verifies correct, the "pending" status simply goes away.  Ordinary
writes to presumed-ok sectors are *not* verified.  (There'd be a huge
difference between read and write speeds on rotating media if they were.)

{ Drive self-tests might do some pre-emptive rewriting of marginal
sectors -- it's not something drive manufacturers document.  But a
drive self-test cannot fix an unreadable sector -- it doesn't know
what to write there. }
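
If you want to watch the pending/reallocated counters described above,
smartmontools exposes them.  A rough Python sketch -- smartctl must be
installed, "/dev/sda" is a placeholder, and attribute names can vary
between drive vendors:

  # Print the raw "pending" and reallocated sector counters from SMART.
  # /dev/sda is a placeholder device; attribute names may differ by vendor.
  import subprocess

  out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                       capture_output=True, text=True).stdout
  for line in out.splitlines():
      if "Current_Pending_Sector" in line or "Reallocated_Sector_Ct" in line:
          fields = line.split()
          print(fields[1], "=", fields[-1])   # attribute name, raw value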

>> This is actually counterproductive.  Rewriting everything may refresh
>> the magnetism on weakening sectors, but will also prevent the drive from
>> *finding* weakening sectors that really do need relocation.
> 
> If a sector weakens purely because of neighbouring writes or temperature
> or a vibrating housing or something (i.e. not because of actual damage),
> so that a rewrite will strengthen it and relocation was never necessary,
> surely you've just saved a pointless bit of sector sparing? (I don't
> know: I'm not sure what the relative frequency of these things is. Read
> and write errors in general are so rare that it's quite possible I'm
> worrying about nothing at all. I do know I forgot to scrub my old
> hardware RAID array for about three years and nothing bad happened...)

Drives in applications that get *read* pretty often don't need much, if
any, scrubbing -- the application itself will expose problem sectors.
Hobbyist boxes and home media servers can go months with specific files
unread, so developing problems can hit in clusters.  Regular scrubbing
will catch these problems before they take your array down.
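
Debian and its derivatives ship a monthly checkarray cron job for
exactly this; under the hood it's just a write to sysfs.  A rough
Python sketch -- standard md sysfs interface, root required, "md0"
again a placeholder:

  # Kick off a "check" scrub (what the periodic cron job does).
  # Needs root; "md0" is a placeholder for the array name.
  from pathlib import Path

  action = Path("/sys/block/md0/md/sync_action")
  if action.read_text().strip() == "idle":
      action.write_text("check\n")    # full read of every member device
      print("check scrub started")
  else:
      print("array busy:", action.read_text().strip())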

And you can't compare hardware array behavior to MD -- hardware
controllers have their own firmware algorithms to take care of attached
disks without OS intervention.

Phil


