Re: Fault tolerance with badblocks

On 6 May 2017, Wols Lists outgrape:

> On 06/05/17 12:21, Ravi (Tom) Hale wrote:
>>> Bear in mind also, that any *within* *spec* drive can have an "accident"
>>> every 10TB and still be considered perfectly okay. Which means that if
>>> you do what you are supposed to do (rewrite the block) you're risking
>>> the drive remapping the block - and getting closer to the drive bricking
>>> itself. But if you trap the error yourself and add it to the badblocks
>>> list, you are risking throwing away perfectly decent blocks that just
>>> hiccuped.
>
>> For hiccups, having a bad-read-count for each suspected-bad block could
>> be sensible. If that number goes above <small-threshold> it's very
>> likely that the block is indeed bad and should be avoided in future.
>
> Except you have the second law of thermodynamics in play - "what man
> proposes, nature opposes". This could well screw up big time.
>
> DRAM needs to be refreshed by a read-write cycle every few
> milliseconds. Hard drives are the same, actually, except that the
> interval is measured in years, not milliseconds. Fill your brand new hard
> drive with data, then hammer it gently over a few years. Especially if a
> block's neighbours are repeatedly rewritten but this particular block is
> never touched, it is likely to become unreadable.
>
> So it will fail your test - reads will repeatedly fail - but if the
> firmware was given a look-in (by rewriting it) it wouldn't be remapped.

You mean it *would* be remapped (and all would be well).
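
Tangentially: the per-block bad-read-count idea upthread seems cheap to
prototype. A minimal userspace sketch, in which the table size and
threshold are pure illustration (md implements nothing like this today):

/*
 * Sketch of a per-block read-failure counter: only treat a block as bad
 * once it has failed more than a small threshold, so one-off hiccups
 * don't get a block thrown away.  NBUCKETS and THRESHOLD are made up.
 */
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS   4096      /* hypothetical open-addressed table size */
#define THRESHOLD  3         /* hypothetical failures before giving up */

struct bad_count {
    uint64_t block;          /* LBA of the suspected-bad block */
    unsigned failures;       /* read failures seen so far */
};

static struct bad_count table[NBUCKETS];

/* Record one read failure; return 1 once the block crosses the threshold. */
static int record_read_failure(uint64_t block)
{
    size_t i = block % NBUCKETS;

    /* Linear probing; a real implementation would handle a full table. */
    while (table[i].failures && table[i].block != block)
        i = (i + 1) % NBUCKETS;

    table[i].block = block;
    return ++table[i].failures >= THRESHOLD;
}

int main(void)
{
    /* Simulate the same block hiccuping repeatedly. */
    for (int n = 1; n <= 4; n++)
        printf("failure %d on block 12345 -> %s\n", n,
               record_read_failure(12345) ? "mark bad" : "keep watching");
    return 0;
}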

I wonder... scrubbing is not very useful with md, particularly with RAID
6, because it does no writes unless something mismatches, and on a
mismatch there is no attempt to determine which of the N disks is bad
and rewrite its contents from the other devices (nor, as I understand
it, does md clearly report which drive gave the error, so even failing
that drive out and resyncing it is hard).
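
For reference, the scrub machinery I mean is driven through sysfs (from
a shell it's just "echo check > /sys/block/md0/md/sync_action"). A
minimal sketch of kicking off a check and reading the mismatch count
afterwards, assuming the array is md0:

/*
 * Start a "check" scrub on md0 via sysfs, wait for it to finish, then
 * report mismatch_cnt.  The device name is an assumption.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int read_sysfs(const char *path, char *buf, size_t len)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, len, f))
        buf[0] = '\0';
    fclose(f);
    return 0;
}

int main(void)
{
    FILE *f = fopen("/sys/block/md0/md/sync_action", "w");
    char buf[64];

    if (!f) {
        perror("sync_action");
        return 1;
    }
    fputs("check\n", f);     /* "repair" would instead rewrite on mismatch */
    fclose(f);

    /* Poll until the scrub finishes (sync_action reads back "idle"). */
    do {
        sleep(10);
        if (read_sysfs("/sys/block/md0/md/sync_action", buf, sizeof buf) < 0)
            return 1;
    } while (strncmp(buf, "idle", 4) != 0);

    if (read_sysfs("/sys/block/md0/md/mismatch_cnt", buf, sizeof buf) == 0)
        printf("mismatch_cnt after check: %s", buf);
    return 0;
}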

If there were a way to get md to *rewrite* everything during a scrub,
rather than just checking, this might help (in addition to letting the
drive refresh the magnetization of absolutely everything). "repair" mode
appears to do no writes until an error is found, whereupon (on RAID 6)
it proceeds to make a "repair" that is more likely than not to overwrite
good data with bad. Optionally rewriting what's already there even when
no error is found seems like it might be a worthwhile (and fairly
simple) change.
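
Pending such a change, one could approximate the effect crudely from
userspace: read the array device and write the same bytes straight
back, which forces every stripe (data and parity, since md recomputes
parity on write) to be rewritten. A sketch, with the device path and
chunk size as assumptions; only sane on an otherwise idle array, since
a write racing with this loop could be undone:

/*
 * Crude userspace approximation of "rewrite everything during scrub":
 * read each chunk and write it back in place, so the drives refresh
 * (and, if need be, remap) every sector.  NOT what md's repair does.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1 << 20)      /* hypothetical: rewrite 1 MiB at a time */

int main(void)
{
    int fd = open("/dev/md0", O_RDWR);
    if (fd < 0) {
        perror("/dev/md0");
        return 1;
    }

    char *buf = malloc(CHUNK);
    off_t off = 0;
    ssize_t n;

    if (!buf)
        return 1;

    while ((n = pread(fd, buf, CHUNK, off)) > 0) {
        /* Write back exactly what was read, at the same offset. */
        if (pwrite(fd, buf, n, off) != n) {
            perror("pwrite");
            break;
        }
        off += n;
    }

    fsync(fd);               /* make sure the rewrites reach the disks */
    free(buf);
    close(fd);
    return 0;
}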

-- 
NULL && (void)