On Sat, 19 Jan 2013, Chris Murphy wrote:
> Please explain this basic, simple math, where a URE is equivalent to 1
> bit of information. And also explain the simple math where a bit of
> error is equal to a URE. And please explain the simple math in the
> context of a conventional HDD 512-byte sector, which is 4096 bits.
>
> If you have a URE, you have lost not 1 bit. You have lost 4096 bits. A
> loss of 4096 bits in 12.5TB (not 12.5TiB) is an error rate of 1 bit of
> error in 2.44x10^10 bits. That is a gross difference from published
> error rates.
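
For reference, spelling that arithmetic out (12.5 TB decimal is exactly
1e14 bits):

# Sanity check of the arithmetic quoted above: if one URE is charged as
# a whole 512-byte sector of error bits, what rate does 12.5 TB imply?
sector_bits = 512 * 8              # 4096 bits per conventional sector
bits_read = 12.5e12 * 8            # 12.5 TB (decimal) in bits = 1.0e14

bits_per_error_bit = bits_read / sector_bits
print(f"1 bit of error per {bits_per_error_bit:.3g} bits read")
# -> 1 bit of error per 2.44e+10 bits read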
I have seen your point of view posted in other discussions, and I don't
buy it. I believe the manufacturers are talking about how many bits are
read before there is one or more bit error, i.e. the drive can't
error-correct the bit errors in that sector, so the whole sector becomes
a URE. Since the sector is an atomic unit, the drive can't report a
single bit error (even though that's probably what it is); it has to
URE the whole sector, all 4096 bits. The manufacturer is still talking
about what's on the platter, not what the OS sees.
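
Read that way, the expected volume between UREs is set by the event
rate, not the sector size. A quick sketch, assuming the commonly
published spec of one nonrecoverable read error per 1e14 bits read (the
figure that matches the 12.5 TB above):

# One uncorrectable *event* per ~1e14 bits read; the URE just surfaces
# the whole atomic sector containing the bad bits.
spec_bits_per_event = 1e14         # assumed published spec

tb_per_ure = spec_bits_per_event / 8 / 1e12
print(f"~{tb_per_ure:.1f} TB read per URE, regardless of sector size")
# -> ~12.5 TB read per URE, regardless of sector size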
Your view of how this works would mean that drives read more than 10^3
times more data before a URE, which from my empirical data isn't right.
Also, by your logic a 4k-sector drive would have to have an 8 times
lower URE rate just to meet the same published BER, which I don't
believe either.
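
For concreteness, here is where those two factors come from, again
assuming the published one-error-per-1e14-bits figure:

# The other accounting: each URE charges a full sector's worth of error
# bits against the 1-in-1e14 budget, so UREs would have to be
# sector_bits times rarer than one per 12.5 TB.
spec_bits_per_error_bit = 1e14     # assumed published BER

for sector_bytes in (512, 4096):
    sector_bits = sector_bytes * 8
    tb_per_ure = spec_bits_per_error_bit * sector_bits / 8 / 1e12
    print(f"{sector_bytes}-byte sectors: one URE per {tb_per_ure:,.0f} TB read")
# 512-byte sectors:  one URE per 51,200 TB  (4096x, i.e. >10^3, the 12.5 TB)
# 4096-byte sectors: one URE per 409,600 TB (a further 8x)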
I believe Phil was spot on about how this works; his post of "19 Jan
2013 18:53:41" describes exactly how I believe things work.
--
Mikael Abrahamsson email: swmike@xxxxxxxxx