>> C) Suffer with desktop drives without SCTERC support. They cannot be
>> set to appropriate error timeouts. Udev or boot-script assistance is
>> needed to set a 120-second driver timeout in sysfs. They do *not* work
>> properly with MD out of the box.
> the recommended timeout for 'C' has drifted upward to 180.
Yes, I saw this; but is it really not possible to query the default
timeout of a particular desktop drive, rather than rely on rough
estimates like "about two or three minutes should be enough"? I wanted
to make sure it is indeed impossible, because that is hard to believe.
Or do they not have a specified timeout at all?
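For reference, the udev rule meant in 'C' would be along these lines
(the file name, match pattern and the 180-second value are
illustrative, following the updated recommendation above):

  # /etc/udev/rules.d/60-raid-timeout.rules (name illustrative)
  # Raise the SCSI command timeout on whole-disk devices so the kernel
  # returns an I/O error to MD instead of resetting the link while the
  # drive is still retrying internally.
  ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="180"

The boot-script equivalent is one line per drive:

  echo 180 > /sys/block/sdX/device/timeout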
> Since that was written, 'A' would now include almost-enterprise drives
> with RAID ratings like the Western Digital Red family.
Yes, I understand that; I'll always make sure they support it.
Still, I don't think it has anything to do with what has happened to my
"small file server"...
> That's why I asked for the dmesg. It could have been a bug. No crisis
> if it's lost, so long as you've accepted one of A through D above.
I've moved all the data to another server, disassembled this one, and
reused the surviving hard drive, so I'm safe, but sadly, no logs. The
important thing is, I've confirmed that this is not the expected
behaviour - I was half expecting to hear "that's how it is with
software RAID; faulty drives hang your entire system, just like they
hang Windows".
I've checked all the hard drives in all my RAIDs; all of them support
SCT ERC (a.k.a. TLER). They were all made before the "crippling"
tendencies took over, and they are mostly Hitachi, so I'm lucky.
--
darkpenguin