On Fri, 28 Mar 2008, Richard Scobie wrote:
Mattias Wadenstein wrote:
A day or two? That's quite risky. Never mind that you get awful
performance for that day or two, and/or run a risk of data corruption.
Neil Brown, some weeks ago on this mailing list, expressed a very
cautionary thought:
«It is really best to avoid degraded raid4/5/6 arrays when at all
possible. NeilBrown»
Yes, I read that mail. I've been meaning to do some real-world testing of
restarting degraded/rebuilding raid6es from various vendors, including MD,
but haven't gotten around to it.
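
For the MD case, here is a rough sketch of how such a test setup could be
thrown together out of loopback files, so degraded and rebuilding states can
be exercised without real hardware. The device node, member count, file sizes
and paths are placeholder assumptions; it needs root plus mdadm and losetup
installed.

#!/usr/bin/env python3
# Rough sketch only: build a small MD RAID6 from loopback files, fail two
# members and re-add them, to exercise degraded and rebuilding states.
# All names and sizes below are placeholder assumptions.
import subprocess

N_DEVS = 8
IMG_MB = 256            # per-member backing file size (arbitrary)
MD_DEV = "/dev/md100"   # assumed to be a free md device node

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

loops = []
for i in range(N_DEVS):
    img = f"/tmp/raid6-member{i}.img"
    run("dd", "if=/dev/zero", f"of={img}", "bs=1M", f"count={IMG_MB}")
    loop = subprocess.run(["losetup", "--find", "--show", img],
                          check=True, capture_output=True,
                          text=True).stdout.strip()
    loops.append(loop)

# Create the 8-member RAID6 (--run skips the confirmation prompt).
run("mdadm", "--create", MD_DEV, "--run", "--level=6",
    f"--raid-devices={N_DEVS}", *loops)

# Fail and remove two members to drop the array into degraded mode.
for dev in loops[:2]:
    run("mdadm", MD_DEV, "--fail", dev)
    run("mdadm", MD_DEV, "--remove", dev)

# Adding them back starts a rebuild; /proc/mdstat shows the progress.
for dev in loops[:2]:
    run("mdadm", MD_DEV, "--add", dev)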
You may be interested in these results: throughput tests on an 8-drive SATA
RAID6 showed average write speed going from 348MB/s to 354MB/s and read
speed from 349MB/s to 196MB/s while rebuilding with 2 failed drives. This was
with an Areca 1680x RAID controller.
http://www.amug.org/amug-web/html/amug/reviews/articles/areca/1680x/
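
A crude way to get comparable healthy-vs-degraded numbers on MD is simply to
time large streaming writes and reads against a filesystem on the array
before and after failing members. A rough sketch follows; the mount point and
sizes are assumptions, and root is needed to drop the page cache.

#!/usr/bin/env python3
# Crude sequential-throughput check: time a large streaming write and read
# against a file on the array. Run once healthy, once degraded/rebuilding.
# Mount point and sizes are placeholder assumptions.
import os, time

PATH = "/mnt/raid6-test/throughput.bin"   # assumed mount point of the array
CHUNK = 1024 * 1024                       # 1 MiB per write/read
TOTAL = 4 * 1024                          # number of chunks: 4 GiB total

buf = os.urandom(CHUNK)
t0 = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(TOTAL):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_s = time.monotonic() - t0
print(f"write: {TOTAL / write_s:.0f} MiB/s")

# Drop the page cache so the read actually hits the disks.
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")

t0 = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_s = time.monotonic() - t0
print(f"read: {TOTAL / read_s:.0f} MiB/s")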
That's only performance though. I was interested in seeing whether you could
provoke actual data corruption by doing unkind resets while the array is busy
with various workloads: writes, flipping bits/bytes inside existing files, or
some other access/update pattern.
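
One way to catch that kind of silent damage is to keep a manifest of
checksums for files written before the unkind reset and re-verify them
afterwards. The sketch below is just an illustration along those lines, not
anyone's actual test harness; the directory, file count and sizes are
placeholder assumptions.

#!/usr/bin/env python3
# Sketch only: write files of random data, record their SHA-256 in a
# manifest, and after an unclean reset of the degraded array re-run with
# "verify" to see whether any file came back silently corrupted.
# Directory, file count and sizes are placeholder assumptions.
import hashlib, json, os, sys

DIR = "/mnt/raid6-test/integrity"   # assumed mount point on the array
MANIFEST = os.path.join(DIR, "manifest.json")
N_FILES, FILE_MB = 64, 16

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_files():
    os.makedirs(DIR, exist_ok=True)
    manifest = {}
    for i in range(N_FILES):
        path = os.path.join(DIR, f"file{i:03d}.bin")
        with open(path, "wb") as f:
            for _ in range(FILE_MB):
                f.write(os.urandom(1 << 20))
            f.flush()
            os.fsync(f.fileno())
        manifest[path] = sha256(path)
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f)
        f.flush()
        os.fsync(f.fileno())

def verify_files():
    with open(MANIFEST) as f:
        manifest = json.load(f)
    bad = [p for p, want in manifest.items() if sha256(p) != want]
    print("corrupted files:", bad if bad else "none")

if __name__ == "__main__":
    verify_files() if sys.argv[1:] == ["verify"] else write_files()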
That performance is good enough in the "easy" cases, I'm well aware; I've
done some poking at earlier Areca controllers. They had really weird
performance issues in (for me) way too common corner cases, though. I just
got a couple of newer ones delivered, so I'll start pushing on those soon.
I'm not sure I'll get the time to try provoking data corruption in degraded
raidsets, though.
/Mattias Wadenstein