On 05/07/13 06:58, Christoph Anton Mitterer wrote:
> I'd have even one more question here: Has anyone experience with my idea of intentionally running devices of different vendors (Seagate, WD, HGST)... for resilience reasons?... Does it work out as I plan, or are there any hidden caveats I can't see which make the resilience (not the performance) worse?
I've got both here. A large RAID-6 comprised entirely of single brand, single type consumer drives and a smaller RAID-10 built from a diverse selection. Both have had great reliability, so that's not really a good data point for you.
What I *have* found over the years is the importance of weeding out early failures. Before I commit a disk to service, I subject it to a couple of weeks of hard work. I usually knock up a quick and dirty bash script that runs multiple concurrent instances of dd, reading and writing to different parts of the disk simultaneously, with a bonnie++ run for good measure. With all this going on at the same time, the drive mechanism gets a serious workout and the drive stays warmer than it will in actual service. If I have the chance, I do all the drives simultaneously, preferably in the machine they are going to spend the next couple of years in. If I can't do that, then I have a burn-in chassis built from a retired server that can do 15 at a time.
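A minimal sketch of that kind of burn-in script might look like the following. The function name, worker count, and block sizes are my own assumptions, not the author's actual script, and on a real drive it is destructive — it overwrites the whole target:

```shell
#!/bin/bash
# Burn-in sketch (hypothetical, destructive): hammer different regions of
# a disk with concurrent dd writers and readers. Run repeatedly (or under
# a loop) for days, alongside bonnie++, to shake out early-life failures.
burnin() {
    local disk=$1                 # e.g. /dev/sdX, or a file for a dry run
    local bytes blocks i
    # Size of the target in 1 MiB blocks; blockdev works on block devices,
    # stat covers a plain file used for testing the script itself.
    bytes=$(blockdev --getsize64 "$disk" 2>/dev/null || stat -c %s "$disk")
    blocks=$(( bytes / 1048576 ))

    # Four concurrent workers, each assigned a different quarter of the
    # target: write a pseudo-random pattern, then read it back.
    for i in 0 1 2 3; do
        (
            offset=$(( i * blocks / 4 ))
            count=$(( blocks / 4 ))
            dd if=/dev/urandom of="$disk" bs=1M seek="$offset" \
               count="$count" conv=notrunc,fdatasync status=none
            dd if="$disk" of=/dev/null bs=1M skip="$offset" \
               count="$count" status=none
        ) &
    done
    wait
    echo "burn-in pass complete on $disk"
}
```

On a real disk you would likely add oflag=direct/iflag=direct to bypass the page cache so the mechanism, not RAM, takes the load; it is omitted here so the sketch also runs against an ordinary file.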
This has proven quite effective in spotting early-life failures. I generally find (for consumer drives) that if they pass this, they'll last the 3 years of 24/7 service I use them for before I replace them. My enterprise drives are a different story, and I have some here with just over 38k hours on them. I'll probably replace them with bigger drives before they ever fail.
Regards,
Brad
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html