On 23 Apr 2006, Mark Hahn said:

> some people claim that if you put a normal (desktop)
> drive into a 24x7 server (with real round-the-clock load), you should
> expect failures quite promptly. I'm inclined to believe that with
> MTBF's upwards of 1M hour, vendors would not claim a 3-5yr warranty
> unless the actual failure rate was low, even if only running 8/24.

I've seen a lot of cheap disks say (generally deep in the data sheet
that's only available online after much searching and that nobody ever
reads) that they are only reliable if used for a maximum of twelve
hours a day, or 90 hours a week, or something of that nature. Even
server disks generally seem to say something like that, but the figure
given is more like `168 hours a week', i.e., constant use.

It still stuns me that anyone would ever voluntarily buy drives that
can't be left switched on (which is perhaps why the manufacturers hide
the info in such an obscure place), and I don't know what might go
wrong if you use the disk `too much': overheating? But still it seems
that there are crappy disks out there with very silly limits on the
time they can safely be used for.

(But this *is* the RAID list: we know that disks suck, right?)

-- 
`On a scale of 1-10, X's "brokenness rating" is 1.1, but that's only
 because bringing Windows into the picture rescaled "brokenness" by a
 factor of 10.' --- Peter da Silva
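
P.S. For anyone curious about the arithmetic behind Mark's point: under
the usual constant-failure-rate approximation, the annualized failure
rate is roughly annual power-on hours divided by MTBF. A rough
back-of-the-envelope sketch (my own numbers, not from any vendor data
sheet):

    def afr(mtbf_hours, hours_per_day=24.0, days_per_week=7.0):
        """Approximate annualized failure rate for a given duty cycle,
        assuming a constant failure rate (exponential model)."""
        power_on_hours_per_year = hours_per_day * days_per_week * 52.18
        return power_on_hours_per_year / mtbf_hours

    # A 1M-hour-MTBF drive run 24x7 (168 hours/week) vs. on an 8/24 duty cycle:
    print(f"24x7: {afr(1_000_000):.2%}")                   # ~0.88% a year
    print(f"8/24: {afr(1_000_000, hours_per_day=8):.2%}")  # ~0.29% a year

So by that naive measure even constant use only costs a fraction of a
percent a year; the open question is whether the quoted MTBF means
anything at all outside the duty cycle the data sheet allows.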