Re: photo storage question

----- Original Message ----- 
From: "David Dyer-Bennet" 

: Probably.  There's a tradeoff, spinning wears the disk (rather slowly;
: look at the estimated lifespan for modern disks!), but starting the disk
: spinning is a lot of extra wear.  So it depends how many starts vs. how
: many hours of spinning.  And the exact numbers for any given drive aren't
: really known and aren't available even as estimates.
: 
: Powering a system down (well, it's more the powering it *up* step) has an
: even bigger impact.  Again, sitting vs. starting tradeoff, with the
: numbers not known.  20 years ago it was pretty clearly better to leave a
: system running for 24 hours rather than subject it to one extra power
: cycle.  I'd expect that period of time to have been reduced since then,
: but haven't seen recent estimates.
: 
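The starts-vs-hours tradeoff described above boils down to a break-even calculation: spin down only when the idle period's spinning wear would exceed the wear cost of one extra start. The wear figure below is a made-up placeholder, since, as the quoted post notes, real per-drive numbers aren't published:

```python
# Back-of-envelope spin-down break-even sketch.
# ASSUMPTION: one spin-up "costs" the equivalent of this many hours of
# spinning wear. The real value varies by drive and isn't published.
START_WEAR_HOURS = 8.0

def worth_spinning_down(idle_hours, start_wear_hours=START_WEAR_HOURS):
    """Return True if an idle period is long enough that spinning down
    saves more wear than the extra start-up costs."""
    return idle_hours > start_wear_hours

print(worth_spinning_down(2.0))   # short idle: keep the disk spinning
print(worth_spinning_down(12.0))  # long idle: spinning down wins
```

With a different assumed start cost the break-even point shifts, which is exactly why the answer "depends how many starts vs. how many hours of spinning."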

I didn't quote the whole article when I posted this earlier, but I'll cite one line again:
"other notable patterns showed that failure rates are indeed definitely correlated to drive manufacturer, model, and age; failure rates did not correspond to drive usage except in very young and old drives"


It's a recent study by one of the biggest single users of drives.


Massive Google hard drive survey turns up very interesting things
http://tinyurl.com/6h257b
posted Feb 18th 2007 at 9:47PM

"When your server farm is in the hundreds of thousands and you're using cheap, off-the-shelf hard drives as your primary means of storage, you've probably got a pretty damned good data set for looking at the health and failure patterns of hard drives. Google studied a hundred thousand SATA and PATA drives with between 80 and 400GB storage and 5400 to 7200rpm, and while unfortunately they didn't call out specific brands or models that had high failure rates, they did find a few interesting patterns in failing hard drives. One of those we thought was most intriguing was that drives often needed replacement for issues that SMART drive status polling didn't or couldn't determine, and 56% of failed drives did not raise any significant SMART flags (and that's interesting, of course, because SMART exists solely to survey hard drive health); other notable patterns showed that failure rates are indeed definitely correlated to drive manufacturer, model, and age; failure rates did not correspond to drive usage except in very young and old drives (i.e. heavy data "grinding" is not a significant factor in failure); and there is less correlation between drive temperature and failure rates than might have been expected, and drives that are cooled excessively actually fail more often than those running a little hot. Normally we'd recommend you go on ahead and read the document, but be ready for a seriously academic and scientific analysis."


