Re: Use of WD20EARS with MDADM

Berkey B Walker wrote:


Mikael Abrahamsson wrote:
On Wed, 21 Apr 2010, Bill Davidsen wrote:

I hear this said, but I don't have any data to back it up. Drive vendors aren't stupid, so if the parking feature were likely to cause premature failures under warranty, I would expect the feature not to be there, or the drive to be made more robust. Maybe I have too much faith in greed as a design goal, but I have to wonder whether load cycles are as destructive as people seem to assume.

What I think people are worried about is that a drive might be rated for X load/unload cycles in its data sheet (300k or 600k seem to be normal figures), and reaching that figure within 1-2 years of "normal" use (according to a user who is running it 24/7) might be worrying, and understandably so.

OTOH these drives seem to be designed for 8-hours-a-day desktop use, so running them as a 24/7 fileserver under Linux is not what they were designed for. I have no idea what will happen when the load/unload cycles go over the data sheet number, but my guess is that the number was put there for a reason.

I'd love to find some real data; anecdotal stories about older drives are not overly helpful. Clearly there is a trade-off between energy saving, response, and durability; I just don't have any data from a large population of new (green) drives.
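
For what it's worth, the counter itself is easy to watch, since it is exposed as SMART attribute 193 (Load_Cycle_Count); a minimal sketch, with the device name only as an example:

    # Print the SMART attribute table and pick out the load cycle counter.
    # /dev/sdb is just an example device; substitute your own drive.
    smartctl -A /dev/sdb | grep -i load_cycle

On WD Green drives the idle3 parking timer itself can reportedly be inspected and changed with WD's wdidle3 utility or the third-party idle3-tools, but that depends on the firmware, so treat it as something to verify rather than a given.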

My personal experience from the WD20EADS drives is that around 40% of them failed within the first year of operation. This is not from a large population of drives though and wasn't due to load/unload cycles. I had no problem getting them replaced under warranty, but I'm running RAID6 nowadays :P

Sorry, you sound like a factory droid. *I* see no reason for early failure besides cheap mat'ls in construction. Were these assertions of short life true, I would campaign against the drive maker. (I think they are just normalizing failure rate against warranty claims.) Buy good stuff. I *wish* I could define that term by mfg. It seems Seagate & WD don't hack it. The Japanese drives did, but since the $ dropped -

Let's see: first you put my name on something I was quoting (with attribution), deleted the correct name of the person you were quoting, and then called me a "factory droid." So I have some idea of your attention to detail. Second, short-term failure rates are influenced by the components delivered, the assembly, and the treatment in shipping. Assembly is controlled by the vendor, parts are influenced by the suppliers selected, and shipping treatment is usually determined by the retailer. A local clone maker found that delicate parts delivered on Wednesday had higher infant mortality than those delivered on other days. The regular driver had Wednesdays off; the sub thought "drop ship" was an unloading method, perhaps.

One thing seemingly missed is the relationship between storage density and drive temperature variations. Hard drive mfgs are going to be in deep doodoo when the SSD folks get price/perf into the lead lane. This year, I predict. And maybe another two years for long-term reliability to take the lead.

I think you're an optimist on cost equality. People are changing to green drives, which are generally slower due to spin-down or lower RPM, because the cost of power and cooling is important. It's not clear that current SSD tech will be around in five years, because new technologies are coming which are inherently far more stable under repeated writes. The spinning platter may be ending, but its replacement is not in sight. In ten years I doubt current SSD tech will still be in use; it will be replaced by phase change, optical isomers, electron spin, or something still in a lab. And the fate of user-visible large sectors (write chunks, whatever) is not clear either: if the next tech works just as well with smaller sectors, this may become a moot point.
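
Whether a drive actually exposes its larger physical sectors to the host is easy to check from the kernel's view of the device; a quick sketch, with the device name only an example:

    # Logical vs. physical sector size as seen by the kernel; an Advanced
    # Format drive that reports its geometry honestly will typically show
    # 512 and 4096 here, while one that misreports shows 512 for both.
    cat /sys/block/sdb/queue/logical_block_size
    cat /sys/block/sdb/queue/physical_block_size

That at least tells you what the partitioning tools have to work with when it comes to alignment.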

I believe that many [most?] RAID users are looking for results (long-term archival) that are not intended in the design. We are about two generations away from that being a reality - I think. For other users, I would suggest a mirror machine, with both machines being scrubbed daily and with media dissimilar in mfg and mfg date.
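
For the scrubbing part, a daily check of an md array can be driven from cron through the kernel's sync_action interface; a minimal sketch, assuming the array is /dev/md0:

    # Ask md to scrub /dev/md0 in the background; progress shows up in
    # /proc/mdstat.
    echo check > /sys/block/md0/md/sync_action

    # Once the check has finished, the number of inconsistencies found:
    cat /sys/block/md0/md/mismatch_cnt

Debian-style mdadm packages also ship a checkarray wrapper script that does much the same thing, if that is more convenient.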

It's not clear that date of manufacture is particularly critical, while date of deployment (in-use hours) probably is. But looking at the Google disk paper, the drives in a crate from the same batch don't all drop dead at once, or close to it, so age in service is a factor, but likely not a critical one.
I can't wait until Neil gets to (has to) play/work with the coming tech. Neat things are coming.

I would rather see some of the many things on the "someday list" get implemented. It's more fun to play with new stuff than polish off the uglies in the old, but the uglies are still there.

--
Bill Davidsen <davidsen@xxxxxxx>
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein

