Re: Use of WD20EARS with MDADM

Phillip Susi wrote:
> On 4/21/2010 9:20 AM, Bill Davidsen wrote:
>> I hear this said, but I don't have any data to back it up. Drive vendors
>> aren't stupid, so if the parking feature were likely to cause premature
>> failures under warranty, I would expect that the feature would not be
>> there, or that the drive would be made more robust. Maybe I have too
>> much faith in greed as a design goal, but I have to wonder if load
>> cycles are as destructive as is commonly assumed.

> Indeed, I think you have too much faith in people doing sensible things,
> especially when the average customer isn't putting the drive in a
> high-use environment, and the vendors know it and advise against doing so.

>> I'd love to find some real data; anecdotal stories about older drives
>> are not overly helpful. Clearly there is a trade-off between energy
>> saving, response, and durability; I just don't have any data from a
>> large population of new (green) drives.

> I've not seen any anecdotal stories, but I have seen plenty of reports
> with real data showing a large number of head unloads in the SMART
> data after a relatively short period of use.  Personally mine has a few
> hundred so far and I have not even used it for real storage yet, only
> testing.  The specifications say it's good for 300,000 cycles, so do the
> math... at 5 unloads per minute you hit probable failure after about
> 41 days.  Granted that is about the worst case, but it is still something
> to watch out for.  To make it through the entire 3-year warranty period,
> you need to stay under about 11.4 unloads per hour (300,000 cycles /
> 26,280 warranty hours).  If you have very little IO activity, or VERY
> MUCH, that is entirely possible, but more moderate loads in the middle
> have been observed to cause hundreds of unloads per hour.
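
If anyone wants to see how fast their own drive is chewing through that
budget, something like the sketch below will sample Load_Cycle_Count
twice and project when the 300,000-cycle rating runs out. It assumes
smartmontools is installed and that you run it as root; /dev/sda is just
a placeholder for the green drive.

#!/usr/bin/env python3
# Rough sketch: read SMART attribute 193 (Load_Cycle_Count) twice,
# estimate the unload rate, and project time to the 300,000-cycle rating.
# Assumes smartmontools is installed; needs root; /dev/sda is a placeholder.
import re
import subprocess
import time

RATED_CYCLES = 300000
DEVICE = "/dev/sda"        # placeholder -- point this at the green drive
INTERVAL = 3600            # sample window in seconds (one hour)

def load_cycle_count(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # attribute 193 is Load_Cycle_Count on WD drives; raw value is last column
        if re.match(r"\s*193\s+Load_Cycle_Count", line):
            return int(line.split()[-1])
    raise RuntimeError("no Load_Cycle_Count attribute on " + device)

first = load_cycle_count(DEVICE)
time.sleep(INTERVAL)
second = load_cycle_count(DEVICE)

per_hour = (second - first) * 3600.0 / INTERVAL
print("unloads/hour: %.1f (warranty budget is ~11.4/hour)" % per_hour)
if per_hour > 0:
    days_left = (RATED_CYCLES - second) / per_hour / 24
    print("projected days to %d cycles: %.0f" % (RATED_CYCLES, days_left))

Anything persistently above the ~11.4/hour budget means the drive eats
its rating before the warranty is up.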

> Given that, and the fact that WD themselves have stated that you should
> not use these drives in a RAID array, I'd either stay away, or watch out
> for this problem and take steps to monitor and mitigate it.
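
As far as mitigation goes: besides WD's wdidle3 utility, which can
reportedly lengthen or disable the park timer on these drives, a crude
workaround is to never let the drive sit idle long enough to park (the
timer is reportedly only about 8 seconds). A sketch of that idea, with
/mnt/green/.keepalive as a placeholder path on the affected drive:

#!/usr/bin/env python3
# Crude keep-alive sketch: write and fsync a small file every few seconds
# so the drive's idle timer never fires.  Trades a little power and write
# activity for not chewing through load cycles.
import os
import time

PATH = "/mnt/green/.keepalive"   # placeholder -- any file on the drive
PERIOD = 5                       # seconds; must beat the park timeout

while True:
    with open(PATH, "w") as f:
        f.write(str(time.time()))
        f.flush()
        os.fsync(f.fileno())     # force the write out, not just into cache
    time.sleep(PERIOD)

Ugly, but it keeps the counter from climbing while you decide what to do
with the drive.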

Part of this is my feeling that no one really knows if the drive fails after N loads; even if WD could turn the unload time down for testing, each cycle still takes real time to happen, so I would bet they are making an educated guess. The other part is that there are lots of clerical tasks which would hit the drive, under Windows with a single drive, 3-5 times a minute. Data entry comes to mind, customer support, print servers, etc. Granted these are probably 7-hour days, 5 days a week, but I'm thinking 2/min, 7 hr/day, 200 days/yr... 168k cycles/yr, and that's not the worst case.
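
A quick sanity check of that arithmetic against the 300,000-cycle rating:

#!/usr/bin/env python3
# Back-of-the-envelope: cycles per year for the clerical workload above,
# versus the 300,000-cycle rating and the 3-year warranty.
RATED = 300000

def cycles_per_year(per_minute, hours_per_day, days_per_year):
    return per_minute * 60 * hours_per_day * days_per_year

office = cycles_per_year(2, 7, 200)
print("office load: %d cycles/year" % office)                       # 168000
print("rating exhausted in %.1f years" % (RATED / float(office)))  # ~1.8

So even that mild office load burns through the rating in under two
years, well inside the warranty.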

Having run lots of drives (some TB worth of 73GB 15k RPM LVD320 units), I find MTTF interesting, because the failure curve has spikes at the front from infant mortality and at the end from old age, but it was damn quiet in the middle. I'd love to see the data on these drives, not because I'm going to run them, but just to keep current, so when someone calls me and says they got a great deal on green drives, I'll know what to tell them.

--
Bill Davidsen <davidsen@xxxxxxx>
 "We can't solve today's problems by using the same thinking we
  used in creating them." - Einstein

