On Wed, May 4, 2011 at 9:34 PM, David Boreham <david_list@xxxxxxxxxxx> wrote:
> On 5/4/2011 9:06 PM, Scott Marlowe wrote:
>>
>> Most of it is. But certain parts are fairly new, i.e. the
>> controllers. It is quite possible that all these various failing
>> drives share some long term ~ 1 year degradation issue like the 6Gb/s
>> SAS ports on the early sandybridge Intel CPUs. If that's the case
>> then the just plain up and dying thing makes some sense.
>
> That Intel SATA port circuit issue was an extraordinarily rare screwup.
>
> So ok, yeah...I said that chips don't just keel over and die mid-life
> and you came up with the one counterexample in the history of
> the industry :) When I worked in the business in the 80's and 90's
> we had a few things like this happen, but they're very rare and
> typically don't escape into the wild (as Intel's pretty much didn't).
> If a similar problem affected SSDs, they would have been recalled
> and lawsuits would be underway.

Not necessarily. If there's a chip that has a 15% failure rate instead
of the predicted <1%, it might not fail often enough for people to have
noticed, since a user with a typically small sample might think he just
got a bit unlucky.

Nvidia made GPUs that overheated and died by the thousand, but took 1 to
2 years to die. There WAS a lawsuit, and now, to settle it, they're
offering to buy everybody who got stuck with the broken GPUs a nice
single-core $279 Compaq computer, even if they bought a $4,000
workstation with one of those dodgy GPUs.

There are a lot of possibilities as to why some folks are seeing high
failure rates, and it would be nice to know the cause. But we can't
assume it's not an inherent problem with some part in them any more than
we can assume that it is.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
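
[Editor's note: a quick back-of-the-envelope sketch of the small-sample point above. The 15% and <1% failure rates come from the message; the assumption that a typical user runs about four drives is illustrative only, not from the thread.]

    # Rough binomial sketch: how often does a single user see zero failures?
    # Rates (1% and 15%) are from the message above; the 4-drives-per-user
    # figure is an assumed illustration, and failures are treated as independent.

    def p_zero_failures(failure_rate: float, n_drives: int) -> float:
        """Chance that none of a user's drives fail."""
        return (1.0 - failure_rate) ** n_drives

    for rate in (0.01, 0.15):
        print(f"failure rate {rate:.0%}: "
              f"{p_zero_failures(rate, 4):.0%} of 4-drive users see no failures")

Under these assumptions, about 96% of 4-drive users see no failures at a 1% rate, and still about 52% see none at a 15% rate; most of the rest see exactly one, which is easy to write off as bad luck. That is consistent with the point that a moderately bad part can sit in the field for a long time before anyone notices a pattern.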