Re: Corrupted files

On Wed, Sep 10, 2014 at 10:31 AM, Emmanuel Florac
<eflorac@xxxxxxxxxxxxxx> wrote:
> On Tue, 09 Sep 2014 20:43:08 -0500,
> Leslie Rhorer <lrhorer@xxxxxxxxxxxx> wrote:
>
>>       None of the failed drives were WD green.  All three and the
>> previous four were Seagate.  I realize that is not a large
>> statistical sample.
>>
>
> If you're interested in large statistical samples, on a grand total of
> 4000 1 TB Seagate Barracuda ES2, I had to replace 2100 of them over the
> course of 3 years. I still have a couple of hundred of these
> unfortunate pieces of crap in service, and they still represent the
> vast majority of unexpected RAID malfunctions, urgent replacements,
> late night calls and other "interesting side activities".
>
> I wouldn't buy anything labeled Seagate nowadays. Their drives have
> been the baddest train wreck since the dreaded 9 GB Micropolis back in
> 1994 (or was it 1995?).

I buy about 100 drives a year, but I don't work them very hard.  It's
mostly just a lot of data to store, and I need to keep my data sets
segregated for legal reasons.  I don't use RAID, just lots of
individual disks, with most data maintained redundantly.
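
(Purely illustrative: the mount points and directory layout in this
sketch are assumptions, not the setup described above.)  A simple
Python pass that walks the primary copy of a data set and checks that
the copy on a second disk has matching SHA-256 digests could look like
this:

#!/usr/bin/env python3
# Hypothetical sketch: verify that a redundant copy on a second,
# non-RAID disk still matches the primary copy, file by file.
import hashlib, os, sys

PRIMARY = "/mnt/disk01/dataset_a"   # assumed mount points, one data
MIRROR  = "/mnt/disk07/dataset_a"   # set per disk -- placeholders

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

mismatches = 0
for root, _, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(MIRROR, os.path.relpath(src, PRIMARY))
        if not os.path.exists(dst) or sha256(src) != sha256(dst):
            print("mismatch or missing copy:", src)
            mismatches += 1
sys.exit(1 if mismatches else 0)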

About 4 years ago (or maybe 5), Seagate had a catastrophic drive
situation.  I can remember buying a batch of 10 drives and having 8 of
them fail in the first 2 months.  The bad part was that they mostly
survived a 10-hour burn-in, so they tended to fail with real data on
them.  I had at least one case that summer where I put the data on 3
different Seagate drives and all 3 failed.
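
(Purely illustrative: the mount point, block size, and data volume
below are assumptions, not the actual burn-in procedure used.)  A
minimal write/verify burn-in pass in Python, which writes pseudo-random
data to a scratch file on the drive, forces it to disk, and reads it
back to compare digests, could look like this:

#!/usr/bin/env python3
# Hypothetical sketch of one write/verify burn-in pass.  A real burn-in
# would loop this (and/or use a tool like badblocks) for many hours.
import hashlib, os

MOUNT  = "/mnt/burnin"        # assumed mount point of the drive under test
BLOCK  = 64 * 1024 * 1024     # 64 MiB per block
BLOCKS = 16                   # 1 GiB per pass here; scale up for real use

def one_pass(path):
    w = hashlib.sha256()
    with open(path, "wb") as f:
        for _ in range(BLOCKS):
            chunk = os.urandom(BLOCK)
            w.update(chunk)
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits the disk
    r = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(BLOCK), b""):
            r.update(chunk)
    return w.hexdigest() == r.hexdigest()

target = os.path.join(MOUNT, "burnin.dat")
print("pass ok" if one_pass(target) else "MISMATCH - drive suspect")
os.remove(target)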

Fortunately, I was able to swap the disk controller card from one of
the working drives onto one of the dead drives and recover the data.

Regardless, setting that summer of discontent aside, Seagate remains
my preferred brand of drives.

FYI: in June I bought 30 or so WD Elements drives to try them out.
These are not the Green drives, just bare-bones WD drives.  None of
them were DOA, but 3 failed within 4 weeks, so roughly a 10% failure
rate in the first month.  Only one of them had unique data on it, so I
had to recreate that data.  Fortunately, the source of the data was
still available.  All of those drives have been pulled out of routine
service.

Greg





