RE: Samsung F1 RAID Class SATA/300 1TB drives

> -----Original Message-----
> From: linux-raid-owner@xxxxxxxxxxxxxxx [mailto:linux-raid-
> owner@xxxxxxxxxxxxxxx] On Behalf Of David Rees
> Sent: Monday, November 01, 2010 4:26 PM
> To: John Robinson
> Cc: Mark Knecht; Linux-RAID
> Subject: Re: Samsung F1 RAID Class SATA/300 1TB drives
> 
> On Thu, Oct 28, 2010 at 4:37 PM, John Robinson
> <john.robinson@xxxxxxxxxxxxxxxx> wrote:
> > On 29/10/2010 00:17, Mark Knecht wrote:
> >> I saw in Fry's San Jose ad today they were selling these
> >> Serial-ATA/300 drives for $67. They didn't give a model number but
> >> scouting around a bit on the web I'm guessing they are a discontinued
> >> model.
> >>
> >> Any inputs on whether these are drives that work well with mdadm RAID?
> >> Do they support TLER and otherwise work well?
> >>
> >> This would just be a home server of some type, nothing industrial.
> >> Probably a 3 drive RAID-1 or something like that.
> >
> > Well, they're perhaps not great. I bought three and after only about a
> > thousand hours one of them was giving SMART errors, then after about
> > 7,500 hours a second one started doing it too. At that point I replaced
> > both with other makes, copying over with ddrescue (or maybe it was
> > dd_rescue), which worked without any failed sectors, then ran
> > badblocks -w on the Samsungs and the SMART errors went away. The third
> > one of mine is still fine, and the other two are now in a ReadyNAS
> > giving good service.
> 
> I think that using drives of two different brands is generally a good idea.
> 
> We recently had two 500 GB WD5000AAKS drives die at the same time over a
> weekend.  Both of them suffered the same failure mode and would no
> longer spin up - just making a clicking/whirring sound when powered on.
> 
> Luckily we had backups for most of the data on there, but some
> non-critical (but time consuming to manually restore) data had to be
> reconstructed.
> 
> I understand that drives from the same batch will often die around the
> same time - from now on we plan on trying to use dissimilar drives
> when possible.
> 
> This is the first time we've seen two drives in one array die that close
> together and that catastrophically, though.

	I had 4 Seagate drives go bad all at once in a 10-drive array.
Fortunately, the drives did not die entirely.  Indeed, I'm still not sure
what is wrong with them.  They seem to read and write just fine, but if they
are added back to the array and the array is asked to access data at a
high seek rate (read or write), the drives get kicked from the array.
At relatively low seek rates the drives continue to work as array members,
and when they do get kicked, they can always be added back.  I was able
to use ddrescue to read the data 100% without any failures.  Had the array
been unrecoverable, I had backups, of course, but I did not have to resort
to them.  I'm not entirely sure when the drives all went bad, but it was
within a week or two of each other.
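	For anyone following along, the recovery approach described in this
thread roughly amounts to the commands below. This is a sketch only: the
device names /dev/sdX (failing member), /dev/sdY (replacement) and the
array /dev/md0 are placeholders, not anything from the original messages,
and badblocks -w destroys all data on the target drive.

```shell
# Hypothetical devices: /dev/sdX = failing member, /dev/sdY = replacement,
# /dev/md0 = the degraded array. Adjust before running anything.

# 1. Copy the failing drive onto its replacement; the mapfile lets
#    ddrescue resume and retry bad areas on later passes.
ddrescue -f /dev/sdX /dev/sdY rescue.map

# 2. Add the replacement drive to the degraded md array and let it resync.
mdadm /dev/md0 --add /dev/sdY

# 3. Destructive write-mode surface test of the pulled drive: wipes it
#    completely, which often clears pending-sector SMART errors as the
#    drive remaps or rewrites the affected sectors.
badblocks -wsv /dev/sdX

# 4. Check whether the SMART attribute counts have settled.
smartctl -A /dev/sdX
```

Whether a drive that passes badblocks afterwards should be trusted in a
primary array is a judgment call; the posters here redeployed theirs to a
secondary NAS rather than back into the original array.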

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

