Re: 3TB drives failure rate

On 10/29/2012 03:54 AM, David Brown wrote:
> On 29/10/2012 05:29, Roman Mamedov wrote:
>> On Sun, 28 Oct 2012 20:09:06 -0400
>> Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx> wrote:
>>
>>> Two separate issues.
>>
>>> The comments about "dropping out of raid" had to do with drives that are
>>> slow to come out of sleep mode - causing hiccups when the RAID
>>> hardware/software simply doesn't see the drive, and drops it.
>>
>> There is no drive in good working order that comes out of sleep mode so
>> slowly that the Linux kernel ATA subsystem would give up trying and
>> return an I/O error (and it's only after that point that this becomes
>> mdraid's concern at all).
>>
>> I have yet to see even a first sign of "SATA frozen" caused by drive
>> sleep mode, let alone one that persists through all the port resets and
>> speed step-downs the SATA driver will attempt.
>>
>> So the "sleep" issue is not relevant to Linux software RAID, and if
>> you're still concerned that it might be, you can simply reconfigure your
>> drives so they don't enter that sleep mode.
>>
> 
> The same applies to the long retry times of "desktop" drives - Linux
> software raid has no problem with them.  Some (perhaps "many" or "all" -
> I don't have the experience with hardware raid cards to say) hardware
> raid cards see long read retries as a timeout on the disk, and will drop
> the whole disk from the array.

Not true.  The default Linux driver timeout is 30 seconds.  Drives that
spend longer than that in error recovery will be reset.  If they don't
respond to the reset (because they're still busy in recovery) when the
raid layer tries to write the correct data back to them, they will be
kicked out of the array.

Been there, done that, have the tee shirt.
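
For the curious, that timeout is a per-device sysfs knob (the device
name here is just an example):

  # current driver command timeout for the device, in seconds (default 30)
  cat /sys/block/sda/device/timeout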

> Linux md raid will wait for the data to come in, and use it if it is
> valid.  If the disk returns an error, the md layer will re-create the
> data from the other disks, then re-write the bad block.  The disk will
> then re-locate the bad block to one of its spare blocks, and everything
> should be fine.  (If the write also fails, the drive gets kicked out.)

Precisely, except for the wait.  It won't wait that long unless you
change the default driver timeout.  The Seagate drives that did this to
me were kicked out of the array because they were still stuck in
recovery when the write was commanded.

The drives were (and still are) just fine.  They had UREs (unrecoverable
read errors) that needed to be rewritten.  When I later wiped the
drives, they remained relocation-free.  They are now on solo duty as
off-site backups.
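
If you want to confirm a drive came through such an episode without
remapping anything, the reallocated sector count is visible in SMART
(device name again just an example):

  # attribute 5 (Reallocated_Sector_Ct) -- raw value should still read 0
  smartctl -A /dev/sda | grep -i reallocated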

> So with software raid, there are no problems using desktop drives of any
> sort in your array (assuming, of course, you don't have physical issues
> such as heat generation, vibration, support contracts, etc., that might
> otherwise make you prefer "raid" or "enterprise" drives).

Simply not true, in my experience.  You *must* set ERC (error recovery
control) shorter than the driver timeout, or set the driver timeout
longer than the drive's worst-case recovery time.  The defaults for
desktop drives are *not* suitable for Linux software raid.
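
Roughly what I mean, as a sketch (the device name and the exact numbers
are examples, not universal recommendations):

  # if the drive supports SCT ERC, cap its recovery time at 7.0 seconds
  # (times are given in tenths of a second)
  smartctl -l scterc,70,70 /dev/sda

  # otherwise, raise the kernel's command timeout well past the drive's
  # worst-case recovery time (value in seconds)
  echo 180 > /sys/block/sda/device/timeout

Neither setting survives a reboot (nor, on many drives, a power cycle
for ERC), so it needs to be reapplied at boot.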

Phil