Re: Inactive arrays

On 14/09/16 19:16, Daniel Sanabria wrote:
> Other than replacing the green drives with something more
> suitable (any suggestions are welcome)

WD Reds or Seagate NAS drives. Seagate Constellations are fine too,
though I don't think they make them any more. My Toshiba 2TB 2.5"
laptop drive would also be fine.

The tl;dr version of the problem with Greens (and any other desktop
drive, for that matter), if you haven't read up on it yet, is that
when the kernel requests a read from a dodgy sector, the drive just
sits there, *unresponsive*, until the read succeeds or the drive's
own error recovery times out. And the drive will time out in its own
good time.

If the kernel times out *before* the drive (by default the kernel
gives up after 30 secs, while a desktop drive's error recovery can
take two minutes or more), then md will reconstruct the missing block
from the other drives and try to write it back. The drive is still
unresponsive, the write times out as well, and the kernel assumes the
drive is dead and kicks it from the array.
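
You can see which camp a drive falls into with smartctl (sdX below is
just a placeholder for whatever device you're checking):

    # Query the drive's SCT Error Recovery Control settings.
    # NAS/enterprise drives report a recovery timeout (typically
    # 7.0 seconds); desktop drives like the Greens usually report
    # that SCT ERC is unsupported, which is exactly the problem.
    smartctl -l scterc /dev/sdX

    # Drives that do support it can be told to give up after
    # 7 seconds (the values are in tenths of a second):
    smartctl -l scterc,70,70 /dev/sdX

Bear in mind the setting doesn't survive a power cycle on most
drives, so it wants re-applying at every boot.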

That's why you need to increase the kernel timeout: you can't reduce
the drive's timeout, because desktop drives don't let you configure
their error recovery. It's also why a flaky hard drive makes system
response fall off horrendously, since every failed read can stall for
minutes.
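
The workaround for drives without working SCT ERC is a one-liner per
drive (again, sdX is a placeholder; 180 seconds is the usual figure,
chosen so the kernel comfortably outlasts the drive's own recovery):

    # The kernel's per-device command timeout, 30 seconds by default:
    cat /sys/block/sdX/device/timeout

    # Raise it so the drive's error recovery always finishes first.
    # Needs root, and needs redoing after every boot, e.g. from a
    # udev rule or an init script:
    echo 180 > /sys/block/sdX/device/timeout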

Cheers,
Wol


