Re: raid6 array assembled from 11 drives and 2 spares - not enough to start the array

On 09/07/2013 07:49 PM, Garðar Arnarsson wrote:
> I seem to have been able to resolve this.
> 
> I tried force assembling the array with all the drives except for sda1
> (the problematic device before) that way the array got assembled with
> 12 drives and one spare, enough so I could recover the array.
> 
> I'd still like to know what might have caused these problems in the
> first place, but I'm glad it seems to be working ok for now.

In my years of helping people on this list, the single most common cause
of spurious dropouts is mismatched error recovery timeouts, caused by
the use of desktop hard drives in raid arrays.  You should search the
list archives for various combinations of "scterc", "device/timeout",
"tler", and "ure".

Then report the following on the list (inline, not attached):

for x in /sys/block/*/device/timeout ; do echo $x $(< $x) ; done

smartctl -x /dev/sd[a-q]
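If the output confirms mismatched timeouts, the usual workaround
discussed in those archive threads looks roughly like the following.
This is a sketch, not a drop-in script -- the device names are
examples, and whether SCT ERC can be set at all depends on the drive
firmware:

```shell
# Drives whose firmware supports SCT ERC: cap internal error recovery
# at 7.0 seconds (values are in tenths of a second), so the drive
# reports a bad sector before the kernel's command timeout expires
# and md can rewrite it from parity instead of kicking the drive.
for d in /dev/sd[a-q] ; do smartctl -l scterc,70,70 $d ; done

# Desktop drives that reject the SCT ERC command: raise the kernel's
# SCSI command timeout instead, so it outlasts the drive's internal
# recovery attempts (which can run for a minute or more).
for t in /sys/block/sd*/device/timeout ; do echo 180 > $t ; done
```

Note that neither setting survives a reboot, so they are typically
rerun from a boot script or udev rule.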


HTH,

Phil

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



