Timeout until degrade of RAID5 array

Dear List
Apologies if this is a FAQ but googling for an hour did not yield much of a 
result. 

My situation:
I have a NAS running 4 WD GreenPower drives in RAID5. This works all nice and 
dandy, but since I do not actually need the NAS 90% of the time, I figured I 
would allow the drives to spin down to save on power, noise, and heat.

This works most of the time, but I've seen a few cases by now where one drive 
was kicked out of the array on wake-up. My assumption is that the drive took a 
little too long to spin up and hence hit some form of timeout, especially 
since I could easily rebuild the array with the same drive afterwards.

So: is there any way to increase the time that md will wait before removing a 
drive from an array?
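For context, my understanding is that the relevant timeout usually sits in the kernel's SCSI layer rather than in md itself, so these are the knobs I have been looking at (device names below are just examples, and I am not sure the GreenPower drives support all of this):

```shell
# Check whether the drive supports SCT Error Recovery Control
# (reportedly many WD Green drives do not; values are deciseconds).
smartctl -l scterc /dev/sda

# If supported, cap in-drive error recovery at 7 seconds so the
# drive answers before the kernel command timeout expires.
smartctl -l scterc,70,70 /dev/sda

# Otherwise, raise the kernel's per-device command timeout (seconds)
# so a slow spin-up is tolerated instead of failing the command
# and getting the drive kicked from the array.
echo 180 > /sys/block/sda/device/timeout
```

Is this the right direction, or is there an md-level setting I am missing?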

Thanks for your time and kind regards,
Gabriel
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
