Re: Timeout until degrade of RAID5 array


 



On Mon, June 8, 2009 5:19 am, Gabriel Ambuehl wrote:
> Dear List,
> Apologies if this is a FAQ, but googling for an hour did not yield much
> of a result.
>
> My situation:
> I have a NAS running 4 WD GreenPower drives in RAID5. This works all
> nice and dandy, but since I do not actually need the NAS 90% of the
> time, I figured I would allow the drives to spin down to save on power,
> noise, and heat.
>
> This works most of the time, but I've seen a few cases by now where one
> drive was kicked out of the array upon wake-up. My assumption is that
> the drive took a little too long to spin up and hence hit some form of
> timeout, especially because I could easily rebuild the array with the
> same drive afterwards.
>
> So is there any way to increase the time that md waits before removing
> a drive from an array?

No - this is not a function of md at all.

You should be asking "is there some way to get the SCSI/SATA/whatever
controller to wait a bit longer before failing an IO request?"
And that question should, of course, be directed to the developers
of the driver for whatever sort of disk controller you have.
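As a practical starting point, on Linux the SCSI layer's per-device I/O
timeout is exposed via sysfs, and raising it gives a sleeping drive more
time to spin up before the request is failed and md ejects the drive.
This is a sketch, not a guaranteed fix: the device name sdb and the
value 120 are assumptions, adjust them for your system, and note the
setting does not persist across reboots (a udev rule or boot script is
needed for that).

```shell
# Show the current command timeout in seconds (typically 30 by default).
cat /sys/block/sdb/device/timeout

# Raise it to 120 seconds to cover slow spin-up; requires root.
echo 120 > /sys/block/sdb/device/timeout
```

Repeat for each member drive of the array (sda, sdb, sdc, sdd, or
whatever your devices are named).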

Good luck.

NeilBrown


--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
