Re: 4 out of 16 drives show up as 'removed'


 



On Fri, 2011-12-09 at 14:07 -0800, Eli Morris wrote:

> My further understanding is that one can control the timeout in the OS of drives that are in an expansion bay, as they are now configured in our system. But, look, I'll admit that I'm no expert in this issue and someone might have a better suggestion or will tell me why that is not the right idea / a bad idea, whatever. And if using these drives is just impossible (which very well might be - YES, I'm getting very sick of trying to find a way to make these work), then so be it.
> 

I'm setting up my first RAID server for home use, so reducing power
when the array is idle is important. That means my drives and the host
link power management need to be able to drop into a low-power mode.
Here is what I have gleaned from the net!


First, issue one of the following (taken and edited from
http://www.excaliburtech.net/archives/83 ); the first form works on
Red Hat 4:

for a in /sys/class/scsi_device/*/device/timeout; do echo -n "$a "; cat "$a"; done

or

for a in /sys/class/scsi_generic/*/device/timeout; do echo -n "$a "; cat "$a"; done

You should see results similar to:

/sys/class/scsi_device/0:0:0:0/device/timeout 30
/sys/class/scsi_device/2:0:0:0/device/timeout 30
/sys/class/scsi_device/4:0:0:0/device/timeout 30


If, as root, you run the following for each entry shown above:

echo 120 > /sys/class/...   (use the full path names displayed by the
previous command here)

and then re-run the first command, they should all now show 120. This
is more than enough time for the disks to spin up, do some work, maybe
a bit of alignment correction, and then reply to the md stack.
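
If you want to apply the new value to every device in one go, a loop
like this should do it (a minimal sketch; run as root, and note it
only affects devices present right now):

for a in /sys/class/scsi_device/*/device/timeout; do
    echo 120 > "$a"    # raise the SCSI command timeout to 120 seconds
done

The sysfs setting does not survive a reboot. One way to make it
persistent (an untested sketch, assuming a udev-based distro; the
file name 60-scsi-timeout.rules is arbitrary) is a rule in
/etc/udev/rules.d/ along these lines:

# type 0 = direct-access device (a disk); set its command timeout to 120s
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="120"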

From what I have read (although documentation on the net is rarely up
to date) the md stack will wait forever on an action until either an
error is returned or the data is. There is no "time out" within the md
stack as there is with a hardware raid controller.
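
So the effective timeout comes from the drive and the SCSI layer, not
from md. A related per-drive knob worth checking (assuming your drives
support it, which many desktop drives do not) is SCT Error Recovery
Control, which caps how long a drive retries a bad sector before
reporting the error upward:

smartctl -l scterc /dev/sda          # show current read/write ERC limits
smartctl -l scterc,70,70 /dev/sda    # cap both at 7 seconds (units of 100 ms)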

Mind you, I don't know anything about SAS cables or controllers, so
it's possible they may have hardware/software timeouts in their own
right.

It might also be worth investigating (note this is for SATA, so it may
not be applicable):

for a in /sys/class/scsi_host/*/link_power_management_policy; do echo -n
"$a "; cat "$a"; done

which should show:

/sys/class/scsi_host/host0/link_power_management_policy max_performance
/sys/class/scsi_host/host1/link_power_management_policy max_performance
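
To let the links power down when idle, you can write a different
policy back (a sketch; min_power is the most aggressive setting, and
some controllers or kernels may not honour it):

for a in /sys/class/scsi_host/*/link_power_management_policy; do
    echo min_power > "$a"    # allow the SATA link to enter low-power states
done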


I guess another place to look is at the sdparm/hdparm data for the
disks to see what options are set regarding spin-down etc.
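
For example (a sketch; /dev/sda is a placeholder, and the exact values
depend on what your drives support):

hdparm -S 242 /dev/sda     # set the standby (spin-down) timeout to 1 hour
hdparm -B 127 /dev/sda     # APM level 127: permit power saving incl. spin-down
sdparm --page=po /dev/sda  # show the SCSI Power Condition mode page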



--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

