Checking the sanity of SATA disks

Hello,

I have a home fileserver with 4 SATA disks in a RAID 5.  As I am
sure you are aware, SATA devices in Linux currently cannot be
queried for SMART info, so I can't do SMART health checks of these
devices.
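(For what it's worth, I gather that sufficiently new libata kernels together with a recent smartmontools can sometimes pass SMART through to SATA drives; the exact flag and version requirements here are an assumption on my part, so your mileage may vary:)

```shell
# Assumption: libata SMART passthrough, which I believe needs a
# reasonably recent 2.6 kernel and smartmontools; "-d ata" tells
# smartctl to treat the device as plain ATA rather than SCSI:
smartctl -d ata -a /dev/sda

# A quick health verdict only:
smartctl -d ata -H /dev/sda
```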

Also, Linux software RAID still tends to kick a device out of the
array as soon as it returns any error.

I really don't want to be in the situation where a drive dies, I fit
a new one, and during the resync another device is kicked out
because of spontaneously finding a bad sector.
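(One thing I'm considering, assuming the kernel is new enough to have md's background scrubbing via sysfs, is forcing a periodic check of the whole array so latent bad sectors get found while the array is still redundant. The sysfs paths below assume the array is md0:)

```shell
# Assumes /dev/md0 and a kernel with md scrubbing support
# (2.6.16 or thereabouts, if I remember right):
echo check > /sys/block/md0/md/sync_action

# Watch the scrub progress:
cat /proc/mdstat

# Afterwards, the mismatch count (0 is what you want to see):
cat /sys/block/md0/md/mismatch_cnt
```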

I tried simply doing a

        dd if=/dev/sd[abcd] of=/dev/null

to check each disk in a very unsubtle fashion, but it drives the
load average on the machine way up (to 20+) and makes it very
unresponsive (several minutes for a keypress to be acknowledged),
even when I run it under nice -n 19.
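(I've been meaning to try a gentler variant; this sketch assumes an ionice binary and a kernel/coreutils new enough for I/O scheduling classes and O_DIRECT reads, which may not hold on this box. nice only affects CPU priority, which would explain why it didn't help here:)

```shell
# Idle I/O class (-c 3) so the read only runs when the disk is
# otherwise quiet, and direct I/O so the page cache isn't flooded.
# Both flags are assumptions about this system's kernel/tools:
ionice -c 3 dd if=/dev/sda of=/dev/null bs=1M iflag=direct
```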

I don't notice any performance problems on this server during normal
day-to-day use, and while it's not particularly beefy it is a
1.8GHz AMD Sempron, so I am surprised that simply reading from one
disk causes these performance issues.

I know this isn't right, so does anyone have any advice on tracking
down which part of the system is at fault, possibly off-list if
it's too off-topic?

Thanks,
Andy


