Re: "cannot start dirty degraded array"

Kyler Laird wrote:
> I'm in a bind.  I have three RAID6s on a Sun X4540.  A bunch of disks
> threw errors all of a sudden.  Two arrays came back (degraded) on reboot
> but the third is having problems.

Just a thought: when multiple units throw errors at the same time, I suspect a power issue. And if these are real SCSI drives, a single drive can fail in a way that glitches the SCSI bus and makes the controller think that several drives doing concurrent seeks have all failed. When I was running ISP servers I saw this often enough that I kept a script to force the controller to mark the drives good again and then test them one at a time.
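Something along these lines (a rough sketch, not the actual script I used; md2 and the device names are placeholders for whatever your /proc/mdstat and dmesg show):

#!/bin/sh
# Sketch: force-assemble a dirty degraded md array, then test each
# member disk one at a time.  Substitute your own array and devices.

ARRAY=/dev/md2
DISKS="/dev/sdq /dev/sdr /dev/sds"

# Stop any half-assembled array, then force assembly despite the
# dirty/degraded state (mdadm will patch up the event counts).
mdadm --stop $ARRAY
mdadm --assemble --force $ARRAY $DISKS

# Exercise the suspect drives one at a time, so a bus glitch from one
# drive can't take out its neighbors mid-test.
for d in $DISKS; do
    smartctl -t short $d        # kick off a short self-test
    sleep 120                   # short tests usually finish within ~2 min
    smartctl -H $d              # report overall health afterward
done

If assembly still refuses with "cannot start dirty degraded array", booting with md-mod.start_dirty_degraded=1 on the kernel command line tells md to start the array anyway, at your own risk.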

--
Bill Davidsen <davidsen@xxxxxxx>
 Obscure bug of 2004: BASH BUFFER OVERFLOW - if bash is being run by a
normal user and is setuid root, with the "vi" line edit mode selected,
and the character set is "big5," an off-by-one error occurs during
wildcard (glob) expansion.

