A few questions about a RAID array with 3 faulty spares

I just had 3 drives in a RAID6 array get marked as failed, all at the
same time. It seems unlikely that all three drives actually died
simultaneously, so my suspicion is that one of my SATA controllers
died, or something along those lines.
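
For reference, this is roughly how I've been checking the current
state of the array (/dev/md0 is just a placeholder for my array):

    cat /proc/mdstat
    mdadm --detail /dev/md0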

I am thinking about moving the disks to other SATA ports and then
trying to get the array running again. My question is which method I
should use: a forced assemble? A re-add? Something else?
Would a forced assemble be enough to clear the faulty state on the
disks? And if I try to re-add them, would that wipe the data on the
drives? For obvious reasons, I'd rather not have my drives wiped in
the process.
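
In case it helps, this is the kind of sequence I had in mind, based
on what I've read so far; the device names are just examples for my
setup, so please correct me if any step is wrong:

    # Stop the broken array before trying to reassemble it
    mdadm --stop /dev/md0

    # Inspect the superblock and event count on each member disk
    mdadm --examine /dev/sd[b-f]1

    # Force-assemble the array from its existing members
    mdadm --assemble --force /dev/md0 /dev/sd[b-f]1

My understanding is that --assemble --force only updates the metadata
(event counts / faulty flags) rather than rewriting array data, but
I'd like confirmation before I run it.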

And if anyone knows of a way I can further investigate why the drives
suddenly got marked as failed, I'd appreciate the help.
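
So far the only things I've thought of are the kernel log and SMART
data, along these lines (again, device names are just examples):

    # Kernel messages from around the time the drives dropped out
    dmesg | grep -i -E 'ata|sd[a-z]|md'

    # SMART health and error log for each suspect drive
    smartctl -a /dev/sdb

If there's a better way to tell a controller failure apart from
genuine drive failures, I'd love to hear it.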

