Re: Raid5 2 drive failure (and my spare failed too)

This is approximately the response I expected; however, I do want to pose a few additional queries:

If I read the output correctly, /dev/sdb is the most recent drive to fail, and it appears to be only slightly out of sync with the four drives that are still functioning. What exactly keeps the array from being forced back online?
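For context, what I had in mind (purely a sketch based on my reading of the mdadm man page; the member names /dev/sdb1 through /dev/sdg1 are assumptions, the actual partitions on my system may differ) was to compare event counts and then try a forced assembly:

  # check how far each member's event count has drifted
  mdadm --examine /dev/sd[bcdefg]1 | grep -E 'Events|/dev/sd'

  # stop the partially assembled array, then try to force-assemble it
  mdadm --stop /dev/md127
  mdadm --assemble --force /dev/md127 /dev/sd[bcdefg]1

I haven't run the forced assembly yet; I'd appreciate confirmation that this is the right approach, or a warning if it isn't.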

If, as I suspect, /dev/sdb was the last drive to fail... I have looked at it via smartctl and the drive still appears to be functional, so wouldn't recreating the array be an option? I think this is the area where I suspected I might need guidance.
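To frame that question, the recreate approach I was contemplating (again only a sketch of what I understand from the wiki; the device order, chunk size, metadata version and which slot is "missing" are all assumptions I would need to confirm from mdadm --examine before ever running it) looks like:

  # recreate the array with the same geometry, without triggering a resync
  # (only safe if device order, chunk size and metadata version exactly
  #  match the original array)
  mdadm --create /dev/md127 --assume-clean --level=5 --raid-devices=6 \
        --chunk=512 --metadata=1.2 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 missing /dev/sdg1

I understand this is destructive if any of those parameters are wrong, which is exactly why I'm asking before touching anything.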

On 8/4/19 4:03 PM, Reindl Harald wrote:

On 04.08.19 at 20:49, Ryan Heath wrote:
I have a 6 drive raid5 with two failed drives (and unbeknownst to me my
spare died awhile back). I noticed the first failed drive a short time
ago and got a drive to replace it (and a new spare too) but before I
could replace it a second drive failed. I was hoping to force the array
back online since the recently failed drive appears to be only slightly
out of sync but get:

mdadm: /dev/md127 assembled from 4 drives - not enough to start the array.

I put some important data on this array so I'm really hoping someone can
provide guidance to force this array online, or otherwise get this array
back to a state allowing me to rebuild.
with two drives gone there is not enough data for a rebuild on a RAID5

you have now learned the value of backups the hard way, as well as the
value of watching your logs, in the context of "and unbeknownst to me my
spare died awhile back"



