Re: Raid6 recovery

On 3/21/20 6:12 PM, Glenn Greibesland wrote:

[trim /]

So to summarize what happened and what I've learned:
I had a RAID6 array with only 16 out of 18 working drives.
I received an email from mdadm saying another drive failed.
I ran a full offline SMART test that completed successfully.

The drive was in F (failed) state. I used --re-add, and mdadm overwrote
the superblock, turning it into a spare drive instead of putting the
drive back into slot 10.
I should have used --assemble --force.

Am I correct?

Yes.

However, there have been bugs in --force that would cause it not to assemble. Also, I believe the latest behavior of --re-add would not have damaged the metadata.
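
For the archives, a minimal sketch of the force-assemble approach. The array name /dev/md0 and the member list /dev/sd[b-s]1 are placeholders, not your actual devices; substitute the real names and check event counts before proceeding:

  # Inspect the superblock and event count on each member first
  mdadm --examine /dev/sdX1

  # Stop any partially assembled array
  mdadm --stop /dev/md0

  # Force-assemble from the existing superblocks, including the
  # recently failed member, instead of re-adding it as a spare
  mdadm --assemble --force /dev/md0 /dev/sd[b-s]1

Force assembly keeps the member's slot information from its superblock, whereas --re-add on a drive the array no longer recognizes can demote it to a spare, as happened here.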


Phil


