Re: RAID5 degraded, removed the wrong hard disk from the tray

On 21/06/18 15:37, Piero wrote:
The second damaged disk is still in the NAS, and I'm not touching anything, obviously. Now I'm thinking of halting the NAS and trying a ddrescue of the second damaged disk to the new one. Does this make any sense, or is it absolutely useless?

What concerns me is that the event counts of the two "working" disks are so massively different.
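The counters live in each member's md superblock; something like this prints them side by side for comparison (the device names here are only placeholders, substitute your actual array members):

    # Print each member's superblock and pull out the per-device event counter
    mdadm --examine /dev/sda1 /dev/sdc1 | grep -E '^/dev/|Events'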

The only other thing I can suggest is that you copy the original drives onto your two new drives. Then, with the copies, try doing an "mdadm --assemble --force". That will tell mdadm to ignore the mismatched event counts.
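Roughly along these lines, as a sketch only -- the device names are invented here and yours will differ, and everything should be done on the copies, never on the originals:

    # Copy each original drive onto a new drive, keeping a ddrescue map file
    # so the copy can be resumed if it stalls on bad sectors
    ddrescue -f -d /dev/sda /dev/sde /root/sda.map
    ddrescue -f -d /dev/sdc /dev/sdf /root/sdc.map

    # Then try a forced assembly using the copies; --force tells mdadm to
    # accept the member with the stale event count. List every member of the
    # array (on a NAS that is often a partition, e.g. /dev/sde3, not the
    # whole disk).
    mdadm --assemble --force /dev/md0 /dev/sde /dev/sdf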

The problem is that the difference in event counts means that a load of writes will have hit one disk but not the other. A damaged array is pretty much inevitable. How much you can salvage from it is pot luck.
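If a forced assembly of the copies does come up, it's worth assessing the damage read-only before trusting or writing anything, something like this (again the names are examples, and the -n check assumes an ext-family filesystem):

    # Read-only filesystem check first -- reports problems without changing anything
    fsck -n /dev/md0

    # If that looks sane, mount read-only and see what is actually readable
    mkdir -p /mnt/recovery
    mount -o ro /dev/md0 /mnt/recovery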

Unless somebody else has a bright idea, I don't think I can help much further. And from what you said, I don't think removing the wrong drive did the damage. I've got this nasty feeling that sdc failed and was kicked from the array quite a long time ago. Then when sda failed, you were left with this mess :-(

Cheers,
Wol
