Re: destroyed raid 5 by removing wrong disk

On 31/07/18 18:52, Martin Probst wrote:


The disk /dev/sde1 was rebuilding at nearly 40%. I think this disk is not useful for recovery. Is it possible to get these three disks working again? And how? I removed disk /dev/sdb for maybe 60 seconds, so it should still be nearly in sync with the two other clean devices. Can someone help me, please?

https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn

In particular "Asking for help" - get the output of lsdrv covering all the drives.
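
Something along these lines should collect what the wiki asks for (just a sketch - the sd[b-e]1 member names are a guess from your description, substitute whatever your array actually uses, and adjust the path to wherever you saved Phil Turmel's lsdrv script):

  cat /proc/mdstat               # current state of the array
  mdadm --examine /dev/sd[b-e]1  # superblock and event counts for each member
  ./lsdrv                        # per-drive hardware/partition overview

Post the full output in your reply rather than a summary.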

I'm hoping it's a simple matter of re-assembling the array and restarting the rebuild. Try an explicit assemble, but DON'T use --force unless advised by someone who knows more than I do :-) - and then either the rebuild should restart automatically, or there are ways to restart it.
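
For illustration only - the device and md names here are assumed, and don't run anything until people on the list have seen the --examine output - an explicit assemble looks roughly like:

  mdadm --stop /dev/md0                                               # clear any half-assembled remains first
  mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # note: no --force
  cat /proc/mdstat                                                    # the rebuild onto sde1 should reappear here
  mdadm --readwrite /dev/md0                                          # only if it comes up "auto-read-only"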

Just don't take the metaphorical hammer to try and force it to continue - that's a sure way to wreck it.

Cheers,
Wol
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


