Thanks Wols and Valentijn for your input.
I tried again with --bitmap=none; clearly missing that was an oversight on
my part. However, even with that correction, and after attempting various
combinations of drive ordering, the filesystem still appears corrupt.
I think I have to accept entire data loss here. :(
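For the record, each attempt was a stop-and-recreate along these lines
(the device names, geometry, and member count below are placeholders, not
my exact values):

  mdadm --stop /dev/md0
  # recreate over the same members without touching the data blocks;
  # placeholder geometry - must match the original array exactly
  mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=6 \
        --chunk=512 --bitmap=none \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
  fsck.ext4 -n /dev/md0   # read-only check; repeated with the devices permuted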
~Matt
-------- Original Message --------
From: Wols Lists <antlists@xxxxxxxxxxxxxxx>
Sent: 1/12/2015, 11:35:57 AM
To: linux-raid@xxxxxxxxxxxxxxx
Cc:
Subject: Re: mdadm RAID6 "active" with spares and failed disks; need help
On 11/01/15 23:22, Valentijn Sessink wrote:
> Also, I would not have dared to run all these statements on "live" (or
> dead, for that matter ;-) disks.
I'm no expert either, but looking at the blog, I'm worried he might have
trashed the array with almost the first thing he did :-(
"add" says it will add a drive as spare if it didn't originally belong
to the array. If it adds a spare to a degraded array, the array will
immediately start to repair itself.
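In other words (made-up array and device names, purely to illustrate the
difference):

  # --re-add only succeeds if the disk's superblock still matches the
  # array, so it fails safe on an unrecognised disk:
  mdadm /dev/md0 --re-add /dev/sdf1

  # --add on an unrecognised disk turns it into a spare, and on a
  # degraded array a rebuild onto that spare starts immediately:
  mdadm /dev/md0 --add /dev/sdf1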
OOPS!!! It sounds like exactly this could have happened - mdadm
didn't recognise the disk when it added it.
Just speculating, but unfortunately this seems quite likely :-(
Cheers,
Wol
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html