On 10/10/2013 12:20 PM, Digimer wrote:
>> Phil
>
> Ya, I have no plan at all to use these drives or the server they came
> from anymore. In fact, they've already been replaced. :)

That's good.

> I tried the --assemble --force (and --assemble --force --run) without
> success. It fails saying that sde2 thinks sdb2 has failed, leaving two
> dead members. If I try to start with just sd[bcd], it says that it has
> two drives and one spare, so still refuses to start.

Ok.

> Any other options/ideas? I'm not in any rush, so I am happy to test
> things.

Well, you have rock-solid knowledge of the device order and array
parameters, so a --create operation is the next step. Given that sdd2
is marked as spare, and therefore of unknown value, I'd leave it out.

mdadm --stop /dev/md1
mdadm --create --level=5 -n 4 --chunk=512 /dev/md1 \
	/dev/sd{c,e,b}2 missing

(--assume-clean isn't needed when creating a degraded raid5)

The brace syntax is needed, not brackets, as the order matters.

After creation, use "mdadm -E" to verify that the Data Offset is 2048.
If it isn't, get a newer version of mdadm that lets you specify it.

Only after that should you use "fsck -n" to verify your filesystem and
mount it.

HTH,

Phil
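
For illustration, a minimal sketch of the verification steps described
above. It assumes the same member names and /dev/md1 as in the message,
that the "newer version of mdadm" means one new enough to support
--data-offset (3.3 or later), that the offset value is interpreted in
KiB, and that /mnt exists as a scratch mount point; none of these
details come from the thread itself.

# Check the data offset recorded on a (re)created member; with the
# original array geometry it should read "Data Offset : 2048 sectors".
mdadm -E /dev/sdc2 | grep -i 'data offset'

# If the offset differs, repeat the create with a newer mdadm and pin
# it explicitly (1024 KiB = 2048 sectors):
#   mdadm --create --level=5 -n 4 --chunk=512 --data-offset=1024 \
#       /dev/md1 /dev/sd{c,e,b}2 missing

# Read-only filesystem check: -n answers "no" to every repair prompt,
# so nothing on the array is modified.
fsck -n /dev/md1

# If the check looks sane, mount read-only first and inspect the data.
mount -o ro /dev/md1 /mnt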