I have a five-disk RAID 5 array in which one disk's failure went unnoticed for an indeterminate time. Once I finally noticed, I did a raidhotremove on the disk -- or what I thought was the disk. Unfortunately, I can't count. Now my array has one 'failed' disk and one 'spare' disk. Aaargh.
Since then I've learned a lot, but I haven't been able to find reassurance (or answers) elsewhere on a few issues.
The two big questions are:
1) How can I mark the 'spare' disk as 'clean' and get it back in the array? If I read the mdadm source correctly, it looks like 'removed' disks are skipped when trying to assemble.
2) If I --assemble --force the array and specify only (n-1) disks, does that guarantee that, if the array starts at all, it comes up in degraded mode and doesn't start rewriting the parity information? (I've sketched below what I think the commands would look like.)
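For the record, here is roughly what I'm planning to run, pieced together from the man page and what I've read. The device names are placeholders for my actual setup (five partitions sda1..sde1 in /dev/md0, with sde1 standing in for the long-dead disk), and I haven't dared execute the assemble yet:

    # Inspect each member's superblock; the event counts and state
    # should show which disks are current and which fell behind
    mdadm --examine /dev/sd[abcde]1

    # Stop the half-assembled array before re-assembling it
    mdadm --stop /dev/md0

    # Force-assemble with only the four good members, leaving out the
    # long-dead disk so the kernel has no choice but to start degraded
    mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # Confirm it actually came up degraded before touching anything else
    cat /proc/mdstat
    mdadm --detail /dev/md0

If any of that looks wrong (or dangerous), please say so before I pull the trigger.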
Thanks a bunch in advance for any help.
Bob