Dear All,
I have an 8-drive RAID-5 array running under 2.6.11. This morning it
bombed out, and when I brought it up again, two drives had incorrect
event counts:
sda1: 0.8258715
sdb1: 0.8258715
sdc1: 0.8258715
sdd1: 0.8258715
sde1: 0.8258715
sdf1: 0.8258715
sdg1: 0.8258708
sdh1: 0.8258716
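(For what it's worth, these counts came from examining each member's
superblock with something like:

    mdadm --examine /dev/sda1 | grep Events

repeated for each of the eight drives.)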
sdg1 is out of date (expected), but sdh1 has received an extra event.
Any attempt to restart with mdadm --assemble --force results in an
unstartable array with an event count of 0.8258715.
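For the record, the invocation was along these lines, assuming the
array device is /dev/md0:

    mdadm --assemble --force /dev/md0 /dev/sd[a-h]1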
Can anybody advise on the correct command to use to get it started again?
I'm assuming I'll need to use mdadm --create --assume-clean, but I'm
not sure which drives should be included or excluded when I do this.
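Something along these lines is what I had in mind; the chunk size here
is a guess and the device order would need to match the original array:

    # --assume-clean skips the initial resync so existing data is kept
    mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=8 \
        --chunk=64 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
        /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

In particular, should the stale sdg1 be replaced with the keyword
"missing" so the array comes up degraded?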
Many thanks!
Chris Allen.