On Fri, Apr 04, 2008 at 09:26:28AM -0700, Shane W wrote:
> Ok this is the problem then. A drive failed during the
> reshape so it got aborted and it was running degraded with
> three of four drives. Now that I've re-added the fourth,
> it's showing as a spare rather than restarting the grow.

It may also be worth noting that the reshape still appears to be
recorded in the array, even though it can't continue due to the failed
device. mdadm --detail /dev/md0 doesn't show any reshape activity, but:

continuum:~# mdadm -E /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 00.91.00
           UUID : de2cfbb8:431374f2:873bb752:e60eb5cc
  Creation Time : Thu Mar 27 10:44:20 2008
     Raid Level : raid5
  Used Dev Size : 732419264 (698.49 GiB 750.00 GB)
     Array Size : 2197257792 (2095.47 GiB 2249.99 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

  Reshape pos'n : 10720704 (10.22 GiB 10.98 GB)
  Delta Devices : 1 (3->4)

    Update Time : Fri Apr  4 09:51:09 2008
          State : clean
Internal Bitmap : present
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1
       Checksum : f4a198ba - correct
         Events : 0.792576

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8        1        2      active sync   /dev/sda1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       33        1      active sync   /dev/sdc1
   2     2       8        1        2      active sync   /dev/sda1
   3     3       0        0        3      faulty removed
   4     4       8       49        4      spare   /dev/sdd1

S
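
P.S. To see whether all four members record the same reshape position and
event count, a quick check along these lines should do it (just a sketch,
using the device names from the output above):

for d in /dev/sdb1 /dev/sdc1 /dev/sda1 /dev/sdd1; do
    echo "== $d =="
    mdadm -E "$d" | grep -E "Reshape|Events|State"
done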