"J. David Beutel" <jdb@xxxxxxxxx> writes: > Neil Brown wrote: >> 2.6.12 does support reducing the number of drives in a raid1, but it >> will only remove drives from the end of the list. e.g. if the >> state was >> >> 58604992 blocks [3/2] [UU_] >> >> then it would work. But as it is >> >> 58604992 blocks [3/2] [_UU] >> >> it won't. You could fail the last drive (hdc8) and then add it back >> in again. This would move it to the first slot, but it would cause a >> full resync which is a bit of a waste. >> > > Thanks for your help! That's the route I took. It worked ([2/2] > [UU]). The only hiccup was that when I rebooted, hdd2 was back in the > first slot by itself ([3/1] [U__]). I guess there was some contention > in discovery. But all I had to do was physically remove hdd and the > remaining two were back to [2/2] [UU]. mdadm --zero-superblock /dev/hdd Never forget that when removing a disk. It sucks when you reboot and your / is suddenly on the removed disk instead of the remaining raid. MfG Goswin - To unsubscribe from this list: send the line "unsubscribe linux-raid" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at http://vger.kernel.org/majordomo-info.html