Oops, sorry for the double posts...
This is normal for RAID-5 array construction. Rather than forcing you to wait for ages while the parity is written, mdadm creates a degraded array (here, two active members) plus a single spare and then recovers onto that spare; the rebuild involved in that recovery constructs the parity as it goes.
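For reference (this is just a sketch of the usual sequence, not the exact commands from this thread, and the device names are only examples), creating a three-member RAID-5 and watching that initial recovery looks roughly like:

    # mdadm starts the array degraded and immediately begins
    # recovering onto the last device, which it treats as a spare
    mdadm --create /dev/md1 --level=5 --raid-devices=3 \
          /dev/sdb2 /dev/sdc2 /dev/sdd2

    # watch the recovery build the parity
    cat /proc/mdstat
    mdadm --detail /dev/md1

The array is usable while that recovery runs; it simply reaches full redundancy once the spare has been rebuilt into the set.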
Makes sense. And I was aware that it was reconstructing...
> [dev 9, 1] /dev/md1 84AA4AAF.8B2C555E.3F9AE70D.2EEDD5B3 online
> [dev 8, 18] /dev/sdb2 84AA4AAF.8B2C555E.3F9AE70D.2EEDD5B3 good
> [dev 8, 34] /dev/sdc2 84AA4AAF.8B2C555E.3F9AE70D.2EEDD5B3 good
> [dev ?, ?] (unknown) 00000000.00000000.00000000.00000000 missing
> [dev 8, 50] /dev/sdd2 84AA4AAF.8B2C555E.3F9AE70D.2EEDD5B3 spare

That output is rather strange though, mainly because of the mysterious missing drive with no name. mdadm bug?
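For what it's worth, to cross-check that listing against mdadm's own view of the array (the commands below are only a suggestion, not something from this thread), something like:

    mdadm --detail /dev/md1      # the kernel's view: active, spare and missing slots
    mdadm --examine /dev/sdd2    # what the superblock on one member thinks the array looks like

would show whether the "missing" slot is really recorded in the superblocks or is just an artefact of whatever produced that listing.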
That missing drive is the story of my life with raid!! You must have missed the previous posts I sent in this thread, where I was getting one active device, two spares (not active) and two missing drives. It was as if a missing drive were taking the spot of an active one, and the waiting spare could not take that spot. Well, that's how I understood it, with my noobish mind! ;)

The worst part is that, even though no trace is left on my system after a reboot (everything is written to a tmpfs), and even if I formatted the devices (e.g. /dev/sdb2) or ran `mdadm --zero-superblock ...`, they were still being created as missing.

Anyway, good news! After reconstruction the output looks perfect, and I tried soft-failing one device and it came back just fine. The problem was with a real failure (see first post).

Thanks,
Simon
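P.S. For anyone digging this up from the archives: the soft-fail test mentioned above boils down to something like the following (device names are examples, not necessarily the ones I used):

    mdadm /dev/md1 --fail /dev/sdd2      # mark one member as faulty
    mdadm /dev/md1 --remove /dev/sdd2    # take it out of the array
    mdadm /dev/md1 --add /dev/sdd2       # add it back; md rebuilds onto it

After the rebuild finishes, /proc/mdstat should show all three members active again.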