On 11/21/2013 12:15 AM, David F. wrote:
> mdadm: /dev/sdf is identified as a member of /dev/md/ddf0, slot 4.
> mdadm: /dev/sde is identified as a member of /dev/md/ddf0, slot 3.
> mdadm: /dev/sdd is identified as a member of /dev/md/ddf0, slot 2.
> mdadm: /dev/sdc is identified as a member of /dev/md/ddf0, slot 1.
> mdadm: /dev/sdb is identified as a member of /dev/md/ddf0, slot 0.
> mdadm: ignoring /dev/sdb as it reports /dev/sdf as failed
> mdadm: ignoring /dev/sdc as it reports /dev/sdf as failed
> mdadm: ignoring /dev/sdd as it reports /dev/sdf as failed
> mdadm: ignoring /dev/sde as it reports /dev/sdf as failed
> mdadm: no uptodate device for slot 0 of /dev/md/ddf0
> mdadm: no uptodate device for slot 2 of /dev/md/ddf0
> mdadm: no uptodate device for slot 4 of /dev/md/ddf0
> mdadm: no uptodate device for slot 6 of /dev/md/ddf0
> mdadm: added /dev/sdf to /dev/md/ddf0 as 4
> mdadm: Container /dev/md/ddf0 has been assembled with 1 drive (out of 5)

That looks really weird. The (healthy?) devices sdb-sde are ignored
because they report sdf as failed, and then sdf is used for assembly?
I have no idea at the moment; I need to read the code.

> Output of dmraid --raid_devices command:
> /dev/sdf: ddf1, ".ddf1_disks", GROUP, unknown, 285155328 sectors, data@ 0
> /dev/sde: ddf1, ".ddf1_disks", GROUP, ok, 285155328 sectors, data@ 0
> /dev/sdd: ddf1, ".ddf1_disks", GROUP, ok, 285155328 sectors, data@ 0
> /dev/sdc: ddf1, ".ddf1_disks", GROUP, ok, 285155328 sectors, data@ 0
> /dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 285155328 sectors, data@ 0

This seems to support the notion that something's wrong with /dev/sdf.

Martin
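
A possible next step (just a sketch, not something already posted in this
thread) would be to dump the DDF metadata that each member records and
compare what they store about sdf, plus check the kernel log for that disk.
Both commands are standard tools; the exact output fields depend on the
mdadm version in use:

  # Per-device DDF metadata summary for all five members
  # (fields and layout vary between mdadm versions)
  mdadm --examine /dev/sd[b-f]

  # Anything the kernel logged about sdf (I/O errors, link resets, etc.)
  dmesg | grep -i sdf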