On Mon, Nov 14, 2011 at 6:52 AM, Lukasz Dorau <lukasz.dorau@xxxxxxxxx> wrote:
> The problem occurs when a RAID10 array under rebuild
> (after one disk fails) is assembled incrementally.
> Mdadm tries to start the array just after adding the third disk,
> and the volume is assembled incorrectly (in a degraded state).
>
> The cause is that container_enough depends on
> newly missing disks, which are currently checked incorrectly.
> They should always be checked using the first map.
>
> Signed-off-by: Lukasz Dorau <lukasz.dorau@xxxxxxxxx>
> ---
>  super-intel.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/super-intel.c b/super-intel.c
> index 4ebee78..511a32a 100644
> --- a/super-intel.c
> +++ b/super-intel.c
> @@ -2529,13 +2529,13 @@ static void getinfo_super_imsm(struct supertype *st, struct mdinfo *info, char *
>
>  		failed = imsm_count_failed(super, dev);
>  		state = imsm_check_degraded(super, dev, failed);
> -		map = get_imsm_map(dev, dev->vol.migr_state);
> +		map = get_imsm_map(dev, 0);
>
>  		/* any newly missing disks?
>  		 * (catches single-degraded vs double-degraded)
>  		 */
>  		for (j = 0; j < map->num_members; j++) {
> -			__u32 ord = get_imsm_ord_tbl_ent(dev, i, -1);
> +			__u32 ord = get_imsm_ord_tbl_ent(dev, i, 0);

This looks wrong. I noticed this when looking over Przemyslaw's patch
[1]. map[0] always contains the destination state of the migration, so
the most reliable source for looking for out-of-sync disks is map[1].

--
Dan

[1]: http://marc.info/?l=linux-raid&m=132206766827484&w=2
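
To make the failure mode concrete, here is a minimal, self-contained toy
model. To be clear, this is not mdadm code: the struct names, the two-map
layout, and the ORD_REBUILD flag value are all illustrative assumptions,
only loosely modeled on the IMSM metadata. It shows how counting
out-of-sync disks against the destination map can under-report
degradation mid-migration:

#include <stdio.h>

#define ORD_REBUILD (1u << 24)	/* assumed flag marking a rebuilding slot */
#define NUM_MEMBERS 4

struct toy_map {
	unsigned int ord_tbl[NUM_MEMBERS];	/* per-slot disk index + flags */
};

struct toy_dev {
	int migr_state;			/* non-zero while a migration runs */
	struct toy_map map[2];		/* [0] = destination, [1] = source */
};

/* Count slots flagged as rebuilding in the chosen map. */
static int count_out_of_sync(const struct toy_dev *dev, int second_map)
{
	const struct toy_map *m = &dev->map[second_map];
	int i, n = 0;

	for (i = 0; i < NUM_MEMBERS; i++)
		if (m->ord_tbl[i] & ORD_REBUILD)
			n++;
	return n;
}

int main(void)
{
	/* RAID10 mid-rebuild: the destination map already shows the
	 * target layout with every slot clean, while the source map
	 * still flags the slot being rebuilt.
	 */
	struct toy_dev dev = {
		.migr_state = 1,
		.map = {
			{ .ord_tbl = { 0, 1, 2, 3 } },			/* map[0] */
			{ .ord_tbl = { 0, 1, 2 | ORD_REBUILD, 3 } },	/* map[1] */
		},
	};

	printf("map[0] (destination): %d out-of-sync slots\n",
	       count_out_of_sync(&dev, 0));	/* 0: misses the rebuild */
	printf("map[1] (source):      %d out-of-sync slots\n",
	       count_out_of_sync(&dev, 1));	/* 1: catches it */
	return 0;
}

Built with any C99 compiler, the map[0] check reports zero out-of-sync
slots while map[1] still flags the rebuilding one, which is exactly the
single- vs double-degraded distinction the quoted comment is after.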