mismatches after growing raid1 and re-adding a failed drive

Hi Neil,
I am testing the following scenario:

1) create a raid1 with drives A and B, wait for resync to complete
(verify mismatch_cnt is 0)
2) drive B fails, array continues to operate as degraded, new data is
written to array
3) add a fresh drive C to array (after zeroing any possible superblock on C)
4) wait for C recovery to complete
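For reference, this is roughly what I run for steps 1-4 (device names
sda/sdb/sdc are just examples standing in for A/B/C, and the array has an
internal bitmap):

mdadm --create /dev/md1 --level=1 --raid-devices=2 --bitmap=internal /dev/sda /dev/sdb
cat /sys/block/md1/md/mismatch_cnt        # 0 after the initial resync
mdadm --manage /dev/md1 --fail /dev/sdb --remove /dev/sdb
# ... write new data to the degraded array ...
mdadm --zero-superblock /dev/sdc
mdadm --manage /dev/md1 --add /dev/sdc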

At this point, for some reason "bitmap->events_cleared" is not
updated; it remains 0, even though the bitmap itself is clear.
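(I read it from the bitmap superblock on a member device, e.g.

mdadm --examine-bitmap /dev/sda

where the "Events Cleared" field stays at 0.)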

5) grow the array by one slot:
mdadm --grow /dev/md1 --raid-devices=3 --force
6) re-add drive B:
mdadm --manage /dev/md1 --re-add /dev/sdb
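(mdadm --detail /dev/md1 and cat /proc/mdstat confirm that sdb is taken
back into the array.)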

MD accepts this drive, because in super_1_validate:
        /* If adding to array with a bitmap, then we can accept an
         * older device, but not too old.
         */
        if (ev1 < mddev->bitmap->events_cleared)
            return 0;
Since events_cleared == 0, this condition can never hold, so the early
return is skipped and drive B is accepted.

7) recovery begins and completes immediately as the bitmap is clear
8) issuing "echo check > ..." yields a lot of mismatches
(naturally, as B's data was never synced)
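(Concretely, for step 8 I use the usual sysfs interface:

echo check > /sys/block/md1/md/sync_action
cat /sys/block/md1/md/mismatch_cnt

and mismatch_cnt comes back very large.)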

Is this a valid scenario? Any idea why events_cleared is not updated?
Kernel is 3.8.13

Thanks,
Alex.