Hubert Verstraete wrote:
Hello
According to mdadm's man page:
"When creating a RAID5 array, mdadm will automatically create a degraded
array with an extra spare drive. This is because building the spare
into a degraded array is in general faster than resyncing the parity on
a non-degraded, but not clean, array. This feature can be over-ridden
with the --force option."
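If I read that correctly, the override the man page mentions would be
something like the following (just a sketch, using the same device names
as in my commands below):
  mdadm -C /dev/md_d1 -e 1.2 -l 5 -n 4 -b internal -R --force /dev/sd?
i.e. all disks start out active and mdadm resyncs the parity instead of
rebuilding a spare into a degraded array.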
Unfortunately, I am seeing what looks like a bug when I create a RAID5
array with an internal bitmap, stop the array before the initial
synchronization is done, and then restart it.
1) When I create the array with an internal bitmap:
mdadm -C /dev/md_d1 -e 1.2 -l 5 -n 4 -b internal -R /dev/sd?
I see the last disk as a spare. After restarting the array, all disks
are reported as active and the array does not resume the aborted
synchronization!
Note that I did not use the --assume-clean option.
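To be explicit, the stop/restart sequence I am describing is roughly the
following (the /proc/mdstat and mdadm -D checks are only there to
observe the state):
  cat /proc/mdstat              # spare is being rebuilt
  mdadm -S /dev/md_d1           # stop before the rebuild finishes
  mdadm -A /dev/md_d1 /dev/sd?  # reassemble the array
  cat /proc/mdstat              # all disks active, no resync running
  mdadm -D /dev/md_d1           # details confirm all members are active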
2) When I create the array without a bitmap:
mdadm -C /dev/md_d1 -e 1.2 -l 5 -n 4 -R /dev/sd?
I see the last disk as a spare. After restarting the array, the spare
disk is still a spare and the array resumes the synchronization where it
left off.
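(In case it helps to reproduce, the member state can also be inspected
with e.g. mdadm -E /dev/sdb for the superblock and mdadm -X /dev/sdb for
the internal bitmap; /dev/sdb is just one example member device.)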
In case 1), is this a bug or did I miss something?
Secondly, what could be the consequences of this skipped
synchronization?
Kernel version: 2.6.26-rc4
mdadm version: 2.6.2
Thanks,
Hubert
For the record, the new stable kernel 2.6.25.6 has the same issue.
I thought the patch "md: fix prexor vs sync_request race" might have
fixed this, but unfortunately it does not.
Regards,
Hubert
By the way, FYI: with my configuration (all disks on the same
controller, internal bitmap, v1 superblock, ...), the initial RAID-5
synchronization takes the same amount of time whether or not I use the
--force option.
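(The resync progress, speed, and estimated finish time are visible in
/proc/mdstat, e.g. with
  watch cat /proc/mdstat
so the comparison is easy to redo.)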
Hubert