Hi Martin,
I noticed another problem; it occurs with or without your patch:
Zeroed the superblocks (just in case):
mdadm --zero-superblock /dev/loop[1-6]
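(For reference, the loop devices are just small file-backed devices set up beforehand; something along these lines reproduces the setup, with file names and sizes chosen arbitrarily:
for i in 1 2 3 4 5 6; do truncate -s 512M /tmp/ddf$i; losetup /dev/loop$i /tmp/ddf$i; done
)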
Created the container:
mdadm -CR /dev/md127 -e ddf -l container -n 5 /dev/loop[1-5]
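(The container membership can be verified with, e.g.:
mdadm --detail /dev/md127
)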
Created an md device in the container; it used /dev/loop4 and
/dev/loop5, and the rebuild finished:
mdadm -CR /dev/md0 -l raid1 -n 2 /dev/md127
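(The rebuild can be followed with, e.g.:
cat /proc/mdstat
mdadm --detail /dev/md0
)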
I failed one of the disks, and the array rebuilt with one of the
available unused disks in the container:
mdadm -f /dev/md0 /dev/loop4
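(That /dev/loop4 was dropped and replaced can be confirmed with, e.g.:
mdadm --detail /dev/md0
mdadm --examine /dev/loop4
)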
I added another md device:
mdadm -CR /dev/md1 -l raid5 -n 3 /dev/md127
This last one is the odd bit: it is built using the previously failed
disk. It looks like the container is not aware that /dev/loop4 has
failed and reuses it, which is wrong. So the failed status is not kept.
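(The reuse shows up in, e.g.:
mdadm --detail /dev/md1
mdadm --examine /dev/loop4
with /dev/loop4 listed as an active member of md1 despite the earlier failure.)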
Regards,
Albert
On 08/03/2013 11:43 AM, Albert Pauw wrote:
Hi Martin,
I can confirm that patch 3/4 indeed fixed the problem.
Thanks for your quick fix.
Kind regards,
Albert
On 08/02/2013 12:37 AM, Martin Wilck wrote:
Hi Albert,
On 08/02/2013 12:09 AM, Martin Wilck wrote:
On 08/01/2013 11:13 PM, Martin Wilck wrote:
Can you please try the attached patch?
DON'T use it, it's broken. I will send a better one in a minute.
My latest patch series should fix this. The fix for your problem is 3/4,
and I added your test case as 4/4. A retest would be welcome.
Martin
Regards,
Martin