On Wed, Sep 11, 2013 at 9:40 AM, Francis Moreau <francis.moro@xxxxxxxxx> wrote:
[...]
>>
>> I think this patch will help. The last hunk in particular should make
>> the difference.
>>
>> Please let me know if it fixes the problem.
>>
>
> Yes it fixes the problem.
>
> I had to adjust the patch to make it compile by using be64_to_cpu()
> where needed.
>

Hmm, unfortunately the following test case seems broken too; I'm not
sure it's related, however:

# create a ddf array containing loop0 and loop1
$ cat /proc/mdstat
Personalities : [raid1]
md124 : active raid1 loop0[1] loop1[0]
      84416 blocks super external:/md125/0 [2/2] [UU]

md125 : inactive loop1[1](S) loop0[0](S)
      65536 blocks super external:ddf

# stop the array
$ mdadm --stop /dev/md124
mdadm: stopped /dev/md124
$ mdadm --stop /dev/md125
mdadm: stopped /dev/md125

# Add only one disk
$ mdadm -I /dev/loop0
mdadm: container /dev/md/ddf1 now has 1 device
mdadm: /dev/md/array1 assembled with 1 device but not started

# start the array
$ mdadm -R /dev/md124

# looks like it failed
$ cat /proc/mdstat
Personalities : [raid1]
md124 : inactive loop0[0]
      84416 blocks super external:/md125/0

md125 : inactive loop0[0](S)
      32768 blocks super external:ddf

# start mdmon manually with debug trace
$ mdmon /dev/md125
starting mdmon for md125
monitor: wake ( )
no arrays to monitor... exiting

--
Francis
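
P.S. For clarity, here is roughly the kind of be64_to_cpu() adjustment I
was referring to above. This is only an illustrative sketch with made-up
struct and field names (fake_ddf_header, primary_lba), not the actual
mdadm/super-ddf code; it uses glibc's be64toh()/htobe64() from <endian.h>,
where the mdadm source has its own be64_to_cpu() style helpers:

#include <stdint.h>
#include <endian.h>   /* be64toh()/htobe64(); mdadm has equivalent macros */
#include <stdio.h>

/* DDF stores multi-byte metadata fields big-endian on disk, so 64-bit
 * values read straight from the superblock must be byte-swapped on
 * little-endian hosts before being compared or used in arithmetic. */
struct fake_ddf_header {
    uint64_t primary_lba;    /* big-endian, exactly as read from disk */
};

static uint64_t get_primary_lba(const struct fake_ddf_header *h)
{
    /* before the fix this was effectively: return h->primary_lba;
     * which gives garbage on little-endian machines */
    return be64toh(h->primary_lba);
}

int main(void)
{
    /* simulate an on-disk (big-endian) value and read it back */
    struct fake_ddf_header h = { .primary_lba = htobe64(1234) };
    printf("primary LBA = %llu\n",
           (unsigned long long)get_primary_lba(&h));
    return 0;
}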