It turns out the cause was the missing 63-md-raid-arrays.rules. It appears
64-md-raid-assembly.rules will assemble the devices on boot/sysinit and works
even without 63-md-raid-arrays.rules, but starting the arrays with the script
needs 63-md-raid-arrays.rules. Not sure why, but that appears to be it.

If I remove 64 and leave 63, dmraid takes over, but I can run "dmsetup
remove_all" and then start the md RAID with the script and it works; I can
also boot with nodmraid and then run the script (rough sequence in the P.S.
below).

So thanks for pointing me to udev - I'd still be curious why 64 doesn't
need 63.

On Tue, Aug 16, 2022 at 10:54 PM Hannes Reinecke <hare@xxxxxxx> wrote:
>
> On 8/17/22 04:04, David F. wrote:
> > What rules should be used?  I don't see a /dev/md directory, I
> > created one, stopped the raid (all the /dev/md* devices went away)
> > and tried to start the raid, same thing and only /dev/md127 gets
> > created, nothing in /dev/md/ directory and none of the md126 devices ?
> > You then get the timeout.
> >
> Please check the device-mapper status.
>
> My guess is that device-mapper gets in the way (as it probably will be
> activated from udev, too), and blocks the devices when mdadm is trying
> to access them.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke                Kernel Storage Architect
> hare@xxxxxxx                              +49 911 74053 688
> SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
> HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
> Myers, Andrew McDonald, Martje Boudien Moerman
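
P.S. For reference, this is roughly the by-hand sequence that works for me,
assuming "mdadm --assemble --scan" is more or less what my start script runs
(that part is my approximation, not the actual script):

    dmsetup remove_all       # tear down the device-mapper maps dmraid set up
    mdadm --assemble --scan  # assemble the md arrays from their superblocks
    cat /proc/mdstat         # md126/md127 should now show up here

Booting with nodmraid instead just skips the first step.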