On Mon, Dec 10 2018, Niklas Hambüchen wrote:

> Hey Neil,
>
> On 2018-12-10 02:41, NeilBrown wrote:
>> No, you don't want to do that.  Reading from the superblock is what
>> "mdadm --examine" is for.  "mdadm --detail" reports what the kernel
>> thinks.
>>
>> What mdadm should do in this case is simply not report the level at
>> all, just like it doesn't report "Raid Devices" at all.
>
> I'm still curious though what the kernel *should* think.
> My problem is that beyond mdadm --detail reporting raid0, the array is
> actually not started as degraded on boot.
> Should it be started as degraded?

It depends on how user-space is set up to configure things.

What is probably happening is that udev is running "mdadm -I /dev/foo"
whenever a device is found.  This should *not* start the array, as the
missing device might yet be found.
Alternately it might be running "mdadm --assemble --scan" with --run.
This similarly should avoid starting newly-degraded arrays.

There *should* be some mechanism to cause "mdadm -IRs" to be run after
a short timeout.  This activates any arrays which are inactive, but can
be started degraded.

With current mdadm:
  /usr/lib/udev/rules.d/64-md-raid-assembly.rules
will run
  mdadm --incremental --export /dev/foo --offroot
and capture the output.  If the output contains
  MD_STARTED=unsafe
then this indicates that the array has been left inactive, so systemd
is asked to activate
  mdadm-last-resort@foo.timer
This will wait 30 seconds, then run mdadm-last-resort@foo.service,
which will run
  mdadm --run /dev/foo

(So this does each array individually, rather than the "mdadm -IRs"
approach which does all arrays at once.)

I don't think you have said which distro you use - maybe you don't even
have systemd.  In that case you (or the distro) would need to find some
other mechanism.

Hope that helps.
NeilBrown
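
P.S.  In case it is useful, here is roughly what the pieces above look
like on disk.  This is a sketch from memory rather than a copy of any
particular mdadm release (binary paths and exact rule text vary by
distro and version), so check the files your system actually ships.

The udev rule imports the result of the incremental assembly and, if
the array was left inactive ("unsafe" to start), asks systemd to pull
in the last-resort timer for it:

  # excerpt from 64-md-raid-assembly.rules (sketch)
  IMPORT{program}="/sbin/mdadm --incremental --export $devnode --offroot $env{DEVLINKS}"
  ENV{MD_STARTED}=="*unsafe*", ENV{SYSTEMD_WANTS}+="mdadm-last-resort@$env{MD_DEVICE}.timer"

The timer and service are templated systemd units, roughly:

  # mdadm-last-resort@.timer (sketch)
  [Unit]
  Description=Timer to wait for more drives before activating degraded array.
  DefaultDependencies=no
  Conflicts=sys-devices-virtual-block-%i.device

  [Timer]
  OnActiveSec=30

  # mdadm-last-resort@.service (sketch)
  [Unit]
  Description=Activate md array even though degraded
  DefaultDependencies=no
  Conflicts=sys-devices-virtual-block-%i.device

  [Service]
  Type=oneshot
  ExecStart=/sbin/mdadm --run /dev/%i

Without systemd, the equivalent of the "mdadm -IRs" approach can be as
simple as a late boot script along the lines of

  sleep 30
  mdadm -IRs

run once all the incremental assembly has had a chance to happen.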