It was a 4-drive RAID5 array missing one drive completely. But I
expected it would be an easy thing to fix, as "mdadm --detail
/dev/md1xxx" shows the details fine (from the same information in
memory). But if it's not, maybe just make a note about it and move on
to more important things. I wouldn't be surprised if I'm the only one
ever needing this feature. And I already implemented a work-around in
my storage-management system by getting the RAID level etc. from
/proc/mdstat first. A pain to serialize, of course, but it works now.

On Fri, Jul 9, 2021 at 1:52 AM NeilBrown <neilb@xxxxxxx> wrote:
>
> On Thu, 08 Jul 2021, BW wrote:
> > 1: Just because the array is inactive doesn't mean the information is
> > not valuable; actually it's even more so, as it most likely needs
> > your attention.
> > 2: The information is available and is printed when not doing --export
>
> Ahh... I missed that. My memory is that when the array is inactive, the
> md driver really doesn't know anything about the array. It doesn't find
> out until it reads the metadata, and it does that as it activates the
> array.
> But looking at your sample output, I see it does, as you say, give a
> raid level for an inactive array.
>
> But looking at the code, it should do exactly the same thing for
> --export, --brief, and normal.
> It determines the raid level:
>
>     if (inactive && info)
>         str = map_num(pers, info->array.level);
>     else
>         str = map_num(pers, array.level);
>
> and then reports 'str' in all 3 cases (possibly substituting "-unknown-"
> or "container" for NULL), provided that array.raid_disks is non-zero -
> which it is in your example.
> So I cannot see how you would get the results that you report.
>
> Do you know how you got the array into this inactive state? I could
> then experiment and see if I can reproduce your result.
>
> NeilBrown
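
P.S. For reference, the /proc/mdstat workaround is roughly the
following. This is a minimal C sketch, not the actual code from my
storage-management system; it assumes the usual "mdX : <state>
<personality> <members...>" device-line layout, and note that an
inactive array may omit the personality entirely, in which case there
is nothing better to report than "-unknown-".

    /* Sketch: pull the array name, state and (if present) RAID level
     * out of /proc/mdstat. Error handling kept to a minimum. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/mdstat", "r");
        char line[1024];

        if (!f) {
            perror("/proc/mdstat");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            char name[64] = "", state[64] = "", level[64] = "";

            /* Device lines look like: "md127 : active raid5 sdb1[0] ..." */
            if (sscanf(line, "%63s : %63s %63s", name, state, level) < 2)
                continue;
            if (strncmp(name, "md", 2) != 0)
                continue;
            /* The third token is a personality only if it looks like one;
             * an inactive array usually lists member devices right away. */
            if (strncmp(level, "raid", 4) != 0 &&
                strcmp(level, "linear") != 0 &&
                strcmp(level, "multipath") != 0)
                level[0] = '\0';
            printf("%s: state=%s level=%s\n", name, state,
                   level[0] ? level : "-unknown-");
        }
        fclose(f);
        return 0;
    }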