Hi folks,
tl;dr: Some more information/confirmation, and an mdadm 4.1-1 test coming.
Even though I've had difficulty reproducing the issue on another
computer (it happened only once in a large number of tests), on the real
server it failed in exactly the same way a second time during the night,
so it seems it can be repeated more easily there. This time the ext4
filesystem wasn't mounted.
So I'll upgrade it to Debian 10, which ships Linux 4.19 and mdadm
4.1-1, and run the test again, in order to tell you whether the problem
is still there.
By the way, while testing on another computer, I found that running
--create over an existing array after switching from 3.4-4 to 4.1-1
requires specifying the data offset, because the default changed between
those versions. If the offset changed and isn't given, the array's
filesystem isn't readable until you re-create the array with the right
data-offset value.
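For reference, the data offset an existing member was created with can be
read back with `mdadm --examine`. A minimal sketch of extracting it, fed
here from captured sample output (on the real server it would come from
`mdadm --examine /dev/sdd1`, which needs root; the values match the
command below):

```shell
# Captured excerpt of `mdadm --examine` output for a 1.2-superblock member.
examine_output='     Data Offset : 262144 sectors
    Super Offset : 8 sectors'

# Extract the data offset in sectors; this is the number to pass as
# --data-offset=262144s if the array ever has to be re-created in place.
offset=$(printf '%s\n' "$examine_output" | awk '/Data Offset/ {print $4}')
echo "$offset"
```

Recording this value *before* the upgrade is what makes the recovery
command below safe to write down in advance.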
That is, in case of the same failure (no actual data changed, but mdadm
can no longer assemble the array), after the upgrade the exact command
for recovering my server's RAID will be:
mdadm --create /dev/md0 --level=5 --chunk=512K --metadata=1.2 \
  --layout=left-symmetric --data-offset=262144s --raid-devices=3 \
  /dev/sdd1 /dev/sde1 /dev/sdb1 --assume-clean
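After a re-create like that, a read-only filesystem check is a cheap way
to find out whether the geometry guess was right before mounting
anything: `fsck.ext4 -n` answers "no" to every question and never
writes, so a wrong data-offset or device order only costs another
--create attempt. A sketch, exercised here on a throwaway image file
(the path is illustrative; on the real server the check would simply be
`fsck.ext4 -n /dev/md0`):

```shell
# Build a small scratch ext4 image to demonstrate the read-only check
# (stands in for /dev/md0; needs e2fsprogs, no root required for a file).
dd if=/dev/zero of=/tmp/md0-demo.img bs=1M count=8 2>/dev/null
mkfs.ext4 -Fq /tmp/md0-demo.img

# -n opens the filesystem read-only and modifies nothing; a clean result
# means the data offset and device order were right.
fsck.ext4 -n /tmp/md0-demo.img
```

Only once that comes back clean would I mount, and read-only first
(`mount -o ro /dev/md0 ...`).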
It also assumes that /dev/sdd1, /dev/sde1 and /dev/sdb1 haven't moved
(I know the associated serial numbers, so that's easy to check). If the
order is wrong, the array's filesystem isn't readable until you
re-create the array with the devices in the right positions.
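The order check can be scripted ahead of time. A sketch, with placeholder
serials (the real ones come from my notes; the query commands in the
comment are the usual ways to read a drive's serial on Linux):

```shell
# Expected serial number for each array slot, in --create order
# (SERIAL-AAA etc. are placeholders, not the real values).
expected="sdd1:SERIAL-AAA sde1:SERIAL-BBB sdb1:SERIAL-CCC"

for pair in $expected; do
  dev=${pair%%:*}
  want=${pair#*:}
  # On the real machine the current serial can be read with e.g.
  #   lsblk -dno SERIAL /dev/sdd
  # or from the /dev/disk/by-id/ symlinks; here we just print the plan.
  echo "slot check: /dev/$dev should have serial $want"
done
```

If any slot disagrees, the device list in the --create command has to be
reordered to match before running it.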
Doing some archaeology in the list archive on "grow" subjects, I found
someone who appears to have suffered from the same problem, also on
Debian 9 (his detail about inserting the disk into a VM doesn't seem to
be a real difference, and he found another way to get his array started
again):
https://marc.info/?t=153183310600004&r=1&w=2
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884719
I'll probably keep you informed tonight!