On Wed, Sep 30, 2020 at 09:16:10PM +0100, antlists wrote:
> The problem is that if you use mdadm 3.4 with kernel 4.9.237, the 237 means
> that your kernel has been heavily updated and is far too new. But if you use
> mdadm 4.1 with kernel 4.9.237, the 4.9 means that the kernel is basically a
> very old one - too old for mdadm 4.1

But the point of the longterm kernel lines like 4.9.237 is to keep strict
compatibility with the original branch point (that's the point of a "stable"
line) and to perform only bugfixes, isn't it? Do you mean to say that there is
NO stable kernel line with full mdadm support? Or just the ones provided by
distributions? (But don't distributions like Debian do exactly the same thing
as GKH and others do with these longterm lines, i.e., fix bugs while keeping
strict compatibility? If there are no longterm stable kernels with full RAID
support, I find that rather worrying.)

But in my specific case, the issue didn't come from an mdadm/kernel mismatch
after all: I investigated further after writing my previous message, and my
problem did indeed come from /lib/systemd/system/mdadm-grow-continue@.service,
which, as far as I can tell, is broken as far as --backup-file=... goes (the
option is needed for --continue to work, and it isn't passed). Furthermore,
this file appears to be distributed by mdadm itself (it's not Debian-specific),
and the systemd service is invoked by mdadm (from continue_via_systemd() in
Grow.c). So it seems to me that RAID reshaping with backup files is currently
broken on all systems that use systemd. But then I'm confused as to why this
hasn't received more attention. Anyway, if you have any suggestion as to where
I should report this bug, filing it is the least I can do.

In my particular setup, after giving this more thought, I think the wisest
course is to get plenty of external storage, copy everything off, recreate a
fresh RAID6 array, and copy everything back onto it.

Whatever the case, thanks for your help.
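For what it's worth, a possible local workaround (untested, and only a sketch)
might be a systemd drop-in overriding the unit's ExecStart to pass the missing
option; the backup-file path below is entirely hypothetical and would have to
match whatever path was given to the original --grow command:

```
# /etc/systemd/system/mdadm-grow-continue@.service.d/backup-file.conf
# Hypothetical drop-in: clear the inherited ExecStart, then redefine it
# with --backup-file added. %I expands to the array name (e.g. md0).
[Service]
ExecStart=
ExecStart=/sbin/mdadm --grow --continue /dev/%I --backup-file=/root/reshape-%I.backup
```

After creating the drop-in, `systemctl daemon-reload` would be needed before
the unit is next started. This obviously doesn't fix the underlying problem
that mdadm starts the unit without telling it the backup-file path.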
Cheers,

-- 
David A. Madore ( http://www.madore.org/~david/ )