Another> Thanks for the suggestion but that's still 'trying' things
Another> rather than an analytical approach.

Well... since Neil is the guy who knows the code, and I've seen several
emails in the past about reshapes gone wrong where pulling down Neil's
latest version was the solution, that's what I'd go with.

Another> I also do not want to reboot this machine until I absolutely
Another> have to in case I need to capture any data needed to identify
Another> and thereby resolve the problem.

A reboot won't make a difference; all the data is on the disks.

Another> Given I'm not getting much joy here I think I'll have to post
Another> a bug tomorrow and see where that goes.

I'd also argue that removing a disk from a 30TB RAID6 is crazy, but I'm
sure you know the risks. It might have been better to just fail one
disk, zero its superblock, and format that disk by hand with a plain
xfs or ext4 filesystem for your travels. Then, when done, you'd just
re-add the disk to the array and let it rebuild the second parity
stripes.

Also, I just dug into my archives; have you tried:

  --assemble --update=revert-reshape

on your array?

John
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
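[Editor's note: the fail/zero/re-add sequence and the revert-reshape
suggestion above could be sketched roughly as below. The device names
(/dev/md0, /dev/sdf, /dev/sd[a-e]) are assumptions, not from the
thread; substitute your own and check /proc/mdstat before running
anything destructive.]

```shell
# Temporarily repurpose one RAID6 member as a standalone disk.
mdadm /dev/md0 --fail /dev/sdf        # mark the member faulty
mdadm /dev/md0 --remove /dev/sdf      # pull it out of the array
mdadm --zero-superblock /dev/sdf      # wipe the md metadata
mkfs.xfs /dev/sdf                     # format as a plain filesystem

# ...use the disk on its own; then, when done, give it back:
mdadm --zero-superblock /dev/sdf      # remove the filesystem's traces of md use
mdadm /dev/md0 --add /dev/sdf         # re-add and let the second parity rebuild

# The revert-reshape idea from the archives (array must be stopped first;
# the member list is an assumption):
mdadm --stop /dev/md0
mdadm --assemble --update=revert-reshape /dev/md0 /dev/sd[a-e]
```

Note that --update=revert-reshape only applies while a reshape is still
in progress, and needs a reasonably recent mdadm.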