Ok, thanks. I'm pretty sure I'll be able to dd from at least one of the
failed drives, as I could still query them before I yanked them.

Assuming I can, I'd ddrescue an old drive to one of my new ones. Then do
an assemble with force, using a mix of the ddrescue'd drives and my old
good ones? So if sda/b are the new ddrescue'd copies and sdc/d/e are the
drives left over from the hosed grow, I'd do an assemble with force and
revert-reshape on /dev/md127 with sda, sdb, sdc, sdd and sde? Then
assemble can use the info from the copied drives to put the array back
to 7 drives? Did I understand that right?

Oh, and how can I tell if I have a timeout mismatch? They should be RAID
drives.

Cheers,
Curt

On Wed, Oct 4, 2017 at 3:01 PM, Anthony Youngman
<antlists@xxxxxxxxxxxxxxx> wrote:
> On 04/10/17 19:44, Joe Landman wrote:
>>
>> Generally speaking, 3 failed drives on a RAID6 is a dead RAID6. You
>> may get lucky, in that this may have been simply a timeout error
>> (I've seen these on consumer-grade drives), or an internal operation
>> on the drive taking longer than normal, and been booted. In which
>> case, you'll get scary warning messages, but might get your data
>> back.
>>
>> Under no circumstances do anything to change RAID metadata right now
>> (grow, shrink, etc.). Start with basic assembly. If you can do that,
>> you are in good shape. If you can't, recovery is unlikely, even with
>> heroic intervention.
>
> No - Curt needs to stop the grow before anything else will work, I
> think. Fortunately, seeing as it's hung at 0%, this shouldn't mess
> about with the data at all.
>
> And you NEED to ddrescue that fifth drive. At which point basic
> assembly should hopefully work fine.
>
> (Note that the grow quite likely failed because the fifth drive
> errored ...
>
> Oh - and are they raid drives? What are they? You don't have a timeout
> mismatch?)
>
> Cheers,
> Wol
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
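[Editor's note: a sketch of the copy step being discussed. Device names
here (/dev/sdf as a failing old member, /dev/sda as a new blank drive)
are placeholders, not Curt's actual layout — verify with lsblk/smartctl
before copying anything, and copy old -> new, never the reverse.]

```shell
# First pass: grab everything that reads cleanly, skipping the slow
# scraping of bad areas (-n). The mapfile records progress so later
# passes can resume and retry only the bad spots.
ddrescue -f -n /dev/sdf /dev/sda /root/sdf-rescue.map

# Second pass: go back for the bad areas with direct disc access (-d)
# and up to 3 retries per sector (-r3).
ddrescue -f -d -r3 /dev/sdf /dev/sda /root/sdf-rescue.map
```

After this, the original failed drive should be set aside; only the copy
takes part in any assembly attempt.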
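[Editor's note: roughly yes — though in mdadm's syntax, revert-reshape
is an --update option to --assemble, and you must never list a failed
original alongside its ddrescue'd copy. A sketch, with the same assumed
device names as in Curt's question (use partitions such as /dev/sda1 if
that's how the array was built):]

```shell
# Make sure the half-assembled array is stopped first.
mdadm --stop /dev/md127

# Assemble from the two ddrescue'd copies plus the three remaining
# members, forcing in stale devices and backing out the interrupted
# reshape. (If mdadm complains about a missing reshape backup file,
# --invalid-backup may also be needed.)
mdadm --assemble --force --update=revert-reshape /dev/md127 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

Run read-only checks (fsck -n, or mount read-only) before trusting the
result.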