Can you post the output of mdadm -E /dev/sd?1 for all your drives?

And did you pull down the latest version of mdadm from Neil's repo, build
it, and use that to undo the reshape?

John

Another> I have no clue; they were used in a temporary system for 10 days
Another> about 8 months ago, and were then used in the new array that was
Another> built back in August.
Another>
Another> Even if the metadata was removed from those two drives, the 'merge'
Another> that happened, without warning or requiring verification, now seems
Another> to have possibly 'contaminated' all the drives.
Another>
Another> I'm still reasonably convinced the data is there and intact; I just
Another> need an analytical approach to recovering it.
Another>
Another> On 4 March 2016 at 21:02, Alireza Haghdoost <alireza@xxxxxxxxxx> wrote:
>> On Fri, Mar 4, 2016 at 2:30 PM, Another Sillyname
>> <anothersname@xxxxxxxxxxxxxx> wrote:
>>> That's possibly true; however, there are lessons to be learnt here even
>>> if my array is not recoverable.
>>>
>>> I don't know the order in which a reshape does things, but I would
>>> suspect it's something along the lines of:
>>>
>>> Examine the existing array.
>>> Confirm the command can be run against the existing array configuration
>>> (i.e. it's a valid command for this array setup).
>>> Create the backup file (if specified).
>>> Set the reshape flag high.
>>> Start the reshape.
>>>
>>> I would suggest there needs to be another step in the process: before
>>> 'Set the reshape flag high', the backup file needs to be checked for
>>> consistency.
>>>
>>> My backup file appears to be just full of EOLs (for all I know the
>>> backup file actually gets 'created' during the process and therefore
>>> starts out as EOLs), but once the flag is set high you are committing
>>> the array before you know whether the backup is good.
>>>
>>> Also: the drives in this array had been working correctly for 6 months
>>> and had undergone a number of reboots.
>>>
>>> If, as we are theorising, there was metadata from a previous array setup
>>> on two of the drives which, as a result of the reshape, somehow became
>>> the 'valid' metadata for those two drives' RAID status, then I would
>>> suggest that any mdadm create should run an extensive and thorough check
>>> of the drives being used, to identify and remove any previously existing
>>> RAID metadata and thus make the drives 'clean'.
>>>
>>> On 4 March 2016 at 19:11, Alireza Haghdoost <alireza@xxxxxxxxxx> wrote:
>>>> On Fri, Mar 4, 2016 at 1:01 PM, Another Sillyname
>>>> <anothersname@xxxxxxxxxxxxxx> wrote:
>>>>>
>>>>> Thanks for the suggestion, but I'm still stuck, and there is no bug
>>>>> tracker on the mdadm git website, so I have to wait here.
>>>>>
>>>>> Ho Huum
>>>>>
>>>> Looks like it is going to be a long wait. I think you are waiting for
>>>> something that might not be in place/available at all: the capability
>>>> to reset the reshape flag when the array metadata is not consistent.
>>>> You had an old array on two of these drives, and it seems mdadm got
>>>> confused when it observed that the drives' metadata are not consistent.
>>>>
>>>> Hopefully someone will chip in with some tricks to do so without the
>>>> need to develop such functionality in mdadm.
>>
>> Do you know the metadata version that is used on those two drives? For
>> example, if it is 0.90 or 1.0 then we could easily erase the old
>> metadata, since those formats record the superblock at the end of the
>> drive, whereas the newer 1.1 and 1.2 formats store it at the beginning.
>>
>> In that case there is no risk of erasing your current array's metadata!
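For reference, a quick way to see which superblock version each member
actually carries, and roughly where it sits on the disk, is mdadm's examine
output. A minimal sketch, assuming the members are /dev/sdb1 through
/dev/sdh1 (adjust the pattern to the real setup); nothing here writes to
the drives:

    # Print superblock version, UUID and offset for every member.
    for d in /dev/sd[b-h]1; do
        echo "== $d =="
        mdadm --examine "$d" | grep -E 'Version|UUID|Super Offset'
    done

    # wipefs, run without -a, only lists the signatures it finds on a
    # device (including stale linux_raid_member superblocks) and their
    # offsets; it does not erase anything.
    wipefs /dev/sdb1

If a drive reports signatures at more than one offset, that would tend to
support the stale-metadata theory.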
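On the suggestion above that the backup file be checked before the reshape
flag is set: a rough way to see whether an existing backup file contains
anything besides padding (the path is only a placeholder):

    # Dump the start of the backup file; a file of nothing but NULs or
    # newlines is obvious at a glance.
    hexdump -C /root/grow-backup.bak | head -n 20

    # Count the bytes that are neither NUL nor newline; zero means the
    # file carries no real payload.
    tr -d '\0\n' < /root/grow-backup.bak | wc -c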
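On the point about old superblocks surviving on reused drives: mdadm can
already be told to clear them explicitly before a drive goes into a new
array. A sketch, with /dev/sdX1 as a placeholder, to be run only on a drive
whose old metadata you are certain you want gone:

    # Erase any md superblock mdadm finds on the device (md metadata only;
    # the rest of the drive's contents are untouched).
    mdadm --zero-superblock /dev/sdX1

    # wipefs -a goes further and erases every filesystem/RAID signature it
    # recognises on the device.
    wipefs -a /dev/sdX1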
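And on the question at the top about building the latest mdadm and backing
the reshape out: a minimal sketch of the general shape, assuming the
kernel.org mdadm tree (substitute Neil's own repo if preferred) and
placeholder device names; the exact assemble invocation depends on the
state the array is actually in, so treat this only as an outline:

    # Fetch and build a current mdadm without installing it.
    git clone https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
    cd mdadm
    make

    # Recent mdadm can attempt to back out an interrupted reshape while
    # assembling; the md number and member names here are placeholders.
    ./mdadm --assemble /dev/md0 --update=revert-reshape /dev/sd[b-h]1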