Re: Fwd: RAID6 Array crash during reshape.....now will not re-assemble.


 



I have no clue; they were used in a temporary system for 10 days about
8 months ago, then used in the new array that was built back in August.

Even if the metadata was removed from those two drives, the 'merge'
that happened, without warning or requiring verification, now seems to
have possibly 'contaminated' all the drives.

I'm still reasonably convinced the data is there and intact; I just
need an analytical approach to recovering it.
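
A starting point for that (a sketch only; the device names
/dev/sd[b-i] are placeholders for the actual members):

# Compare what each member drive believes about the array; mismatched
# event counts or array UUIDs should point at the drives carrying
# stale metadata.
for d in /dev/sd[b-i]; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'Array UUID|Events|Update Time'
done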



On 4 March 2016 at 21:02, Alireza Haghdoost <alireza@xxxxxxxxxx> wrote:
> On Fri, Mar 4, 2016 at 2:30 PM, Another Sillyname
> <anothersname@xxxxxxxxxxxxxx> wrote:
>> That's possibly true; however, there are lessons to be learnt here
>> even if my array is not recoverable.
>>
>> I don't know the order of operations when doing a reshape... but I
>> would suspect it's something along the lines of the steps below (an
>> example command follows the list).
>>
>> Examine the existing array.
>> Confirm the command can be run against the existing array
>> configuration (i.e. it's a valid command for this array setup).
>> Create the backup file (if specified).
>> Set the reshape flag high.
>> Start the reshape.
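>>
>> (For illustration, a reshape of that sort is typically started with
>> something like the line below; the array name, device count and
>> backup path are placeholders.)
>>
>> # hypothetical example: grow a RAID6 from 6 to 8 devices with a backup file
>> mdadm --grow /dev/md0 --raid-devices=8 --backup-file=/root/md0-grow.backup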
>>
>> I would suggest there needs to be another step in the process:
>> before 'Set the reshape flag high', the backup file needs to be
>> checked for consistency.
>>
>> My backup file appears to be just full of EOLs (for all I know the
>> backup file actually gets 'created' during the process and therefore
>> starts out as EOLs).  But once the flag is set high you are
>> committing the array before you know whether the backup is good.
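>>
>> (Checking whether the backup file is just padding is
>> straightforward; a sketch, with the path being a placeholder:)
>>
>> # a file of nothing but 0x0a bytes collapses to one line plus '*' here;
>> # a real backup would show varied, non-trivial content
>> hexdump -C /root/md0-grow.backup | head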
>>
>> Also
>>
>> The drives in this array had been working correctly for 6 months and
>> had undergone a number of reboots.
>>
>> If, as we are theorising, there was some metadata from a previous
>> array setup on two of the drives which, as a result of the reshape,
>> somehow became the 'valid' metadata for those two drives' RAID
>> status, then I would suggest that during any mdadm create process
>> there should be an extensive and thorough check of the drives being
>> used, to identify and remove any pre-existing RAID metadata... thus
>> making the drives 'clean'.
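>>
>> (A manual version of that pre-flight check is possible today; wipefs
>> and mdadm --zero-superblock are the standard tools, the device name
>> is a placeholder, and the destructive step must never be pointed at
>> a member of a live array:)
>>
>> # report any pre-existing RAID/filesystem signatures without touching them
>> wipefs -n /dev/sdX
>> mdadm --examine /dev/sdX
>> # destructive: erase the md superblock, only on a drive confirmed unused
>> mdadm --zero-superblock /dev/sdX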
>>
>>
>>
>>
>>
>>
>> On 4 March 2016 at 19:11, Alireza Haghdoost <alireza@xxxxxxxxxx> wrote:
>>> On Fri, Mar 4, 2016 at 1:01 PM, Another Sillyname
>>> <anothersname@xxxxxxxxxxxxxx> wrote:
>>>>
>>>>
>>>> Thanks for the suggestion, but I'm still stuck, and there is no
>>>> bug tracker on the mdadm git website, so I have to wait here.
>>>>
>>>> Ho Huum
>>>>
>>>>
>>>
>>> Looks like it is going to be a long wait. I think you are waiting
>>> for something that might not be in place/available at all: the
>>> capability to reset the reshape flag when the array metadata is not
>>> consistent. You had an old array on two of these drives, and it
>>> seems mdadm gets confused when it observes that the drives'
>>> metadata are not consistent.
>>>
>>> Hope someone chips in with some tricks to do so without the need to
>>> develop such functionality in mdadm.
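>>>
>>> (The closest existing knob I know of is --update=revert-reshape on
>>> assemble; whether it copes with inconsistent metadata like this is
>>> exactly the open question. A sketch, with the array and device
>>> names as placeholders:)
>>>
>>> mdadm --assemble /dev/md0 --update=revert-reshape /dev/sd[b-i]
>>> # if the backup file is known to be useless, --invalid-backup tells
>>> # mdadm not to trust it
>>> mdadm --assemble /dev/md0 --update=revert-reshape --invalid-backup /dev/sd[b-i]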
>
> Do you know the metadata version that is used on those two drives?
> For example, if the version is 0.90 (or 1.0) then we could easily
> erase the old metadata, since those versions are recorded at the end
> of the drive. Newer metadata versions (1.1 and 1.2) are stored near
> the beginning of the drive.
>
> Therefore, there is no risk of erasing your current array's metadata!
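>
> (Checking which version each drive carries is a one-liner; the
> device name is a placeholder:)
>
> # reports e.g. 'Version : 0.90' or 'Version : 1.2'
> mdadm --examine /dev/sdX | grep -i version
> # a 0.90 superblock can also be probed for explicitly
> mdadm --examine -e 0.90 /dev/sdX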




