Re: Inactive arrays

Thanks for the help, Chris,

> Have you told us the entire story about how you got into
> this situation?

I think I have, but I can see how it could be confusing, since I also
provided information that wasn't asked for - including old records
from when the arrays were still working (more on that below).
Basically, the system was moved, which meant it was offline for a few
days; on the first boot after the move I ended up with md128 and
md129 inactive.
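
For what it's worth, this is roughly how I have been checking their
state after each boot (just the standard commands, nothing fancy):

    $ cat /proc/mdstat
    $ mdadm --detail /dev/md128 /dev/md129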

> Have you used 'mdadm create' trying to fix this? If you
> haven't, don't do it.

I haven't.

> I see a lot of conflicting information. For example:
>
>> /dev/md129:
>>         Version : 1.2
>>   Creation Time : Mon Nov 10 16:28:11 2014
>>      Raid Level : raid0
>>      Array Size : 1572470784 (1499.63 GiB 1610.21 GB)
>>    Raid Devices : 3
>>   Total Devices : 3
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Mon Nov 10 16:28:11 2014
>>           State : clean
>>  Active Devices : 3
>> Working Devices : 3
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>      Chunk Size : 512K
>>
>>            Name : lamachine:129  (local to host lamachine)
>>            UUID : 895dae98:d1a496de:4f590b8b:cb8ac12a
>>          Events : 0
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8       50        0      active sync   /dev/sdd2
>>        1       8       66        1      active sync   /dev/sde2
>>        2       8       82        2      active sync   /dev/sdf
>
>
>
>>> /dev/md129:
>>>         Version : 1.2
>>>      Raid Level : raid0
>>>   Total Devices : 1
>>>     Persistence : Superblock is persistent
>>>
>>>           State : inactive
>>>
>>>            Name : lamachine:129  (local to host lamachine)
>>>            UUID : 895dae98:d1a496de:4f590b8b:cb8ac12a
>>>          Events : 0
>>>
>>>     Number   Major   Minor   RaidDevice
>>>
>>>        -       8       50        -        /dev/sdd2
>
>
> The same md device, one raid0 one raid5. The same sdd2, one in the
> raid0, and it's also in the raid5. Which is true?

So the first record for /dev/md129 is from the time when the array
was working fine, and the second one is its current status. I think
both records show Raid Level: raid0.
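
To double-check what is actually recorded on the members themselves,
I can read the superblocks directly; something along these lines
(just a sketch, using the device names from the old record above):

    $ mdadm --examine /dev/sdd2 /dev/sde2 /dev/sdf | \
        grep -E '^/dev|Raid Level|Array UUID|Events'

That should show the raid level, array UUID and event count stored in
each member's superblock, independent of what the md device reports.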

> It sounds to me like
> you've tried recovery and did something wrong; or, about as bad,
> you've had these drives in more than one software raid setup, and you
> didn't zero out old superblocks first.

The only thing that comes to mind is that at first the system wasn't
coming up, so I tried booting from the individual drives while trying
to locate the boot device.

> Maybe start out with 'mdadm -D' on everything... literally everything,
> every whole drive (i.e. /dev/sdd, /dev/sdc, all of them) and also
> every one of their partitions; and see if it's possible to sort out
> this mess.

I will run it on all the devices, "a" to "f", and on each of their
partitions.
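
Something along these lines, unless there is a better way (a rough
sketch; I'm assuming --examine is what's wanted for the individual
drives and partitions, and --detail for the md devices themselves):

    for dev in /dev/sd[a-f] /dev/sd[a-f][0-9]; do
        [ -b "$dev" ] || continue   # skip patterns that matched nothing
        echo "=== $dev ==="
        mdadm --examine "$dev"
    done
    mdadm --detail /dev/md128 /dev/md129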

On 13 September 2016 at 21:13, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> An invalid backup GPT suggests it was stepped on by something that was
> used on the whole block device. The backup GPT is at the end of the
> drive. And if you were to use mdadm create on the entire drive rather
> than a partition, you'd step on that GPT and also incorrectly recreate
> the array. Have you told us the entire story about how you got into
> this situation? Have you used 'mdadm create' trying to fix this? If you
> haven't, don't do it.
>
> I see a lot of conflicting information. For example:
>
>> /dev/md129:
>>         Version : 1.2
>>   Creation Time : Mon Nov 10 16:28:11 2014
>>      Raid Level : raid0
>>      Array Size : 1572470784 (1499.63 GiB 1610.21 GB)
>>    Raid Devices : 3
>>   Total Devices : 3
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Mon Nov 10 16:28:11 2014
>>           State : clean
>>  Active Devices : 3
>> Working Devices : 3
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>      Chunk Size : 512K
>>
>>            Name : lamachine:129  (local to host lamachine)
>>            UUID : 895dae98:d1a496de:4f590b8b:cb8ac12a
>>          Events : 0
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8       50        0      active sync   /dev/sdd2
>>        1       8       66        1      active sync   /dev/sde2
>>        2       8       82        2      active sync   /dev/sdf
>
>
>
>>> /dev/md129:
>>>         Version : 1.2
>>>      Raid Level : raid0
>>>   Total Devices : 1
>>>     Persistence : Superblock is persistent
>>>
>>>           State : inactive
>>>
>>>            Name : lamachine:129  (local to host lamachine)
>>>            UUID : 895dae98:d1a496de:4f590b8b:cb8ac12a
>>>          Events : 0
>>>
>>>     Number   Major   Minor   RaidDevice
>>>
>>>        -       8       50        -        /dev/sdd2
>
>
> The same md device, one raid0 one raid5. The same sdd2, one in the
> raid0, and it's also in the raid5. Which is true? It sounds to me like
> you've tried recovery and did something wrong; or, about as bad,
> you've had these drives in more than one software raid setup, and you
> didn't zero out old superblocks first. If you leave old signatures
> intact you end up with this sort of ambiguity about which signature is
> correct. So now you have to figure out which one is correct and which
> one is wrong...
>
> Maybe start out with 'mdadm -D' on everything... literally everything,
> every whole drive (i.e. /dev/sdd, /dev/sdc, all of them) and also
> every one of their partitions; and see if it's possible to sort out
> this mess.
>
>
> Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


