Re: reconstruct raid superblock

On Wed, Dec 16, 2009 at 10:36 PM, Carl Karsten <carl@xxxxxxxxxxxxxxxxx> wrote:
> On Thu, Dec 17, 2009 at 12:26 AM, Michael Evans <mjevans1983@xxxxxxxxx> wrote:
>> On Wed, Dec 16, 2009 at 10:18 PM, Carl Karsten <carl@xxxxxxxxxxxxxxxxx> wrote:
>>> A degraded array is just missing the redundant data, not needed data, right?
>>>
>>> I am pretty sure I need all 4 disks.
>>>
>>> Is there any reason to 0 out the bytes I want replaced with good bytes?
>>>
>>> On Wed, Dec 16, 2009 at 11:21 PM, Majed B. <majedb@xxxxxxxxx> wrote:
>>>> If your other disks are sane and you are able to run a degraded array, then
>>>> you can remove grub using dd then re-add the disk to the array.
>>>>
>>>> To clear the first 1MB of the disk:
>>>> dd if=/dev/zero of=/dev/sdx bs=1M count=1
>>>> Replace sdx with the disk name that has grub.
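>>>> After that, a rough sketch of the re-add step (assuming the array is
>>>> /dev/md0 -- both device names here are placeholders, adjust to your setup):
>>>> mdadm /dev/md0 --re-add /dev/sdx
>>>> or, if the old superblock is gone and --re-add is refused, add it back
>>>> as a fresh member:
>>>> mdadm /dev/md0 --add /dev/sdx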
>>>>
>>>> On Dec 17, 2009 6:53 AM, "Carl Karsten" <carl@xxxxxxxxxxxxxxxxx> wrote:
>>>>
>>>> I took over a box that had 1 ide boot drive, 6 sata raid drives (4
>>>> internal, 2 external).  I believe the 2 externals were redundant, so
>>>> could be removed.  So I did, and mkfs-ed them.  Then I installed
>>>> ubuntu to the ide, and installed grub to sda, which turns out to be
>>>> the first sata.  Which would be fine if the raid was on sda1, but it
>>>> is on sda, and now the raid won't assemble.  No surprise, and I do
>>>> have a backup of the data spread across 5 external drives.  But before
>>>> I abandon the array, I am wondering if I can fix it by recreating
>>>> mdadm's metadata on sda, given I have sd[bcd] to work with.
>>>>
>>>> any suggestions?
>>>>
>>>> root@dhcp128:~# mdadm --examine /dev/sd[abcd]
>>>> mdadm: No md superblock detected on /dev/sda.
>>>> /dev/sdb:
>>>>           Magic : a92b4efc
>>>>         Version : 00.90.00
>>>>            UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>>>>   Creation Time : Wed Mar 25 21:04:08 2009
>>>>      Raid Level : raid6
>>>>   Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>>>>      Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>>>>    Raid Devices : 6
>>>>   Total Devices : 6
>>>> Preferred Minor : 0
>>>>
>>>>     Update Time : Tue Mar 31 23:08:02 2009
>>>>           State : clean
>>>>  Active Devices : 5
>>>> Working Devices : 6
>>>>  Failed Devices : 1
>>>>   Spare Devices : 1
>>>>        Checksum : a4fbb93a - correct
>>>>          Events : 8430
>>>>
>>>>      Chunk Size : 64K
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     6       8       16        6      spare   /dev/sdb
>>>>
>>>>    0     0       8        0        0      active sync   /dev/sda
>>>>    1     1       8       64        1      active sync   /dev/sde
>>>>    2     2       8       32        2      active sync   /dev/sdc
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       0        0        4      faulty removed
>>>>    5     5       8       80        5      active sync
>>>>    6     6       8       16        6      spare   /dev/sdb
>>>> /dev/sdc:
>>>>           Magic : a92b4efc
>>>>         Version : 00.90.00
>>>>            UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>>>>   Creation Time : Wed Mar 25 21:04:08 2009
>>>>      Raid Level : raid6
>>>>   Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>>>>      Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>>>>    Raid Devices : 6
>>>>   Total Devices : 4
>>>> Preferred Minor : 0
>>>>
>>>>     Update Time : Sun Jul 12 11:31:47 2009
>>>>           State : clean
>>>>  Active Devices : 4
>>>> Working Devices : 4
>>>>  Failed Devices : 2
>>>>   Spare Devices : 0
>>>>        Checksum : a59452db - correct
>>>>          Events : 580158
>>>>
>>>>      Chunk Size : 64K
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     2       8       32        2      active sync   /dev/sdc
>>>>
>>>>    0     0       8        0        0      active sync   /dev/sda
>>>>    1     1       0        0        1      faulty removed
>>>>    2     2       8       32        2      active sync   /dev/sdc
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       0        0        4      faulty removed
>>>>    5     5       8       96        5      active sync
>>>> /dev/sdd:
>>>>           Magic : a92b4efc
>>>>         Version : 00.90.00
>>>>            UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>>>>   Creation Time : Wed Mar 25 21:04:08 2009
>>>>      Raid Level : raid6
>>>>   Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>>>>      Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>>>>    Raid Devices : 6
>>>>   Total Devices : 4
>>>> Preferred Minor : 0
>>>>
>>>>     Update Time : Sun Jul 12 11:31:47 2009
>>>>           State : clean
>>>>  Active Devices : 4
>>>> Working Devices : 4
>>>>  Failed Devices : 2
>>>>   Spare Devices : 0
>>>>        Checksum : a59452ed - correct
>>>>          Events : 580158
>>>>
>>>>      Chunk Size : 64K
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     3       8       48        3      active sync   /dev/sdd
>>>>
>>>>    0     0       8        0        0      active sync   /dev/sda
>>>>    1     1       0        0        1      faulty removed
>>>>    2     2       8       32        2      active sync   /dev/sdc
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       0        0        4      faulty removed
>>>>    5     5       8       96        5      active sync
>>>>
>>>> --
>>>> Carl K
>>>
>>>
>>>
>>> --
>>> Carl K
>>
>> You may want to recreate the array anyway to gain the benefits from
>> the 1.x metadata format (such as storing resync resume info).
>>
>> It would also be a good idea to think through exactly what you need to
>> do.  As long as you still have at least one parity device's worth of
>> redundancy, you can (assuming no other hardware error) --fail any single
>> device in the array, --remove it, --zero-superblock that device, then
>> re-add it as a fresh spare.
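>> A minimal sketch of that sequence (assuming the array is /dev/md0 and
>> /dev/sdX is the member in question -- both names are placeholders,
>> adjust to your setup):
>>
>> mdadm /dev/md0 --fail /dev/sdX        # mark the member faulty
>> mdadm /dev/md0 --remove /dev/sdX      # pull it out of the array
>> mdadm --zero-superblock /dev/sdX      # wipe its old md metadata
>> mdadm /dev/md0 --add /dev/sdX         # add it back; it rebuilds as a fresh member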
>>
>>
>
> Do I have one parity device?
>
> btw - all I need to do is get the array assembled and the fs mounted
> one more time so I can copy the data onto some externals and drive it
> over to the data centre where it will be uploaded into crazy raid
> land.  So no point in adding hardware or any steps that are not needed
> to just read the files.
>
> --
> Carl K
>

Sorry, forgot to hit reply-all last time (gmail has buttons at the
top and bottom, but I know of no way to tell it I'm on a list so that
the default action becomes reply-all instead of just reply).

Looking at it, you seem to have one STALE disk and four in your
current array.  It looks like you have ZERO spares and zero spare
parity devices (it looks like you started with 6 devices, 2 of them
parity, and have since lost two devices).  Since there is no other
copy of the data to compare against, your array could at this point
be accumulating unrecoverable or silently failed sectors on the
drives without you knowing, if I understand correctly what
information is stored.

cat /proc/mdstat will give you more information about which devices
are in what state.  However, it looks like you could re-add one of
the devices you listed to the array, let it resync, and then you
would have a parity device again.

Of course, if the device in question is the one you want to alter,
then you should do so before re-adding it.
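
In rough terms (the md device name and /dev/sdX below are placeholders,
not a recipe for your exact disks -- check the /proc/mdstat and
--examine output first):

cat /proc/mdstat                    # which arrays/members the kernel sees
mdadm --examine /dev/sd[abcd]       # compare Events counts to spot the stale member
mdadm --zero-superblock /dev/sdX    # wipe the stale member's old metadata
mdadm /dev/md0 --add /dev/sdX       # add it back; the array resyncs onto it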
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
