Re: linux raid recreate

On Tue, 6 Apr 2010 23:37:02 +0530
Anshuman Aggarwal <anshuman@xxxxxxxxxxxxx> wrote:

> I've just had to recreate my raid5 device by using 
> mdadm --create --assume-clean -n4 -l5 -e1.2 -c64 
> 
> in order to recover my data (because --assemble would not work even with --force, etc.).
> The problem:
>  * Data Offset in the new array is much larger.
>  * Internal Bitmap starts at a different number of sectors from the superblock.
>  * Array Size is smaller even though the disks are the same.
> 
> How can I get these to be the same as they were in the original array?

Use the same version of mdadm as you used to originally create the array.
Judging from the data, that was probably 2.6.9, though 3.1.1 seems to create
the same layout. So anything before 3.1.2 should work.
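
Something like the following, run with one of those older mdadm binaries; the
device names and order below are placeholders (the original ordering isn't
shown in this thread and has to match what the array was built with, and
/dev/md127 is only guessed from the "GATEWAY:127" name):

    # check which mdadm you are running
    mdadm --version

    # re-issue the create with the original parameters, substituting the
    # real devices in their original order
    mdadm --create /dev/md127 --assume-clean --metadata=1.2 \
          --level=5 --raid-devices=4 --chunk=64 \
          /dev/sdX5 /dev/sdb5 /dev/sdY5 /dev/sdZ5

The data offset is chosen by mdadm itself at --create time (these versions
have no command-line override for it), which is why matching the mdadm
version is what brings back the original 272-sector offset.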

I really should write a "--recreate" option for mdadm which uses whatever
parameters it finds already on the devices.
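
As a rough sketch of the idea (not an existing mdadm feature), a small shell
wrapper could scrape the parameters out of the old superblock with --examine
and print the matching --create command; the device list would still have to
be supplied in the original order:

    #!/bin/sh
    # Read level, device count and chunk size from an existing 1.2 superblock
    # and print the equivalent --create command.  Illustration only.
    DEV=/dev/sdb5
    LEVEL=$(mdadm --examine "$DEV" | awk -F': *' '/Raid Level/   {print $2}')
    NDEV=$(mdadm --examine "$DEV"  | awk -F': *' '/Raid Devices/ {print $2}')
    CHUNK=$(mdadm --examine "$DEV" | awk -F': *' '/Chunk Size/   {print $2}' | tr -d K)
    # The data offset is not a --create parameter here, so the mdadm version
    # still has to match for the offset to come out right.
    echo "mdadm --create /dev/mdX --assume-clean --metadata=1.2 \\"
    echo "      --level=$LEVEL --raid-devices=$NDEV --chunk=$CHUNK  <devices in original order>"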

NeilBrown


> 
> I have tried to make sure that nothing gets written to the md device except the metadata during create. 
> All of these are important because the fs on top of the LVM on top of the md will need all the data it can get to fsck properly, and I don't want it starting at the wrong offset.
> 
> I am including the output from mdadm --examine from before and after the create
> 
> Originally...
> 
> >>> /dev/sdb5:
> >>>        Magic : a92b4efc
> >>>      Version : 1.2
> >>>  Feature Map : 0x1
> >>>   Array UUID : 42c56ea0:2484f566:387adc6c:b3f6a014
> >>>         Name : GATEWAY:127  (local to host GATEWAY)
> >>> Creation Time : Sat Aug 22 09:44:21 2009
> >>>   Raid Level : raid5
> >>> Raid Devices : 4
> >>> 
> >>> Avail Dev Size : 586099060 (279.47 GiB 300.08 GB)
> >>>   Array Size : 1758296832 (838.42 GiB 900.25 GB)
> >>> Used Dev Size : 586098944 (279.47 GiB 300.08 GB)
> >>>  Data Offset : 272 sectors
> >>> Super Offset : 8 sectors
> >>>        State : clean
> >>>  Device UUID : f8ebb9f8:b447f894:d8b0b59f:ca8e98eb
> >>> 
> >>> Internal Bitmap : 2 sectors from superblock
> >>>  Update Time : Fri Mar 19 00:56:15 2010
> >>>     Checksum : 1005cfbc - correct
> >>>       Events : 3796145
> >>> 
> >>>       Layout : left-symmetric
> >>>   Chunk Size : 64K
> >>> 
> >>> Device Role : Active device 2
> >>> Array State : .AA. ('A' == active, '.' == missing)
> 
> New...
> 
> /dev/sdb5:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 8588b69c:c0579680:8a63486a:cbcb0e7d
>            Name : GATEWAY:511  (local to host GATEWAY)
>   Creation Time : Tue Apr  6 01:53:25 2010
>      Raid Level : raid5
>    Raid Devices : 4
> 
>  Avail Dev Size : 586097284 (279.47 GiB 300.08 GB)
>      Array Size : 1758290688 (838.42 GiB 900.24 GB)
>   Used Dev Size : 586096896 (279.47 GiB 300.08 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 8 sectors
>           State : clean
>     Device UUID : 13d6a075:c1cad6dc:c13c3d98:e4b980e9
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Tue Apr  6 23:23:07 2010
>        Checksum : df3cb34f - correct
>          Events : 4
> 
>          Layout : left-symmetric
>      Chunk Size : 64K
> 
>    Device Role : Active device 2
>    Array State : .AAA ('A' == active, '.' == missing)
> 
> 
> 

--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
