Re: Data Offset

Sorry - here are the missing details.

/dev/sde fell out of the array, so I replaced the physical drive with
what is now /dev/sdf. udev may have relabelled the drives - smartctl
reports that the drive that is now /dev/sde works fine.
/dev/sdf is a new drive with a single, whole-disk partition whose
type is marked as raid. It is physically larger than the others.

/dev/sdf1 doesn't have an mdadm superblock. /dev/sdf seems to, so I
gave the output for that device instead of /dev/sdf1, despite the
partition table. Whole-drive RAID is fine with me, if that's what
gets the array working.
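
For reference, this is roughly how I checked (read-only commands;
device names are as on my machine):

  # Look for an md superblock on the partition and on the whole disk.
  mdadm --examine /dev/sdf1
  mdadm --examine /dev/sdf

  # Confirm the new drive's partition table and partition type.
  fdisk -l /dev/sdf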

What I'm attempting to do is rebuild the RAID from the data on the
other four drives and bring the array back up without losing any
data. /dev/sdb3, /dev/sdc3, /dev/sdd3, and what is now /dev/sde3
should be used to rebuild the array, with /dev/sdf as the new drive.
If I can get the array back up with all my data and all five drives
in use, I'll be very happy.
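
In case it's useful, here is the sequence I was planning to try. It
is only a sketch from my reading of the man page - I'm assuming the
array is /dev/md1 (from the "leyline:1" name), and the device order
is my best guess - so please correct anything that looks wrong:

  # Stop whatever half-assembled state is left over.
  mdadm --stop /dev/md1

  # Non-destructive first attempt: force-assemble the four old members.
  # --force tells mdadm to tolerate the stale event count on sde3.
  mdadm --assemble --force /dev/md1 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3

  # If the array comes up degraded, add the new drive and let it resync.
  mdadm --add /dev/md1 /dev/sdf1

As I understand it, re-creating with --create --assume-clean is the
usual last resort if assembly fails, but that is exactly where the
mismatched data offsets bite: a plain re-create lays every member out
with the same offset, and the 1024-sector members come up "too small".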

On Fri, Jun 1, 2012 at 6:52 PM, NeilBrown <neilb@xxxxxxx> wrote:
> On Fri, 1 Jun 2012 18:22:33 -0500 freeone3000 <freeone3000@xxxxxxxxx> wrote:
>
>> Hello. I have an issue concerning a broken RAID of uncertain
>> pedigree. Examining the drives shows that the data offsets are not
>> the same, as listed below.
>>
>> > It certainly won't be easy.  Though if someone did find themselves in that
>> > situation it might motivate me to enhance mdadm in some way to make it easily
>> > fixable.
>>
>> I seem to be your motivation for making this situation fixable.
>> Somehow I managed to get drives with mismatched data offsets. All
>> worked fine until a drive dropped out of the RAID5. When attempting
>> to replace it, I can re-create the RAID, but it cannot be the same
>> size, because the members with a 1024-sector data offset come up
>> "too small" when re-created with a 2048-sector offset, exactly as
>> described. Are there any recovery options I could try, including
>> simply editing the header?
>
> You seem to be leaving out some important information.
> The "mdadm --examine" of all the drives is good - thanks - but what exactly
> is your problem, and what were you trying to do?
>
> You appear to have a 5-device RAID5 of which one device (sde3) fell out of
> the array on or shortly after 23rd May, 3 drives are working fine, and one -
> sdf (not sdf3??) - is a confused spare....
>
> What exactly did you do to sdf?
>
> NeilBrown
>
>
>>
>>
>> mdadm --examine of all drives in the RAID:
>>
>> /dev/sdb3:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 9759ad94:75e30b6b:8a726b4d:177a6eda
>>            Name : leyline:1  (local to host leyline)
>>   Creation Time : Mon Sep 12 13:19:00 2011
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 3906525098 (1862.78 GiB 2000.14 GB)
>>      Array Size : 15626096640 (7451.10 GiB 8000.56 GB)
>>   Used Dev Size : 3906524160 (1862.78 GiB 2000.14 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : 872097fa:3ae66ab4:ed21256a:10a030c9
>>
>>     Update Time : Fri Jun  1 03:11:54 2012
>>        Checksum : 6d627f7a - correct
>>          Events : 2127454
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 1
>>    Array State : AAAA. ('A' == active, '.' == missing)
>>
>> /dev/sdc3:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 9759ad94:75e30b6b:8a726b4d:177a6eda
>>            Name : leyline:1  (local to host leyline)
>>   Creation Time : Mon Sep 12 13:19:00 2011
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 3906525098 (1862.78 GiB 2000.14 GB)
>>      Array Size : 15626096640 (7451.10 GiB 8000.56 GB)
>>   Used Dev Size : 3906524160 (1862.78 GiB 2000.14 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : 2ea285a1:a2342c24:ffec56a2:ba6fcf07
>>
>>     Update Time : Fri Jun  1 03:11:54 2012
>>        Checksum : fae2ea42 - correct
>>          Events : 2127454
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 0
>>    Array State : AAAA. ('A' == active, '.' == missing)
>>
>>
>> /dev/sdd3:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 9759ad94:75e30b6b:8a726b4d:177a6eda
>>            Name : leyline:1  (local to host leyline)
>>   Creation Time : Mon Sep 12 13:19:00 2011
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 3906524160 (1862.78 GiB 2000.14 GB)
>>      Array Size : 15626096640 (7451.10 GiB 8000.56 GB)
>>     Data Offset : 1024 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : 8d656a1d:bbb1da37:edaf4011:1af2bbb9
>>
>>     Update Time : Fri Jun  1 03:11:54 2012
>>        Checksum : ab4c6863 - correct
>>          Events : 2127454
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 3
>>    Array State : AAAA. ('A' == active, '.' == missing)
>>
>> /dev/sde3:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 9759ad94:75e30b6b:8a726b4d:177a6eda
>>            Name : leyline:1  (local to host leyline)
>>   Creation Time : Mon Sep 12 13:19:00 2011
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 3906524160 (1862.78 GiB 2000.14 GB)
>>      Array Size : 15626096640 (7451.10 GiB 8000.56 GB)
>>     Data Offset : 1024 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : 37bb83bd:313c9381:cabff9d0:60bd205c
>>
>>     Update Time : Wed May 23 03:30:50 2012
>>        Checksum : f72e6959 - correct
>>          Events : 2004256
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : spare
>>    Array State : AAAA. ('A' == active, '.' == missing)
>>
>> /dev/sdf:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 9759ad94:75e30b6b:8a726b4d:177a6eda
>>            Name : leyline:1  (local to host leyline)
>>   Creation Time : Mon Sep 12 13:19:00 2011
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
>>      Array Size : 15626096640 (7451.10 GiB 8000.56 GB)
>>   Used Dev Size : 3906524160 (1862.78 GiB 2000.14 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : e16d4103:cd11cc3b:bb6ee12e:5ad0a6e9
>>
>>     Update Time : Fri Jun  1 03:11:54 2012
>>        Checksum : e287a82a - correct
>>          Events : 0
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : spare
>>    Array State : AAAA. ('A' == active, '.' == missing)
>>
>> --
>> James Moore
>>
>
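
PS - on the "simply editing the header" idea: from my reading of
md_p.h (so treat the offsets as unverified), the v1.2 superblock
starts 8 sectors (4096 bytes) into the member - matching the "Super
Offset : 8 sectors" above - and data_offset is a little-endian 64-bit
field at bytes 88-95 of the struct. In principle that could be
patched with dd, but the superblock checksum would then need to be
recomputed, which is why I'd rather not hand-edit it. A read-only
look, just to sanity-check the layout:

  # Dump the v1.2 superblock, which sits 4096 bytes into the member.
  # Read-only. The magic a92b4efc shows up byte-swapped (fc 4e 2b a9)
  # at offset 0 since the fields are little-endian; data_offset is
  # the 64-bit value at bytes 88-95.
  dd if=/dev/sdd3 bs=4096 skip=1 count=1 2>/dev/null | hexdump -C | head -n 16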



-- 
James Moore