Re: RAID5 Rebuild Help - Possible Data Offset

On Thu Jan 30, 2014 at 10:20:45pm +0800, Yorik wrote:

> Hello
> 
> Sorry if this is a double post; I had to remove some HTML.
> 
> I've been using "mdadm" for about 8-9 years with a lot of success in
> different Array configurations. I am a huge fan of Software RAID. I
> recently suffered a very unfortunate failure, and I now have an issue
> with differing Data Offsets (among other problems) preventing my
> Array from rebuilding. A brief history follows:
> 
> Array originally created as RAID5, 3x 3TB Drives in May 2011 on a
> (then) latest Fedora build.
> Added more drives in 2012 and 2013 with upgraded Fedora builds.
> Recently added another drive and changed to RAID6, 6x6TB Drives.
> Suffered a failure shortly after adding 2 more drives, which kicked
> those last 2 drives off along with one of the 6 others that were
> previously running.
> During the RAID rebuild, my computer locked up and then suffered a
> power failure (having a bad day) that left my OS disk (SSD) unable
> to boot. Eventually rebuilt the OS with Fedora 20, wiping the
> majority of the SSD and the config files with it.
> 
> The RAID6 does not start, stating that it only has 5 of the 8 drives
> required. Looking into this, the drives seem to be clean and check
> okay, but there are event count differences (expected) and a Data
> Offset difference between the drives. I read (Neil Brown blog, dated
> 20120615073245) that this is not necessarily good for rebuilding and
> that I should ask for a version of mdadm that can handle it, as it's
> possible that creating with an older version and then adding new
> drives is causing the issue. Data Offsets of 2048 and 4096 exist.
> 
> Also of note is that some of the drives are in the Array without
> partitions, as they would not add to the Array with partitions on
> them due to their size at the time they were added. I think I had a
> version issue with fdisk, but it seemed to work okay until now.
> 
> When I run the following command:
> 
> mdadm --create --assume-clean --level=6 --raid-devices=8
> --size=2930263552 /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd
> /dev/sdi /dev/sde /dev/sdg /dev/sdf
> 
> I get:
> 
> mdadm: /dev/sda1 is smaller than given size. 2930134016K < 2930263552K
> + metadata
> mdadm: /dev/sdb1 is smaller than given size. 2930134016K < 2930263552K
> + metadata
> mdadm: /dev/sdc1 is smaller than given size. 2930134016K < 2930263552K
> + metadata
> mdadm: /dev/sdd is smaller than given size. 2930135040K < 2930263552K + metadata
> mdadm: /dev/sdi is smaller than given size. 2930135040K < 2930263552K + metadata
> mdadm: /dev/sde is smaller than given size. 2930135040K < 2930263552K + metadata
> mdadm: /dev/sdg is smaller than given size. 2930135040K < 2930263552K + metadata
> mdadm: /dev/sdf is smaller than given size. 2930135040K < 2930263552K + metadata
> mdadm: create aborted
> 
> Which makes me think that there are further issues. Should I be
> looking for an older or otherwise different version of mdadm? Can
> anyone offer some advice, or suggest where to look next?
> 
You don't mention having tried a force assemble, which should have been
done before resorting to re-creating the array. What's the output from
running "mdadm -Af /dev/md0 /dev/sd[abc]1 /dev/sd[defgi]"?

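If the forced assembly still refuses, the next thing to look at is
exactly what you described: the per-device superblocks. Something
along these lines (device list copied from your post; the egrep just
trims the output) shows the event counts, data offsets and available
sizes mdadm has recorded for each member:

    mdadm --examine /dev/sd[abc]1 /dev/sd[defgi] | \
        egrep '^/dev/|Events|Data Offset|Avail Dev Size'

Please hold off on any further --create attempts until then.
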
Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@xxxxxxxxxxxxxxx> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |
