Re: raid10 issues after reorder of boot drives.

On Fri, 27 Apr 2012 19:29:37 -0400 likewhoa <likewhoa@xxxxxxxxxxxxxxxx> wrote:

> On 04/27/2012 06:05 PM, NeilBrown wrote:
> > On Fri, 27 Apr 2012 17:51:54 -0400 likewhoa <likewhoa@xxxxxxxxxxxxxxxx> wrote:
> >
> >
> >> adding more verbose info gives me:
> >>
> >>> -> mdadm -A --verbose /dev/md1
> >> mdadm: looking for devices for /dev/md1
> >> mdadm: /dev/dm-8 is not one of
> >> /dev/sdg3,/dev/sdf3,/dev/sde3,/dev/sdd3,/dev/sdb3,/dev/sda3,/dev/sdc3
> > You seem to have an explicit list of devices in /etc/mdadm.conf
> > This is not a good idea for 'sd' devices as they can change their names,
> > which can mean they aren't on the list any more.  You should remove that
> > once you get this all sorted out.
> >
> > NeilBrown
> >
> >
> @Neil sorry, but I didn't get to reply-all on my last 2 emails, so
> here it goes again so it's archived.
> 
> /dev/sdh3:
>           Magic : a92b4efc
>         Version : 1.0
>     Feature Map : 0x0
>      Array UUID : 828ed03d:0c28afda:4a636e88:7b29ec9f
>            Name : Darkside:1  (local to host Darkside)
>   Creation Time : Sun Aug 15 21:12:34 2010
>      Raid Level : raid10
>    Raid Devices : 8
> 
>  Avail Dev Size : 902993648 (430.58 GiB 462.33 GB)
>      Array Size : 3611971584 (1722.32 GiB 1849.33 GB)
>   Used Dev Size : 902992896 (430.58 GiB 462.33 GB)
>    Super Offset : 902993904 sectors
>           State : clean
>     Device UUID : 00565578:e2eaaba3:f1eae17c:f474ee8d
> 
>     Update Time : Wed Apr 25 17:22:58 2012
>        Checksum : 1e7c3692 - correct
>          Events : 82942
> 
>          Layout : far=2
>      Chunk Size : 256K
> 
>    Device Role : Active device 0
>    Array State : AAAAAAAA ('A' == active, '.' == missing)
>  
> 
> The only drive that didn't get affected is far=3. Any suggestions? I
> have the drives on separate controllers, and when I created the array I
> set up the order as /dev/sda3 /dev/sde3 /dev/sdb3 /dev/sdf3 and so on,
> so I would assume the same order would be used. Also note that I ran
> luksFormat on /dev/md1, then ran pvcreate /dev/md1, and so on. Will I
> have issues with luksOpen after recreating the array? I removed the
> /dev/sdh1 drive, so now the output looks like:

As the array is "far=2", you will need to create it with --layout=f2.

That means you cannot easily detect pairs by comparing the first few
kilobytes.
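(As an illustration, not something from the original mail: with a *near*
layout and end-of-device 1.0 metadata, mirror pairs start with identical
data, so a crude pairing check like the one below would work. On far=2 it
finds nothing, which is the point above. Device names are the poster's.)

```shell
# Compare the first 64K of each pair of members; near-layout mirror
# pairs will match, far-layout members will not.
for a in /dev/sd[a-h]3; do
  for b in /dev/sd[a-h]3; do
    [ "$a" = "$b" ] && continue
    cmp -s <(dd if="$a" bs=1k count=64 2>/dev/null) \
           <(dd if="$b" bs=1k count=64 2>/dev/null) && echo "$a pairs $b"
  done
done
```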

However, if you are confident that you know the order of the drives (because
of the arrangement of controllers) then maybe you can just re-create the
array.  Make sure you set --chunk=256.
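A sketch of that create command, using the parameters from the --examine
output quoted above.  The device order after the first four is a guess
extrapolated from the pattern the poster described ("/dev/sda3 /dev/sde3
/dev/sdb3 /dev/sdf3 and so on"), and --assume-clean is essential so that
nothing gets resynced before the order is verified:

```shell
# Re-create with the exact parameters shown by --examine:
# metadata 1.0, level raid10, 8 devices, layout far=2, chunk 256K.
# --assume-clean prevents a resync from clobbering data if the
# guessed order turns out to be wrong.
mdadm --create /dev/md1 --assume-clean \
      --metadata=1.0 --level=10 --raid-devices=8 \
      --layout=f2 --chunk=256 \
      /dev/sda3 /dev/sde3 /dev/sdb3 /dev/sdf3 \
      /dev/sdc3 /dev/sdg3 /dev/sdd3 /dev/sdh3   # order after sdf3 is assumed
```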


Yes, you will need to 'luksOpen' after recreating the array.  That will
quite possibly fail if you have the order wrong.
Then you would need to "pvscan" or whatever one does to find LVM components.
If both those succeed, then try "fsck -n".
If they fail, try a different ordering.
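Something along these lines; the volume-group and logical-volume names are
placeholders to fill in from your own setup:

```shell
# 1. Open the LUKS container; a wrong member order usually fails here,
#    because the LUKS header won't decrypt.
cryptsetup luksOpen /dev/md1 md1_crypt

# 2. Find and activate the LVM volumes inside it.
pvscan
vgchange -ay

# 3. Read-only check; do NOT let fsck write until the order is confirmed.
fsck -n /dev/mapper/<vg>-<lv>
```

If any step fails, stop the array (mdadm --stop /dev/md1) and re-create it
with a different ordering, repeating until all three steps succeed.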

NeilBrown


