Re: MD array keeps resyncing after rebooting

On Wed, Jul 31, 2013 at 9:36 PM, Francis Moreau <francis.moro@xxxxxxxxx> wrote:
> Hello Martin,
>
> I finally managed to get more information.
>
> After the resync finished I have the following state:
>
> partial content of /sys/block/md126/md:
> --------------------------------------
> array_size           default
> array_state          active
> chunk_size           65536
> component_size       975585280
> degraded             0
> layout               0
> level                raid1
> max_read_errors      20
> metadata_version     external:/md127/0
> mismatch_cnt         0
> raid_disks           2
> reshape_position     none
> resync_start         none
> safe_mode_delay      0.000
> suspend_hi           0
> suspend_lo           0
> sync_action          idle
> sync_completed       none
>
> # cat /proc/mdstat
> Personalities : [raid1]
> md126 : active raid1 sdb[1] sda[0]
>       975585280 blocks super external:/md127/0 [2/2] [UU]
>
> md127 : inactive sdb[1](S) sda[0](S)
>       2354608 blocks super external:ddf
>
> unused devices: <none>
>
> # mdadm -E /dev/sda | egrep "GUID|state"
> Controller GUID : 4C534920:20202020:FFFFFFFF:FFFFFFFF:FFFFFFFF:FFFFFFFF
>  Container GUID : 4C534920:20202020:80861D6B:10140432:3F14FDAD:5271FC67
>       VD GUID[0] : 4C534920:20202020:80861D60:00000000:3F2A56A7:00001450
>         state[0] : Optimal, Not Consistent
>    init state[0] : Fully Initialised
>
> Same for /dev/sdb
>
> As you noticed, the state is "Not Consistent". In my understanding it
> becomes "Consistent" when the array is stopped.
>
> I checked during the shutdown process that the array is correctly
> stopped, since at that point I got:
>
> # mdadm -E /dev/sda | egrep "state"
>         state[0] : Optimal, Consistent
>    init state[0] : Fully Initialised
>
> After rebooting, it appears that the BIOS changed a part of VD
> GUID[0]. I'm not sure whether that can confuse the kernel, or whether
> it's the reason why the kernel shows:

To be more accurate, I stopped the boot during initramfs execution,
before udev is started, so I'm pretty sure that no mdadm commands had
been issued yet.
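
(For anyone who wants to reproduce this check: a shell before udev starts
can be obtained with the usual initramfs breakpoints; on a dracut-based
initramfs, for example, adding something like

    rd.break=pre-udev

to the kernel command line should drop to a shell at that point, where the
metadata can then be examined with mdadm -E before any udev rules or mdadm
commands have run. Other initramfs implementations have similar options.)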

And at this point the kernel message "md/raid1:md126: not clean --
starting background reconstruction" had not been emitted yet, and I can
see this:

# mdadm -E /dev/sda | egrep "state"
        state[0] : Optimal, Consistent
   init state[0] : Not Initialised

So the "init state[0]" has been changed from "Initialised" to "Not Initialised".

How is this possible?
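
If it would help to confirm whether the BIOS really rewrites the DDF
metadata across the reboot, I can dump and hash the tail of each disk
(where the DDF anchor and metadata should live) before and after
rebooting, along these lines (assuming 512-byte sectors and that the
last 32 MiB is enough to cover the metadata area):

# SECTORS=$(blockdev --getsz /dev/sda)
# dd if=/dev/sda of=/root/sda-tail.bin bs=512 skip=$((SECTORS - 65536)) count=65536
# sha256sum /root/sda-tail.bin

and then compare the two dumps (e.g. with cmp -l) to see exactly which
bytes changed.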

Help much appreciated.
-- 
Francis



