Re: Questions regarding startup of imsm container


So more info:

I can assemble this array as expected by doing the following:

mdadm -A /dev/md0 /dev/sd[bcde]
mdadm -I /dev/md0

I get:
# ls -l /dev/md/
total 0
lrwxrwxrwx 1 root root 6 Mar 23 08:40 0 -> ../md0
lrwxrwxrwx 1 root root 8 Mar 23 08:40 127 -> ../md127
lrwxrwxrwx 1 root root 8 Mar 23 08:40 Volume0_0 -> ../md127

and:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sdb[3] sdc[2] sdd[1] sde[0]
      2930280448 blocks super external:/md0/0 level 5, 64k chunk, algorithm 0 [4/4] [UUUU]
      [=>...................]  resync =  5.8% (57270016/976760320) finish=179.4min speed=85376K/sec

md0 : inactive sdb[3](S) sde[2](S) sdd[1](S) sdc[0](S)
      9028 blocks super external:imsm

unused devices: <none>

It's not clear whether this will force a resync on every start...
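One way to tell whether each start kicks off a fresh resync is to check the sync status right after assembly. Here is a minimal sketch; it parses /proc/mdstat-format text on stdin so it can be tried against a saved copy first (the sample below is the output from my earlier boot, and the device names are from my setup):

```shell
#!/bin/sh
# Report whether a named md array shows an active resync in mdstat output.
# Reads mdstat-format text on stdin, so it works on live /proc/mdstat
# or on a saved copy.
mdstat_resyncing() {
    awk -v dev="$1" '
        $1 == dev          { in_dev = 1; next }  # stanza for our array
        /^md[0-9]/         { in_dev = 0 }        # a different stanza starts
        in_dev && /resync/ { found = 1 }
        END                { exit found ? 0 : 1 }
    '
}

# Sample mdstat text; on a live system you would instead run:
#   mdstat_resyncing md126 < /proc/mdstat
sample='md126 : active raid5 sdb[3] sdc[2] sdd[1] sde[0]
      2930280448 blocks super external:/md127/0 level 5, 64k chunk
      [>....................]  resync =  1.8% (18285824/976760320)

md127 : inactive sde[3](S) sdb[2](S) sdc[1](S) sdd[0](S)
      9028 blocks super external:imsm'

if printf '%s\n' "$sample" | mdstat_resyncing md126; then
    echo "md126: resync in progress"
else
    echo "md126: clean"
fi
```

Running that in an init script right after `mdadm -I /dev/md0` would show whether the resync starts at assembly time or later.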


On Tue, Mar 23, 2010 at 8:33 AM, Randy Terbush <randy@xxxxxxxxxxx> wrote:
> To follow up on this startup challenge... here is what I am getting.
>
> mdraid is being started with mdadm -As
>
> I have the following in mdadm.conf
>
> HOMEHOST Volume0
> #DEVICE /dev/sd[bcde]
> AUTO +imsm hifi:0 -all
> ARRAY metadata=imsm UUID=30223250:76fd248b:50280919:0836b7f0
> ARRAY /dev/md/Volume0 container=30223250:76fd248b:50280919:0836b7f0 member=0 UUID=8a4ae452:da1e7832:70ecf895:eb58229c
>
> The following devices are being created.
>
> # ls -l /dev/md/
> total 0
> lrwxrwxrwx 1 root root 6 Mar 23 08:10 0 -> ../md0
> lrwxrwxrwx 1 root root 8 Mar 23 08:17 126 -> ../md126
> lrwxrwxrwx 1 root root 8 Mar 23 08:17 127 -> ../md127
> lrwxrwxrwx 1 root root 8 Mar 23 08:17 imsm0 -> ../md127
> lrwxrwxrwx 1 root root 8 Mar 23 08:17 Volume0 -> ../md126
>
> cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> md126 : active raid5 sdb[3] sdc[2] sdd[1] sde[0]
>      2930280448 blocks super external:/md127/0 level 5, 64k chunk, algorithm 0 [4/4] [UUUU]
>      [>....................]  resync =  1.8% (18285824/976760320) finish=182.6min speed=87464K/sec
>
> md127 : inactive sde[3](S) sdb[2](S) sdc[1](S) sdd[0](S)
>      9028 blocks super external:imsm
>
> unused devices: <none>
>
> So the container device is getting moved from md0 to md127. Not sure why.
>
> And would sure like to have a write-intent bitmap active to avoid this
> resync issue which seems to be happening way too frequently.
>
>
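On the bitmap point: for an array with native metadata the usual command is the --grow form below. As far as I can tell, mdadm does not support internal write-intent bitmaps on external (imsm) metadata, so treat this as a sketch of what I would like to be able to do, not something I have run successfully on this array:

```
# Add an internal write-intent bitmap (native metadata arrays);
# untested here -- mdadm may refuse this on an imsm container member.
mdadm --grow --bitmap=internal /dev/md/Volume0

# And to remove it again:
mdadm --grow --bitmap=none /dev/md/Volume0
```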
> On Tue, Mar 23, 2010 at 6:58 AM, Randy Terbush <randy@xxxxxxxxxxx> wrote:
>> On Tue, Mar 23, 2010 at 2:04 AM, Luca Berra <bluca@xxxxxxxxxx> wrote:
>>>> # mdadm --version
>>>> mdadm - v3.1.2 - 10th March 2010
>>>>
>>>> # mdadm -Es
>>>> ARRAY metadata=imsm UUID=30223250:76fd248b:50280919:0836b7f0
>>>> ARRAY /dev/md/Volume0 container=30223250:76fd248b:50280919:0836b7f0 member=0 UUID=8a4ae452:da1e7832:70ecf895:eb58229c
>>>>
>>>> # ls -l /dev/md/
>>>> total 0
>>>> lrwxrwxrwx 1 root root 6 Mar 22 20:54 0 -> ../md0
>>>> lrwxrwxrwx 1 root root 8 Mar 22 20:54 127 -> ../md127
>>>> lrwxrwxrwx 1 root root 8 Mar 22 20:54 Volume0_0 -> ../md127
>>>>
>>>> As you can see, the name for the link in /dev/md does not agree with
>>>> the name that the Examine is coming up with.
>>>
>>> please read mdadm.conf manpage, under the section "HOMEHOST"
>>
>> If I understand this correctly, I think there may still be a problem,
>> as I am not clear on how I could have set the homehost in the metadata
>> for this imsm array. The Volume0 name is provided by imsm and is
>> configured in the option ROM.
>>
>> The underlying question is: should the ARRAY entry in mdadm.conf be
>> changed to reflect the on-disk name of the device, or is the startup
>> process mangling that entry when it processes mdadm.conf, stripping
>> the _0 suffix?
>>
>> I'll try setting HOMEHOST <ignore> to see if I am getting expected results.
>>
>> I still seem to have some startup problems: in the output below the
>> container is now md127, though it was md0 when originally created.
>>
>> # cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
>> md126 : active raid5 sdb[3] sdc[2] sdd[1] sde[0]
>>      2930280448 blocks super external:/md127/0 level 5, 64k chunk, algorithm 0 [4/4] [UUUU]
>>
>> md127 : inactive sde[3](S) sdb[2](S) sdc[1](S) sdd[0](S)
>>      9028 blocks super external:imsm
>>
>> unused devices: <none>
>>
>> I am also running into a problem where fsck crashes during boot on
>> the ext4 filesystems that this array contains. fsck runs fine once
>> boot has completed, so I apparently have not yet found the right
>> startup ordering for this device.
>>
>>
>>>
>>>> Is it better to just forgo the ARRAY statements and go with an AUTO +imsm?
>>>>
>>
>
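For reference, here is roughly what I think mdadm.conf would look like with HOMEHOST set to <ignore>, so that mdadm stops appending the homehost-derived _0 suffix to the array name. This is a sketch using the UUIDs from my -Es output above, not a config I have verified yet:

```
# /etc/mdadm.conf -- sketch, not yet tested
HOMEHOST <ignore>
AUTO +imsm -all
ARRAY metadata=imsm UUID=30223250:76fd248b:50280919:0836b7f0
ARRAY /dev/md/Volume0 container=30223250:76fd248b:50280919:0836b7f0 member=0 UUID=8a4ae452:da1e7832:70ecf895:eb58229c
```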
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
