Re: lvcreate hangs always

It hangs as always (after locking the VG). From the Red Hat
documentation, it sounds as if clvmd should be started before cmirror,
or that the startup order does not matter at all:

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Logical_Volume_Manager_Administration/mirvol_create_ex.html
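For what it's worth, the two startup orderings being compared in this
thread boil down to the following. This is only a sketch of the service
sequences already shown in the logs, nothing new:

# ordering used on my node (and, per brem, in his setup): cmirror before clvmd
service cman start
service cmirror start
service clvmd start

# ordering the documentation seems to describe: clvmd before cmirror
service cman start
service clvmd start
service cmirror start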


[root@wplccdlvm445 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@wplccdlvm445 ~]# service cmirror start
Loading clustered mirror log module:                       [  OK  ]
Starting clustered mirror log server:                      [  OK  ]
[root@wplccdlvm445 ~]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VGs:   0 logical volume(s) in volume group "vg100" now active
  2 logical volume(s) in volume group "VolGroup00" now active
                                                           [  OK  ]


[root@wplccdlvm445 ~]# lvcreate -vvv -m1 --corelog -L 800M   vg100
        Processing: lvcreate -vvv -m1 --corelog -L 800M vg100
        O_DIRECT will be used
      Setting global/locking_type to 3
      Cluster locking selected.
      Getting target version for linear
        dm version   OF   [16384]
        dm versions   OF   [16384]
      Getting target version for striped
        dm versions   OF   [16384]
    Setting logging type to core
      Setting activation/mirror_region_size to 512
    Finding volume group "vg100"
      Locking VG V_vg100 PW B (0x4)
        Opened /dev/ramdisk RW O_DIRECT
        /dev/ramdisk: block size is 4096 bytes
      /dev/ramdisk: No label detected
        Closed /dev/ramdisk
        Opened /dev/root RW O_DIRECT
        /dev/root: block size is 4096 bytes
      /dev/root: No label detected
        Closed /dev/root
        Opened /dev/ram RW O_DIRECT
        /dev/ram: block size is 4096 bytes
      /dev/ram: No label detected
        Closed /dev/ram
        Opened /dev/sda1 RW O_DIRECT
        /dev/sda1: block size is 1024 bytes
      /dev/sda1: No label detected
        Closed /dev/sda1
        Opened /dev/VolGroup00/LogVol01 RW O_DIRECT
        /dev/VolGroup00/LogVol01: block size is 4096 bytes
      /dev/VolGroup00/LogVol01: No label detected
        Closed /dev/VolGroup00/LogVol01
        Opened /dev/ram2 RW O_DIRECT
        /dev/ram2: block size is 4096 bytes
      /dev/ram2: No label detected
        Closed /dev/ram2
        Opened /dev/sda2 RW O_DIRECT
        /dev/sda2: block size is 512 bytes
      /dev/sda2: lvm2 label detected
        lvmcache: /dev/sda2: now in VG #orphans_lvm2 (#orphans_lvm2)
        /dev/sda2: Found metadata at 6656 size 1150 (in area at 4096 size 192512) for VolGroup00 (iQuMpX-6Un5-Q4jX-UtXp-MHZg-AQAC-wK0FXS)
        lvmcache: /dev/sda2: now in VG VolGroup00 with 1 mdas
        lvmcache: /dev/sda2: setting VolGroup00 VGID to iQuMpX6Un5Q4jXUtXpMHZgAQACwK0FXS
        lvmcache: /dev/sda2: VG VolGroup00: Set creation host to localhost.localdomain.
        Closed /dev/sda2
        Opened /dev/ram3 RW O_DIRECT
        /dev/ram3: block size is 4096 bytes
      /dev/ram3: No label detected
        Closed /dev/ram3
        Opened /dev/ram4 RW O_DIRECT
        /dev/ram4: block size is 4096 bytes
      /dev/ram4: No label detected
        Closed /dev/ram4
        Opened /dev/ram5 RW O_DIRECT
        /dev/ram5: block size is 4096 bytes
      /dev/ram5: No label detected
        Closed /dev/ram5
        Opened /dev/ram6 RW O_DIRECT
        /dev/ram6: block size is 4096 bytes
      /dev/ram6: No label detected
        Closed /dev/ram6
        Opened /dev/ram7 RW O_DIRECT
        /dev/ram7: block size is 4096 bytes
      /dev/ram7: No label detected
        Closed /dev/ram7
        Opened /dev/ram8 RW O_DIRECT
        /dev/ram8: block size is 4096 bytes
      /dev/ram8: No label detected
        Closed /dev/ram8
        Opened /dev/ram9 RW O_DIRECT
        /dev/ram9: block size is 4096 bytes
      /dev/ram9: No label detected
        Closed /dev/ram9
        Opened /dev/ram10 RW O_DIRECT
        /dev/ram10: block size is 4096 bytes
      /dev/ram10: No label detected
        Closed /dev/ram10
        Opened /dev/ram11 RW O_DIRECT
        /dev/ram11: block size is 4096 bytes
      /dev/ram11: No label detected
        Closed /dev/ram11
        Opened /dev/ram12 RW O_DIRECT
        /dev/ram12: block size is 4096 bytes
      /dev/ram12: No label detected
        Closed /dev/ram12
        Opened /dev/ram13 RW O_DIRECT
        /dev/ram13: block size is 4096 bytes
      /dev/ram13: No label detected
        Closed /dev/ram13
        Opened /dev/ram14 RW O_DIRECT
        /dev/ram14: block size is 4096 bytes
      /dev/ram14: No label detected
        Closed /dev/ram14
        Opened /dev/ram15 RW O_DIRECT
        /dev/ram15: block size is 4096 bytes
      /dev/ram15: No label detected
        Closed /dev/ram15
        Opened /dev/sdb RW O_DIRECT
        /dev/sdb: block size is 4096 bytes
      /dev/sdb: lvm2 label detected
        lvmcache: /dev/sdb: now in VG #orphans_lvm2 (#orphans_lvm2)
        /dev/sdb: Found metadata at 4608 size 854 (in area at 4096 size 192512) for vg100 (IZRA48-hz68-x145-4YOd-Q8vo-pGT2-O7V5PW)
        lvmcache: /dev/sdb: now in VG vg100 with 1 mdas
        lvmcache: /dev/sdb: setting vg100 VGID to IZRA48hz68x1454YOdQ8vopGT2O7V5PW
        lvmcache: /dev/sdb: VG vg100: Set creation host to wplccdlvm446.cn.ibm.com.
        Opened /dev/sdc RW O_DIRECT
        /dev/sdc: block size is 4096 bytes
      /dev/sdc: lvm2 label detected
        lvmcache: /dev/sdc: now in VG #orphans_lvm2 (#orphans_lvm2)
        /dev/sdc: Found metadata at 4608 size 854 (in area at 4096 size 192512) for vg100 (IZRA48-hz68-x145-4YOd-Q8vo-pGT2-O7V5PW)
        lvmcache: /dev/sdc: now in VG vg100 (IZRA48hz68x1454YOdQ8vopGT2O7V5PW) with 1 mdas
        Using cached label for /dev/sdb
        Using cached label for /dev/sdc
        Using cached label for /dev/sdb
        Using cached label for /dev/sdc
        Read vg100 metadata (1) from /dev/sdb at 4608 size 854
        Using cached label for /dev/sdb
        Using cached label for /dev/sdc
        Read vg100 metadata (1) from /dev/sdc at 4608 size 854
        /dev/sdb 0:      0    255: NULL(0:0)
        /dev/sdc 0:      0    255: NULL(0:0)
    Archiving volume group "vg100" metadata (seqno 1).
    Creating logical volume lvol0
        Allowing allocation on /dev/sdb start PE 0 length 255
        Allowing allocation on /dev/sdc start PE 0 length 255
        Allowing allocation on /dev/sdb start PE 200 length 55
        Allowing allocation on /dev/sdc start PE 0 length 255
        Parallel PVs at LE 0 length 200: /dev/sdb
    Creating logical volume lvol0_mimage_0
        Getting device info for vg100-lvol0
        dm info LVM-IZRA48hz68x1454YOdQ8vopGT2O7V5PWxsPfOhRBpGDV6wbmcITMUwAmtX0yeVex  NF   [16384]
        dm info IZRA48hz68x1454YOdQ8vopGT2O7V5PWxsPfOhRBpGDV6wbmcITMUwAmtX0yeVex  NF   [16384]
        dm info vg100-lvol0  NF   [16384]
  cluster request failed: Invalid argument
      Inserting layer lvol0_mimage_0 for lvol0
      Stack lvol0:0[0] on LV lvol0_mimage_0:0
      Adding lvol0:0 as an user of lvol0_mimage_0
    Creating logical volume lvol0_mimage_1
      Remove lvol0:0[0] from the top of LV lvol0_mimage_0:0
      lvol0:0 is no longer a user of lvol0_mimage_0
      Stack lvol0:0[0] on LV lvol0_mimage_0:0
      Adding lvol0:0 as an user of lvol0_mimage_0
      Stack lvol0:0[1] on LV lvol0_mimage_1:0
      Adding lvol0:0 as an user of lvol0_mimage_1
        /dev/sdb 0:      0    200: lvol0_mimage_0(0:0)
        /dev/sdb 1:    200     55: NULL(0:0)
        /dev/sdc 0:      0    200: lvol0_mimage_1(0:0)
        /dev/sdc 1:    200     55: NULL(0:0)
      Locking VG P_vg100 PW B (0x4)
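While lvcreate sits on that VG lock, this is roughly what I check from
another terminal. It is only a sketch using the standard RHEL 5
cman/clvmd tools, not a full diagnosis:

# cluster membership and quorum
cman_tool status
cman_tool nodes
clustat

# fence domain and dlm lockspaces (the clvmd lockspace should show up here)
group_tool ls

# daemons on the local node
service clvmd status
service cmirror status

# any device-mapper state left behind by the hung command
dmsetup info -c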



On Mon, Dec 28, 2009 at 4:51 PM, brem belguebli
<brem.belguebli@xxxxxxxxx> wrote:
> In my setup, cmirror is started before clvmd; that may be the reason.
>
> 2009/12/28 Diamond Li <diamondiona@xxxxxxxxx>:
>> not at all, it hangs again.
>>
>> On Mon, Dec 28, 2009 at 3:22 PM, brem belguebli
>> <brem.belguebli@xxxxxxxxx> wrote:
>>> did it work ?
>>>
>>> 2009/12/28 Diamond Li <diamondiona@xxxxxxxxx>:
>>>> thanks for your reply, I have started cmirror:
>>>> [root@wplccdlvm445 ~]# service cmirror status
>>>> cmirror is running.
>>>> [root@wplccdlvm445 ~]# service clvmd status
>>>> clvmd (pid 5392) is running...
>>>>
>>>> [root@wplccdlvm445 ~]# service cman  status
>>>> cman is running.
>>>> [root@wplccdlvm445 ~]# clustat
>>>> Cluster Status for clearcase @ Mon Dec 28 10:54:49 2009
>>>> Member Status: Quorate
>>>>
>>>>  Member Name                                       ID   Status
>>>>  ------ ----                                       ---- ------
>>>>  wplccdlvm445.cn.ibm.com                               1 Online, Local
>>>>  wplccdlvm446.cn.ibm.com                               2 Online
>>>>
>>>> [root@wplccdlvm445 ~]# lvcreate  -vvv -m1 --corelog -L 800M   vg100
>>>>        Processing: lvcreate -vvv -m1 --corelog -L 800M vg100
>>>>        O_DIRECT will be used
>>>>      Setting global/locking_type to 3
>>>>      Cluster locking selected.
>>>>      Getting target version for linear
>>>>        dm version   OF   [16384]
>>>>        dm versions   OF   [16384]
>>>>      Getting target version for striped
>>>>        dm versions   OF   [16384]
>>>>    Setting logging type to core
>>>>      Setting activation/mirror_region_size to 512
>>>>    Finding volume group "vg100"
>>>>      Locking VG V_vg100 PW B (0x4)
>>>>
>>>>
>>>>
>>>> On Fri, Dec 25, 2009 at 8:04 AM, brem belguebli
>>>> <brem.belguebli@xxxxxxxxx> wrote:
>>>>> Try running cmirror (service cmirror start) before doing any clvm mirror operations.
>>>>>
>>>>>
>>>>>
>>>>> 2009/12/24 Diamond Li <diamondiona@xxxxxxxxx>:
>>>>>> Could someone kindly help me get through this? I have been
>>>>>> blocked for a very long time.
>>>>>>
>>>>>>
>>>>>> On Wed, Dec 23, 2009 at 10:38 PM, Diamond Li <diamondiona@xxxxxxxxx> wrote:
>>>>>>> Hello  everyone,
>>>>>>>
>>>>>>>  I am trying to create a mirrored LV on a cluster, but lvcreate hangs
>>>>>>>  every time and I have to kill it from another terminal. According to
>>>>>>>  the release notes, this should be supported since 5.3.
>>>>>>>
>>>>>>>  Any words of wisdom?
>>>>>>>
>>>>>>>  [root@wplccdlvm446 ~]# lvcreate -vvvv  -m1 --mirrorlog core -L 800M   vg00
>>>>>>>  #lvmcmdline.c:987         Processing: lvcreate -vvvv -m1 --mirrorlog
>>>>>>>  core -L 800M vg00
>>>>>>>  #lvmcmdline.c:990         O_DIRECT will be used
>>>>>>>  #config/config.c:950       Setting global/locking_type to 3
>>>>>>>  #locking/locking.c:253       Cluster locking selected.
>>>>>>>  #activate/activate.c:363       Getting target version for linear
>>>>>>>  #ioctl/libdm-iface.c:1672         dm version   OF   [16384]
>>>>>>>  #ioctl/libdm-iface.c:1672         dm versions   OF   [16384]
>>>>>>>  #activate/activate.c:363       Getting target version for striped
>>>>>>>  #ioctl/libdm-iface.c:1672         dm versions   OF   [16384]
>>>>>>>  #lvcreate.c:318     Setting logging type to core
>>>>>>>  #config/config.c:950       Setting activation/mirror_region_size to 512
>>>>>>>  #lvcreate.c:997     Finding volume group "vg00"
>>>>>>>  #locking/cluster_locking.c:458       Locking VG V_vg00 PW B (0x4)
>>>>>>>
>>>>>>>  [root@wplccdlvm446 ~]# cat /etc/redhat-release
>>>>>>>  Red Hat Enterprise Linux Server release 5.4 (Tikanga)
>>>>>>>
>>>>>>> [root@wplccdlvm446 ~]# uname -r
>>>>>>>  2.6.18-164.el5
>>>>>>>  Same result even when I added the nosync parameter.
>>>>>>>
>>>>>>>  [root@wplccdlvm446 ~]# lvcreate -vvvv  --nosync -m1 --mirrorlog core
>>>>>>>  -L 800M   vg00
>>>>>>>
>>>>>>

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

