Error Attempting to Create LV on Clustered VG

Hi,

I'm attempting to create a logical volume on a clustered volume group
in a two-node Pacemaker + Corosync cluster (active/active). I have
successfully created the VG, and it appears to be available on both hosts:

[root@bill ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  r0     1   0   0 wz--nc 203.24g 203.24g

[root@ben ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  r0     1   0   0 wz--nc 203.24g 203.24g
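
For reference, the VG was created as a clustered VG; the sequence was roughly
the following (the PV path below is just a placeholder, not the real device name):

  # run once from one node, with clvmd already running on both
  pvcreate /dev/mapper/shared_disk                     # placeholder device path
  vgcreate --clustered y r0 /dev/mapper/shared_disk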

When I attempt to create an LV on this VG from one of the hosts, I get
the following error:
[root@ben ~]# lvcreate -L 150G -n testvmfs1 r0
  Error locking on node 40e6640a: Invalid argument
  Error locking on node 31e6640a: Invalid argument
  Failed to activate new LV.
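
In case the locking configuration matters: clvmd is running on both nodes and
lvm.conf is set up for cluster locking. A check along these lines is what I
have in mind (I'd expect locking_type = 3 when clvmd is in use):

  grep '^[[:space:]]*locking_type' /etc/lvm/lvm.conf   # should show locking_type = 3
  pidof clvmd                                          # clvmd running on both nodes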

I turned on the debug option for clvmd, and what appears to be the
relevant part of the log is below (full logs for both hosts are attached):
--snip--
Mar 19 16:37:02 ben user.debug lvm[2845]: Sending message to all cluster nodes
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 837182474 for 0. len 84
Mar 19 16:37:02 ben user.debug lvm[2845]: lock_resource 'bZ6A3DuNVyQTbWKDtWgP9vQIzopLKfBNSzatwqVGb32qIU1v00DBvD3nZmiaNYbh', flags=1, mode=1
Mar 19 16:37:02 ben user.debug lvm[2845]: dlm_ls_lock returned 22
Mar 19 16:37:02 ben user.debug lvm[2845]: hold_lock. lock at 1 failed: Invalid argument
Mar 19 16:37:02 ben user.debug lvm[2845]: Command return is 22, critical_section is 0
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 31e6640a: 17 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 1 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: LVM thread waiting for work
Mar 19 16:37:02 ben user.debug lvm[2845]: 837182474 got message from nodeid 1088840714 for 837182474. len 35
Mar 19 16:37:02 ben user.debug lvm[2845]: Reply from node 40e6640a: 17 bytes
Mar 19 16:37:02 ben user.debug lvm[2845]: Got 2 replies, expecting: 2
Mar 19 16:37:02 ben user.debug lvm[2845]: Got post command condition...
--snip--
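
If I'm reading the log right, the 22 returned by dlm_ls_lock is errno 22
(EINVAL), which is what comes back to lvcreate as "Invalid argument". I can
also pull the kernel-side DLM messages from around the failure if that would
help, e.g.:

  dmesg | grep -i dlm                    # kernel dlm messages
  grep -i dlm /var/log/messages          # syslog path may differ by distro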

I've Googled around a bit but haven't found any other posts quite
like this one. Here are the software versions I'm running on these
hosts:
Linux kernel: 3.7.8
LVM2: 2.02.97
DLM User Tools (dlm_controld): 4.0.1
Corosync: 2.3.0
Pacemaker: 1.1.8

I built LVM2 with the following configure script options:
--with-lvm1=none --disable-selinux --prefix=/usr --with-clvmd=corosync
--with-cluster=internal --enable-ocf --enable-cmirrord
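
In case the build itself is suspect, I can confirm what is actually installed
and running on both hosts with something like:

  lvm version          # reports the LVM2 and library/driver versions of the installed build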

Here is dlm_tool information from each host (not sure if it's helpful,
but just in case):

[root@bill ~]# dlm_tool -n ls
dlm lockspaces
name          clvmd
id            0x4104eefa
flags         0x00000000
change        member 2 joined 1 remove 0 failed 0 seq 3,3
members       837182474 1088840714
all nodes
nodeid 837182474 member 1 failed 0 start 1 seq_add 3 seq_rem 2 check none
nodeid 1088840714 member 1 failed 0 start 1 seq_add 1 seq_rem 0 check none

[root@bill ~]# dlm_tool status
cluster nodeid 1088840714 quorate 1 ring seq 316 316
daemon now 12430 fence_pid 0
node 837182474 M add 460 rem 178 fail 0 fence 0 at 0 0
node 1088840714 M add 119 rem 0 fail 0 fence 0 at 0 0

[root@ben ~]# dlm_tool -n ls
dlm lockspaces
name          clvmd
id            0x4104eefa
flags         0x00000000
change        member 2 joined 1 remove 0 failed 0 seq 1,1
members       837182474 1088840714
all nodes
nodeid 837182474 member 1 failed 0 start 1 seq_add 1 seq_rem 0 check none
nodeid 1088840714 member 1 failed 0 start 1 seq_add 1 seq_rem 0 check none

[root@ben ~]# dlm_tool status
cluster nodeid 837182474 quorate 1 ring seq 316 316
daemon now 12174 fence_pid 0
node 837182474 M add 115 rem 0 fail 0 fence 0 at 0 0
node 1088840714 M add 115 rem 0 fail 0 fence 0 at 0 0
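
If more DLM detail would help, I can also dump the lock state for the clvmd
lockspace and the daemon debug buffer from either node, along the lines of:

  dlm_tool lockdebug clvmd     # lock state for the clvmd lockspace
  dlm_tool dump                # dlm_controld debug buffer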


I'd be grateful if anyone had a few minutes to look at this --
hopefully it's something simple I'm missing. =)
Please let me know if you need any additional information.


Thanks,

Marc

Attachment: ben_debug
Description: Binary data

Attachment: bill_debug
Description: Binary data

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
