clvmd+corosync does not lock devices

Hi,

This is my first mail here, so hi all :)

I am running corosync 1.3.3, openais 1.1.4 and lvm2 2.02.88 on top of Gentoo.

In lvm.conf I have locking_type = 3, and lvm2 was built with --with-cluster=corosync.
corosync is started with "OPENAIS_SERVICES=yes".
clvmd requires corosync at startup (otherwise it refuses to set itself up properly), and when I run `lvs` to list all volumes I can correctly see a shared AOE volume group (vgAOE20) with the "c" (clustered) flag set.
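
For reference, this is roughly what the relevant pieces of configuration look like here (an lvm.conf excerpt plus the Gentoo conf.d entry; exact file paths may differ elsewhere, so treat this as a sketch of my setup rather than a recipe):

--
# /etc/lvm/lvm.conf (excerpt)
global {
        # 3 = built-in clustered locking, handled by clvmd
        locking_type = 3
}

# /etc/conf.d/corosync (Gentoo init script configuration)
# make the init script load the openais services (LCK etc.) on top of corosync
OPENAIS_SERVICES=yes
--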

With clvmd in debug mode I can also see communication between the nodes when I run lvs (log attached).

But when I mount a volume from that VG, there is no communication between the nodes at all.
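
Concretely, the sequence I am testing is something like the following (backup-root is one of the LVs listed below, and /mnt/backup is just an example mount point):

--
# run clvmd in the foreground with debugging on one node
clvmd -d

# in another shell: this does generate lock traffic on both nodes
lvs

# this generates no inter-node traffic at all
mount /dev/vgAOE20/backup-root /mnt/backup
--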

Any hints, FAQs, documentation or debug procedures to suggest?

thanks,
Daniele

corosync.conf:
--
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: on
        #debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}
--

pvsrv01 ~ # vgs
  VG      #PV #LV #SN Attr   VSize VFree
  vgAOE20   1   8   0 wz--nc 1.82t 1.79t
pvsrv01 ~ # lvs
  LV            VG      Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  backup-boot   vgAOE20 -wi-a- 52.00m
  backup-root   vgAOE20 -wi-ao  5.00g
  backup-swap   vgAOE20 -wi-a-  1.00g
  backup-var    vgAOE20 -wi-a- 10.00g
  testldap-boot vgAOE20 -wi-a- 52.00m
  testldap-root vgAOE20 -wi-a-  5.00g
  testldap-swap vgAOE20 -wi-a-  1.00g
  testldap-var  vgAOE20 -wi-a-  5.00g
pvsrv01 ~ #


lvs debug log:

clvmd:
--
CLVMD[bbccc700]: Oct  4 10:31:31 1694607552 got message from nodeid 1744939200 for 0. len 31
CLVMD[bbccc700]: Oct  4 10:31:31 add_to_lvmqueue: cmd=0x898440. client=0x69c660, msg=0x7f0aba1dea2c, len=31, csid=0x7fffdffaea5c, xid=0
CLVMD[b9ddd700]: Oct  4 10:31:31 process_work_item: remote
CLVMD[b9ddd700]: Oct 4 10:31:31 process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 391 on node 6801a8c0
CLVMD[b9ddd700]: Oct  4 10:31:31 Syncing device names
CLVMD[b9ddd700]: Oct  4 10:31:31 LVM thread waiting for work
CLVMD[bbccc700]: Oct  4 10:31:31 1694607552 got message from nodeid 1694607552 for 1744939200. len 18
CLVMD[bbccc700]: Oct  4 10:31:32 1694607552 got message from nodeid 1744939200 for 0. len 31
CLVMD[bbccc700]: Oct  4 10:31:32 add_to_lvmqueue: cmd=0x898440. client=0x69c660, msg=0x7f0aba1debdc, len=31, csid=0x7fffdffaea5c, xid=0
CLVMD[b9ddd700]: Oct  4 10:31:32 process_work_item: remote
CLVMD[b9ddd700]: Oct 4 10:31:32 process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 451 on node 6801a8c0
CLVMD[b9ddd700]: Oct  4 10:31:32 Syncing device names
CLVMD[b9ddd700]: Oct  4 10:31:32 LVM thread waiting for work
CLVMD[bbccc700]: Oct 4 10:31:32 1694607552 got message from nodeid 1694607552 for 1744939200. len 18
--

corosync:
--
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 6c
Oct 04 11:00:10 corosync [TOTEM ] Delivering 6b to 6c
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 6c to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceOpen
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 6c
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 6d
Oct 04 11:00:10 corosync [TOTEM ] Delivering 6c to 6d
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 6d to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceLock
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 6d
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 6e
Oct 04 11:00:10 corosync [TOTEM ] Delivering 6d to 6e
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 6e to pending delivery queue
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 6e
Oct 04 11:00:10 corosync [CPG   ] got mcast request on 0x181ac50
Oct 04 11:00:10 corosync [TOTEM ] mcasted message added to pending queue
Oct 04 11:00:10 corosync [TOTEM ] Delivering 6e to 6f
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 6f to pending delivery queue
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 6f
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 6f
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 70
Oct 04 11:00:10 corosync [TOTEM ] Delivering 6f to 70
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 70 to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceUnlock
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 70
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 71
Oct 04 11:00:10 corosync [TOTEM ] Delivering 70 to 71
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 71 to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceClose
Oct 04 11:00:10 corosync [LCK ] [DEBUG]: lck_resourcelock_release { name=V_vgAOE20-1 } [0]
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 71
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 72
Oct 04 11:00:10 corosync [TOTEM ] Delivering 71 to 72
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 72 to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceOpen
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 72
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 73
Oct 04 11:00:10 corosync [TOTEM ] Delivering 72 to 73
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 73 to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceLock
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 73
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 74
Oct 04 11:00:10 corosync [TOTEM ] Delivering 73 to 74
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 74 to pending delivery queue
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 74
Oct 04 11:00:10 corosync [CPG   ] got mcast request on 0x181ac50
Oct 04 11:00:10 corosync [TOTEM ] mcasted message added to pending queue
Oct 04 11:00:10 corosync [TOTEM ] Delivering 74 to 75
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 75 to pending delivery queue
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 75
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 75
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 76
Oct 04 11:00:10 corosync [TOTEM ] Delivering 75 to 76
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 76 to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceUnlock
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 76
Oct 04 11:00:10 corosync [TOTEM ] Received ringid(192.168.1.101:124) seq 77
Oct 04 11:00:10 corosync [TOTEM ] Delivering 76 to 77
Oct 04 11:00:10 corosync [TOTEM ] Delivering MCAST message with seq 77 to pending delivery queue
Oct 04 11:00:10 corosync [LCK   ] EXEC request: saLckResourceClose
Oct 04 11:00:10 corosync [LCK ] [DEBUG]: lck_resourcelock_release { name=V_vgPvSrv04-1 } [0]
Oct 04 11:00:10 corosync [TOTEM ] releasing messages up to and including 77
--