mixed 6.4 and 6.5 cluster - delays accessing mpath devices and clustered LVMs

We have a cluster of EL6.4 servers, one of which has been upgraded to fully updated EL6.5.  Since that upgrade, we see unreasonably long delays accessing some mpath devices and clustered LVM volumes on the 6.5 member.  The 6.4 members show no problems.
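
For reference, the version mix can be confirmed from one node with something like the following (the hostnames are taken from the logs below; the package list is just the set we suspect is relevant):

------
for h in lnx0{1..9}; do
    ssh "$h" 'echo "== $(hostname)"; cat /etc/redhat-release; \
        rpm -q lvm2 lvm2-cluster device-mapper device-mapper-multipath'
done
------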

This can be seen by running lvscan under strace.  In the trace excerpt below, the time spent in each syscall appears in angle brackets at the end of the line; reads returning ASCII text are mpath devices, the rest are volumes.
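
The trace was gathered with a command along these lines; the exact flags are a reconstruction rather than a verbatim record (-f follows forked children, which accounts for the leading PID, -T appends the per-syscall time, and the read filter trims the output):

------
strace -f -T -e trace=read lvscan
------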

------
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <1.467385>
16241 read(5, "\17u\21^ LVM2 x[5A%r0N*>\1\0\0\0\0\20\0\0\0\0\0\0"..., 4096) = 4096 <1.760943>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <1.164032>
16241 read(5, "gment1 {\nstart_extent = 0\nextent"..., 4096) = 4096 <2.859972>
16241 read(5, "\353H\220\20\216\320\274\0\260\270\0\0\216\330\216\300\373\276\0|\277\0\6\271\0\2\363\244\352!\6\0"..., 4096) = 4096 <1.717222>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <1.476014>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <1.800225>
16241 read(5, "3\300\216\320\274\0|\216\300\216\330\276\0|\277\0\6\271\0\2\374\363\244Ph\34\6\313\373\271\4\0"..., 4096) = 4096 <2.008620>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <2.021734>
16241 read(5, "3\300\216\320\274\0|\216\300\216\330\276\0|\277\0\6\271\0\2\374\363\244Ph\34\6\313\373\271\4\0"..., 4096) = 4096 <2.126359>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <2.036027>
16241 read(5, "\1\4\0\0\21\4\0\0!\4\0\0\331[\362\37\2\0\4\0\0\0\0\0\0\0\0\0\356\37U\23"..., 4096) = 4096 <1.330302>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <1.381982>
16241 read(5, "vgift3 {\nid = \"spdYGc-5hqc-ejzd-"..., 8192) = 8192 <0.922098>
16241 read(5, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <2.440282>
16241 read(6, "vgift3 {\nid = \"spdYGc-5hqc-ejzd-"..., 8192) = 8192 <1.158817>
16241 read(5, "gment1 {\nstart_extent = 0\nextent"..., 4096) = 4096 <0.941814>
16241 read(6, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096) = 4096 <1.518448>
16241 read(6, "gment1 {\nstart_extent = 0\nextent"..., 20480) = 20480 <2.006777>
------
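
For scale, each 4 KiB read above takes roughly one to three seconds, far longer than the few milliseconds we would expect.  A quick one-liner (our own helper, applied to a saved copy of the trace, here assumed to be in strace.out) shows how the delays add up:

------
awk 'match($0, /<[0-9.]+>$/) { total += substr($0, RSTART + 1, RLENGTH - 2); n++ }
     END { printf "%d reads, %.1f s total\n", n, total }' strace.out
------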

The delay can also be seen in the syslog messages we receive after restarting clvmd with debugging enabled.  Note in particular the ten-second gap near the end of the log below, between XID 3450 being queued at 11:48:04 and the next activity at 11:48:14.
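
Enabling the debug output looks roughly like the following (clvmd's -d2 selects syslog-level debug logging; the stop/start wrapping is our assumption about how the daemon was restarted):

------
service clvmd stop
clvmd -d2    # debug level 2: send debug logs to syslog
------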

------
Jul 14 11:47:58 lnx05 lvm[13423]: Got new connection on fd 5
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 28
Jul 14 11:48:03 lnx05 lvm[13423]: creating pipe, [11, 12]
Jul 14 11:48:03 lnx05 lvm[13423]: Creating pre&post thread
Jul 14 11:48:03 lnx05 lvm[13423]: Created pre&post thread, state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: in sub thread: client = 0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: doing PRE command LOCK_VG 'V_vgift5' at 1 (client=0x13e7460)
Jul 14 11:48:03 lnx05 lvm[13423]: sync_lock: 'V_vgift5' mode:3 flags=0
Jul 14 11:48:03 lnx05 lvm[13423]: sync_lock: returning lkid 24c0008
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3443, flags=0x1 (LOCAL)
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=28, csid=(nil), xid=3443
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: LOCK_VG (0x33) msg=0x13e7110, msglen =28, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: do_lock_vg: resource 'V_vgift5', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Jul 14 11:48:03 lnx05 lvm[13423]: Invalidating cached metadata for VG vgift5
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 1
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 31
Jul 14 11:48:03 lnx05 lvm[13423]: check_all_clvmds_running
Jul 14 11:48:03 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3444, flags=0x0 ()
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=31, csid=(nil), xid=3444
Jul 14 11:48:03 lnx05 lvm[13423]: Sending message to all cluster nodes
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: SYNC_NAMES (0x2d) msg=0x13e7110, msglen =31, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx01-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 2 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx02-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 3 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx04-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 4 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx07-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 5 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx06-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 6 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx08-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 7 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx09-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 8 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx03-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 9 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 28
Jul 14 11:48:03 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: doing PRE command LOCK_VG 'V_vgift5' at 6 (client=0x13e7460)
Jul 14 11:48:03 lnx05 lvm[13423]: sync_unlock: 'V_vgift5' lkid:24c0008
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3445, flags=0x1 (LOCAL)
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=28, csid=(nil), xid=3445
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: LOCK_VG (0x33) msg=0x13e7110, msglen =28, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: do_lock_vg: resource 'V_vgift5', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Jul 14 11:48:03 lnx05 lvm[13423]: Invalidating cached metadata for VG vgift5
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 1
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 28
Jul 14 11:48:03 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: doing PRE command LOCK_VG 'V_vgift3' at 1 (client=0x13e7460)
Jul 14 11:48:03 lnx05 lvm[13423]: sync_lock: 'V_vgift3' mode:3 flags=0
Jul 14 11:48:03 lnx05 lvm[13423]: sync_lock: returning lkid 166000b
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3446, flags=0x1 (LOCAL)
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=28, csid=(nil), xid=3446
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: LOCK_VG (0x33) msg=0x13e7110, msglen =28, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: do_lock_vg: resource 'V_vgift3', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Jul 14 11:48:03 lnx05 lvm[13423]: Invalidating cached metadata for VG vgift3
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 1
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 31
Jul 14 11:48:03 lnx05 lvm[13423]: check_all_clvmds_running
Jul 14 11:48:03 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3447, flags=0x0 ()
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=31, csid=(nil), xid=3447
Jul 14 11:48:03 lnx05 lvm[13423]: Sending message to all cluster nodes
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: SYNC_NAMES (0x2d) msg=0x13e7110, msglen =31, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx01-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 2 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx02-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 3 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx04-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 4 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx07-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 5 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx06-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 6 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx08-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 7 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx09-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 8 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx03-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 9 replies, expecting: 9
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 28
Jul 14 11:48:03 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: doing PRE command LOCK_VG 'V_vgift3' at 6 (client=0x13e7460)
Jul 14 11:48:03 lnx05 lvm[13423]: sync_unlock: 'V_vgift3' lkid:166000b
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3448, flags=0x1 (LOCAL)
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=28, csid=(nil), xid=3448
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: LOCK_VG (0x33) msg=0x13e7110, msglen =28, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: do_lock_vg: resource 'V_vgift3', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Jul 14 11:48:03 lnx05 lvm[13423]: Invalidating cached metadata for VG vgift3
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 1
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:03 lnx05 lvm[13423]: Read on local socket 5, len = 28
Jul 14 11:48:03 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: doing PRE command LOCK_VG 'V_vgift2' at 1 (client=0x13e7460)
Jul 14 11:48:03 lnx05 lvm[13423]: sync_lock: 'V_vgift2' mode:3 flags=0
Jul 14 11:48:03 lnx05 lvm[13423]: sync_lock: returning lkid 3b20007
Jul 14 11:48:03 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: distribute command: XID = 3449, flags=0x1 (LOCAL)
Jul 14 11:48:03 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=28, csid=(nil), xid=3449
Jul 14 11:48:03 lnx05 lvm[13423]: process_work_item: local
Jul 14 11:48:03 lnx05 lvm[13423]: process_local_command: LOCK_VG (0x33) msg=0x13e7110, msglen =28, client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: do_lock_vg: resource 'V_vgift2', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Jul 14 11:48:03 lnx05 lvm[13423]: Invalidating cached metadata for VG vgift2
Jul 14 11:48:03 lnx05 lvm[13423]: Reply from node lnx05-p12: 0 bytes
Jul 14 11:48:03 lnx05 lvm[13423]: Got 1 replies, expecting: 1
Jul 14 11:48:03 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:03 lnx05 lvm[13423]: Got post command condition...
Jul 14 11:48:03 lnx05 lvm[13423]: Waiting for next pre command
Jul 14 11:48:03 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:03 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:03 lnx05 lvm[13423]: Send local reply
Jul 14 11:48:04 lnx05 lvm[13423]: Read on local socket 5, len = 31
Jul 14 11:48:04 lnx05 lvm[13423]: check_all_clvmds_running
Jul 14 11:48:04 lnx05 lvm[13423]: Got pre command condition...
Jul 14 11:48:04 lnx05 lvm[13423]: Writing status 0 down pipe 12
Jul 14 11:48:04 lnx05 lvm[13423]: Waiting to do post command - state = 0
Jul 14 11:48:04 lnx05 lvm[13423]: read on PIPE 11: 4 bytes: status: 0
Jul 14 11:48:04 lnx05 lvm[13423]: background routine status was 0, sock_client=0x13e7460
Jul 14 11:48:04 lnx05 lvm[13423]: distribute command: XID = 3450, flags=0x0 ()
Jul 14 11:48:04 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e2820. client=0x13e7460, msg=0x13e27f0, len=31, csid=(nil), xid=3450
Jul 14 11:48:14 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e27f0. client=0x6c60c0, msg=0x7fffd749a7dc, len=31, csid=0x7fffd749a75c, xid=0
Jul 14 11:48:14 lnx05 lvm[13423]: process_work_item: remote
Jul 14 11:48:14 lnx05 lvm[13423]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 25821 on node lnx04-p12
Jul 14 11:48:14 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:14 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:14 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e27f0. client=0x6c60c0, msg=0x7fffd749a7dc, len=31, csid=0x7fffd749a75c, xid=0
Jul 14 11:48:14 lnx05 lvm[13423]: process_work_item: remote
Jul 14 11:48:14 lnx05 lvm[13423]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 25832 on node lnx04-p12
Jul 14 11:48:14 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:14 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:14 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e27f0. client=0x6c60c0, msg=0x7fffd749a7dc, len=31, csid=0x7fffd749a75c, xid=0
Jul 14 11:48:14 lnx05 lvm[13423]: process_work_item: remote
Jul 14 11:48:14 lnx05 lvm[13423]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 25844 on node lnx04-p12
Jul 14 11:48:14 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:14 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:14 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e27f0. client=0x6c60c0, msg=0x7fffd749a7dc, len=31, csid=0x7fffd749a75c, xid=0
Jul 14 11:48:14 lnx05 lvm[13423]: process_work_item: remote
Jul 14 11:48:14 lnx05 lvm[13423]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 25857 on node lnx04-p12
Jul 14 11:48:14 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:14 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:14 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e27f0. client=0x6c60c0, msg=0x7fffd749a7dc, len=31, csid=0x7fffd749a75c, xid=0
Jul 14 11:48:14 lnx05 lvm[13423]: process_work_item: remote
Jul 14 11:48:14 lnx05 lvm[13423]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 25905 on node lnx04-p12
Jul 14 11:48:14 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:14 lnx05 lvm[13423]: LVM thread waiting for work
Jul 14 11:48:14 lnx05 lvm[13423]: add_to_lvmqueue: cmd=0x13e27f0. client=0x6c60c0, msg=0x7fffd749a7dc, len=31, csid=0x7fffd749a75c, xid=0
Jul 14 11:48:14 lnx05 lvm[13423]: process_work_item: remote
Jul 14 11:48:14 lnx05 lvm[13423]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 25914 on node lnx04-p12
Jul 14 11:48:14 lnx05 lvm[13423]: Syncing device names
Jul 14 11:48:14 lnx05 lvm[13423]: LVM thread waiting for work
------

Before we upgrade the remaining cluster members to 6.5, we'd like to be reasonably certain that doing so will fix the problem rather than spread it to the entire cluster.  Any help would be greatly appreciated.

Many thanks,
Devin


