Re: [FAILED] regression tests: tests/bugs/distribute/bug-1066798.t, tests/basic/volume-snapshot.t

On 07/21/2015 09:20 AM, Atin Mukherjee wrote:

On 07/20/2015 06:23 PM, Avra Sengupta wrote:
Got access to the slave machine and had a look at the backtrace. The test
case that keeps failing spuriously is kill_glusterd in cluster.rc, and the
backtrace looks like the following. It looks like the volinfo is corrupted,
and that glusterd was trying to access peer data after cleanup_and_exit was
called.

[2015-07-19 10:23:31.799681] W [glusterfsd.c:1214:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3f744729d1] -->glusterd(glusterfs_sigwaiter+0xe4) [0x409734] -->glusterd(cleanup_and_exit+0x87) [0x407ba7] ) 0-: received signum (15), shutting down
[2015-07-19 10:23:31.802507] W [socket.c:637:__socket_rwv] 0-management: readv on 127.1.1.3:24007 failed (No data available)
[2015-07-19 10:23:31.802581] I [timer.c:43:gf_timer_call_after] (-->/build/install/lib/libgfrpc.so.0(rpc_transport_notify+0x12f) [0x7f3f74f57aeb] -->/build/install/lib/libgfrpc.so.0(rpc_clnt_notify+0x1ca) [0x7f3f74f5b644] -->/build/install/lib/libglusterfs.so.0(gf_timer_call_after+0xfb) [0x7f3f751b9517] ) 0-timer: ctx cleanup started
[2015-07-19 10:23:31.802608] I [MSGID: 106004] [glusterd-handler.c:5047:__glusterd_peer_rpc_notify] 0-management: Peer <127.1.1.3> (<377dd370-1401-452b-831e-cbaf4340a376>), in state <Peer in Cluster>, has disconnected from glusterd.
[2015-07-19 10:23:31.804501] W [socket.c:637:__socket_rwv] 0-management: readv on 127.1.1.1:24007 failed (No data available)
[2015-07-19 10:23:31.804567] I [timer.c:43:gf_timer_call_after] (-->/build/install/lib/libgfrpc.so.0(rpc_transport_notify+0x12f) [0x7f3f74f57aeb] -->/build/install/lib/libgfrpc.so.0(rpc_clnt_notify+0x1ca) [0x7f3f74f5b644] -->/build/install/lib/libglusterfs.so.0(gf_timer_call_after+0xfb) [0x7f3f751b9517] ) 0-timer: ctx cleanup started
[2015-07-19 10:23:31.804612] I [MSGID: 106004] [glusterd-handler.c:5047:__glusterd_peer_rpc_notify] 0-management: Peer <127.1.1.1> (<fd3a8e54-a8fc-47d4-93fa-ed1a9e2c78fc>), in state <Peer in Cluster>, has disconnected from glusterd.


#0  0x00007f8f3c400e2c in vfprintf () from /lib64/libc.so.6
#1  0x00007f8f3c428752 in vsnprintf () from /lib64/libc.so.6
#2  0x00007f8f3c408223 in snprintf () from /lib64/libc.so.6
#3  0x00007f8f32cded8d in glusterd_volume_stop_glusterfs (volinfo=0x253ee30, brickinfo=0x2549f60, del_brick=_gf_false)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-utils.c:1754
#4  0x00007f8f32cebd18 in glusterd_brick_stop (volinfo=0x253ee30, brickinfo=0x2549f60, del_brick=_gf_false)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-utils.c:5458
#5  0x00007f8f32d76804 in glusterd_snap_volume_remove (rsp_dict=0x7f8f1800102c, snap_vol=0x253ee30, remove_lvm=_gf_false, force=_gf_false)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-snapshot.c:2897
#6  0x00007f8f32d76daf in glusterd_snap_remove (rsp_dict=0x7f8f1800102c, snap=0x25388a0, remove_lvm=_gf_false, force=_gf_false)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-snapshot.c:3005
#7  0x00007f8f32da2e58 in glusterd_compare_and_update_snap (peer_data=0x7f8f180024fc, snap_count=1, peername=0x7f8f18002370 "127.1.1.1", peerid=0x7f8f180023e0 "\375:\216T\250\374Gԓ\372\355\032\236,x\374p#")
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c:1849
#8  0x00007f8f32da311f in glusterd_compare_friend_snapshots (peer_data=0x7f8f180024fc, peername=0x7f8f18002370 "127.1.1.1", peerid=0x7f8f180023e0 "\375:\216T\250\374Gԓ\372\355\032\236,x\374p#")
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-snapshot-utils.c:1904
#9  0x00007f8f32cc49b3 in glusterd_ac_handle_friend_add_req (event=0x7f8f180023d0, ctx=0x7f8f18002460)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-sm.c:831
#10 0x00007f8f32cc5250 in glusterd_friend_sm ()
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-sm.c:1253
#11 0x00007f8f32cbadd4 in __glusterd_handle_incoming_friend_req (req=0x7f8f1800128c)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-handler.c:2541
#12 0x00007f8f32cb36aa in glusterd_big_locked_handler (req=0x7f8f1800128c, actor_fn=0x7f8f32cbac38 <__glusterd_handle_incoming_friend_req>)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-handler.c:79
#13 0x00007f8f32cbae0a in glusterd_handle_incoming_friend_req (req=0x7f8f1800128c)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mgmt/glusterd/src/glusterd-handler.c:2551
#14 0x00007f8f3d61706d in rpcsvc_handle_rpc_call (svc=0x24b8430, trans=0x7f8f20008e20, msg=0x7f8f18000e20)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpcsvc.c:699
#15 0x00007f8f3d6173e0 in rpcsvc_notify (trans=0x7f8f20008e20, mydata=0x24b8430, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f8f18000e20)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpcsvc.c:793
#16 0x00007f8f3d61caeb in rpc_transport_notify (this=0x7f8f20008e20, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f8f18000e20)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:538
#17 0x00007f8f3134187b in socket_event_poll_in (this=0x7f8f20008e20)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2285
#18 0x00007f8f31341dd1 in socket_event_handler (fd=20, idx=10, data=0x7f8f20008e20, poll_in=1, poll_out=0, poll_err=0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2398
#19 0x00007f8f3d8d09e4 in event_dispatch_epoll_handler (event_pool=0x249ec90, event=0x7f8f2dfb5e70)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:570
#20 0x00007f8f3d8d0dd2 in event_dispatch_epoll_worker (data=0x24a97f0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:673
#21 0x00007f8f3cb379d1 in start_thread () from /lib64/libpthread.so.0
#22 0x00007f8f3c4a18fd in clone () from /lib64/libc.so.6
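
Note that the top three frames are inside libc's printf family: frame #3's
snprintf is formatting data out of the volinfo, so if a string pointer in
that struct is dangling, the fault surfaces inside vfprintf rather than in
glusterd's own code. A minimal standalone C illustration of that failure
mode (the struct and field names are made up for the demo; this is not
glusterd code, and the program crashes on purpose):

/* Deliberately-crashing demo: snprintf("%s", p) walks the string, so a
 * dangling pointer faults inside vsnprintf/vfprintf, matching frames #0-#2. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct volinfo_like {          /* made-up stand-in for glusterd's volinfo */
    char *volname;
};

int main(void)
{
    struct volinfo_like *v = malloc(sizeof(*v));
    v->volname = strdup("patchy");

    free(v->volname);          /* a "cleanup" path frees the string ...   */
    v->volname = (char *)0x1;  /* ... leaving a garbage pointer behind    */

    char pidfile[64];
    /* faults inside libc, not on a line of our own code */
    snprintf(pidfile, sizeof(pidfile), "/var/run/%s.pid", v->volname);
    printf("%s\n", pidfile);
    free(v);
    return 0;
}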
GlusterD team will look into it and get back.

This crash is due to a race between the handshake thread and the snapshot
thread: the snapshot thread was referring to a volinfo while the same
volinfo was being modified during the handshake, and glusterd crashed on
this inconsistent volinfo data.
link: https://bugzilla.redhat.com/show_bug.cgi?id=1246432
patch link: http://review.gluster.org/11757
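
To make the race concrete, here is a hedged sketch of the idea behind the
fix (the names and the lock are illustrative only, not the actual glusterd
structures or the actual patch; see the review link above for the real
change): serialize the handshake thread's volinfo update and the snapshot
thread's reads behind one lock, so neither side ever observes a
half-modified volinfo.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for volinfo; not the actual glusterd struct. */
struct volinfo_like {
    char volname[64];
    int  brick_count;
};

static struct volinfo_like vol = { "snap_vol", 2 };
static pthread_mutex_t vol_lock = PTHREAD_MUTEX_INITIALIZER;

/* Handshake path: rewrites volinfo from peer data. */
static void *handshake_update(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&vol_lock);
    memset(&vol, 0, sizeof(vol));          /* transiently inconsistent */
    snprintf(vol.volname, sizeof(vol.volname), "snap_vol_v2");
    vol.brick_count = 3;
    pthread_mutex_unlock(&vol_lock);
    return NULL;
}

/* Snapshot path: reads volinfo; without the lock it can observe the
 * half-updated state, which is the crash seen in the backtrace. */
static void *snapshot_read(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&vol_lock);
    printf("stopping bricks of %s (%d bricks)\n", vol.volname, vol.brick_count);
    pthread_mutex_unlock(&vol_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, handshake_update, NULL);
    pthread_create(&t2, NULL, snapshot_read, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Build with gcc -pthread; if the lock/unlock pairs are removed, the reader
can see a zeroed or torn volname, which is exactly the kind of inconsistent
volinfo described above.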


Regards,
Avra


On 07/20/2015 04:38 PM, Avra Sengupta wrote:
The particular slave (slave21) containing the cores is down. However, I have
access to slave0, so I am trying to recreate the issue on that slave and
will analyze the core when I get it.

Regards,
Avra

On 07/20/2015 03:19 PM, Ravishankar N wrote:
One more core for volume-snapshot.t:
https://build.gluster.org/job/rackspace-regression-2GB-triggered/12605/consoleFull


On 07/20/2015 03:00 PM, Raghavendra Talur wrote:
Adding Susant and Avra for dht and snapshot test cases respectively.


On Mon, Jul 20, 2015 at 11:45 AM, Milind Changire
<milindchangire@xxxxxxxxx> wrote:

http://build.gluster.org/job/rackspace-regression-2GB-triggered/12541/consoleFull


http://build.gluster.org/job/rackspace-regression-2GB-triggered/12499/consoleFull



     Please advise.

     --
     Milind






--
Raghavendra Talur










_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



