Re: Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

+nithya, +raghavendra,

----- Original Message -----
From: "Gaurav Garg" <ggarg@xxxxxxxxxx>
To: "Joseph Fernandes" <josferna@xxxxxxxxxx>
Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
Sent: Thursday, July 9, 2015 11:20:49 AM
Subject: Re:  Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

Hi Joseph,

Looking at the backtrace, it seems that the rebalance process has crashed.


(gdb) bt
#0  0x000000000040e825 in glusterfs_rebalance_event_notify_cbk (req=0x7ff25000497c, iov=0x7ff26ed905e0, count=1, myframe=0x7ff25000175c)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1725
#1  0x00007ff27abc66ab in saved_frames_unwind (saved_frames=0x1817ca0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:361
#2  0x00007ff27abc674a in saved_frames_destroy (frames=0x1817ca0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:378
#3  0x00007ff27abc6ba1 in rpc_clnt_connection_cleanup (conn=0x1816870)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:527
#4  0x00007ff27abc75ad in rpc_clnt_notify (trans=0x1816cb0, mydata=0x1816870, event=RPC_TRANSPORT_DISCONNECT, data=0x1816cb0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:836
#5  0x00007ff27abc3ad7 in rpc_transport_notify (this=0x1816cb0, event=RPC_TRANSPORT_DISCONNECT, data=0x1816cb0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:538
#6  0x00007ff2703b3101 in socket_event_poll_err (this=0x1816cb0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:1200
#7  0x00007ff2703b7e2c in socket_event_handler (fd=9, idx=1, data=0x1816cb0, poll_in=1, poll_out=0, poll_err=24)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2405
#8  0x00007ff27ae779f8 in event_dispatch_epoll_handler (event_pool=0x17dbc90, event=0x7ff26ed90e70)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:570
#9  0x00007ff27ae77de6 in event_dispatch_epoll_worker (data=0x1817e70)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:673
#10 0x00007ff27a0de9d1 in start_thread () from ./lib64/libpthread.so.0
#11 0x00007ff279a488fd in clone () from ./lib64/libc.so.6
(gdb) f 0
#0  0x000000000040e825 in glusterfs_rebalance_event_notify_cbk (req=0x7ff25000497c, iov=0x7ff26ed905e0, count=1, myframe=0x7ff25000175c)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1725
1725	in /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c
(gdb) p myframe
$7 = (void *) 0x7ff25000175c
(gdb) p *myframe
Attempt to dereference a generic pointer.
(gdb) p $3.this
$8 = (xlator_t *) 0x17400000000000
(gdb) p (xlator_t *)$3.this
$9 = (xlator_t *) 0x17400000000000
(gdb) p *(xlator_t *)$3.this
Cannot access memory at address 0x17400000000000
(gdb) p (call_frame_t *)myframe
$2 = (call_frame_t *) 0x7ff25000175c
(gdb) p *(call_frame_t *)myframe
$3 = {root = 0x17400000000000, parent = 0xadc0de00007ff250, frames = {next = 0x25c9000000000de, prev = 0x3000007ff268}, local = 0xac0000000000, 
  this = 0x17400000000000, ret = 0xadc0de00007ff250, ref_count = 222, lock = 39620608, cookie = 0x3000007ff268, complete = _gf_false, op = 44032, begin = {
    tv_sec = 6544293208522752, tv_usec = -5926493018029821360}, end = {tv_sec = 170169215607636190, tv_usec = 52776566518376}, 
  wind_from = 0xac0000000000 <error: Cannot access memory at address 0xac0000000000>, 
  wind_to = 0x17400000000000 <error: Cannot access memory at address 0x17400000000000>, 
  unwind_from = 0xffffff00007ff250 <error: Cannot access memory at address 0xffffff00007ff250>, 
  unwind_to = 0xffffffffffffffff <error: Cannot access memory at address 0xffffffffffffffff>}
(gdb) p *iov
$4 = {iov_base = 0x0, iov_len = 0}
(gdb) p *req
$5 = {conn = 0x1816870, xid = 2, req = {{iov_base = 0x0, iov_len = 0}, {iov_base = 0x0, iov_len = 0}}, reqcnt = 0, req_iobref = 0x0, rsp = {{iov_base = 0x0, 
      iov_len = 0}, {iov_base = 0x0, iov_len = 0}}, rspcnt = 0, rsp_iobref = 0x0, rpc_status = -1, verf = {flavour = 0, datalen = 0, 
    authdata = '\000' <repeats 399 times>}, prog = 0x615580 <clnt_handshake_prog>, procnum = 5, cbkfn = 0x40e7ce <glusterfs_rebalance_event_notify_cbk>, 
  conn_private = 0x0}
(gdb) 
$6 = {conn = 0x1816870, xid = 2, req = {{iov_base = 0x0, iov_len = 0}, {iov_base = 0x0, iov_len = 0}}, reqcnt = 0, req_iobref = 0x0, rsp = {{iov_base = 0x0, 
      iov_len = 0}, {iov_base = 0x0, iov_len = 0}}, rspcnt = 0, rsp_iobref = 0x0, rpc_status = -1, verf = {flavour = 0, datalen = 0, 
    authdata = '\000' <repeats 399 times>}, prog = 0x615580 <clnt_handshake_prog>, procnum = 5, cbkfn = 0x40e7ce <glusterfs_rebalance_event_notify_cbk>, 
  conn_private = 0x0}


This means that "this" has been corrupted: the callback is being invoked from saved_frames_unwind during connection cleanup (rpc_status is -1 and the iov is empty), but the frame passed in as myframe is full of garbage, so dereferencing frame->this crashes at glusterfsd-mgmt.c:1725.
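
For whoever picks this up, a few standard gdb commands against the core might help narrow down whether the frame was freed and reused or simply overwritten (assuming the core from the regression run is still available; info proc mappings may give only partial output on a core file):

(gdb) thread apply all bt                 (is another thread tearing down the same connection/frames at the same time?)
(gdb) x/32gx myframe                      (dump the raw memory around the frame and look for a poison/reuse pattern)
(gdb) info symbol 0x17400000000000        (does the bogus "this" value land inside any loaded object?)
(gdb) info proc mappings                  (do the garbage pointers fall inside any mapped region at all?)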

CCing the rebalance folks to look into this.
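
For context, the usual defensive shape of such an RPC callback looks roughly like the standalone sketch below. This is only an illustration with hypothetical demo_* types, not the actual code at glusterfsd-mgmt.c:1725: bail out early when the request was unwound with rpc_status == -1 and validate the frame before touching frame->this. If the frame memory itself is being freed or overwritten, as the dump above suggests, checks like these only limit the damage; the real fix is finding who corrupts the frame.

/* Minimal standalone sketch (hypothetical demo_* types, not the GlusterFS ones):
 * an RPC callback that refuses to dereference anything when the request was
 * unwound with rpc_status == -1 or the frame looks invalid. */
#include <stdio.h>
#include <stddef.h>

typedef struct demo_xlator { const char *name; } demo_xlator_t;

typedef struct demo_frame {
        demo_xlator_t *this;    /* translator that wound this frame */
} demo_frame_t;

typedef struct demo_req {
        int rpc_status;         /* -1 when the request is unwound on disconnect */
} demo_req_t;

static int
demo_event_notify_cbk (demo_req_t *req, void *myframe)
{
        demo_frame_t *frame = myframe;

        /* A disconnect unwinds saved frames with rpc_status == -1 and an
         * empty iov; do not touch the response payload in that case. */
        if (req == NULL || req->rpc_status == -1) {
                fprintf (stderr, "request failed/unwound, skipping response\n");
                goto out;
        }

        if (frame == NULL || frame->this == NULL) {
                fprintf (stderr, "invalid frame, cannot continue\n");
                return -1;
        }

        printf ("notify handled on %s\n", frame->this->name);

out:
        return 0;
}

int
main (void)
{
        demo_xlator_t xl      = { .name = "rebalance" };
        demo_frame_t  frame   = { .this = &xl };
        demo_req_t    ok      = { .rpc_status = 0 };
        demo_req_t    unwound = { .rpc_status = -1 };

        demo_event_notify_cbk (&ok, &frame);        /* normal reply path */
        demo_event_notify_cbk (&unwound, &frame);   /* disconnect/unwind path */
        return 0;
}

Note that in the core above rpc_status is already -1, so we are on the unwind path; the crash is not in parsing the reply but in touching the corrupted frame itself.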

Regards,
Gaurav





----- Original Message -----
From: "Gaurav Garg" <ggarg@xxxxxxxxxx>
To: "Joseph Fernandes" <josferna@xxxxxxxxxx>
Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
Sent: Wednesday, July 8, 2015 10:55:38 PM
Subject: Re:  Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

Yes, sure, I will look into this issue.

Regards,
Gaurav

----- Original Message -----
From: "Joseph Fernandes" <josferna@xxxxxxxxxx>
To: "Gaurav Garg" <ggarg@xxxxxxxxxx>
Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>, "Atin Mukherjee" <amukherj@xxxxxxxxxx>
Sent: Wednesday, July 8, 2015 10:51:07 PM
Subject: Spurious Failure: ./tests/bugs/cli/bug-1087487.t: 1 new core files

Hi Gaurav,

Could you please look into this?

http://build.gluster.org/job/rackspace-regression-2GB-triggered/12126/consoleFull

Regards,
Joe
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


