Re: 3.7 pending patches

On 01/28/2016 07:05 PM, Venky Shankar wrote:
Hey folks,

I just merged patch #13302 (and its 3.7 equivalent), which fixes a scrubber crash.
This was causing other patches to fail regression.

Requesting a rebase of patches (especially 3.7 pending) that were blocked due to
this.

Thanks a lot for this, Venky, Kotresh, and Emmanuel. I re-triggered the builds.

I observed the following crash in one of the runs for https://build.gluster.org/job/rackspace-regression-2GB-triggered/17819/console (3.7):
(gdb) bt
#0  0x000000000040ecff in glusterfs_rebalance_event_notify_cbk (
req=0x7f0e58006dbc, iov=0x7f0e6cadb5d0, count=1, myframe=0x7f0e58003a7c) at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1812
#1  0x00007f0e79a1274b in saved_frames_unwind (saved_frames=0x19ffe70)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:366
#2  0x00007f0e79a127ea in saved_frames_destroy (frames=0x19ffe70)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:383
#3  0x00007f0e79a12c41 in rpc_clnt_connection_cleanup (conn=0x19fea20)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:532
#4  0x00007f0e79a136cb in rpc_clnt_notify (trans=0x19fee70, mydata=0x19fea20,
    event=RPC_TRANSPORT_DISCONNECT, data=0x19fee70)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-clnt.c:854
#5  0x00007f0e79a0fb76 in rpc_transport_notify (this=0x19fee70,
    event=RPC_TRANSPORT_DISCONNECT, data=0x19fee70)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:546
#6  0x00007f0e6f1fd621 in socket_event_poll_err (this=0x19fee70)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:1151
#7  0x00007f0e6f20234c in socket_event_handler (fd=9, idx=1, data=0x19fee70,
    poll_in=1, poll_out=0, poll_err=24)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2356
#8  0x00007f0e79cc386c in event_dispatch_epoll_handler (event_pool=0x19c3c90,
    event=0x7f0e6cadbe70)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:575
#9  0x00007f0e79cc3c5a in event_dispatch_epoll_worker (data=0x7f0e68014970)
at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:678
#10 0x00007f0e78f2aa51 in start_thread () from /lib64/libpthread.so.0
#11 0x00007f0e7889493d in clone () from /lib64/libc.so.6
(gdb) fr 0
#0  0x000000000040ecff in glusterfs_rebalance_event_notify_cbk (
    req=0x7f0e58006dbc, iov=0x7f0e6cadb5d0, count=1, myframe=0x7f0e58003a7c)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c:1812
1812    in /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd-mgmt.c
(gdb) info locals
rsp = {op_ret = 0, op_errno = 0, dict = {dict_len = 0, dict_val = 0x0}}
frame = 0x7f0e58003a7c
ctx = 0x0
ret = 0
__FUNCTION__ = "glusterfs_rebalance_event_notify_cbk"
(gdb) p frame->this
$1 = (xlator_t *) 0x3a600000000000
(gdb) p frame->this->name
Cannot access memory at address 0x3a600000000000
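
For context, here is a minimal C sketch of the failure mode the backtrace suggests. The types and names below are hypothetical stand-ins, not the actual GlusterFS sources: on disconnect, rpc_clnt_connection_cleanup unwinds the saved frames and invokes each pending callback, and if a frame's 'this' pointer is stale or corrupted (0x3a600000000000 above), the first dereference in the callback faults.

/* Minimal sketch of the failure mode, with hypothetical types and names
 * (not the actual GlusterFS sources): unwinding saved frames after a
 * disconnect invokes each pending callback, and a frame whose 'this'
 * pointer is garbage crashes on the first dereference. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct xlator { const char *name; } xlator_t;
typedef struct call_frame { xlator_t *this; } call_frame_t;

/* Stand-in for a saved-frame callback such as
 * glusterfs_rebalance_event_notify_cbk. */
static int event_notify_cbk(call_frame_t *frame)
{
    /* Crashes if frame->this is stale/corrupted, as in the gdb output. */
    printf("notify on xlator %s\n", frame->this->name);
    return 0;
}

/* Stand-in for saved_frames_unwind(): replay every pending callback
 * once the transport disconnects. */
static void unwind_saved_frames(call_frame_t **frames, size_t n)
{
    for (size_t i = 0; i < n; i++)
        event_notify_cbk(frames[i]);
}

int main(void)
{
    xlator_t xl = { .name = "glusterfs" };
    call_frame_t good = { .this = &xl };
    /* Simulate the corrupted frame from the crash report. */
    call_frame_t bad = { .this = (xlator_t *)(uintptr_t)0x3a600000000000ULL };
    call_frame_t *pending[] = { &good, &bad };

    unwind_saved_frames(pending, 2); /* the second frame segfaults here */
    return 0;
}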

Pranith

Thanks,

                 Venky

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel


