Re: tests/bugs/snapshot/bug-1109889.t - snapd crash


 



<server_setvolume>

         */
        if (op_ret && !xl) {
                /* We would have set the xl_private of the transport to the
                 * @conn. But if we have put the connection, i.e. are shutting
                 * down the connection, then we should set xl_private to NULL,
                 * as it would be pointing to freed memory and would segfault
                 * when accessed upon getting DISCONNECT.
                 */
                gf_client_put (client, NULL);
                req->trans->xl_private = NULL;
        }

</server_setvolume>

The crash is in gf_client_put. The code in gf_client_put shows that client is dereferenced without a NULL check. I suspect this crash was uncovered/caused by [1], which fails any setvolume request issued before server graph initialization (in which case client is NULL). Will send out a patch.

[1] http://review.gluster.org/11490

On Fri, Jul 3, 2015 at 6:02 PM, Raghavendra Bhat <rabhat@xxxxxxxxxx> wrote:
On 07/03/2015 03:37 PM, Atin Mukherjee wrote:
http://build.gluster.org/job/rackspace-regression-2GB-triggered/11898/consoleFull
has caused a crash in snapd with the following bt:

This seems to have crashed in server_setvolume (i.e. before the graph could be made available for I/O; the snapview-server xlator is not yet in the picture). Still, I will try to reproduce it on my local setup and see what might be causing this.


Regards,
Raghavendra Bhat



#0  0x00007f11e2ed3ded in gf_client_put (client=0x0, detached=0x0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/client_t.c:294
#1  0x00007f11d4eeac96 in server_setvolume (req=0x7f11c000195c)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/server/src/server-handshake.c:710
#2  0x00007f11e2c1e05c in rpcsvc_handle_rpc_call (svc=0x7f11d001b160, trans=0x7f11c0000ac0, msg=0x7f11c0001810)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpcsvc.c:698
#3  0x00007f11e2c1e3cf in rpcsvc_notify (trans=0x7f11c0000ac0, mydata=0x7f11d001b160, event=RPC_TRANSPORT_MSG_RECEIVED, data="")
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpcsvc.c:792
#4  0x00007f11e2c23ad7 in rpc_transport_notify (this=0x7f11c0000ac0, event=RPC_TRANSPORT_MSG_RECEIVED, data="")
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:538
#5  0x00007f11d841787b in socket_event_poll_in (this=0x7f11c0000ac0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2285
#6  0x00007f11d8417dd1 in socket_event_handler (fd=13, idx=3, data="", poll_in=1, poll_out=0, poll_err=0)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2398
#7  0x00007f11e2ed79ec in event_dispatch_epoll_handler (event_pool=0x13bb040, event=0x7f11d4eb9e70)
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:570
#8  0x00007f11e2ed7dda in event_dispatch_epoll_worker (data="")
    at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:673
#9  0x00007f11e213e9d1 in start_thread () from /lib64/libpthread.so.0
#10 0x00007f11e1aa88fd in clone () from /lib64/libc.so.6


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel



--
Raghavendra G
