Re: Segfault at client3_3_create_cbk() when calling fops->create

The frame which you used for doing the STACK_WIND(create) was probably unwound by the time the create_cbk came back from the server? This is the most common/likely cause.


  You mean that the frame of readv/readv_cbk (where I'm calling the WIND to create/open) is already gone, even though there was no UNWIND yet?
  If that is the case, how can I avoid it? I am doing the STACK_WIND(create or open) at the end of the readv_cbk; I tried increasing the refcount of the frame, but it had no effect.
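
(A minimal sketch of one way around this, not taken from the thread: wind the create from an independent frame obtained with copy_frame(), so the readv frame can be unwound on its own. my_create_cbk and the loc/fd/flags/mode values are hypothetical placeholders, and the exact fops->create argument list varies between glusterfs releases.)

#include <fcntl.h>
#include "xlator.h"

int
my_create_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
               int32_t op_ret, int32_t op_errno, fd_t *fd, inode_t *inode,
               struct iatt *buf, struct iatt *preparent,
               struct iatt *postparent, dict_t *xdata)
{
        /* ...use the result, then release the copied stack: nothing
         * above will unwind it for us... */
        STACK_DESTROY (frame->root);
        return 0;
}

int
my_readv_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
              int32_t op_ret, int32_t op_errno, struct iovec *vector,
              int32_t count, struct iatt *stbuf, struct iobref *iobref,
              dict_t *xdata)
{
        call_frame_t *copy  = NULL;
        loc_t         loc   = {0, };  /* hypothetical: built in readv_cbk */
        fd_t         *fd    = NULL;   /* hypothetical: fd_create()'d here */
        int32_t       flags = O_CREAT | O_WRONLY;
        mode_t        mode  = 0644;

        copy = copy_frame (frame);
        if (copy) {
                /* this translator's own context goes in copy->local */
                STACK_WIND (copy, my_create_cbk, FIRST_CHILD (this),
                            FIRST_CHILD (this)->fops->create,
                            &loc, flags, mode, fd, xdata);
        }

        /* the readv frame is now free to be unwound immediately */
        STACK_UNWIND_STRICT (readv, frame, op_ret, op_errno, vector,
                             count, stbuf, iobref, xdata);
        return 0;
}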

Best,
Gustavo.

 
Avati

On Thu, Oct 11, 2012 at 1:59 PM, Gustavo Bervian Brand <gugabrand@xxxxxxxxx> wrote:
Hello,

  I gave up on using syncop calls in my attempt to copy a file to a local node while the read is happening (at each readv_cbk in this case)... it was failing often, and I could not benefit from the frame->local isolation between the readv/readv_cbk/<syncop calls> because the local context was lost inside the function called by synctask_new.
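
(For reference, a minimal sketch of how context usually survives into a synctask: frame->local is not carried into the task function, but the opaque pointer handed to synctask_new() is passed verbatim to both the task and its completion callback. All names below, copy_ctx_t, copy_file_task and copy_file_done, are hypothetical.)

#include "syncop.h"

typedef struct {
        loc_t  loc;
        fd_t  *fd;
        /* ...whatever readv_cbk needs to hand over... */
} copy_ctx_t;

static int
copy_file_task (void *opaque)
{
        copy_ctx_t *ctx = opaque;  /* context arrives here, not via frame->local */

        /* syncop_create()/syncop_writev() on ctx->loc and ctx->fd go here */
        return 0;
}

static int
copy_file_done (int ret, call_frame_t *frame, void *opaque)
{
        GF_FREE (opaque);
        return 0;
}

/* launched from readv_cbk with the process syncenv (where the syncenv
 * lives varies by release), e.g.:
 *   synctask_new (env, copy_file_task, copy_file_done, frame, ctx);  */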

  Anyway, I built the logic with wind/unwind calls instead, and to begin with I am getting a strange fault outside my translator. In readv_cbk() I am calling xl->fops->create with most of the parameters I was using with syncop_create (the fd, loc, flags, etc. were created and populated locally inside readv_cbk), but execution stops at the path below because myframe->local is NULL and the fd cannot be retrieved in client3_3_create_cbk().

  On the backend the file was created (but not yet populated or given its attributes).
  Is this kind of behavior known, is it a bug in an unusual scenario, or am I missing something here?

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff3bc964f in client3_3_create_cbk (req=0x7ffff36751a4, iov=0x7ffff36751e4, count=1, myframe=0x7ffff5dcb2dc) at client-rpc-fops.c:2008
2008            fd    = local->fd;
(gdb)
(gdb) bt
#0  0x00007ffff3bc964f in client3_3_create_cbk (req=0x7ffff36751a4, iov=0x7ffff36751e4, count=1, myframe=0x7ffff5dcb2dc) at client-rpc-fops.c:2008
#1  0x00007ffff7944e8b in rpc_clnt_handle_reply (clnt=0x693860, pollin=0x6e5d80) at rpc-clnt.c:784
#2  0x00007ffff79451fc in rpc_clnt_notify (trans=0x6a3290, mydata=0x693890, event=RPC_TRANSPORT_MSG_RECEIVED, data=...) at rpc-clnt.c:903
#3  0x00007ffff79416bb in rpc_transport_notify (this=0x6a3290, event=RPC_TRANSPORT_MSG_RECEIVED, data=...) at rpc-transport.c:495
#4  0x00007ffff3466e20 in socket_event_poll_in (this=0x6a3290) at socket.c:1986
#5  0x00007ffff34672bd in socket_event_handler (fd=14, idx=1, data=..., poll_in=1, poll_out=0, poll_err=0) at socket.c:2097
#6  0x00007ffff7b98fce in event_dispatch_epoll_handler (event_pool=0x6505e0, events=0x6c9c90, i=0) at event.c:784
#7  0x00007ffff7b991ad in event_dispatch_epoll (event_pool=0x6505e0) at event.c:845
#8  0x00007ffff7b99494 in event_dispatch (event_pool=0x6505e0) at event.c:945
#9  0x0000000000408ae0 in main (argc=7, argv=0x7fffffffe568) at glusterfsd.c:1814
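
(For context, the crash site boils down to roughly the following, paraphrased from client-rpc-fops.c: client3_3_create is the function that allocates the local and attaches it to the frame before the RPC is submitted, so a NULL local when the reply arrives means the frame was torn down, or its local detached, in between.)

        frame = myframe;
        local = frame->local;  /* NULL if the frame was already unwound/destroyed */
        fd    = local->fd;     /* line 2008: the SIGSEGV in the backtrace above */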

Best,
Gustavo Bervian Brand
---------------------------------------------------------------------------------

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
https://lists.nongnu.org/mailman/listinfo/gluster-devel



