A rule of thumb for syncops is that one shouldn't use them in a thread which cannot afford to block (these are mostly the threads polling for events on sockets). In other words, syncops are synchronous calls, and the thread calling a syncop stays blocked until the syncop completes. This can easily lead to deadlocks, as in the situation you've encountered here:

1. syncop_getxattr is invoked in the poller thread, which reads responses from a socket (let's say s1, connected to brick b1).

2. If the syncop_getxattr from step 1 needs a response from b1 to complete, we have a deadlock:

   2a. the thread calling syncop_getxattr needs to go back to polling for incoming messages on socket s1 for syncop_getxattr to complete, but

   2b. syncop_getxattr can't complete unless we read the response from socket s1.

One way you can remove the deadlock is to use a STACK_WIND (getxattr) instead of syncop_getxattr here; a rough sketch of that pattern follows the quoted mail below.

regards,
Raghavendra

----- Original Message -----
> From: "Ankireddypalle Reddy" <areddy@xxxxxxxxxxxxx>
> To: "Gluster Devel (gluster-devel@xxxxxxxxxxx)" <gluster-devel@xxxxxxxxxxx>
> Sent: Friday, December 16, 2016 1:44:32 AM
> Subject: syncop_getxattr stuck
>
> Attachment: trusted-cachevol.tcp-fuse.vol (2.48 KB)
>
> Hi,
>
> I am working on a sample xlator. The xlator is stuck in a syncop_getxattr
> call. Here's the stack trace. Attached is my gluster volume config file.
> Please advise what could have possibly gone wrong.
>
> (gdb) where
> #0  0x00007fd42bb236d5 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
> #1  0x00007fd42cd1aca3 in syncop_getxattr (subvol=0x7fd4180107d0, loc=0x7fd41000752c, dict=0x7fd420447620, key=0x7fd41f293d08 "trusted.glusterfs.archive-size", xdata_in=0x0, xdata_out=0x0) at syncop.c:1675
> #2  0x00007fd41f28ddb8 in get_arch_file_uint64_xattr (this=0x7fd4180107d0, loc=0x7fd41000752c, name=0x7fd41f293d08 "trusted.glusterfs.archive-size", val=0x7fd4204476e0, op_errno=0x7fd420447704) at archive.c:225
> #3  0x00007fd41f28dea8 in get_arch_file_xattrs (this=0x7fd4180107d0, loc=0x7fd41000752c, xattrs=0x7fd4204476e0, op_errno=0x7fd420447704) at archive.c:263
> #4  0x00007fd41f2919ba in archive_stat_cbk (frame=0x7fd410006fec, cookie=0x7fd41000752c, this=0x7fd4180107d0, op_ret=0, op_errno=117, buf=0x7fd410006074, xdata=0x0) at archive.c:903
> #5  0x00007fd41f519619 in dht_attr_cbk (frame=0x7fd4100075dc, cookie=0x7fd41000770c, this=0x7fd41800f0c0, op_ret=0, op_errno=0, stbuf=0x7fd420447940, xdata=0x0) at dht-inode-read.c:291
> #6  0x00007fd41f75ada8 in afr_stat_cbk (frame=0x7fd41000770c, cookie=0x0, this=0x7fd41800d610, op_ret=0, op_errno=0, buf=0x7fd420447940, xdata=0x0) at afr-inode-read.c:211
> #7  0x00007fd41fa044c2 in client3_3_stat_cbk (req=0x7fd41000437c, iov=0x7fd4100043bc, count=1, myframe=0x7fd4100040ec) at client-rpc-fops.c:507
> #8  0x00007fd42ca923b6 in rpc_clnt_handle_reply (clnt=0x7fd418054ef0, pollin=0x7fd418056840) at rpc-clnt.c:790
> #9  0x00007fd42ca92909 in rpc_clnt_notify (trans=0x7fd418055400, mydata=0x7fd418054f48, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fd418056840) at rpc-clnt.c:970
> #10 0x00007fd42ca8eb22 in rpc_transport_notify (this=0x7fd418055400, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fd418056840) at rpc-transport.c:537
> #11 0x00007fd421efcd0e in socket_event_poll_in (this=0x7fd418055400) at socket.c:2265
> #12 0x00007fd421efd26c in socket_event_handler (fd=16, idx=3, data=0x7fd418055400, poll_in=1, poll_out=0, poll_err=0) at socket.c:2395
> #13 0x00007fd42cd3aadc in event_dispatch_epoll_handler (event_pool=0xb96810, event=0x7fd420447ea0) at event-epoll.c:571
> #14 0x00007fd42cd3aef9 in event_dispatch_epoll_worker (data=0xbece80) at event-epoll.c:674
> #15 0x00007fd42bb1fdc5 in start_thread () from /lib64/libpthread.so.0
> #16 0x00007fd42b46321d in clone () from /lib64/libc.so.6
>
> Thanks and Regards,
> Ram
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-devel
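Here is the rough sketch I mentioned above. It is only a minimal illustration of the callback-based pattern, assuming the archive xlator context from your trace; archive_getxattr_cbk, archive_fetch_archive_size and the way the result is consumed are invented for the example, not taken from your code:

/* Sketch only; assumes the usual xlator headers. */
#include "xlator.h"

/* Runs later, when the getxattr response arrives; the poller thread
 * stays free to keep reading from the socket in the meantime. */
int32_t
archive_getxattr_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                      int32_t op_ret, int32_t op_errno, dict_t *dict,
                      dict_t *xdata)
{
        uint64_t size = 0;

        if (op_ret == 0 && dict &&
            dict_get_uint64 (dict, "trusted.glusterfs.archive-size",
                             &size) == 0) {
                /* ... resume the interrupted stat path with 'size' and
                 * STACK_UNWIND the original fop from here ... */
        }

        return 0;
}

/* Called from archive_stat_cbk () in place of the blocking
 * syncop_getxattr (); returns immediately after winding. */
int32_t
archive_fetch_archive_size (call_frame_t *frame, xlator_t *this,
                            loc_t *loc)
{
        STACK_WIND (frame, archive_getxattr_cbk,
                    FIRST_CHILD (this), FIRST_CHILD (this)->fops->getxattr,
                    loc, "trusted.glusterfs.archive-size", NULL);
        return 0;
}

The other option, if you want to keep the synchronous style, is to run the blocking code in a synctask (see synctask_new () in syncop.h), so that the wait happens on a syncenv worker thread that is free to sleep rather than in the epoll/poller thread.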