Re: dht / afr crashes

Can you get a gdb backtrace from the coredump of this crash?
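For reference, a backtrace can usually be pulled from the core file along these lines; the binary path and core filename below are placeholders, not the actual paths from this setup:

```shell
# Allow a core file to be written (run in the shell before reproducing the crash)
ulimit -c unlimited

# Load the coredump into gdb and dump the full backtrace non-interactively.
# Replace /usr/sbin/glusterfs and ./core with your actual binary and core paths.
gdb --batch -ex "bt full" -ex "thread apply all bt" /usr/sbin/glusterfs ./core
```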

avati

2009/1/10 Corin Langosch <corinl@xxxxxx>:
> Hi again,
>
> I just recompiled gluster with debugging enabled and crashed it again the
> same way as before.
>
> Here's the r1.log / backtrace:
>
> 2009-01-09 19:59:54 W [dht.c:598:dht_lookup] client-dht: incomplete layout
> failure for path=/
> 2009-01-09 19:59:54 W [fuse-bridge.c:297:need_fresh_lookup] fuse-bridge:
> revalidate of / failed (Resource temporarily unavailable)
> 2009-01-09 20:00:06 E [socket.c:104:__socket_rwv] r3: writev failed (Broken
> pipe)
> 2009-01-09 20:00:06 E [saved-frames.c:148:saved_frames_unwind] r3: forced
> unwinding frame type(1) op(ENTRYLK)
> 2009-01-09 20:00:06 E [saved-frames.c:148:saved_frames_unwind] r3: forced
> unwinding frame type(1) op(ENTRYLK)
> 2009-01-09 20:00:06 E [socket.c:708:socket_connect_finish] r3: connection
> failed (Connection refused)
> pending frames:
> frame : type(1) op(LOOKUP)
> frame : type(1) op(LOOKUP)
>
> Signal received: 11
> configuration details:argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> tv_nsec 1
> package-string: glusterfs 1.4.0rc7
> /lib/libc.so.6[0x7f9bfe6db060]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(client_removexattr+0x16d)[0x7f9bfdc48672]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(protocol_client_xfer+0x403)[0x7f9bfdc42a24]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(client_entrylk+0x367)[0x7f9bfdc4b4a3]
> /lib/glusterfs/1.4.0rc7/xlator/cluster/afr.so(afr_unlock+0x5d2)[0x7f9bfda19f00]
> /lib/glusterfs/1.4.0rc7/xlator/cluster/afr.so(afr_write_pending_post_op_cbk+0xc4)[0x7f9bfda1a315]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(client_xattrop_cbk+0x2ec)[0x7f9bfdc4e366]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(protocol_client_interpret+0x1c6)[0x7f9bfdc538f2]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(protocol_client_pollin+0xd5)[0x7f9bfdc54545]
> /lib/glusterfs/1.4.0rc7/xlator/protocol/client.so(notify+0x123)[0x7f9bfdc54690]
> /lib/glusterfs/1.4.0rc7/transport/socket.so[0x7f9bfcd8b4c6]
> /lib/glusterfs/1.4.0rc7/transport/socket.so[0x7f9bfcd8b7b4]
> /lib/libglusterfs.so.0[0x7f9bfee68d99]
> /lib/libglusterfs.so.0[0x7f9bfee68f6e]
> /lib/libglusterfs.so.0(event_dispatch+0x73)[0x7f9bfee69284]
> ../bin/sbin/glusterfs(main+0xc50)[0x405136]
> /lib/libc.so.6(__libc_start_main+0xe6)[0x7f9bfe6c6466]
> ../bin/sbin/glusterfs[0x402419]
> ---------
>
> Corin
>
> On 09.01.2009 19:28, Corin Langosch wrote:
>
> Hi,
>
> For testing purposes I set up dht over afr with 4 bricks using the latest
> 1.4rc7.
>
> Running a simple "rsync -av --delete-during /etc /mount/r1" and killing
> a node during the copy crashes some random glusterfs nodes.
>
> For the exact volume specs etc. please have a look at the attached logs.
>
> start r1 r2 r3 r4
> rsync -av --delete-during /etc /mount/r1
> kill r3
> (sometimes crashes happen here...)
> wait some seconds
> start r3
> (sometimes crashes happen here...)
>
> All data directories and mount points are empty before starting the tests.
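The steps above can be sketched as a script roughly like the following; the volfile names and paths are assumptions for illustration, not the ones from the attached logs:

```shell
#!/bin/sh
# Hypothetical reproduction script. Volfile names/paths are placeholders.
for vol in r1 r2 r3 r4; do
    glusterfsd -f /etc/glusterfs/$vol-server.vol    # start each brick server
done
glusterfs -f /etc/glusterfs/r1-client.vol /mount/r1 # mount the dht+afr volume

rsync -av --delete-during /etc /mount/r1 &          # generate load on the mount
sleep 2
pkill -f r3-server.vol                              # kill one brick mid-copy
sleep 5                                             # wait some seconds
glusterfsd -f /etc/glusterfs/r3-server.vol          # bring the brick back
wait
```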
>
> If the error is in my volume descriptions, or if you need more info,
> please let me know.
>
> Corin
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



