Re: glusterfs 3.0.0 crash


Are you running the servers with a 2.0.x version?

Regards
--
Harshavardhana
Gluster - http://www.gluster.com


On Thu, Dec 17, 2009 at 9:52 PM, Vijay Bellur <vijay@xxxxxxxxxxx> wrote:
Can you please generate a backtrace and send across the client log?

Thanks,
Vijay
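One way to generate the requested backtrace is to let the crashing client dump core and then load the core into gdb. A minimal sketch follows; the binary path is taken from the crash dump above, while the core file location is an assumption (it depends on the system's core pattern and working directory):

```shell
# Allow the glusterfs client process to write a core file,
# then reproduce the crash in the same shell session.
ulimit -c unlimited

# Load the core into gdb and print a full symbolic backtrace.
# /path/to/core is hypothetical; substitute the actual core file.
gdb /opt/gluster/sbin/glusterfs /path/to/core -batch -ex 'bt full'
```

Running gdb with `-batch -ex 'bt full'` prints the backtrace non-interactively, which makes it easy to capture and attach to a mail along with the client log.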


elsif wrote:
Same result without iothreads:

pending frames:
frame : type(1) op(LOOKUP)
frame : type(1) op(LK)
frame : type(1) op(LK)
frame : type(1) op(LK)

patchset: 2.0.1-886-g8379edd
signal received: 11
time of crash: 2009-12-17 07:43:40
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.0
[0xffffe400]
/opt/gluster/lib/libglusterfs.so.0(data_destroy+0x3f)[0xf7f4b23f]
/opt/gluster/lib/libglusterfs.so.0(data_unref+0x60)[0xf7f4b2e0]
/opt/gluster/lib/libglusterfs.so.0(dict_destroy+0x46)[0xf7f4bb46]
/opt/gluster/lib/libglusterfs.so.0(dict_unref+0x60)[0xf7f4bc80]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_local_cleanup+0x86)[0xf7532c76]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_self_heal_completion_cbk+0x151)[0xf75540f1]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_sh_entry_done+0xc1)[0xf7558de1]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_self_heal_entry+0x4e)[0xf7558e3e]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_sh_metadata_done+0x2e2)[0xf7554dd2]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_self_heal_metadata+0x40)[0xf7554e20]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_sh_missing_entries_done+0x11d)[0xf75519bd]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_self_heal+0x347)[0xf7553c87]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so[0xf7535312]
/opt/gluster/lib/glusterfs/3.0.0/xlator/cluster/replicate.so(afr_revalidate_lookup_cbk+0x1f3)[0xf75355d3]
/opt/gluster/lib/glusterfs/3.0.0/xlator/protocol/client.so(client_lookup_cbk+0x687)[0xf7580e37]
/opt/gluster/lib/glusterfs/3.0.0/xlator/protocol/client.so(protocol_client_interpret+0x245)[0xf756cb75]
/opt/gluster/lib/glusterfs/3.0.0/xlator/protocol/client.so(protocol_client_pollin+0xcf)[0xf756cd1f]
/opt/gluster/lib/glusterfs/3.0.0/xlator/protocol/client.so(notify+0xd2)[0xf7572832]
/opt/gluster/lib/libglusterfs.so.0(xlator_notify+0x3f)[0xf7f5172f]
/opt/gluster/lib/glusterfs/3.0.0/transport/socket.so(socket_event_poll_in+0x3d)[0xf7f3aafd]
/opt/gluster/lib/glusterfs/3.0.0/transport/socket.so(socket_event_handler+0xab)[0xf7f3abbb]
/opt/gluster/lib/libglusterfs.so.0[0xf7f6c78a]
/opt/gluster/lib/libglusterfs.so.0(event_dispatch+0x21)[0xf7f6b571]
/opt/gluster/sbin/glusterfs(main+0xb53)[0x804b9b3]
/lib/libc.so.6(__libc_start_main+0xe5)[0xf7de25c5]
/opt/gluster/sbin/glusterfs[0x8049b61]
---------


Vijay Bellur wrote:
 
elsif wrote:
   
Here is the tail end of my config file:

volume dist
 type cluster/distribute
 subvolumes replicate1 replicate2 replicate3 replicate4 replicate5 replicate6 replicate7 replicate8 replicate9 replicate10 replicate11 replicate12 replicate13 replicate14 replicate15 replicate16 replicate17 replicate18
end-volume

volume iothreads
 type performance/io-threads
 option thread-count 16
 subvolumes dist
end-volume

Each replicate volume is made up of three client subvolumes.
Can you please remove iothreads from your client configuration and
give it a shot?

Thanks,
Vijay
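For reference, the suggested change amounts to deleting the io-threads stanza so that dist becomes the topmost volume in the client graph. A sketch of the resulting tail of the volfile, using the volume names from the config above (illustration only, not the poster's full file):

```
# io-threads removed for this test: the iothreads stanza is gone,
# and dist is now the top of the client translator graph.
volume dist
 type cluster/distribute
 subvolumes replicate1 replicate2 replicate3 replicate4 replicate5 replicate6 replicate7 replicate8 replicate9 replicate10 replicate11 replicate12 replicate13 replicate14 replicate15 replicate16 replicate17 replicate18
end-volume
```

This isolates whether the performance/io-threads translator is involved in the crash; if the segfault persists without it (as reported above), the fault lies elsewhere in the stack.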

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel

