Re: Crash with mainline-2.5 patch 240 and dbench

Dale,
This bug is fixed now; please run 'tla update' to pick up the fix.
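
In case it is useful, pulling the fix looks roughly like this (a minimal sketch, assuming your working tree was checked out with tla into ~/glusterfs; adjust the path to your own checkout):

    $ cd ~/glusterfs
    $ tla update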

thanks,
avati

2007/6/28, Anand Avati <avati@xxxxxxxxxxxxx>:

Dale,
  We hit this bug here recently as well; the fix is on the way.

avati

2007/6/28, Dale Dude <dale@xxxxxxxxxxxxxxx>:
>
> Running dbench produces this backtrace:
>
> #0  0xb75b5dff in unify_lookup_cbk (frame=0x86f9bc0, cookie=0x8055e08,
> this=0x80560b8, op_ret=0, op_errno=146211160, inode=0x8745fe0,
> buf=0x87ed060) at unify.c:310
> #1  0xb75c2249 in client_lookup_cbk (frame=0x8ec7598, args=0x8a004c0) at
> client-protocol.c:3797
> #2  0xb75c36f5 in notify (this=0x8055e08, event=2, data=0x808dc70) at
> client-protocol.c:4184
> #3  0xb7f36ae2 in transport_notify (this=0x8055e08, event=1) at
> transport.c:152
> #4  0xb7f3725e in sys_epoll_iteration (ctx=0x8b70158) at epoll.c:54
> #5  0xb7f36c8d in poll_iteration (ctx=0x8b70158) at transport.c:260
> #6  0x0804a2e9 in main (argc=3, argv=0xbf95b494) at glusterfs.c:341
>
> ==========================
> glusterfs.log:
> 2007-06-27 15:51:40 C [fuse-bridge.c:455:fuse_fd_cbk] glusterfs-fuse:
> open() got EINTR
> 2007-06-27 16:04:17 C [common-utils.c:205:gf_print_trace]
> debug-backtrace: Got signal (11), printing backtrace
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: /lib/libglusterfs.so.0(gf_print_trace+0x2d)
> [0xb7f355f9]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: [0xffffe420]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: /lib/glusterfs/1.3.0-pre5/xlator/protocol/client.so
> [0xb75c2249]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace:
> /lib/glusterfs/1.3.0-pre5/xlator/protocol/client.so(notify+0x855)
> [0xb75c36f5]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: /lib/libglusterfs.so.0(transport_notify+0x37)
> [0xb7f36ae2]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: /lib/libglusterfs.so.0(sys_epoll_iteration+0xd7)
> [0xb7f3725e]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: /lib/libglusterfs.so.0(poll_iteration+0x1d)
> [0xb7f36c8d]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: [glusterfs] [0x804a2e9]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xd2)
> [0xb7de7ea2]
> 2007-06-27 16:04:17 C [common-utils.c:207:gf_print_trace]
> debug-backtrace: [glusterfs] [0x8049c31]
>
> =============================
> glusterfs-client.vol
> volume server1
>          type protocol/client
>          option transport-type tcp/client     # for TCP/IP transport
>          option remote-host 127.0.0.1     # IP address of the remote brick
>          option remote-subvolume volumenamespace
> end-volume
>
> volume server1vol1
>          type protocol/client
>          option transport-type tcp/client     # for TCP/IP transport
>          option remote-host 127.0.0.1     # IP address of the remote brick
>          option remote-subvolume clusterfs1
> end-volume
>
> ###################
>
> volume bricks
>   type cluster/unify
>   option namespace server1
>   option readdir-force-success on  # ignore failed mounts
>   subvolumes server1vol1
>
>   option scheduler rr
>   option rr.limits.min-free-disk 5   # in percent
> end-volume
>
> volume writebehind   #writebehind improves write performance a lot
>   type performance/write-behind
>   option aggregate-size 131072 # in bytes
>   subvolumes bricks
> end-volume
>
> #volume statprefetch
>   #type performance/stat-prefetch
>   #option cache-seconds 20
>   #subvolumes writebehind
> #end-volume
>
> ================================
> glusterfs-server.vol:
> volume clusterfs1
>   type storage/posix
>   option directory /volume1
> end-volume
>
> #volume clusterfs1
>    #type performance/io-threads
>    #option thread-count 8
>    #subvolumes volume1
> #end-volume
>
> #######
>
> volume volumenamespace
>   type storage/posix
>   option directory /volume.namespace
> end-volume
>
> ###
>
> volume clusterfs
>   type protocol/server
>   option transport-type tcp/server
>   subvolumes clusterfs1 volumenamespace
>   option auth.ip.clusterfs1.allow *
>   option auth.ip.volumenamespace.allow *
> end-volume
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



--
Anand V. Avati

