Dale, the logs don't seem to pinpoint the exact bug, but in any case the region of code in which you hit the bug no longer exists in the codebase. Please let us know if you come across any other issues.

thanks,
avati

2007/7/10, Dale Dude <dale@xxxxxxxxxxxxxxx>:
Linux 2.6.15 (Dapper), FUSE 2.6.5, Dell 2950 with 4 GB RAM. The glusterfs process dies with the log below. Samba is layered over the mount, and after a few minutes of load from 4 different servers the mount dies. There is no core file. I'm not using remote volumes, as you will see from the config, but it happens with remote volumes too.

2007-07-09 16:46:37 C [common-utils.c:208:gf_print_trace] debug-backtrace: Got signal (11), printing backtrace
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0(gf_print_trace+0x2b) [0xb7fa891d]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: [0xffffe420]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0 [0xb7fac398]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0 [0xb7fac87f]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0(inode_update+0x44) [0xb7fac8fa]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: [glusterfs] [0x804afe0]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0 [0xb7fb1ea2]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/libglusterfs.so.0(call_resume+0x40) [0xb7fb2141]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/glusterfs/1.3.0-pre5.2/xlator/performance/io-threads.so [0xb7605788]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/tls/i686/cmov/libpthread.so.0 [0xb7f78341]
2007-07-09 16:46:38 C [common-utils.c:210:gf_print_trace] debug-backtrace: /lib/tls/i686/cmov/libc.so.6(__clone+0x5e) [0xb7f0d4ee]

============ client.conf:

volume server1
  type protocol/client
  option transport-type tcp/client          # for TCP/IP transport
  option remote-host 127.0.0.1              # IP address of the remote brick
  option remote-subvolume volumenamespace
end-volume

volume server1vol1
  type protocol/client
  option transport-type tcp/client          # for TCP/IP transport
  option remote-host 127.0.0.1              # IP address of the remote brick
  option remote-subvolume clusterfs1
end-volume

volume volume1
  type storage/posix
  option directory /volume1
end-volume

volume volumenamespace
  type storage/posix
  option directory /volume.namespace
end-volume

###################

volume bricks
  type cluster/unify
  #option namespace server1
  option namespace volumenamespace
  option readdir-force-success on           # ignore failed mounts
  #subvolumes server1vol1
  subvolumes volume1
  option scheduler rr
  option rr.limits.min-free-disk 5          # %
end-volume

volume writebehind                          # writebehind improves write performance a lot
  type performance/write-behind
  option aggregate-size 131072              # in bytes
  subvolumes bricks
end-volume

volume readahead
  type performance/read-ahead
  option page-size 65536                    # unit in bytes
  option page-count 16                      # cache per file = (page-count x page-size)
  subvolumes writebehind
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 10
  subvolumes readahead
end-volume

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel
--
Anand V. Avati
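For context, the client.conf above references remote-subvolumes clusterfs1 and volumenamespace served from 127.0.0.1, but the matching server-side spec is not included in the thread. Below is a minimal sketch of what such a GlusterFS 1.3-era server volume file might look like, assuming a single host exporting both bricks; the directory paths and the open auth wildcard are assumptions for illustration, not taken from the original message.

============ server.conf (hypothetical sketch, not from the thread):

volume clusterfs1
  type storage/posix
  option directory /volume1                 # assumed export path
end-volume

volume volumenamespace
  type storage/posix
  option directory /volume.namespace        # assumed namespace path
end-volume

volume server
  type protocol/server
  option transport-type tcp/server          # for TCP/IP transport
  option auth.ip.clusterfs1.allow *         # assumption: allow all clients
  option auth.ip.volumenamespace.allow *    # assumption: allow all clients
  subvolumes clusterfs1 volumenamespace
end-volume

A client volume such as server1vol1 above would then reach the clusterfs1 brick through this protocol/server export instead of a local storage/posix volume.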