Is it possible that the volume mount returns before fuse_init() has been
executed? If that's true, then the core is generated because, just after
mounting the volume, statedumps are requested to determine when all ec
children are up. The code in fuse's dump assumes that fuse_init() has
already been called when a statedump is generated.
#0 0x00007f75dae83137 in fuse_itable_dump (this=0x2079be0) at
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mount/fuse/src/fuse-bridge.c:4988
4988 inode_table_dump(priv->active_subvol->itable,
(gdb) print priv->init_recvd
$14 = 0 '\000'
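If that's the case, a guard in fuse_itable_dump() that checks init_recvd
and active_subvol before dereferencing should avoid the crash. This is
only a rough sketch of what I mean, based on the fields visible above
(not a tested patch):

static int
fuse_itable_dump (xlator_t *this)
{
        fuse_private_t *priv = NULL;

        if (!this)
                return -1;

        priv = this->private;

        /* Sketch: if the FUSE INIT message hasn't been received yet,
         * active_subvol is still NULL, so skip the itable dump
         * instead of dereferencing a NULL pointer. */
        if (!priv || !priv->init_recvd || !priv->active_subvol)
                return -1;

        gf_proc_dump_add_section("xlator.mount.fuse.itable");
        inode_table_dump(priv->active_subvol->itable,
                         "xlator.mount.fuse.itable");

        return 0;
}

Since the test keeps requesting statedumps until all ec children are up,
returning early here shouldn't affect it.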
Xavi
On 14/01/16 08:33, Xavier Hernandez wrote:
The failure happens when a statedump is generated. For some reason
priv->active_subvol is NULL, causing a segmentation fault:
(gdb) t 1
[Switching to thread 1 (LWP 4179)]
#0 0x00007f75dae83137 in fuse_itable_dump (this=0x2079be0) at
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mount/fuse/src/fuse-bridge.c:4988
4988 inode_table_dump(priv->active_subvol->itable,
(gdb) bt
#0 0x00007f75dae83137 in fuse_itable_dump (this=0x2079be0) at
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/mount/fuse/src/fuse-bridge.c:4988
#1 0x00007f75e30f8a11 in gf_proc_dump_xlator_info (top=0x2079be0) at
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/statedump.c:506
#2 0x00007f75e30f96e9 in gf_proc_dump_info (signum=10, ctx=0x2055010)
at
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/statedump.c:832
#3 0x0000000000409894 in glusterfs_sigwaiter (arg=0x7ffceb7dba50) at
/home/jenkins/root/workspace/rackspace-regression-2GB-triggered/glusterfsd/src/glusterfsd.c:2015
#4 0x00007f75e23a6a51 in start_thread () from /lib64/libpthread.so.0
#5 0x00007f75e1d1093d in clone () from /lib64/libc.so.6
(gdb) list
4983 return -1;
4984
4985 priv = this->private;
4986
4987 gf_proc_dump_add_section("xlator.mount.fuse.itable");
4988 inode_table_dump(priv->active_subvol->itable,
4989 "xlator.mount.fuse.itable");
4990
4991 return 0;
4992 }
(gdb) print priv->active_subvol
$5 = (xlator_t *) 0x0
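For context, the statedump here is the SIGUSR1-driven one (signum=10 in
frame #2), so the path is roughly the following. This is a simplified
sketch of what the backtrace shows, not the actual glusterfsd.c code;
the function name and the way the context is passed are illustrative:

#include <signal.h>

/* Simplified view of the statedump path in the backtrace: a dedicated
 * thread waits for SIGUSR1 and then asks every xlator to dump its
 * state, which is how fuse_itable_dump() ends up being called. */
static void *
sigwaiter_sketch (void *arg)
{
        glusterfs_ctx_t *ctx = arg;   /* process context (simplified) */
        sigset_t         set;
        int              sig = 0;

        sigemptyset (&set);
        sigaddset (&set, SIGUSR1);    /* signal 10: statedump trigger */

        for (;;) {
                if (sigwait (&set, &sig) != 0)
                        continue;
                if (sig == SIGUSR1)
                        /* walks the graph and calls each xlator's
                         * dumpops, including fuse_itable_dump() */
                        gf_proc_dump_info (sig, ctx);
        }

        return NULL;
}

So any statedump request that arrives before the fuse xlator has set
active_subvol will hit this dereference.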
Does this sound familiar to anyone?
Xavi
On 14/01/16 08:08, Xavier Hernandez wrote:
I'm looking into it.
On 14/01/16 08:03, Atin Mukherjee wrote:
[1] has caused a regression failure with a core from the mentioned test.
Mind having a look?
[1]
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17579/consoleFull
Thanks,
Atin
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel