Some more data about the volume:

Volume Name: Volume-xxxxxxx
Type: Distributed-Replicate
Volume ID: b19cc9e2-071e-4f68-95e3-7c3e26d263a8
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1cr:/opt/gluster/bricks/MintVelvet-brick1/brick1
Brick2: gluster2cr:/opt/gluster/bricks/MintVelvet-brick1/brick1
Brick3: gluster3cr:/opt/gluster/bricks/MintVelvet-brick1/brick1
Brick4: gluster1cr:/opt/gluster/bricks/MintVelvet-brick2/brick2
Brick5: gluster2cr:/opt/gluster/bricks/MintVelvet-brick2/brick2
Brick6: gluster3cr:/opt/gluster/bricks/MintVelvet-brick2/brick2
Options Reconfigured:
features.barrier: disable
performance.readdir-ahead: on

2015-10-07 14:45 GMT+02:00 <muhammad.aliabbas@xxxxxx>:
> Hey Guys,
>
> This is my first email to the group. I am not sure whether this is the right
> forum, or whether there are any conventions for raising issues.
>
> Incident Overview:
> ===============
> The Gluster daemon on individual servers crashes, leaving a crash dump in the
> log file. This started happening after the last upgrade from 3.6.x to 3.7.4.
>
> Setup Topology
> =============
> 2 clusters, each comprising 3 nodes.
> Volume configured as replicated x 2 and distributed x 2.
> Geo-replication between the clusters, as they are in different data centres.
>
> Infrastructure Information
> ===================
> System OS: CentOS 6.7
> Kernel version: 2.6.32-504.23.4.el6.x86_64
> Gluster RPM version: glusterfs-3.7.4-2.el6.x86_64
>
> Crash Dump From One of the Incidents:
> ==============================
> patchset: git://git.gluster.com/glusterfs.git
> signal received: 6
> time of crash:
> 2015-10-07 12:11:21
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.7.4
> /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x7f410b691936]
> /usr/lib64/libglusterfs.so.0(gf_print_trace+0x32f)[0x7f410b6b149f]
> /lib64/libc.so.6(+0x326a0)[0x7f410a0316a0]
> /lib64/libc.so.6(gsignal+0x35)[0x7f410a031625]
> /lib64/libc.so.6(abort+0x175)[0x7f410a032e05]
> /lib64/libc.so.6(+0x70537)[0x7f410a06f537]
> /lib64/libc.so.6(+0x75e66)[0x7f410a074e66]
> /lib64/libc.so.6(+0x789ba)[0x7f410a0779ba]
> /usr/lib64/libglusterfs.so.0(iobref_destroy+0x54)[0x7f410b6cac54]
> /usr/lib64/libgfrpc.so.0(rpc_transport_pollin_destroy+0x1e)[0x7f410b45d3de]
> /usr/lib64/glusterfs/3.7.4/rpc-transport/socket.so(+0xabf4)[0x7f40fe73fbf4]
> /usr/lib64/glusterfs/3.7.4/rpc-transport/socket.so(+0xc7bd)[0x7f40fe7417bd]
> /usr/lib64/libglusterfs.so.0(+0x8b0a0)[0x7f410b6f70a0]
> /lib64/libpthread.so.0(+0x7a51)[0x7f410a77da51]
> /lib64/libc.so.6(clone+0x6d)[0x7f410a0e79ad]
> ---------
>
> Regards,
> Ali
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users
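For anyone triaging this: the frames of interest sit between abort() and the
socket event handler, i.e. the crash was raised while iobref_destroy was
tearing down buffers under rpc_transport_pollin_destroy. A quick sketch for
filtering just the gluster-side frames out of such a dump (here the backtrace
is fed inline for illustration; in practice you would grep your daemon log,
whose path depends on your installation):

```shell
# Keep only glusterfs/gfrpc/rpc-transport frames, dropping the libc
# signal/abort machinery. The input below is copied from the dump above.
grep -E 'libglusterfs|libgfrpc|rpc-transport' <<'EOF'
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x7f410b691936]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x32f)[0x7f410b6b149f]
/lib64/libc.so.6(gsignal+0x35)[0x7f410a031625]
/lib64/libc.so.6(abort+0x175)[0x7f410a032e05]
/usr/lib64/libglusterfs.so.0(iobref_destroy+0x54)[0x7f410b6cac54]
/usr/lib64/libgfrpc.so.0(rpc_transport_pollin_destroy+0x1e)[0x7f410b45d3de]
/usr/lib64/glusterfs/3.7.4/rpc-transport/socket.so(+0xabf4)[0x7f40fe73fbf4]
EOF
```

This is only a log-reading aid, not a fix; the filtered frames are what is
worth quoting when searching the Gluster bug tracker for this signature.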