This backtrace is similar to https://bugzilla.redhat.com/show_bug.cgi?id=1209461 . The crash is due to a race between the cleanup thread and the RPC event thread. It is a known issue; it has no serious impact on functionality apart from creating a core dump file and a log message. For more info: https://bugzilla.redhat.com/show_bug.cgi?id=1209461

I have sent a patch for the RPC listener cleanup (http://review.gluster.org/10197), but the glusterd fini() functions are never invoked before exit(0) because that code is commented out in cleanup_and_exit() (no idea why it is commented):

void
cleanup_and_exit (int signum)
{
        ------
        -------
        glusterfs_pidfile_cleanup (ctx);

        exit (0);

#if 0
        /* TODO: Properly do cleanup_and_exit(), with synchronization */
        if (ctx->mgmt) {
                /* cleanup the saved-frames before last unref */
                rpc_clnt_connection_cleanup (&ctx->mgmt->conn);
                rpc_clnt_unref (ctx->mgmt);
        }

        /* call fini() of each xlator */
        trav = NULL;
        if (ctx->active)
                trav = ctx->active->top;
        while (trav) {
                if (trav->fini) {
                        THIS = trav;
                        trav->fini (trav);
                }
                trav = trav->next;
        }
#endif
}

Regards,
Anand.N

On 05/12/2015 09:09 AM, Christopher Pereira wrote:
> On 10-05-2015 6:26, Niels de Vos wrote:
>> On Sat, May 09, 2015 at 06:34:55AM -0300, Christopher Pereira wrote:
>>> Core was generated by `glusterd --xlator-option *.upgrade=on -N'.
>>> Program terminated with signal 11, Segmentation fault.
>>> #0  0x00007f489c747c3b in ?? ()
>>>
>>> [...]
>
> Bug reported here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1220623
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-devel