Hi,

I am running corosync-1.2.3-36.el6.x86_64 on a 3-node cluster running RHEL 6 (kernel 2.6.33.9-rt31.75.el6rt.x86_64). I am running pacemaker as well, but with no services configured yet.

I caused the core dump by running the following on the third (last) node:

    ifdown ib0 && sleep 5 && ifup ib0

This is some info from the core file:

    ...
    Core was generated by `corosync'.
    Program terminated with signal 6, Aborted.
    #0  0x00000032a4a32885 in raise () from /lib64/libc.so.6
    Missing separate debuginfos, use: debuginfo-install corosync-1.2.3-36.el6.x86_64
    (gdb) where
    #0  0x00000032a4a32885 in raise () from /lib64/libc.so.6
    #1  0x00000032a4a34065 in abort () from /lib64/libc.so.6
    #2  0x00000032a4a2b9fe in __assert_fail_base () from /lib64/libc.so.6
    #3  0x00000032a4a2bac0 in __assert_fail () from /lib64/libc.so.6
    #4  0x00007f69bbb2eaa6 in ?? () from /usr/lib64/libtotem_pg.so.4
    #5  0x00007f69bbb33a93 in ?? () from /usr/lib64/libtotem_pg.so.4
    #6  0x00007f69bbb33db9 in ?? () from /usr/lib64/libtotem_pg.so.4
    #7  0x00007f69bbb2b2a4 in rrp_deliver_fn () from /usr/lib64/libtotem_pg.so.4
    #8  0x00007f69bbb27e4a in ?? () from /usr/lib64/libtotem_pg.so.4
    #9  0x00007f69bbb23dba in poll_run () from /usr/lib64/libtotem_pg.so.4
    #10 0x0000000000406c1e in main ()
    (gdb)

Is this a known bug? How should I deal with it? (I am new to corosync - I just started evaluating it for a product that requires H/A - sic! :-)

Also, although it core-dumped only once, I can get it in a bind quite easily by doing the above (ifdown/ifup). Very often Totem simply does not recover/does not converge/keeps spinning.

Below is my corosync.conf:

    # Please read the corosync.conf.5 manual page
    compatibility: none

    totem {
            version: 2
            secauth: off
            threads: 0
            token: 500
            retransmits_before_loss: 5
            interface {
                    ringnumber: 0
                    bindnetaddr: 10.10.100.0
                    mcastaddr: 226.94.1.1
                    mcastport: 5405
                    ttl: 1
            }
    }

    logging {
            fileline: off
            to_stderr: no
            to_logfile: no
            to_syslog: yes
            logfile: /var/log/cluster/corosync.log
            debug: off
            timestamp: on
            logger_subsys {
                    subsys: AMF
                    debug: off
            }
    }

    amf {
            mode: disabled
    }

Thanks a lot in advance for any advice on what to do with this problem.

--
Mo Po
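
P.S. To get names for the ?? frames in libtotem_pg, I plan to install the debuginfo package that gdb itself suggests and re-run the backtrace. A minimal sketch (assuming the RHEL debuginfo repo is enabled and the daemon binary is /usr/sbin/corosync; the core file path below is just a placeholder for wherever my core was written):

    # Pull in symbols for this exact corosync build (command suggested by gdb)
    debuginfo-install corosync-1.2.3-36.el6.x86_64

    # Re-open the core against the daemon binary and take a full backtrace
    gdb /usr/sbin/corosync /path/to/core
    (gdb) bt full

I will post the symbol-resolved backtrace as a follow-up if that helps pin down the failing assert.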
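P.P.S. In case it is relevant: the crash is inside rrp_deliver_fn, and my test takes down the only ring, so I was also wondering whether a redundant second ring would behave better. A sketch of what I had in mind (the 10.10.101.0 network, second mcast address, and port are hypothetical - I only have the one ib interface at the moment):

    totem {
            ...
            rrp_mode: passive
            interface {
                    ringnumber: 0
                    bindnetaddr: 10.10.100.0
                    mcastaddr: 226.94.1.1
                    mcastport: 5405
                    ttl: 1
            }
            interface {
                    ringnumber: 1
                    bindnetaddr: 10.10.101.0
                    mcastaddr: 226.94.1.2
                    mcastport: 5407
                    ttl: 1
            }
    }

Is redundant ring the recommended way to survive a single-interface flap, or should a lone ring be expected to recover cleanly from ifdown/ifup on its own?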