I think it really is a bug, and I tested it: if the network between mon.0 and mon.1 is cut off, it is easy to reproduce.

    mon.0
         \
          \
           \
            \
    mon.1 --- mon.2

mon.0 wins the election between mon.0 and mon.2, while mon.1 wins the election between mon.1 and mon.2. Because the network between mon.0 and mon.1 is cut off, there is no way to elect a leader monitor.

2017-07-04 13:57 GMT+08:00 Z Will <zhao6305@xxxxxxxxx>:
> Hi:
>     I am testing ceph-mon split-brain. I have read the code, and if I
> understand it right, it won't split-brain. But I think there is still
> another problem. My ceph version is 0.94.10. Here is my test in detail:
>
>     There are 3 ceph-mons, whose ranks are 0, 1, 2 respectively. I stop
> the rank-1 mon and use iptables to block the communication between mon.0
> and mon.1. Once the cluster is stable, I start mon.1. I found that none
> of the 3 monitors can work properly any more: they all keep trying to
> call a new leader election, which means the cluster can't work at all.
>
>     Here is my analysis. A mon will always respond to a leader election
> message. In my test, communication between mon.0 and mon.1 is blocked,
> so mon.1 will always try to become leader, because it will always see
> mon.2 and it should win over mon.2. mon.0 will likewise always win over
> mon.2. But mon.2 will always respond to the election messages issued by
> mon.1, so this loop never ends. Am I right?
>
>     Is this a problem? Or was it just designed like this, and is it
> expected to be handled by a human operator?
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
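To make the failure mode concrete, here is a toy simulation of the loop described above. This is NOT the real Ceph Elector code; it is a minimal sketch under the assumption of a simple rank-based election (lowest rank wins among reachable peers, quorum needs 2 of 3), where each new proposal from the partitioned rival preempts the in-flight election on mon.2:

```python
RANKS = [0, 1, 2]
# reachable[a] = peers monitor a can exchange messages with;
# the mon.0 <-> mon.1 link is cut, both still reach mon.2.
reachable = {0: {2}, 1: {2}, 2: {0, 1}}

def election_round(epoch, proposer):
    """One propose/defer exchange. Returns (new_epoch, leader or None)."""
    acks = {proposer}
    for peer in reachable[proposer]:
        if peer > proposer:          # lower rank wins: higher-rank peer defers
            acks.add(peer)
    # 2-of-3 quorum is reached, but the other partitioned candidate
    # (which never saw this proposal) calls a fresh election with a
    # newer epoch, restarting mon.2 before the result is committed.
    other = 1 - proposer
    if other not in reachable[proposer] and other in reachable[2]:
        return epoch + 1, None       # election preempted: no stable leader
    return epoch, proposer

epoch, leader = 1, None
for _ in range(100):                 # mon.0 and mon.1 alternate proposals
    for proposer in (0, 1):
        epoch, leader = election_round(epoch, proposer)

print(leader)                        # never converges to a leader
```

Running this shows the epoch counter climbing indefinitely while `leader` stays `None`, matching the observation that all three monitors keep calling elections and the cluster never forms a quorum.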