Hi,

Should I put some asymmetry in the cluster config (and/or in the qdisk
heuristic scripts) of a two-node cluster, so that only one node decides
to continue running its services in case of a split-brain?  Or are
heartbeat failures ignored *if* the quorum disk still shows that the
other node is running OK (or does the quorum disk only affect the
decision whether to fence or not)?

During my tests (disconnecting the nodes from each other while the
shared storage stayed reachable) I got tons of messages like:

  qdiskd[4012]: <crit> A master exists, but it's not me?!
  qdiskd[4012]: <crit> Critical Error: More than one master found!

and I finally had to reboot one node to resolve this "master race".

I'm getting more confused now; some sample configurations would help...

As I said yesterday, what's really missing is a clear picture of the
algorithm CMAN uses to determine membership and to calculate votes and
quorum, etc.

P.S. While testing the disconnected state I got an almost synchronous
kernel panic (on both nodes) in gfs_lockd (still using the 42.0.3.EL
kernel; will upgrade soon...).

--
--    Jos Vos <jos@xxxxxx>
--    X/OS Experts in Open Systems BV   |   Phone: +31 20 6938364
--    Amsterdam, The Netherlands        |   Fax:   +31 20 6948204
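
P.P.S. To illustrate what I mean by "asymmetry": something like the
cluster.conf fragment below, where both nodes share a ping heuristic
but node1 gets an extra heuristic (a local script that only succeeds
there), so node1 always ends up with the higher qdiskd score. This is
only an untested sketch; the cluster and label names, the IP address,
the script path and the scores are all made up by me.

  <cluster name="xoscluster" config_version="1">
    <!-- 1 vote per node + 1 for the quorum disk = 3 expected votes,
         so a single node plus the qdisk (2 votes) keeps quorum -->
    <cman expected_votes="3" two_node="0"/>
    <clusternodes>
      <clusternode name="node1" nodeid="1" votes="1"/>
      <clusternode name="node2" nodeid="2" votes="1"/>
    </clusternodes>
    <quorumd interval="1" tko="10" votes="1" label="xosqdisk" min_score="1">
      <!-- symmetric heuristic: both nodes ping the gateway -->
      <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2"/>
      <!-- asymmetric heuristic: a local script that exits 0 on node1 and
           non-zero on node2, so node1 scores 2 and node2 scores 1;
           min_score="1" keeps node2 alive in normal operation -->
      <heuristic program="/usr/local/sbin/prefer-this-node" score="1" interval="2"/>
    </quorumd>
  </cluster>

Is per-node heuristic scoring like this (or giving the nodes different
votes) the intended way to express such a preference, or does qdiskd's
master election already take care of this by itself?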