For a time I was restricted in the hardware I could use for a cluster, and the hardware used in the two-node cluster had the following fence devices:

node1 = power fencing through IPMI
node2 = manual fencing

(A rough sketch of what that layout looks like in cluster.conf is at the end of this message.)

Whenever node1 was fenced by node2, node1 would power down as expected. Whenever node2 was fenced by node1, node1 would "manually fence" node2. What basically happens from there is that CMAN on node1 detects that node2 is still participating in the cluster, and because it can't remove node2 from the cluster, node1 removes itself instead. This is to stop damage to shared filesystems. But the impact on your user base is that you've just lost both nodes of the cluster until an admin manually intervenes - which serves little purpose when you're trying to achieve clustered high availability.

Stewart

On Thu Feb 19 16:39, ESGLinux sent:

>Hello,
>
>I can promise you that the cluster does run without fencing at all, ;-)
>
>But it runs in an absolutely unpredictable way.
>
>I'm using Xen because I'm just testing different scenarios before putting the cluster into a real production system, which is the final goal.
>
>I am going to read the FAQs you sent me (the other link I've already read and it does not resolve my doubts, but I think that's because I didn't understand the problem; now I'll see...)
>
>All of this about fencing makes me wonder: if everything fails, and a node is completely lost and it's impossible to complete the fence process for it, what happens? In my current situation without any fencing, the cluster doesn't work at all.
>
>Greetings and thanks for the information,
>
>ESG

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
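
(For anyone wanting to reproduce the setup described above, the fence configuration would look roughly like the following in /etc/cluster/cluster.conf. This is only a sketch - the cluster name, node names, and the IPMI address and credentials are placeholders, and exact attributes can differ between cluster suite releases.)

<?xml version="1.0"?>
<cluster name="testcluster" config_version="1">
  <!-- let a two-node cluster keep quorum when one node is lost -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <!-- node1: power fencing through its IPMI/BMC interface -->
    <clusternode name="node1" nodeid="1">
      <fence>
        <method name="1">
          <device name="node1-ipmi"/>
        </method>
      </fence>
    </clusternode>
    <!-- node2: manual fencing - the fence operation only completes
         after an admin runs fence_ack_manual to confirm the node
         is really down -->
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="1">
          <device name="manual" nodename="node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" name="node1-ipmi"
                 ipaddr="192.168.1.101" login="admin" passwd="secret"/>
    <fencedevice agent="fence_manual" name="manual"/>
  </fencedevices>
</cluster>

(Manual fencing was never supported for production use, for much the reason described above: nothing actually powers the node off, so the cluster is left waiting on a human.)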