OK, after a while, things are starting to work here. I have defined a service with an lvm-cluster resource and my (still very simple) script that launches and destroys the VM. The service is moved from one server to another when one goes down, and the VM is recreated.

However, I could launch the same VM (with the same LV as its disk) manually on the slave node, even though it was mounted on the master. I guess that was expected, right, since you warned about it before: "Actually it would not avoid an admin doing, by hand, an erratic mount but a bugzilla ticket has been opened by Brem Belguebli to fix an LVM issue that is causing that behaviour."

But rgmanager was able to mount it on both servers simultaneously too. I ran a test, disconnecting the heartbeat link so that one server was fenced, and the VM was launched on the "winner" as expected. But when the "loser" server came back, still without the heartbeat link, it launched the same VM again, and the service appeared as running locally on both nodes. I guess lvm-cluster should prevent this, shouldn't it? I don't understand, because this behaviour is the same as if I had not defined any lvm or lvm-cluster resource in the service at all. Both nodes see the LV anyway and can mount it whenever they want. What could be wrong?

Besides that, I can't relocate the service to another node by hand with clusvcadm; doing so causes the service to fail and become inactive, forcing me to disable and enable it again.

Thanks.
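P.S. In case it helps, here is a simplified sketch of the kind of service definition I mean, using the rgmanager lvm and script resource agents. The resource names, VG/LV names, script path, service name and node name are all placeholders, not my real cluster.conf. Below it is the relocate command that ends with the service failed/inactive.

    <rm>
      <resources>
        <!-- clustered LV used as the VM disk (placeholder names) -->
        <lvm name="vm_lv" vg_name="vg_cluster" lv_name="lv_vm01"/>
        <!-- simple script that starts/stops the VM (placeholder path) -->
        <script name="vm_script" file="/usr/local/sbin/vm01"/>
      </resources>
      <service name="vm01_svc" autostart="1" recovery="relocate">
        <lvm ref="vm_lv">
          <script ref="vm_script"/>
        </lvm>
      </service>
    </rm>

    # manual relocation that leaves the service failed, so I have to
    # disable and enable it again afterwards
    clusvcadm -r vm01_svc -m node2
    clustat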