Hi Edson,

On Thu, 27-08-2009 at 12:14 -0300, Edson Marquezani Filho wrote:
> Ok, after a while, things are starting to work here. I have defined a
> service with a lvm-cluster resource and my (still very simple) script
> that launches the VMs and destroys them. The service is moved from one
> server to another when one goes down, and the VMs are recreated.
>
> But I could launch the same VM (with the same LV as its disk) manually
> on the slave node, even though it was mounted on the master. I guess
> that was expected, right? You had warned about that before:
>
> "Actually it would not avoid an admin doing, by hand, an erratic mount
> but a bugzilla ticket has been opened by Brem Belguebli to fix an LVM
> issue that is causing that behaviour."

Yes, you can bypass the exclusive flag by hand if you don't take it into
account when activating the exclusive LV on a second node. While the
exclusive LV is mounted on the other node, the command
"lvchange -aey XXX/YYY" should give you an error message;
"lvchange -ay XXX/YYY" will bypass it. (I have pasted a few example
snippets below my signature.)

> But rgmanager was able to mount it on both servers simultaneously too.
> I ran a test, disconnecting the heartbeat link so that one server got
> fenced, and the VM was launched on the "winner" as expected. But when
> the "loser" server came back, still without the heartbeat link, it
> launched the same VM again, and the service appeared as running locally
> on both nodes. I guess lvm-cluster should avoid this, shouldn't it?

This should not happen. Have you set "exclusive=yes" in the resource
definitions in cluster.conf? Can we have a copy of your current
cluster.conf?

> I have not understood this, because the behavior is the same as if I
> had not defined any lvm or lvm-cluster resource for the service. Both
> nodes see the LV anyway and can mount it whenever they want.
>
> What could be wrong?
>
> Besides that, I can't relocate the service to another node by hand with
> clusvcadm; that causes the service to fail and become inactive, forcing
> me to disable and enable it again.

I tested it and it should work. Same answer as before: give us a copy of
your current cluster.conf.

> Thanks.

Cheers,

Rafael

--
Rafael Micó Miranda
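
Here is a minimal sketch of the activation behaviour I described above.
The volume group and LV names (vg_cluster/lv_vm1) are made up; substitute
your own:

    # Run on the second node while the LV is already exclusively active
    # on the first node (clustered VG, clvmd running):
    lvchange -aey vg_cluster/lv_vm1   # exclusive activation: should fail with an error
    lvchange -ay  vg_cluster/lv_vm1   # plain activation: bypasses the exclusive flag

This is why an admin can still mount the LV by hand on the wrong node, as
mentioned earlier in the thread.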
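
And a rough cluster.conf fragment showing where "exclusive=yes" would go.
I am writing the resource and attribute names from memory, so treat
everything except the exclusive flag as a placeholder and check it against
your resource agent's metadata:

    <resources>
      <!-- placeholder names: lv_vm1, vg_cluster, vm1_script -->
      <lvm-cluster name="lv_vm1" vgname="vg_cluster" lvname="lv_vm1" exclusive="yes"/>
      <script name="vm1_script" file="/usr/local/bin/vm1"/>
    </resources>
    <service name="vm1" autostart="1" recovery="relocate">
      <lvm-cluster ref="lv_vm1"/>
      <script ref="vm1_script"/>
    </service>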
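
Finally, the manual relocation and the disable/enable recovery you
mentioned, again with made-up service and node names:

    clusvcadm -r vm1 -m node2   # relocate service "vm1" to member node2
    # If the service ends up failed or disabled after a relocation attempt:
    clusvcadm -d vm1            # disable the service
    clusvcadm -e vm1 -m node2   # enable it again, starting it on node2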