I read the Red Hat Magazine article on this topic [1], but have come to realize that it might not be exactly what I am going for. I want a group of nodes that run a group of virtual machines with automated failover. I set things up as the article described, but realized I didn't want the GFS mount in the fstab file. I would rather have the GFS mount described in cluster.conf so that as nodes are added or removed the mount follows those changes (I know about the one-journal-per-node requirement, so I have already created a few extra journals). When I add a service to mount the GFS resource, it only gets mounted on one node, which is what you would expect when thinking in terms of other resource types.
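Roughly, what I added looks like the fragment below. The device, mount point, and domain names here are placeholders I typed up for this mail, not the real ones from the attached cluster.conf:

  <rm>
    <failoverdomains>
      <failoverdomain name="vm-domain" ordered="0" restricted="0">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <!-- shared GFS volume holding the guest images -->
      <clusterfs name="vmstore" fstype="gfs"
                 device="/dev/vg_cluster/lv_vmstore"
                 mountpoint="/guests" force_unmount="0"/>
    </resources>
    <!-- the service that mounts the GFS resource; it only runs,
         and therefore only mounts, on one node at a time -->
    <service autostart="1" domain="vm-domain" name="gfs-mount">
      <clusterfs ref="vmstore"/>
    </service>
  </rm>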
I started thinking about this and it almost seems like GFS is unnecessary. Should I instead have one filesystem per virtual machine, which wouldn't need to be GFS since only one node will ever run a given virtual machine at a time, and then mount/umount that filesystem as the virtual machine is migrated within the cluster? Something like the sketch below.
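Again the names and devices are made up for this mail, and I am not even sure rgmanager allows grouping a vm inside a service tree like this, so take it only as a sketch of the idea:

  <rm>
    <resources>
      <!-- one plain (non-GFS) filesystem per guest; ext3 is just an example -->
      <fs name="guest1-fs" fstype="ext3"
          device="/dev/vg_cluster/lv_guest1"
          mountpoint="/guests/guest1" force_unmount="1"/>
    </resources>
    <!-- idea: the filesystem belongs to the same service as the guest,
         so it would be mounted/unmounted wherever the guest is started -->
    <service autostart="1" name="guest1">
      <fs ref="guest1-fs">
        <vm name="guest1" path="/guests/guest1"/>
      </fs>
    </service>
  </rm>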
It seems like I am missing something about how this should be set up, and I would really appreciate any tips or ideas. I will include my cluster.conf in case it provides any more info.
As a side note, what is with all the errors from system-config-kickstart telling me my config file is invalid if it was generated by conga? Both are updated to the newest available versions.
Attachment: cluster.conf
[1] http://www.redhatmagazine.com/2007/08/23/automated-failover-and-recovery-of-virtualized-guests-in-advanced-platform/

thanks,
--
matt whiteley <whiteley@xxxxxxx>