Matt Whiteley wrote:
On Jun 25, 2008, at 4:41 PM, Joe Royall wrote:
Why not use LVM-backed VMs, one LV per VM? Share the entire partition
with all the LVs via iSCSI to each dom0 and run CLVM on the dom0s. The
LVs do not need to be mounted in dom0. You can then use RHCS to fail
VMs over between dom0s. Consider putting all the VMs on a single node
into a single resource group, and only allow one group to operate on
any given node. You can then configure N+1 redundancy.
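For example (node, domain, and guest names here are only placeholders),
the failover domains in cluster.conf might look something like:

    <rm>
      <failoverdomains>
        <!-- each group of guests prefers its own node, with one
             shared spare node acting as the N+1 standby -->
        <failoverdomain name="prefer-node1" ordered="1" restricted="1">
          <failoverdomainnode name="node1" priority="1"/>
          <failoverdomainnode name="spare-node" priority="2"/>
        </failoverdomain>
      </failoverdomains>
      <!-- pin each guest to its group's domain -->
      <vm autostart="1" domain="prefer-node1" name="guest1" path="/etc/xen"/>
      <vm autostart="1" domain="prefer-node1" name="guest2" path="/etc/xen"/>
    </rm>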
--
Joe Royall
Red Hat Certified Architect
We already have all of the nodes attached to a SAN via Fibre Channel,
so I would rather not re-export storage from the SAN through another
box as an iSCSI target to these four nodes. That would add a layer of
complexity and a potential performance bottleneck.
It seems like I could do the first half of what you describe: instead
of making a GFS filesystem on the CLVM LV and using files there for the
VM backends, I could make multiple LVs in the clustered VG and use one
per VM. This avoids GFS entirely, and I could just have a resource per
VM for its LV. I am not sure how I would specify this in cluster.conf
so that the LV would be made available on the proper node that is going
to run a VM, though. From what I have read, a <vm> element can't be a
child of a <service> element, and there doesn't seem to be any other
way to define a relationship between the two.
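For the LV side, I assume it would just be something like this (VG and
guest names are placeholders):

    # one logical volume per guest, carved from the clustered VG
    lvcreate -L 20G -n guest1-disk clustervg
    lvcreate -L 20G -n guest2-disk clustervg

and then each guest's Xen config would point at its LV as a physical
block device, e.g.

    disk = [ 'phy:/dev/clustervg/guest1-disk,xvda,w' ]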
If you are using clvmd, there is no need to create a resource in
cluster.conf for the LV. A clustered VG can be active and usable on
every node in the cluster, so there is nothing to fail over from one
node to the other. Just create your VM with the LV as its backend
storage and then create a vm resource for it:
<vm autostart="1" name="myguest" path="/etc/xen"/>
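In context, that vm element goes directly under the <rm> section of
cluster.conf, as a sibling of any <service> elements rather than a
child of one:

    <rm>
      <!-- vm resources sit at this level, not inside a <service> -->
      <vm autostart="1" name="myguest" path="/etc/xen"/>
    </rm>

The path attribute tells rgmanager which directory to search for the
guest's Xen configuration file.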
John