Re: shared disk with virsh migration

I'd have to do some research to verify, but I'm guessing that iSCSI (in option 3) would use the traditional SCSI reservation mechanism to prevent problems associated with multiple access.
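
If you want to verify whether a persistent (SCSI-3) reservation is actually
held on the LUN, sg_persist from sg3_utils can report it; the device path
below is just a placeholder:

    # show any persistent reservation currently held on the device
    sg_persist --in --read-reservation /dev/sdb
    # list the registered reservation keys
    sg_persist --in --read-keys /dev/sdb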

    /Harry

On 09/16/2011 06:26 PM, Alan Wood wrote:
Hi all,

I'm trying to decide whether I really need a cluster implementation to do
what I want to do and I figured I'd solicit opinions.
Essentially I want to have two machines running as virtualization hosts
with libvirt/kvm.  I have shared iSCSI storage available to both hosts and
have to decide how to configure the storage for use with libvirt.  Right
now I see three possibilities:
1.  Setting up an iSCSI storage pool in libvirt (see the sketch after this list)
        Pros:   Migration seems painless, including live migration
        Cons:   Need to pre-allocate LUNs on the iSCSI box
                Does not seem to take advantage of iSCSI offloading or multipathing
2.  Setting up a two-node cluster and running CLVM
        Pros:   Very flexible storage management (is snapshotting supported in CLVM yet?)
                Automatic failover
        Cons:   Cluster infrastructure adds complexity and more potential for bugs
                Possible split-brain issues?
3.  A single iSCSI block device with a partition for each VM, mounted on both hosts
        Pros:   Easy migration and setup
        Cons:   Two hosts accessing the same block device outside of a
                cluster seems like it might lead to disaster
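
For reference, option 1 boils down to a pool definition plus the usual virsh
commands.  A minimal sketch; the pool name, portal address and IQN below are
placeholders for whatever the storage box exports:

    <!-- iscsi-pool.xml: each LUN on the target shows up as a libvirt volume -->
    <pool type='iscsi'>
      <name>iscsi-pool</name>
      <source>
        <host name='192.168.1.10'/>
        <device path='iqn.2011-09.com.example:storage.target1'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>

    # define and start the pool on both hosts, then hand volumes to guests
    virsh pool-define iscsi-pool.xml
    virsh pool-start iscsi-pool
    virsh vol-list iscsi-pool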

Right now I actually like option 3 but I'm wondering if I really am asking
for trouble accessing a block device simultaneously on two hosts without a
clustering infrastructure.  I did this a while back with a shared-SCSI box
and it seemed to work.  I would never be accessing the same partition on
both hosts and I understand that all partitioning has to be done while the
other host is off, but is there something else I'm missing here?
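
The migration itself would be the same one-liner in any case, as long as both
hosts see the storage under identical paths (guest and destination host names
here are just placeholders):

    # live-migrate "guest1" to host2 and keep it defined there afterwards
    virsh migrate --live --persistent guest1 qemu+ssh://host2/system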

Also, are people out there running option 2?  Does it make sense to set up
a cluster as small as two nodes for HA virtualization, or do I really need
more nodes for it to be worthwhile?  I do have all the fencing
infrastructure I might need (PDUs and DRACs).
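
For reference, the two-node special case is mostly a matter of the
two_node/expected_votes flags in cluster.conf plus a fence device per node.
A minimal sketch; node names, addresses and credentials are placeholders:

    <cluster name="vmcluster" config_version="1">
      <!-- let the cluster keep quorum with only two nodes -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node1" nodeid="1">
          <fence>
            <method name="drac">
              <device name="drac-node1"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="node2" nodeid="2">
          <fence>
            <method name="drac">
              <device name="drac-node2"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="drac-node1" agent="fence_drac5" ipaddr="192.168.1.21" login="root" passwd="secret"/>
        <fencedevice name="drac-node2" agent="fence_drac5" ipaddr="192.168.1.22" login="root" passwd="secret"/>
      </fencedevices>
    </cluster>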

any help would be appreciated.  thanks
-alan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


