GFS2 and VM High Availability/DRS

Howdy, thanks for all your answers here.  With your help (particularly Digimer), I was able to set up my little two-node GFS2 cluster.  I can't pretend yet to understand everything, but I have a blossoming awareness of what, why, and how.

The way I finally set it up for my test cluster was:
  1. LUN on the SAN
  2. configured through ESXi as an RDM
  3. RDM made available to the guest OS
  4. parted the RDM device
  5. pvcreate/vgcreate/lvcreate to create a logical volume on the device
  6. mkfs.gfs2 to create the GFS2 filesystem on the volume, backed by clvmd and cman, etc.
It works and that's great.  BUT the literature says VMware's vMotion/HA/DRS doesn't support RDMs (though others say that isn't a problem).  A rough sketch of steps 4-6 follows below.
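
For reference, here is roughly what steps 4-6 looked like on my test box.  The device name (/dev/sdb), the volume and cluster names, and the mount point are just from my setup, and this assumes lvm.conf already has locking_type = 3 so clvmd handles the locking:

    # partition the RDM device as it appears inside the guest
    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
    # clustered LVM on top of it
    pvcreate /dev/sdb1
    vgcreate --clustered y vg_gfs2 /dev/sdb1
    lvcreate -n lv_shared -l 100%FREE vg_gfs2
    # GFS2 with DLM locking; "mycluster" must match the cluster name in
    # cluster.conf, and -j 2 gives one journal per node in a two-node cluster
    mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 2 /dev/vg_gfs2/lv_shared
    mount -t gfs2 /dev/vg_gfs2/lv_shared /mnt/shared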

I am setting up GFS2 on CentOS running on VMware with a SAN.  We want to take advantage of VMware's High Availability (HA) and Distributed Resource Scheduler (DRS), which can restart or migrate a guest onto another host if its current host fails or becomes overloaded.  I've come across some contradictory statements regarding the compatibility of RDMs with HA/DRS.  So naturally, I have some questions:

1)  If my shared cluster filesystem resides on an RDM on a SAN and is available to all of the ESXi hosts, can I use HA/DRS or not?  If so, what are the limitations?  If not, why not?

2)  If I cannot use an RDM for the cluster filesystem, can I put it on a virtual disk on a VMFS datastore so VMware can deal with it?  What are the limitations of that?

3)  Is there some other way, such as using an iSCSI initiator inside the guest to bypass VMware's storage stack entirely?  Anyone have experience with this?  A rough sketch of what I mean follows below.
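
What I have in mind for question 3 is something like the software initiator from iscsi-initiator-utils inside the guest; the portal address and target IQN below are only placeholders for whatever the SAN actually exposes:

    # discover and log in to the SAN target from inside the guest
    yum install iscsi-initiator-utils
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50
    iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 -p 192.168.10.50 --login
    # the new /dev/sdX could then be partitioned and used for GFS2 just like the steps above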

Wes

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
