Justin,
I do something similar, but not quite as complex.
I have a replicated (x2) Gluster volume where I drop thin-provisioned iSCSI volume files to be served up via tgtd on CentOS 6. I mount the Gluster volume locally on both Gluster servers (via the FUSE driver), then point the
tgtd daemons at the image files. I use this as the back-end for a VMware ESXi datastore, so I rely on ESXi's multipath iSCSI functionality to handle failover between the nodes.
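In case it helps, the tgtd side of that is just a stanza like the following in targets.conf (the IQN and paths here are made up for illustration -- substitute your own):

```
# /etc/tgt/targets.conf -- illustrative only; IQN and paths are placeholders
<target iqn.2015-04.com.example:gluster-lun1>
    # the image file lives on the locally FUSE-mounted Gluster volume
    backing-store /mnt/glustervol/esx-datastore1.img
</target>
```

Both nodes carry the same stanza, so ESXi sees the same LUN down two paths.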
I was previously doing this with DRBD/Pacemaker/Corosync, but ESXi freaks out when all paths to a datastore go down, and it takes ~2-5 seconds for the entire cluster stack to go down and come back up during an orderly failover (to say nothing of a catastrophic failover), so that model just didn't work for me.
I've since done some testing with just a simple VIP in keepalived on top of Gluster, using LIO and libgfapi on CentOS 7, and that seemed to work great -- but I have some other incompatibilities with CentOS 7, so I decided not to pursue it for this project. Maybe another one on the horizon.
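For anyone who wants to try the keepalived route, the VIP piece is tiny -- something along these lines (the interface name, router-id, and address are placeholders, not my actual values):

```
# /etc/keepalived/keepalived.conf -- sketch; all values are placeholders
vrrp_instance iscsi_vip {
    state MASTER            # BACKUP on the other Gluster node
    interface eth0
    virtual_router_id 51
    priority 100            # set lower on the backup node
    advert_int 1
    virtual_ipaddress {
        192.168.1.50/24     # the VIP the ESXi initiators point at
    }
}
```

That gets you floating-IP failover without any of the Pacemaker/Corosync machinery.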
I briefly tried testing the libgfapi driver on CentOS 6 with my current production setup, but when I started the rebuilt tgtd instance, it assigned my iSCSI LUNs different LUN numbers, so my ESXi cluster didn't recognize them as different paths to the same LUN.
I couldn't be bothered to work out the reason for the change, so I just switched back in the meantime. I'll probably play with it in a test environment when I have time -- libgfapi should be faster and more efficient than going through
FUSE.
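For reference, the libgfapi variant I was testing used tgt's glfs backing store, declared roughly like this -- I don't have the exact config in front of me, so the volume name, host, and file path below are placeholders and the syntax is from memory; check the tgt documentation before trusting it:

```
# /etc/tgt/targets.conf -- rough sketch of the libgfapi path; syntax from memory
<target iqn.2015-04.com.example:gluster-lun1>
    bs-type glfs
    # volume@host:path-within-volume, bypassing the FUSE mount entirely
    backing-store glustervol@gluster1.example.com:/esx-datastore1.img
</target>
```

Comparing `tgtadm --lld iscsi --mode target --op show` output on both nodes is the quickest way to spot the LUN-numbering mismatch I ran into.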
What would the Pacemaker CRM handle for you, besides a shared VIP? Would you want it to start/stop the iSCSI target daemon as well? (If so, why?) Is there any reason to use a full CRM for this versus a simple VIP in something like keepalived?
Good luck, and let us know how you get on!
Regards,
Jon Heese
Sent: Monday, April 20, 2015 10:14 PM
To: gluster-users@xxxxxxxxxxx
Subject: GlusterFS with iSCSI and PaceMaker
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users