On Fri, Jan 17, 2014 at 7:37 AM, <Mike.Peters@xxxxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: James [mailto:purpleidea@xxxxxxxxx]
>> Sent: 17 January 2014 11:58
>> To: Mike Peters
>> Cc: gluster-users@xxxxxxxxxxx
>> Subject: Re: Gluster Failover Mount
>>
>> On Fri, Jan 17, 2014 at 6:46 AM, <Mike.Peters@xxxxxxxxxxxx> wrote:
>> > Hi,
>> >
>> > I am currently testing GlusterFS and am looking for some advice. My
>> > setup uses the latest 3.4.2 packages from www.gluster.org for
>> > SLES11-SP3.
>> >
>> > I currently have a storage pool shared read-write across 2 gluster
>> > server nodes. This seems to work fine. However, I would also like to
>> > mount this pool on 4 further client machines running a legacy web
>> > application. Because of some limitations in the application, I would
>> > like to be able to tell these servers to mount the storage pool from
>> > one particular gluster server node, but to fail over to the second
>> > node if and only if the first node becomes unavailable. I can mount
>> > the storage on the client nodes with both gluster nodes specified or
>> > with only one node specified, but I cannot see a way in the
>> > documentation to prefer one particular node and have the second node
>> > configured as a failover. Is this possible? What am I missing?
>>
>> You do realize that the initial connection is just for retrieving the
>> volfiles, and that all hosts are used after that, right? If so,
>> carrying on:
>>
>> You can use VRRP and a VIP to specify which host to mount from. An
>> example of this is done in my Puppet-Gluster setup:
>> https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
>>
>> You can specify the ordering of the VIP with priority arguments to
>> keepalived, for example.
>>
>> You can also specify more than one server on the mount command for
>> glusterfs. I forget the syntax for that, but it's easy to google.
>>
>> I hope this answers your questions!
>>
>> James
>
> Hi James,
>
> That definitely sounds feasible. I hadn't thought of doing it at that
> layer. We use ldirectord for load balancing the web services, so I'll
> give that a shot this afternoon.

No worries! It's always helpful to get fresh eyes on a problem.

> Thanks for your help.

My pleasure. Let me know how it goes!

> Mike

James
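
For readers finding this thread in the archives: a minimal sketch of the
keepalived/VRRP setup James describes. The interface name,
virtual_router_id, and VIP address below are hypothetical placeholders,
so adapt them to your own network.

    # /etc/keepalived/keepalived.conf on the preferred gluster node
    vrrp_instance GLUSTER_VIP {
        state MASTER             # use BACKUP on the second node
        interface eth0           # hypothetical interface name
        virtual_router_id 51     # any value 1-255, identical on both nodes
        priority 100             # use a lower value (e.g. 50) on the second node
        advert_int 1
        virtual_ipaddress {
            192.168.1.250        # the VIP the clients mount from
        }
    }

Clients then mount from the VIP (e.g. mount -t glusterfs
192.168.1.250:/myvol /mnt/gluster), and the address only moves to the
second node when the first stops sending VRRP advertisements.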
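
The multi-server mount syntax James mentions looks roughly like this.
The hostnames, volume name, and mount point are made up, and the option
name changed between releases, so check man mount.glusterfs for your
version:

    # GlusterFS 3.4.x: one backup volfile server
    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/myvol /mnt/gluster

    # Later releases accept a colon-separated list
    mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 gluster1:/myvol /mnt/gluster

Note that either option only affects where the client fetches the
volfile at mount time; as James points out above, the client talks to
all of the servers once the volume is mounted.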