On 22/10/2010 1:14 PM, Mike Hanby wrote:
> Thanks Patrick,
>
> In that sort of configuration, wouldn't having a failover configuration
> where one server can take over another server's brick negate the need for
> replication? Or wouldn't replicating negate the need for the
> corosync/pacemaker config, i.e. server 1 goes down, no problem, since
> replication will sync it up once it comes back online?
>
> Thanks,
>
> Mike
>
> -----Original Message-----
> From: Patrick Irvine [mailto:pirv at cybersites.ca]
> Sent: Friday, October 22, 2010 2:48 PM
> To: Mike Hanby; gluster-users at gluster.org
> Subject: RE: Question about Volume Type when bricks are on SAN
>
> Hi Mike
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Mike Hanby
> Sent: Friday, October 22, 2010 12:23 PM
> To: gluster-users at gluster.org
> Subject: Question about Volume Type when bricks are on SAN

Hey Mike,

Yes, what you say is definitely true. We are doing this for a few reasons:

1. I don't like the idea of data only being in sync on one brick at any
time. If hardware fails, the back-end storage (be it SAN, iSCSI, etc.) will
become out of sync, and some of your data then exists on only one storage
point. The system will stay that way until the failed hardware is fixed,
which could be days or even more. With this method the data is only out of
sync for a few minutes (or however long a full sync takes), or, if you
don't perform a full sync, only a few files (whatever changed during the
failover) are out of sync.

2. This method lets us load-balance the provided services when both nodes
are up, and gives us a way to fail over to a single node if one fails.

Please note: I'm not a professional. This is just how we see things, and
how we have attempted to solve the issues we saw/see before us :) I have
also been accused of being extremely data paranoid and of the setup being
overkill! We just think it's the "right" way for us.

Pat

>> One final question: is there a way in Gluster to have a Distributed
>> volume with failover, where if server2 dies, server1 can mount server2's
>> LUN, and once server2 is back online, server1 can be told to stop hosting
>> the brick and return it to server2?
> In Gluster (by itself) ... no, but through corosync/pacemaker, yes.
>
> I am currently doing just that, but with iSCSI.
> In my case:
>
> 2 Gluster servers, A & B
> 5 Gluster clients, 1 to 5
>
> A and B each attach individual iSCSI targets, mount them, and then serve
> them with Gluster.
>
> The clients 1 to 5 then mount a replicated Gluster share made from servers
> A & B.
>
> If A should go down, then B will attach to the iSCSI target A was using
> and re-serve it for the clients. When A comes back up, B stops serving A's
> resources and disconnects from A's iSCSI target so A can bring it all back
> online as normal.
>
> As a note, I am using corosync/pacemaker to control the starting, stopping
> and moving of the required resources.
>
> I hope this helps
>
> Pat
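
[Editor's note: for readers who want to see what the pieces above look like
in practice, here is a minimal sketch along the lines Pat describes. All
hostnames, brick paths, portal addresses, IQNs, and the volume name are
hypothetical, and the exact syntax will depend on the Gluster and
Pacemaker versions in use; Pat's actual resource definitions may differ.]

    # Two servers (gluster1, gluster2), each mounting one iSCSI-backed
    # brick at /bricks/brick1. All names here are made up for illustration.

    # Create and start a two-way replicated volume, run on one server:
    gluster volume create sanvol replica 2 \
        gluster1:/bricks/brick1 gluster2:/bricks/brick1
    gluster volume start sanvol

    # Clients mount the replicated volume; either server can be named,
    # since replication keeps both bricks in sync:
    mount -t glusterfs gluster1:/sanvol /mnt/sanvol

    # Pacemaker (crm shell) resources so that if gluster1 dies, gluster2
    # can log in to gluster1's iSCSI target and mount its brick itself:
    crm configure primitive p_iscsi_a ocf:heartbeat:iscsi \
        params portal="192.168.0.10:3260" \
               target="iqn.2010-10.com.example:brick-a"
    crm configure primitive p_fs_a ocf:heartbeat:Filesystem \
        params device="/dev/disk/by-path/ip-192.168.0.10:3260-iscsi-iqn.2010-10.com.example:brick-a-lun-0" \
               directory="/bricks/brick1" fstype="ext3"
    crm configure group g_brick_a p_iscsi_a p_fs_a

[Location and colocation constraints (not shown) would normally keep
g_brick_a on gluster1 and only move it to gluster2 on failure.]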