Best Practices for Gluster Replication


 



Trying the attachment again, email is so complicated! 

Craig 

----- Original Message ----- 
From: "Craig Carl" <craig at gluster.com> 
To: "Marcus Bointon" <marcus at synchromedia.co.uk> 
Cc: "gluster-users at gluster.org Users" <gluster-users at gluster.org> 
Sent: Thursday, May 13, 2010 10:01:11 PM GMT -08:00 US/Canada Pacific 
Subject: Re: Best Practices for Gluster Replication 

Marcus et al. - Good discussion all around. A couple of points to clear up some of the terminology, and a couple of architecture questions that haven't been answered.

1. The Gluster File System client is designed to be installed on the devices that are consuming the storage. By installing the client there you get:
   1a. Mirror on write, with simultaneous writes to any number of mirrors.
   1b. Storage server failures that are transparent to your application.
   1c. Significant performance benefits.

2. In the majority of installations the user runs the Gluster File System client wherever possible, but often also needs to access the Gluster cluster via NFS, CIFS, or some other NAS-style protocol. Gluster is designed to support those needs. There are some concepts that are important to understanding Gluster's behavior when the Gluster client isn't being used:
   2a. Any file can be accessed from any node at any time. The physical location of the file is irrelevant.
   2b. The entire distributed filesystem can be accessed by all protocols at the same time.
   2c. Only the Gluster client can communicate with the Gluster server daemon.
   2d. Only the Gluster client can mirror or replicate.
   2e. The Gluster client can be installed on a Gluster server.
   2f. Fundamental to NFS, CIFS, etc. is the idea that their clients access a single IP address for storage. (The Gluster client is a solution to this problem!) If the remote storage server they have mounted fails, they have no way to access the storage.
   2g. The user is expected to provide some method of ensuring that when clients access the Gluster cluster via NFS et al., the number of connections to any one node is about the same as to all the other nodes. The user is also expected to provide a method of ensuring that if a storage server fails, the NFS, CIFS, etc. client has the opportunity to connect to another storage server. Customers usually use RRDNS, UCARP, HAProxy, or enterprise load-balancing hardware (F5, ACE, etc.) for this IP failover / balancing layer.
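As a concrete illustration of the IP failover layer mentioned in 2g, here is a rough UCARP sketch. This is not an official Gluster recommendation; the interface name, addresses, and password are made up for the example. The idea is that every storage node competes for a shared virtual IP, and NFS/CIFS clients mount only that IP:

```shell
# Run on every storage node; all addresses below are hypothetical.
# 10.0.0.100 is the floating virtual IP that NFS/CIFS clients mount.
ucarp --interface=eth0 \
      --srcip=10.0.0.1 \              # this node's real IP (differs per node)
      --vhid=1 \                      # virtual IP group ID, same on all nodes
      --pass=sharedsecret \           # shared password for the group
      --addr=10.0.0.100 \             # the virtual IP clients connect to
      --upscript=/etc/vip-up.sh \     # e.g. "ip addr add 10.0.0.100/24 dev eth0"
      --downscript=/etc/vip-down.sh   # e.g. "ip addr del 10.0.0.100/24 dev eth0"
```

Note that UCARP alone gives you failover, not balancing; to spread connections across nodes you would still combine it with RRDNS or a load balancer such as HAProxy, as described above.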
That sounds more complicated than it is. We install the Gluster client on the server, mount the distributed filesystem just like on any other host, and then re-export that mount as NFS, CIFS, etc. We install that stack on every storage node. A user-supplied layer on top of that balances inbound connections among the nodes.

I've got a new pretty picture that tries to simplify some of this. It is a really rough draft; your feedback is appreciated. We (Gluster Inc.) are working hard to find better ways to describe the big-picture Gluster architecture to you, our users. Any ideas, language, concepts, pictures, questions you can't find the answers to, (42!) anything at all you think might help, please send it my way!

--
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.carl at gmail.com

----- Original Message -----
From: "Marcus Bointon"
To: "gluster-users at gluster.org Users"
Sent: Thursday, May 13, 2010 7:43:26 AM GMT -08:00 US/Canada Pacific
Subject: Re: Best Practices for Gluster Replication

On 13 May 2010, at 16:28, Burnash, James wrote:

> I'm also not sure how I would go about setting this up with 2 NFS servers - would this be some sort of load balancing solution (using round robin DNS or an actual load balancer), or would this be implemented by having each NFS server responsible for only exporting a given portion of the whole Glusterfs backend storage.

I'm not really sure of the best way to do it - NFS isn't really my thing. I assume that there are load balancing / failover solutions (haproxy, pound, heartbeat etc.) that can deal with NFS - it would help if the balancer understood NFS at some kind of transactional level (as they can for HTTP). I would export each of the different gluster portions you want as separate NFS share points.
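The per-node stack Craig describes, including the separate NFS share points Marcus suggests, might look roughly like this on each storage node. All volume names, server names, paths, and networks here are hypothetical, and the exact mount syntax varies by Gluster version:

```shell
# 1. Mount the distributed Gluster volume with the native client,
#    exactly as any other consumer of the storage would.
mount -t glusterfs server1:/myvolume /mnt/gluster

# 2. Re-export that mount over NFS. With the kernel NFS server, a
#    FUSE-backed mount needs an explicit fsid= in /etc/exports, e.g.:
#
#      /mnt/gluster         10.0.0.0/24(rw,fsid=10,no_subtree_check)
#      /mnt/gluster/images  10.0.0.0/24(rw,fsid=11,no_subtree_check)
#
#    (the second line is a separate share point for one portion of
#    the volume, per Marcus's suggestion)
exportfs -ra

# 3. Repeat the same stack on every storage node, then put the
#    user-supplied balancing/failover layer (RRDNS, UCARP, HAProxy,
#    or hardware load balancers) in front of the nodes.
```

Because any file can be accessed from any node (point 2a above), it doesn't matter which node a given NFS client lands on.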
Marcus

--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of info at hand CRM solutions
marcus at synchromedia.co.uk | http://www.synchromedia.co.uk/

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


