Best Practices for Gluster Replication

Marcus,

Thank you for your clear and helpful explanation. Your walk-through of scaling and writes makes it all very clear to me (not always an easy task!).

With regards to your NFS suggestion, I will dig further into reading up on how this would work - I know the capability exists in the 3.x GlusterFS series, but I haven't paid that much attention to it - yet. I do find myself wondering what the client throughput tradeoff would be in using NFS as the protocol between the clients and the actual GlusterFS storage - though it would mean not having to deploy client software to the hundreds of app servers needing access.
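
(However the server side ends up being implemented, the appeal is that the client side would be nothing more than a plain NFS mount - the hostname and volume name below are made up, and as far as I know gluster's NFS support is NFSv3 only:)

    # on an app server: plain NFSv3 mount, no gluster client software needed
    mount -t nfs -o vers=3,tcp gluster-nfs.example.com:/shared /mnt/shared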

I'm also not sure how I would go about setting this up with 2 NFS servers - would this be some sort of load-balancing solution (using round-robin DNS or an actual load balancer), or would each NFS server be responsible for exporting only a given portion of the whole GlusterFS back-end storage?
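
(If it came down to round-robin DNS, I'd guess it would be as simple as two A records on one name - addresses made up:)

    ; zone-file fragment: clients resolving nfs-gw alternate between gateways
    nfs-gw   IN A 10.0.0.11
    nfs-gw   IN A 10.0.0.12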

More research to do while my customer is already subjecting my current implementation to real-life production loads ... and I bite my fingernails figuring out how to implement changes without major impact to the running platform.

<sigh>

If it was easy they wouldn't pay us the big ... err .. adequate(?) bucks :-)

James

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Marcus Bointon
Sent: Thursday, May 13, 2010 10:05 AM
To: gluster-users at gluster.org
Subject: Re: Best Practices for Gluster Replication

I sent this yesterday, but it didn't seem to get through.

On 12 May 2010, at 16:29, Burnash, James wrote:

> Volgen for a RAID 1 solution creates a config file that does the mirroring on the client side - which I would take as an implicit endorsement from the Gluster team (great team, BTW). However, it seems to me that if the bricks replicated between themselves on our 10Gb storage network, it could save a lot of bandwidth for the clients and conceivably save them CPU cycles and I/O as well.

Unfortunately not. The shared-nothing architecture is what enables gluster (and similarly constructed systems like memcache) to scale on an O(1) basis. Memcache's consistent hash mechanism is a thing of beauty.
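
(Purely as an illustration - here's a toy Python sketch of a consistent-hash ring in the spirit of memcache's approach, not gluster's actual code; all names are made up:)

    import bisect
    import hashlib

    class HashRing:
        """Toy consistent-hash ring (illustrative; not gluster's real code)."""

        def __init__(self, nodes, points_per_node=100):
            # each node gets many points on the ring so load spreads evenly
            self._ring = []
            for node in nodes:
                for i in range(points_per_node):
                    bisect.insort(self._ring, (self._hash("%s:%d" % (node, i)), node))

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

        def node_for(self, key):
            # walk clockwise from the key's position to the next node point
            i = bisect.bisect(self._ring, (self._hash(key), ""))
            return self._ring[i % len(self._ring)][1]

    ring = HashRing(["server1", "server2", "server3"])
    print(ring.node_for("/some/file"))  # a key always maps to the same server

The point being: any client can compute the right server locally, with no lookups and no cross-server chatter.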

If the clients know where they're supposed to write to (say, for 2-way AFR), you have a worst case of connecting to 2 gluster servers, even if you have a thousand servers. If the client knows nothing (and thus could write anywhere), every node would have to connect to every other, so the same thousand-server config would need on the order of 1000 x 1000 - a million - connections.

You can get away with poor scalability for small systems, but that's not what gluster is about. Convenience is often inversely proportional to scalability.

You could avoid the end-client complexity and keep replication traffic off your client network by putting something like a pair of servers acting as an NFS gateway in front of gluster. That way your apps connect to a simple NFS share, while the gluster back end stays hidden behind the gateways, inside your 10G network. I'm not sure whether there are problems with that, but similar structures have been mentioned on here recently.
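
(A rough sketch of what I'm picturing, assuming each gateway FUSE-mounts the volume and re-exports it with the kernel NFS server - paths, names and addresses are made up, and note that re-exporting a FUSE mount needs an explicit fsid in /etc/exports:)

    # on each gateway: mount the gluster volume locally
    mount -t glusterfs /etc/glusterfs/client.vol /mnt/gluster

    # /etc/exports on each gateway (fsid= is required when re-exporting FUSE)
    /mnt/gluster  10.0.0.0/24(rw,fsid=10,no_subtree_check,sync)

    # on an app server: a plain NFS mount, no gluster client needed
    mount -t nfs nfs-gw.example.com:/mnt/gluster /data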

Caveat: not being a mathematician, I may have this all wrong :)

Marcus
--
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of info at hand CRM solutions
marcus at synchromedia.co.uk | http://www.synchromedia.co.uk/


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

