On Tue, Feb 07, 2012 at 05:11:01PM -0800, Liam Slusser wrote:
> d) On the gluster side of things, we use a "raid10" type of setup. We
> replicate two sets of 4 bricks striped together (type =
> cluster/distribute), so we have two complete copies of our data. We
> break this mirror on our public facing feed servers. We have two
> feed servers running apache with a custom in-house apache module to
> handle the actual serving of data. Each server only talks to one side
> of gluster - so we intentionally break gluster's replication on
> feeding. If one of our filers goes offline we have to disable that
> feed server in our load balancer and then of course repair any data
> that wasn't replicated with a "ls -alR". We've found that disabling
> gluster's replication on our feed side increased performance
> dramatically because it wasn't having to do read-repair checking.

Interesting - how do you achieve the 'breaking' of the pair? Do you just
create new distributed volumes containing the same bricks, but only from
one side?

I think they may be trying to prevent you doing this in future:
https://github.com/gluster/glusterfs/commit/cf944c8ad5da87bce15b08d0bbb2ecd62e553d86
but I'm sure you can get around it with symlinks or something.

> e) I have a very small 2mb cache in our gluster clients. We have such
> a large volume/library that getting a cache hit almost never happens
> so don't waste the memory.

How is that tuned? Is there a mount option for it?

> f) My apache module rewrites incoming URIs to load balance incoming
> requests to two different gluster mounts on the filesystem. Each
> gluster mount is its own client talking to the same server over
> different gigabit ethernet links to different glusterfsd daemons
> running on different ports, i.e. 192.168.1.50:6996 and
> 192.168.2.50:6997.

Does that mean you're exporting the same filesystems as different bricks?
(Otherwise I can't see how you bind to the different ports.)

> I haven't tried the newer version of gluster 3.x as everything just
> sort of works for the most part on the 2.x code. There are gotchas
> and things that annoy me but for the most part everything works very
> well. I was able to replace my old Isilon storage for less than the
> annual cost of the support contract while doubling the space in the
> process! :-)

Regards,
Brian.
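
P.S. My guess at how the "broken" feed-side client would be wired up, in
2.x volfile terms: a plain cluster/distribute over the four bricks of one
side only (no cluster/replicate layer), with a small performance/io-cache
on top for the 2mb cache. The hosts, ports and volume names below are
invented and the option names are from my memory of the 2.x translators,
so treat it as a sketch of the idea rather than Liam's actual config:

    # Feed-server client volfile (sketch) - "A" side bricks only
    volume brick1a
      type protocol/client
      option transport-type tcp
      option remote-host 192.168.1.50      # invented address
      option remote-port 6996
      option remote-subvolume posix1
    end-volume

    # ... brick2a, brick3a and brick4a defined the same way ...

    volume dist-a
      type cluster/distribute              # distribute only, no replicate
      subvolumes brick1a brick2a brick3a brick4a
    end-volume

    volume iocache
      type performance/io-cache
      option cache-size 2MB                # the "very small 2mb cache"
      subvolumes dist-a
    end-volume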
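
And on the server side I'd imagine each filer runs two glusterfsd
processes, each started with its own volfile (glusterfsd -f <volfile>),
both exporting the same storage/posix directory but listening on a
different port/NIC. Again just a sketch - the directory and port are
invented, and the second instance would be identical apart from the
listen-port 6997:

    # Server volfile for the first glusterfsd instance (sketch)
    volume posix1
      type storage/posix
      option directory /export/shared      # same directory for both instances
    end-volume

    volume server
      type protocol/server
      option transport-type tcp
      option transport.socket.listen-port 6996   # second instance: 6997
      option auth.addr.posix1.allow *
      subvolumes posix1
    end-volume

That would explain the 192.168.1.50:6996 / 192.168.2.50:6997 pair, but I'd
be interested to hear whether that's actually how it's done.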