Our smaller cluster (60 TB) stores media data and acts as our CDN feed system. It's a pretty simple setup: the front end is two Dell 1950 servers running Apache, mounting Gluster via the FUSE client. On the back side we use bonded gigabit Ethernet to two Supermicro 4U 24-bay servers. Each server has another Supermicro 4U 24-bay chassis hanging off the back, connected via SAS. The two servers are mirrors of each other. Drives are desktop Seagate 1.5 TB drives connected to a 3ware 9690 SAS card. We make one huge 24-drive RAID6 volume (~30 TB) as a brick and use Gluster to glue it all together.

Performance is decent - we've pushed nearly 800 Mbit of web traffic with it. Our Juniper firewall only has gigabit anyway, so I don't know how much more I could push if I went to 10G. One weird thing I've noticed is that 500 Mbit of web traffic on the front end turns into about double that on the backend, which is why we bond the backend Ethernet.

Another trick we do: each of our two frontend webservers mounts only one server - so webserver A only mounts Gluster server A. We found that the overhead of Gluster constantly verifying the files were in sync added roughly 20%. All the clients that actually write the data mount both servers, of course, so the files mirror correctly.

Email me privately if you want more detail.

Liam

On Nov 15, 2010 6:57 AM, "Rudi Ahlers" <Rudi at softdux.com> wrote:
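P.S. A couple of people asked what the split-mount trick looks like in practice. With client-side replication, the replicate translator lives in the client volfile, so readers can simply be given a volfile that skips it. Roughly - server names and brick names here are made up, and your volfiles will differ:

```
# Writer volfile sketch: both servers, mirrored via cluster/replicate
volume remote-a
  type protocol/client
  option transport-type tcp
  option remote-host serverA
  option remote-subvolume brick
end-volume

volume remote-b
  type protocol/client
  option transport-type tcp
  option remote-host serverB
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate
  subvolumes remote-a remote-b
end-volume

# Reader volfile sketch for webserver A: only remote-a, no replicate
# translator, so reads skip the in-sync verification entirely
```

The reader volfile is just the `remote-a` stanza on its own. The trade-off is that a reader never triggers self-heal, so you're relying on the writers to keep both mirrors consistent.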
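P.P.S. For the backend bonding, nothing fancy - a minimal Linux bonding setup looks something like this (NIC names, mode, and the address are assumptions about a typical two-NIC box, not our exact config):

```shell
# Load the bonding driver; balance-rr stripes frames across both links,
# miimon=100 checks link state every 100 ms (illustrative choices)
modprobe bonding mode=balance-rr miimon=100

# Bring up the bond interface on the backend network (address is made up)
ifconfig bond0 10.0.0.10 netmask 255.255.255.0 up

# Enslave the two gigabit NICs (eth0/eth1 assumed)
ifenslave bond0 eth0 eth1
```

Whether you see anywhere near 2 Gbit depends on the mode and on your switch; balance-rr needs switch cooperation (or back-to-back links) to avoid reordering.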