Greetings all,
I am setting up a new pair of servers in the data center to be used exclusively as Gluster storage servers for VMware. Each server is identical: 16G RAM, dual 10G NICs, 12x 600G 15K-RPM SAS drives, and an LSI RAID controller. Each will be running CentOS 6.5 with the latest patches and the 3.16 mainline kernel, as well as Gluster 3.5.2 from the standard RPM packages.
I have done a ton of research on Gluster and have been testing it heavily in my development environment. I created a new replicated volume using a single 3TB brick per server, and the performance has been very good so far. I spun up a few Win7 test VMs running IOMeter and can easily hit 100+ MB/sec reads/writes on each client. I have also run through some failover scenarios using ucarp (power-off conditions, etc.), and Gluster has worked very well!
Unfortunately, my lab only has a couple of Hypervisor hosts, and I have not been able to load up a bunch of VMs to do a full-scale test. Thus, I am wondering if I should modify any volume settings to ensure Gluster continues to perform this well once we go to production.
From previous emails on this mailing list, I see some people have adjusted settings like network.ping-timeout, performance.cache-size, performance.cache-[min|max]-file-size, performance.write-behind-window-size, etc.
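For anyone following along, options like these are applied per-volume with `gluster volume set`. A quick sketch of what I mean (the volume name and values here are just illustrative assumptions, not recommendations):

```shell
# Illustrative only -- "datastore1" and all values are assumptions,
# not tested recommendations for a VM-hosting workload.

# Lower the ping timeout so clients fail over faster (default is 42s):
gluster volume set datastore1 network.ping-timeout 10

# Enlarge the read cache and bound which file sizes it applies to:
gluster volume set datastore1 performance.cache-size 1GB
gluster volume set datastore1 performance.cache-min-file-size 0
gluster volume set datastore1 performance.cache-max-file-size 128MB

# Allow more data to aggregate before write-behind flushes:
gluster volume set datastore1 performance.write-behind-window-size 4MB

# Confirm the options took effect:
gluster volume info datastore1
```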
In addition, I found this URL which points to some undocumented performance settings: http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented#Performance
Can someone please share their thoughts/best-practices when using Gluster to host datastore files in a 10G network environment?
Thanks in advance,
-Ron
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users