Hi,

On Sun, Jul 20, 2008 at 11:51 AM, Anand Avati <avati@xxxxxxxxxxxxx> wrote:
> write-behind is not being used in your configuration. You need to chain
> the performance translators.

After chaining the translators:

#### glusterfs-server.vol ####
volume eon0
  type storage/posix
  option directory /export/eon0
end-volume

volume iothreads-eon0
  type performance/io-threads
  #option thread-count 4        # default is 1
  option cache-size 64MB
  subvolumes eon0
end-volume

volume writebehind-eon0
  type performance/write-behind
  #option aggregate-size 131072 # in bytes
  option aggregate-size 1MB     # default is 0 bytes
  option flush-behind on        # default is 'off'
  subvolumes iothreads-eon0
end-volume

volume readahead-eon0
  type performance/read-ahead   # enabled on server and client: 57MB/s -> 61MB/s
  option page-size 65536        ### in bytes
  option page-count 16          ### memory cache size is page-count x page-size per file
  subvolumes writebehind-eon0
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.eon0.allow 10.2.179.*
  subvolumes eon0
end-volume
####

#### glusterfs-client.vol ####
volume eon0
  type protocol/client
  option transport-type tcp/client
  option remote-host porpoise-san
  option remote-subvolume eon0
end-volume

volume iothreads-eon0
  type performance/io-threads
  #option thread-count 4        # default is 1
  option cache-size 64MB
  subvolumes eon0
end-volume

volume writebehind-eon0
  type performance/write-behind # huge boost, 12MB/s -> 50MB/s
  #option aggregate-size 131072 # in bytes; 50MB/s -> 54MB/s
  option aggregate-size 1MB     # default is 0 bytes; 54MB/s -> 57MB/s
  option flush-behind on        # default is 'off'
  subvolumes iothreads-eon0
end-volume

volume readahead-eon0
  type performance/read-ahead
  option page-size 65536        ### in bytes
  #option page-size 131072      ### in bytes
  option page-count 16          ### memory cache size is page-count x page-size per file
  subvolumes writebehind-eon0
end-volume

volume io-cache-eon0
  type performance/io-cache
  # Doesn't really help
  option cache-size 64MB                # default is 32MB
  # This gives a little bit of a boost, 9MB/s to 12MB/s
  option page-size 1MB                  # 128KB is the default
  #option priority *.h:3,*.html:2,*:1   # default is '*:0'
  option priority *:0
  # 500 KB/s boost
  option force-revalidate-timeout 2     # default is 1
  subvolumes readahead-eon0
end-volume
####

I'm getting 60-62MB/s with Gluster. When I use NFS without specifying
rsize or wsize, I also get 60MB/s. When I use the NFS mount options
rsize=262144,wsize=262144 and do a dd with the "standard" bs=4k, I get
80MB/s (increasing bs= no longer helps for NFS at this point). With
Gluster I can get 100-110MB/s if I use bs=128k (which is faster than
GFS2!).

So, once again, the main question: is there something for Gluster like
the NFS rsize/wsize options, or is there something else I can tweak in
the translators that would be the equivalent of increasing the global
rsize and wsize? If not, is there something I can patch in FUSE or in
the Gluster sources to do this?

Thanks,
Sabuj
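
P.S. For reference, the tests behind the numbers above were roughly of
this form. The mount points, test file name, transfer counts, and the
NFS export path are placeholders, not necessarily the exact ones used:

# NFS baseline: large rsize/wsize, then dd with the "standard" 4k block size
mount -t nfs -o rsize=262144,wsize=262144 porpoise-san:/export/eon0 /mnt/nfs
dd if=/dev/zero of=/mnt/nfs/testfile bs=4k count=262144        # ~80MB/s

# GlusterFS: mount via the client volfile, then dd with small and large block sizes
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=4k count=262144  # ~60-62MB/s
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=128k count=8192  # ~100-110MB/s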