>>> We are attempting to run WRF on a shared glusterfs filesystem. It is
>>> not going well at all. WRF does a bunch of fseeks as well as writes, and
>>> I'm seeing at least a 30:1 slowdown between accessing the underlying
>>> filesystem vs going through glusterfs. A coworker has written a trivial
>>> test program that implements this sort of access pattern. Currently
>>> using the 1.4.0rc7 version.

It is unfair to compare clustered filesystems and local-disk filesystems
directly, but you can try a few optimizations. io-threads is pretty much
useless on the client side in the 1.4/2.0 branch (since the introduction
of non-blocking sockets), so it can be dropped there. You can remove
read-ahead too, since as I understand it your IO pattern largely involves
random IO; for sequential IO glusterfs can achieve link-max speed on
Gig/E even without read-ahead. After these changes you might also want to
try with and without write-behind, because these performance translators
are meant to be used with streaming IO.

Do you have a comparison against NFS? (Since you are using glusterfs in
single-server mode anyway.)

Can you also post the test program which simulates your workload?

Thanks,
avati
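For what it's worth, a stripped-down client volfile for such an A/B test might look something like the sketch below. This is only illustrative: `server1` and `brick` are placeholder names, and the exact option spellings differ slightly between the 1.x and 2.0 releases, so check against your installed version.

```
# Hypothetical single-server client volfile (host and subvolume names
# are placeholders).
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick
end-volume

# Keep only write-behind for the experiment; io-threads and read-ahead
# are intentionally omitted, per the advice above.
volume writebehind
  type performance/write-behind
  subvolumes remote
end-volume
```

Running the same workload once with and once without the write-behind volume in the stack should show whether that translator helps or hurts this access pattern.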