Although it's not a PulseAudio issue per se, I thought I'd let you all know that, after some discussion with a very helpful colleague who maintains the servers the NFS mounts are done from, it looks like I've found the cause of (and solution to) the large file size issue.

My colleague suggested it might be a block size issue and that I look at the files in question using stat. This revealed:

    me@foo:~> stat test_gdbm.dbm
      File: `test_gdbm.dbm'
      Size: 3146027   Blocks: 6176   IO Block: 1048576   regular file

That's a big block size - 1MB. Doing the same on the server that exports the NFS mount gives a block size of only 8KB.

The NFS mounts weren't being done with any rsize or wsize specified by the client, so it would appear that on Linux the values default to 1048576, and that gdbm uses this value in the absence of any other being specified. (Presumably PulseAudio doesn't specify a value when creating the files.) It looks like an empty gdbm database takes 3 blocks, then the data PulseAudio writes causes it to take another block; 3.1MB occupies 4x1MB blocks. With the files created on the local disk the IO Block size is 4KB and the files PulseAudio creates are 13KB; 13KB occupies 4x4KB blocks.

If I specify an rsize and wsize of 32768 for the NFS mounts then the files are created with a size of ~97KB. Specifying smaller values results in correspondingly smaller files.

Tests done writing files to the NFS mount using dd appear to indicate that specifying a value for rsize and wsize does not harm performance: with or without a value specified, the read and write speed was always the maximum possible over a 10/100 connection.

regards,
mike
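For reference, pinning the transfer size on the client side is done with the rsize/wsize mount options. A sketch of an fstab entry; server:/export and /mnt/nfs are placeholders for the real export and mount point:

    # /etc/fstab
    server:/export  /mnt/nfs  nfs  rsize=32768,wsize=32768  0  0

    # or equivalently, by hand:
    # mount -o rsize=32768,wsize=32768 server:/export /mnt/nfs

With these options in place, stat on files under the mount should report an IO Block of 32768 rather than 1048576.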
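For anyone who wants to reproduce the check above on their own files, GNU stat can print just the relevant fields with a format string. A minimal sketch; the file path and sizes here are illustrative, not the actual PulseAudio database:

```shell
# Create a 13KB test file (roughly the size PulseAudio's gdbm files
# take on a local 4KB-block filesystem), then inspect it.
dd if=/dev/zero of=/tmp/blocktest bs=1K count=13 2>/dev/null

# %s = size in bytes, %b = number of 512-byte blocks allocated,
# %o = optimal I/O transfer size (the "IO Block" field shown by stat).
stat -c 'size=%s blocks=%b ioblock=%o' /tmp/blocktest
# prints something like: size=13312 blocks=32 ioblock=4096
```

On an NFS mount with no rsize/wsize set, the ioblock figure is where the 1048576 shows up.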
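The dd throughput comparison was along these lines. MOUNT=/tmp is used here only so the commands run anywhere; substitute the actual NFS mount point, and run it once with and once without rsize/wsize set to compare:

```shell
MOUNT=/tmp   # placeholder - point this at the NFS mount under test

# Sequential write; conv=fsync forces the data out before dd reports
# its timing, so the figure isn't just the page cache filling up.
dd if=/dev/zero of="$MOUNT/ddtest" bs=1M count=32 conv=fsync

# Sequential read back.
dd if="$MOUNT/ddtest" of=/dev/null bs=1M

rm "$MOUNT/ddtest"
```

dd prints the achieved throughput on stderr after each transfer; on a 10/100 link anything around 11MB/s write or read is wire speed.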