Hi Benny,

> According to dd's man page, the f{,date}sync options tell it to
> "physically write output file data before finishing"
> If you kill it before that you end up with dirty data in the cache.
> What exactly are you trying to measure, what is the expected application
> workload?

I changed my test to do what you were doing instead of killing the dd
processes. The end application is DB2, which uses multiple processes, and
I wanted to simulate that with micro-benchmarks. The only reliable way to
benchmark bandwidth for multiple processes is to kill the tests after they
have run for some time rather than letting them run to completion.

> ext3 mount options: noatime
> nfs mount options: rsize=65536,wsize=65536
> dd options: bs=64k count=10k conv=fsync
>
> (write results average of 3 runs)
> write local disk: 47.6 MB/s
> write loopback nfsv3: 30.2 MB/s
> write remote nfsv3: 29.0 MB/s
> write loopback nfsv4: 37.5 MB/s
> write remote nfsv4: 29.1 MB/s
>
> read local disk: 50.8 MB/s
> read loopback nfsv3: 27.2 MB/s
> read remote nfsv3: 21.8 MB/s
> read loopback nfsv4: 25.4 MB/s
> read remote nfsv4: 21.4 MB/s

I used the exact same options you are using; here are the results,
averaged across 3 runs:

Write local disk:     58.5 MB/s
Write loopback nfsv3: 29.42 MB/s (50% drop)

Reads (file created from /dev/urandom; somehow I am getting GB/s, while
your read results were comparable to your writes):

Read local disk:      2.77 GB/s
Read loopback nfsv3:  2.86 GB/s (higher for some reason)

Thanks,
- KK
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
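For reference, the GB/s read numbers are consistent with the file still sitting in the page cache after the write. A minimal sketch of the dd-based write/read benchmark being discussed, with a placeholder test file path and a smaller size than the thread's 640 MB (bs=64k count=10k); the drop_caches step is the usual way to get cold-cache read numbers and needs root:

```shell
#!/bin/sh
# Sketch of the benchmark, not the exact commands from the thread.
TESTFILE=/tmp/ddtest.bin   # placeholder path

# Write test: conv=fsync makes dd flush file data to stable storage
# before exiting, so the reported rate includes the flush.
dd if=/dev/zero of="$TESTFILE" bs=64k count=1k conv=fsync

# Without dropping the page cache first, the read below is served from
# RAM, which can explain multi-GB/s "read" throughput. To measure the
# disk/NFS path instead, run (as root) before reading:
#   sync; echo 3 > /proc/sys/vm/drop_caches

# Read test.
dd if="$TESTFILE" of=/dev/null bs=64k

rm -f "$TESTFILE"
```

Using /dev/urandom as the write source (as in the read test above) also avoids any compression or zero-detection effects, at the cost of being CPU-bound on the generation side.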