On Wed, Feb 24, 2010 at 11:29:34AM +0800, Dave Chinner wrote:
> On Wed, Feb 24, 2010 at 10:41:01AM +0800, Wu Fengguang wrote:
> > With the default rsize=512k and NFS_MAX_READAHEAD=15, the current NFS
> > readahead size 512k*15=7680k is larger than necessary for typical
> > clients.
> >
> > On an e1000e--e1000e connection, I got the following numbers:
> >
> >       readahead size      throughput
> >                  16k       35.5 MB/s
> >                  32k       54.3 MB/s
> >                  64k       64.1 MB/s
> >                 128k       70.5 MB/s
> >                 256k       74.6 MB/s
> > rsize ==>       512k       77.4 MB/s
> >                1024k       85.5 MB/s
> >                2048k       86.8 MB/s
> >                4096k       87.9 MB/s
> >                8192k       89.0 MB/s
> >               16384k       87.7 MB/s
> >
> > So it seems that readahead_size=2*rsize (i.e. keeping two RPC requests
> > in flight) already gets close to full NFS bandwidth.
> >
> > The test script is:
> >
> > #!/bin/sh
> >
> > file=/mnt/sparse
> > BDI=0:15
> >
> > for rasize in 16 32 64 128 256 512 1024 2048 4096 8192 16384
> > do
> >         echo 3 > /proc/sys/vm/drop_caches
> >         echo $rasize > /sys/devices/virtual/bdi/$BDI/read_ahead_kb
> >         echo readahead_size=${rasize}k
> >         dd if=$file of=/dev/null bs=4k count=1024000
> > done
>
> That's doing a cached read out of the server cache, right?  You

It does not involve disk IO, at least. (The sparse file dataset is
larger than the server cache.)

> might find the results are different if the server has to read the
> file from disk. I would expect reads from the server cache not
> to require much readahead as there is no IO latency on the server
> side for the readahead to hide....

Sure, the result will be different when disk IO is involved. In that
case I would expect the server admin to set up the optimal readahead
size for the disk(s).

It sounds silly to have

        client_readahead_size > server_readahead_size

Thanks,
Fengguang
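
P.S. For completeness, a minimal sketch of applying the readahead_size=2*rsize
rule on a client. Both BDI and RSIZE_KB are hard-coded placeholders here, as in
the test script above; on a real system the negotiated rsize can be read from
the mount options in /proc/mounts (where it is reported in bytes):

        #!/bin/sh
        # Sketch: set client readahead to 2*rsize, i.e. keep two RPC
        # requests in flight. BDI and RSIZE_KB are assumed values.
        BDI=0:15          # the NFS mount's bdi id (placeholder)
        RSIZE_KB=512      # negotiated rsize in KB (placeholder)
        echo $((2 * RSIZE_KB)) > /sys/devices/virtual/bdi/$BDI/read_ahead_kb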
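
On the server side, the usual knob is per-device readahead; a sketch using
blockdev, whose --setra unit is 512-byte sectors (/dev/sdb is just a
placeholder for the exported disk):

        # Sketch: give the exported disk 4MB of readahead on the server.
        # 8192 sectors * 512 bytes = 4MB; /dev/sdb is a placeholder.
        blockdev --setra 8192 /dev/sdb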