On Fri, Jul 23, 2010 at 6:57 AM, Suresh Jayaraman <sjayaraman@xxxxxxx> wrote:
> On 07/22/2010 11:10 PM, David Howells wrote:
>> Suresh Jayaraman <sjayaraman@xxxxxxx> wrote:
>>
>>> As can be seen, the read performance when the data is cache-hot (on
>>> disk) is not great, since the network link is Gigabit ethernet (with
>>> the server holding the working set in memory), which is mostly
>>> expected.
>>
>> That's what I see with NFS and AFS too.
>>
>>> (I could not get access to a slower network (say 100 Mb/s) where the
>>> real performance boost could be evident.)
>>
>> ethtool?
>>
>
> Thanks for the pointer. Here are the results on a 100 Mb/s network:

<snip>

Excellent data - thx

> As noted by Andreas, the read performance with a larger number of
> clients would be more interesting, as the cache can positively impact
> scalability. However, I don't have a large number of clients, nor do I
> know a way to simulate a large number of cifs clients.

You could simulate increased load by running multiple smbtorture
instances from each real client, and perhaps some dbench-like activity
run locally on the server.

> The cache can also positively impact performance on a heavily loaded
> network and/or server, due to the reduction in network calls to the
> server.

Reminds me a little of the discussions during the last few SMB2
plugfests:

http://channel9.msdn.com/posts/Darryl/Peer-Content-Branch-Caching-and-Retrieval-Presentation/

and an earlier one (I couldn't find the newer version of this talk):

http://channel9.msdn.com/pdc2008/ES23/

-- 
Thanks,

Steve
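The multiple-smbtorture suggestion above could be scripted roughly as
below. This is only a sketch: the server address, share name,
credentials, and the BENCH-NBENCH test name are placeholders/assumptions
and would need adjusting for the actual test setup and smbtorture
version.

```shell
#!/bin/sh
# Sketch: fan out N smbtorture instances from one client to simulate
# many CIFS clients hitting the server at once.
# SERVER, SHARE, and CREDS below are placeholders, not real values.

SERVER=${SERVER:-192.168.1.10}   # assumed server address
SHARE=${SHARE:-testshare}        # assumed share name
CREDS=${CREDS:-user%pass}        # assumed credentials
DRYRUN=${DRYRUN:-1}              # set to 0 to actually launch smbtorture

launch_load() {
    nclients=$1
    launched=0
    i=1
    while [ "$i" -le "$nclients" ]; do
        # BENCH-NBENCH is an assumed smbtorture benchmark name; pick
        # whatever workload matches the scenario being measured.
        cmd="smbtorture //$SERVER/$SHARE -U $CREDS BENCH-NBENCH"
        if [ "$DRYRUN" -eq 1 ]; then
            echo "would run: $cmd"
        else
            $cmd &
        fi
        launched=$((launched + 1))
        i=$((i + 1))
    done
    # Wait for all background instances to finish when really running.
    [ "$DRYRUN" -eq 1 ] || wait
}

launch_load 8
```

Running a local dbench on the server alongside this would add the
server-side load Steve mentions; the fan-out count per real client can
be raised until the aggregate looks like the client population you want
to model.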