On Mon, Jun 30, 2008 at 11:26:54AM -0400, Jeff Layton wrote:
> Recently I spent some time with others here at Red Hat looking
> at problems with nfs server performance. One thing we found was that
> there are some problems with multiple nfsd's. It seems like the I/O
> scheduling or something is fooled by the fact that sequential write
> calls are often handled by different nfsd's. This can negatively
> impact performance (I don't think we've tracked this down completely
> yet, however).

Yes, we've been trying to see how close to full network speed we can
get over a 10 gig network, and we have run into situations where
increasing the number of threads (without changing anything else)
seems to decrease the performance of a simple sequential write.

The hypothesis that the problem was randomized IO scheduling was the
first thing that came to mind. But I'm not sure what the easiest way
would be to really prove that that was the problem. And then once
we're really sure that's the problem, I'm not sure what to do about
it. I suppose it may depend partly on exactly where the reordering is
happening.

--b.

> Since you're just doing some single-threaded testing on the client
> side, it might be interesting to try running a single nfsd and
> testing performance with that. It might provide an interesting data
> point.
>
> Some other thoughts of things to try:
>
> 1) run the tests against an exported tmpfs filesystem to eliminate
> underlying disk performance as a factor.
>
> 2) test nfsv4 -- for v2/v3, nfsd opens and closes the file for each
> read/write. nfsv4 is stateful, however, so I don't believe it does
> that there.
>
> As others have pointed out though, testing with client and server on
> the same machine does not necessarily eliminate performance
> bottlenecks. You may want to test with dedicated clients and servers
> (maybe on a nice fast network, or with a gigE crossover cable or
> something).
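
For reference, the "simple sequential write" test above is nothing
fancier than the sketch below. This is illustrative only, not the
exact program or parameters we used; the default path, block size,
and total size are placeholders.

/*
 * Minimal single-threaded sequential write test (sketch only).
 * Writes BLOCKS blocks of BLOCK_SIZE bytes to the given file,
 * fsyncs, and reports throughput.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define BLOCK_SIZE (1024 * 1024)	/* 1 MB per write() */
#define BLOCKS     1024			/* 1 GB total */

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/mnt/nfs/testfile";
	struct timeval start, end;
	double secs;
	char *buf;
	int fd, i;

	buf = malloc(BLOCK_SIZE);
	if (!buf) {
		perror("malloc");
		return 1;
	}
	memset(buf, 0xab, BLOCK_SIZE);

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	gettimeofday(&start, NULL);
	for (i = 0; i < BLOCKS; i++) {
		if (write(fd, buf, BLOCK_SIZE) != BLOCK_SIZE) {
			perror("write");
			return 1;
		}
	}
	if (fsync(fd) < 0) {		/* make sure it hits the server */
		perror("fsync");
		return 1;
	}
	gettimeofday(&end, NULL);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_usec - start.tv_usec) / 1e6;
	printf("%d MB in %.2f s (%.1f MB/s)\n", BLOCKS, secs,
	       BLOCKS / secs);

	close(fd);
	free(buf);
	return 0;
}

Run it against a file on the NFS mount while varying only the nfsd
thread count on the server, and compare the reported throughput.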
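
And on the single-nfsd suggestion: the server thread count can be
changed on the fly by writing the desired number to
/proc/fs/nfsd/threads (the same interface rpc.nfsd uses), so the test
can be rerun at 1, 2, 4, 8, ... threads without restarting anything.
A trivial helper along these lines would do it; again, just a sketch
with no sanity checking of the requested count.

/*
 * Set the number of knfsd threads via /proc/fs/nfsd/threads
 * (sketch; assumes the nfsd filesystem is mounted there).
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	FILE *f;
	int nthreads;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <nthreads>\n", argv[0]);
		return 1;
	}
	nthreads = atoi(argv[1]);

	f = fopen("/proc/fs/nfsd/threads", "w");
	if (!f) {
		perror("/proc/fs/nfsd/threads");
		return 1;
	}
	fprintf(f, "%d\n", nthreads);
	if (fclose(f) != 0) {
		perror("fclose");
		return 1;
	}
	printf("nfsd thread count set to %d\n", nthreads);
	return 0;
}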