Jim Callahan wrote:
> I'm trying to determine the optimal way to have a single NFS
> client copy large numbers (100-1000) of fairly large (1-50M)
> files [...]

I'd like to propose a new rule of thumb: to be considered "fairly large", a file should be larger than the capacity of a USB key which could be comfortably swallowed.

> [...] Since the number of permutations of these three settings is
> large, I was hoping that I might get some advice from this list about
> a range of values we should be investigating, and about any unpleasant
> interactions between these levels of settings we should be aware of,
> so we can narrow our search. Also, if there are other major factors
> outside those listed, I'd appreciate being pointed in the right
> direction.

Try http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/130-lca2008-nfs-tuning-secrets-d7.odp

--
Greg Banks, P.Engineer, SGI Australian Software Group.
the brightly coloured sporks of revolution.
I don't speak for SGI.
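As a footnote to the question above: the specific settings being tuned are elided in the quote, but one common axis when a single NFS client copies hundreds of files is how many copies are in flight at once, since each copy spends most of its time waiting on network round-trips. Below is a minimal, hypothetical sketch of a thread-pool copy; the function name, paths, and worker count are illustrative and not from the original post or any particular tool.

```python
# Hypothetical sketch: copy many files in parallel with a thread pool.
# Threads overlap network I/O waits, so several NFS requests can be
# outstanding at once; the right worker count is workload-dependent.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def copy_tree_parallel(src: Path, dst: Path, workers: int = 8) -> int:
    """Copy every regular file under src into dst, preserving the
    relative directory layout.  Returns the number of files copied."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        # exist_ok=True makes concurrent mkdir calls race-safe.
        target.parent.mkdir(parents=True, exist_ok=True)
        # copy2 preserves timestamps, much like `cp -p`.
        shutil.copy2(p, target)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces completion and re-raises any worker exception.
        list(pool.map(copy_one, files))
    return len(files)
```

This only illustrates the parallelism knob; mount-side settings such as rsize/wsize (see nfs(5)) interact with it and are covered in the slides linked above.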