Sorry for the delay.

> This ends up caching and the write back should happen with larger
> sizes.
> Is this an issue with write size only or read size as well? Did you
> test read size something like below?
>
> dd if=[nfs_dir]/foo bs=1M count=500 of=/dev/null
>
> You can create a sparse "foo" file using the truncate command.

I have not tested read speeds yet, since avoiding the client cache
makes that a bit trickier (see the sketch in the P.S. below). I would
suspect similar results, since we have mirrored the read/write
settings in every location we're aware of.

> > > Also, what kernel versions are you on?
> >
> > RH6.3, 2.6.32-279.el6.x86_64
>
> NFS client and NFS server both using the same distro/kernel?

Yes - identical.

Would multipath play any role here? I would suspect it would only
help, not hinder.

I have run Wireshark against the slave and the master ports with the
same result - a max of ~32K packet size, regardless of the settings I
listed in my original post.

-Jason
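
P.S. A minimal sketch of how the read test could avoid the client
cache (assuming root on the client; [nfs_dir] is a placeholder, as in
the quoted command above):

  # flush the client page cache, then read the file back through NFS
  echo 3 > /proc/sys/vm/drop_caches
  dd if=[nfs_dir]/foo bs=1M count=500 of=/dev/null

  # or bypass the page cache entirely with direct I/O
  dd if=[nfs_dir]/foo bs=1M count=500 of=/dev/null iflag=direct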
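
To double-check that the settings from my original post are what the
client actually negotiated (rather than just what we requested), I
believe the effective values can be read back on the client:

  # per-mount options as negotiated, including rsize/wsize
  nfsstat -m

  # or straight from the mount table
  grep nfs /proc/mounts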
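
And for the captures, a rough tcpdump equivalent of what I ran under
Wireshark (interface name is hypothetical; 2049 is the standard NFS
port):

  # capture full frames of NFS traffic for offline analysis
  tcpdump -i eth0 -s 0 -w nfs-master.pcap port 2049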