This can be related to flushing of dirty pages. How much memory do you have
on the clients? You can play with the vm.dirty_background_bytes kernel
parameter to get better performance. The Linux default is 10% of RAM
(vm.dirty_background_ratio), which can produce I/O spikes if too much data
accumulates in the fs cache, while you probably want smooth, continuous
writing. A sketch of that tuning follows the quoted message below.

Tigran.

----- Original Message -----
> From: "Fu, Yong" <yong.fu@xxxxxxxxx>
> To: linux-nfs@xxxxxxxxxxxxxxx
> Sent: Friday, September 9, 2016 7:34:54 AM
> Subject: NFS write throughput not constant
>
> Hi,
> I ran some tests over 10GbE against NFS-based storage recently and found
> that the write throughput was not constant; write performance dropped
> periodically.
> My NFS clients (both version 3 and version 4 were tried) run CentOS 6.6
> (2.6.32-504.el6.x86_64); the NFS server is OpenMediaVault (5 SSDs in a
> stripe).
>
> On a single mount point (a single NFS client), the average throughput only
> reaches 430 MB/s, while two NFS clients in aggregate reach 700 MB/s. I can
> see periodic drops in the network traffic graphs on both the client and
> the server side, and many other tests convince me it is an NFS issue.
>
> I also found that the NFS client commit procedure runs at the same time
> the write performance drops. I have read section B7
> (http://nfs.sourceforge.net) and suspect a relationship between the two,
> but I have no further ideas. Can someone help me pinpoint the root cause
> of this issue?
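For reference, here is a minimal sketch of the tuning mentioned above. The
256 MB background threshold and the 64 GB client are illustrative
assumptions, not recommendations; the right value depends on your RAM, link
speed, and how large the flush bursts are:

    # Show the current writeback thresholds (ratio-based defaults).
    sysctl vm.dirty_background_ratio vm.dirty_ratio

    # Start background writeback once ~256 MB of dirty data accumulates,
    # instead of 10% of RAM. Setting the *_bytes variant automatically
    # zeroes the corresponding *_ratio.
    sysctl -w vm.dirty_background_bytes=$((256 * 1024 * 1024))

    # Optionally cap the hard limit as well, so a writer cannot dirty a
    # large fraction of the page cache before being throttled.
    sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024))

    # To persist across reboots (value in bytes):
    # echo 'vm.dirty_background_bytes = 268435456' >> /etc/sysctl.conf

To confirm that the throughput dips line up with writeback, you can watch
the dirty-page counters on the client during a run:

    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

If Dirty climbs toward the threshold and then drains exactly when the
network graph dips, the background flush (and the NFS COMMITs it triggers)
is the likely cause, and lowering vm.dirty_background_bytes should trade
the big periodic bursts for smaller, more frequent ones.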