On Wed, Mar 28, 2018 at 06:35:53PM +0300, Antti Tönkyrä wrote:
> On 2018-03-28 17:59, J. Bruce Fields wrote:
> >On Wed, Mar 28, 2018 at 10:54:06AM -0400, bfields wrote:
> >>On Wed, Mar 28, 2018 at 02:04:57PM +0300, daedalus@xxxxxxxxxxxxxxx wrote:
> >>>I came across a rather annoying issue where a single NFS client
> >>>caused resource starvation for NFS server. The server has several
> >>>storage pools which are used, in this particular case a single
> >>>client did fairly large read requests and effectively ate all nfsd
> >>>threads on the server and during that other clients were getting
> >>>hardly any I/O through to the other storage pool which was
> >>>completely idle.
> >>What version of the kernel are you running on your server?
> 4.15.10 on the system I am testing on.
> >I'm thinking that if it includes upstream 637600f3ffbf "SUNRPC: Change
> >TCP socket space reservation" (in upstream 4.8), then you may want to
> >experiment setting the sunrpc.svc_rpc_per_connection_limit module
> >parameter added in ff3ac5c3dc23 "SUNRPC: Add a server side
> >per-connection limit".
> >
> >You probably want to experiment with values greater than 0 (the default,
> >no limit) and the number of server threads.
> That helps for the client slowing down the whole server, thanks for
> the tip!

We should probably revisit 637600f3ffbf "SUNRPC: Change TCP socket
space reservation".  There's got to be some way to keep high bandwidth
pipes filled with read data without introducing this problem where a
single client can tie up every server thread.

Just out of curiosity, do you know (approximately) the network and disk
bandwidth in this case?

--b.
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
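
[Editor's note: for readers landing on this thread, the tuning suggested above can be done at runtime through the module parameter's sysfs entry. The path below is the usual location for sunrpc parameters and the value 16 is purely illustrative; pick something based on your nfsd thread count as Bruce suggests.]

```shell
# Check the current per-connection limit (0 = no limit, the default).
cat /sys/module/sunrpc/parameters/svc_rpc_per_connection_limit

# Cap the number of requests a single connection may have in flight.
# 16 is an illustrative value; experiment relative to the thread count
# shown in /proc/fs/nfsd/threads.
echo 16 > /sys/module/sunrpc/parameters/svc_rpc_per_connection_limit
```

To make the setting survive a reboot, it can instead be passed as a module option, e.g. a line such as `options sunrpc svc_rpc_per_connection_limit=16` in a file under /etc/modprobe.d/.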