Re: sunrpc: socket buffer size tuneable

On Fri, Jan 25, 2013 at 03:35:21PM -0500, J. Bruce Fields wrote:
> On Fri, Jan 25, 2013 at 03:21:07PM -0500, J. Bruce Fields wrote:
> > 
> > > On Fri, Jan 25, 2013 at 01:29:35PM -0600, Ben Myers wrote:
> > > > Hey Bruce & Jim & Olga,
> > > > 
> > > > On Fri, Jan 25, 2013 at 02:16:20PM -0500, Jim Rees wrote:
> > > > > J. Bruce Fields wrote:
> > > > > 
> > > > >   On Thu, Jan 24, 2013 at 06:59:30PM -0600, Ben Myers wrote:
> > > > >   > At 1020 threads the send buffer size wraps and becomes negative, causing
> > > > >   > the nfs server to grind to a halt.  Rather than setting bufsize based
> > > > >   > upon the number of nfsd threads, make the buffer sizes tuneable via
> > > > >   > module parameters.
> > > > >   > 
> > > > >   > Set the buffer sizes in terms of the number of rpcs you want to fit into
> > > > >   > the buffer.
> > > > >   
> > > > >   From private communication, my understanding is that the original
> > > > >   problem here was due to memory pressure forcing the tcp send buffer size
> > > > >   below the size required to hold a single rpc.
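
For context on that wrap: sk_sndbuf in struct sock is a plain signed int, and
the send buffer has historically been sized per nfsd thread, so a large enough
thread count pushes the computed size past INT_MAX.  A minimal sketch of the
arithmetic, assuming a per-thread sizing of roughly (nrthreads + 3) *
sv_max_mesg doubled on the way into sk_sndbuf and a ~1MB maximum message--the
formula and constants here are assumptions for illustration, not lifted from
this thread:

	#include <stdio.h>

	/*
	 * Illustration only: how a per-thread send buffer calculation can
	 * wrap the signed int sk_sndbuf.  The sizing formula and constants
	 * are assumptions for the example, not quoted kernel code.
	 */
	int main(void)
	{
		unsigned int sv_max_mesg = 1048576 + 4096; /* ~1MB payload plus a page */
		int nrthreads = 1020;

		/* assumed per-thread sizing, doubled when stored in sk_sndbuf */
		unsigned int bufsize = (nrthreads + 3) * sv_max_mesg * 2;
		int sk_sndbuf = (int)bufsize;	/* struct sock keeps sk_sndbuf as an int */

		printf("threads=%d bufsize=%u sk_sndbuf=%d\n",
		       nrthreads, bufsize, sk_sndbuf);
		/* bufsize is ~2.15e9 here, past INT_MAX, so sk_sndbuf goes negative */
		return 0;
	}

Sizing the buffers by rpc count instead of thread count, as the patch
description above does, sidesteps the wrap entirely; whether it also covers
the memory-pressure case is a separate question.
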
> > > > 
> > > > Years ago I did see wrapping of the buffer size when tcp was used with many
> > > > threads.  Today's problem is timeouts on a cluster with a heavy read
> > > > workload... and I seem to remember seeing that the send buffer size was too
> > > > small.
> > > > 
> > > > >   In which case the important variable here is lock_bufsize, as that's
> > > > >   what prevents the buffer size from going too low.
> > > > 
> > > > I tested with the bufsize lock removed and did hit the timeouts, so the
> > > > overflow is starting to look less relevant.  I will test your minimal
> > > > overflow fix to see whether that's the case.
> > > 
> > > The minimal overflow fix did not resolve the timeouts.
> > 
> > OK, thanks, that's expected.
> > 
> > > I will test with this to see if it resolves the timeouts:
> > 
> > And I'd expect that to do the job--but at the expense of some tcp
> > bandwidth.  So you end up needing your other module parameters to get
> > the performance back.
> 
> Also, what do you see happening on the server in the problem case--are
> threads blocking in svc_send, or are they dropping replies?

Oh, never mind, right, it's almost certainly svc_tcp_has_wspace failing:

	required = atomic_read(&xprt->xpt_reserved) + serv->sv_max_mesg;
	if (sk_stream_wspace(svsk->sk_sk) >= required)
		return 1;
	set_bit(SOCK_NOSPACE, &svsk->sk_sock->flags);
	return 0;

That returns 0 once sk_stream_wspace falls below sv_max_mesg, so we
never take the request and don't get to the point of failing in
svc_send.
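
To put numbers on that, here's a small standalone sketch of the same test; the
values are assumptions (a ~1MB sv_max_mesg, nothing reserved on the transport,
and a send buffer that memory pressure has squeezed well below one rpc), not
measurements from Ben's cluster:

	#include <stdio.h>

	/* Standalone illustration of the check quoted above; all values made up. */
	static int has_wspace(int xpt_reserved, int sv_max_mesg, int wspace)
	{
		int required = xpt_reserved + sv_max_mesg;

		return wspace >= required;
	}

	int main(void)
	{
		int sv_max_mesg = 1048576 + 4096;	/* ~1MB payload plus a page */
		int reserved = 0;			/* no replies reserved yet */
		int wspace = 256 * 1024;		/* sndbuf squeezed by memory pressure */

		printf("has_wspace = %d\n",
		       has_wspace(reserved, sv_max_mesg, wspace));
		/* prints 0: wspace never reaches sv_max_mesg, the thread never
		 * dequeues the request, and the client eventually times out */
		return 0;
	}

Which is also why lock_bufsize matters more than the overflow fix here: it
keeps the send buffer from dropping below one full rpc in the first place.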

--b.

