Re: sunrpc: socket buffer size tuneable

On Fri, Jan 25, 2013 at 09:12:55PM +0000, Myklebust, Trond wrote:
> > -----Original Message-----
> > From: linux-nfs-owner@xxxxxxxxxxxxxxx [mailto:linux-nfs-
> > owner@xxxxxxxxxxxxxxx] On Behalf Of Ben Myers
> > Sent: Friday, January 25, 2013 3:35 PM
> > To: J. Bruce Fields
> > Cc: Olga Kornievskaia; linux-nfs@xxxxxxxxxxxxxxx; Jim Rees
> > Subject: Re: sunrpc: socket buffer size tuneable
> > 
> > Hey Bruce,
> > 
> > On Fri, Jan 25, 2013 at 03:21:07PM -0500, J. Bruce Fields wrote:
> > > > On Fri, Jan 25, 2013 at 01:29:35PM -0600, Ben Myers wrote:
> > > > > Hey Bruce & Jim & Olga,
> > > > >
> > > > > On Fri, Jan 25, 2013 at 02:16:20PM -0500, Jim Rees wrote:
> > > > > > J. Bruce Fields wrote:
> > > > > >
> > > > > >   On Thu, Jan 24, 2013 at 06:59:30PM -0600, Ben Myers wrote:
> > > > > >   > At 1020 threads the send buffer size wraps and becomes negative
> > > > > >   > causing the nfs server to grind to a halt.  Rather than setting
> > > > > >   > bufsize based upon the number of nfsd threads, make the buffer
> > > > > >   > sizes tuneable via module parameters.
> > > > > >   >
> > > > > >   > Set the buffer sizes in terms of the number of rpcs you want to
> > > > > >   > fit into the buffer.
> > > > > >
> > > > > >   From private communication, my understanding is that the original
> > > > > >   problem here was due to memory pressure forcing the tcp send
> > > > > >   buffer size below the size required to hold a single rpc.
> > > > >
> > > > > Years ago I did see wrapping of the buffer size when tcp was used
> > > > > with many threads.  Today's problem is timeouts on a cluster with
> > > > > a heavy read workload... and I seem to remember seeing that the
> > > > > send buffer size was too small.
> > > > >
> > > > > >   In which case the important variable here is lock_bufsize, as that's
> > > > > >   what prevents the buffer size from going too low.
> > > > >
> > > > > I tested with lock_bufsize removed and did hit the timeouts, so
> > > > > the overflow is starting to look less relevant.  I will test your
> > > > > minimal overflow fix to see if this is the case.
> > > >
> > > > The minimal overflow fix did not resolve the timeouts.
> > >
> > > OK, thanks, that's expected.
> > >
> > > > I will test with this to see if it resolves the timeouts:
> > >
> > > And I'd expect that to do the job--
> > 
> > It did.
> > 
> > > but at the expense of some tcp
> > > bandwidth.  So you end up needing your other module parameters to get
> > > the performance back.
> > 
> > I didn't put a timer on it, so I'm not sure.  Any ideas for an alternate fix?
> > 
> 
> Why is it not sufficient to clamp the TCP values of 'snd' and 'rcv' using sysctl_tcp_wmem/sysctl_tcp_rmem?
> ...and clamp the UDP values using sysctl_[wr]mem_min/sysctl_[wr]mem_max?

Yeah, I was just looking at that--so, Ben, something like:

	echo "1048576 1048576 4194304" >/proc/sys/net/ipv4/tcp_wmem

But I'm unclear on some of the details: do we need to set the minimum or
only the default?  And does it need any more allowance for protocol
overhead?
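
If Trond means a kernel-side clamp rather than just tuning the proc
files, I'd imagine the shape of it is something like the sketch below.
Sketch only, not a patch: the helper name is made up, but
sysctl_tcp_wmem[]/sysctl_tcp_rmem[] are the {min, default, max}
triples behind /proc/sys/net/ipv4/tcp_[wr]mem:

/*
 * Kernel-context sketch only (hypothetical helper, not a patch):
 * clamp what nfsd asks for into the range TCP autotuning already
 * honors, instead of writing sk_sndbuf/sk_rcvbuf directly.
 */
#include <linux/kernel.h>	/* clamp() */
#include <net/tcp.h>		/* sysctl_tcp_wmem[], sysctl_tcp_rmem[] */

static unsigned int svc_clamp_sndbuf(unsigned int wanted)
{
	return clamp(wanted, (unsigned int)sysctl_tcp_wmem[0],	/* min */
			     (unsigned int)sysctl_tcp_wmem[2]);	/* max */
}

The receive side would presumably do the same against
sysctl_tcp_rmem[], and UDP against the sysctl_[wr]mem_* globals Trond
mentions.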

Regardless, it's unfortunate if the server's buggy by default.
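
For reference, the wrap Ben originally reported is plain 32-bit
arithmetic.  Here's a minimal standalone demo; the numbers are
assumptions for illustration: a ~1MB max message (sv_max_mesg, payload
plus a page of overhead) and the old (nrthreads + 3) * sv_max_mesg
sizing, doubled when stored into the socket's signed int sk_sndbuf:

#include <stdio.h>

int main(void)
{
	int max_mesg = 1048576 + 4096;	/* assumed ~1MB payload + one page */
	int nrthreads = 1020;
	int bufsize = (nrthreads + 3) * max_mesg;   /* 1,076,883,456: fits */

	/* setting the socket buffer stores roughly twice the request */
	long long doubled = 2LL * bufsize;	/* 2,153,766,912: > INT_MAX */
	int sndbuf = (int)doubled;	/* wraps negative on common ABIs,
					   like the kernel's int field */

	printf("bufsize = %d\n", bufsize);
	printf("sndbuf  = %d\n", sndbuf);	/* negative at 1020 threads */
	return 0;
}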

--b.