Re: [PATCH 2/2] lockd: set svc_serv->sv_maxconn to a more reasonable value

On Fri, 17 Oct 2008 17:18:33 -0400
"J. Bruce Fields" <bfields@xxxxxxxxxxxx> wrote:

> On Fri, Oct 17, 2008 at 02:26:10PM -0400, Jeff Layton wrote:
> > The default method for calculating the number of connections allowed
> > per RPC service arbitrarily limits single-threaded services to 80
> > connections. This is too low for services like lockd and artificially
> > limits the number of TCP clients that it can support.
> > 
> > Have lockd set a default sv_maxconn value to RLIMIT_NOFILE for the
> > lockd thread (usually this will be 1024). Also add a module parameter
> > to allow an admin to set this to an arbitrary value at module load
> > time.
> 
> I guess this is OK.
> 
> As long as we're picking a number out of thin air, I'd rather we make
> that obvious, instead of making it look like we made some kind of
> sophisticated choice that the poor reader will feel obliged to
> understand.
> 
> So I'd be for a default that's just a constant, until someone has a
> better idea.
> 

I'm OK with that too. I just used RLIMIT_NOFILE since Neil suggested
it; I have no idea what a reasonable value should really be. I suppose
it should probably be set to the maximum number of clients we expect
to support (assuming one connection to lockd from each client).

1024 seems like a decent enough place to start. Big enough to allow for
a lot of clients, but not so big that the host will bog down if we hit
that number of connections.
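
To make it concrete, here's roughly what the constant-default version
would look like. The parameter name and permission bits below are just
placeholders, not the final patch:

    /* fs/lockd/svc.c -- illustrative sketch, not the posted patch */
    static unsigned int nlm_max_connections = 1024;
    module_param(nlm_max_connections, uint, 0644);
    MODULE_PARM_DESC(nlm_max_connections,
                     "Maximum number of connections to lockd");

    /* ...then in lockd(), before the main request loop: */
    if (nlm_max_connections > 0)
            rqstp->rq_server->sv_maxconn = nlm_max_connections;

That keeps the arbitrary number out in the open where an admin can see
it and change it.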

> What would actually happen if we allowed too many connections?  What
> would fail first?  Is there some way to detect that situation and use
> that to drop connections?
> 

I'm not clear on this either. Here's my naive take (could be very 
wrong):

My best guess is that we'll end up in a situation where lockd (or
whatever service) eats up a bunch of CPU time trying to service
all of the requests.

I suppose that "legitimate" sockets will stall out since they may not
be getting serviced in a timely manner. Their receive buffers will
fill, so the TCP layer will shrink the advertised window down to 0.

I could be way off base here though...
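
For what it's worth, when the cap is actually hit today the generic
svc code already picks a victim: it logs a warning and closes the
oldest temporary connection. Paraphrasing svc_check_conn_limits() from
memory (locking and refcounting omitted, so don't quote me on the
details):

    /* net/sunrpc/svc_xprt.c -- rough paraphrase, not verbatim */
    if (serv->sv_tmpcnt > limit) {
            /* warn the admin, then close the oldest temp connection */
            struct svc_xprt *xprt =
                    list_entry(serv->sv_tempsocks.prev,
                               struct svc_xprt, xpt_list);
            set_bit(XPT_CLOSE, &xprt->xpt_flags);
            svc_xprt_enqueue(xprt);
    }

So past the limit we shed load by cycling connections, but below it
nothing notices a thrashing service.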

As far as detecting that, I'm not sure. Maybe we could somehow look
at how many sockets are waiting for their buffers to be cleared. It may
take a long time for that to happen though; a modern CPU can probably
handle a lot of sockets before it ends up in the weeds.
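
If we did want to try it, maybe a periodic pass over sv_tempsocks
counting sockets whose send buffers are full? Something like this
completely hypothetical, untested sketch (field names from memory, and
it assumes every temp transport is a TCP svc_sock):

    /* hypothetical -- nothing like this exists in the tree */
    static int svc_count_stalled(struct svc_serv *serv)
    {
            struct svc_xprt *xprt;
            int stalled = 0;

            spin_lock_bh(&serv->sv_lock);
            list_for_each_entry(xprt, &serv->sv_tempsocks, xpt_list) {
                    struct svc_sock *svsk =
                            container_of(xprt, struct svc_sock, sk_xprt);
                    struct sock *sk = svsk->sk_sk;

                    /* full send buffer => client isn't draining us */
                    if (atomic_read(&sk->sk_wmem_alloc) >= sk->sk_sndbuf)
                            stalled++;
            }
            spin_unlock_bh(&serv->sv_lock);
            return stalled;
    }

Even then, deciding how many stalled sockets is "too many" puts us
right back to picking numbers out of thin air.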

-- 
Jeff Layton <jlayton@xxxxxxxxxx>