Re: AW: Still much more than 350 sockets needed!

On Wednesday 26 April 2006 06:51am, Andrew Haley wrote:
> Wiese, Hendrik writes:
[snip]
>  > Please try to open that many tcp connections between two IPs and tell me
>  > if it works!
>
> Try it yourself.  I reckon you'll run out of file descriptors before
> you hit a socket limit.  To do any more you'll have to fork() in the
> server and client.
[snip]

I tried that simple test on my notebook (FC5) and on a pair of SLES9 test 
boxes.  In both cases, the client ran out of file descriptors for new sockets 
after opening 1020 (on my notebook) or 1021 (on the SLES9 boxes) simultaneous 
connections.  The exact error message when the client dies is 
"No socket: Too many open files".

On my notebook, I ran the tests over loopback, so both the client and server 
ends of each connection were open on the same machine at the same time.  I 
also checked my notebook and found 32 unrelated network sockets open before 
(and after) the test.
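
If anyone wants to reproduce that count, one way (assuming you don't just use 
netstat) is to count the data lines in /proc/net/tcp; /proc/net/tcp6 would 
need the same treatment for IPv6:

/* Sketch: count the IPv4 TCP sockets the kernel currently knows about by
 * counting data lines in /proc/net/tcp (the first line is a header). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/tcp", "r");
    if (!f) {
        perror("/proc/net/tcp");
        return 1;
    }
    char line[512];
    int count = -1;                 /* -1 so the header line isn't counted */
    while (fgets(line, sizeof(line), f) != NULL)
        count++;
    fclose(f);
    printf("open IPv4 TCP sockets: %d\n", count);
    return 0;
}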

So it's starting to look like the client is being prevented from having more 
than a fixed number of file handles open simultaneously.

I reran the multi-host test using a second client to simultaneously connect to 
the same server.  The server never dies (I'm running the server unprivileged, 
too).  The clients both died at the same point as before.

The next test was one client connecting to two servers.  The result was that 
one server got 510 connections and the other got 511 before the client died.  
That's 1021 connections, plus handles 0, 1 & 2 for stdin, stdout & stderr, 
which comes to 1024.  That seems to reinforce the theory that the client 
isn't being allowed more than 1024 file handles in total.

The last test I ran was to repeat the one client -> two servers test, running 
the client as root.  The result was the same as running the client as an 
unprivileged user.

So, let's take a look at /proc/sys/fs/: nothing obvious there.  The next 
thing I would do is look for other /proc/sys/ parameters that would indicate 
a limit of 1024 open file handles per process, but I've already spent 20 
minutes on this and I have other things to finish up today.
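
My guess (unverified here) is that it's not under /proc/sys/ at all but is 
the per-process descriptor limit, RLIMIT_NOFILE (what "ulimit -n" shows), 
which commonly defaults to 1024.  Something like this, compiled and run in 
the same environment as the client, would confirm it:

/* Sketch: print the per-process open file limits (RLIMIT_NOFILE).  If the
 * soft limit comes back as 1024, that would explain the numbers above. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %lu\n", (unsigned long)rl.rlim_cur);
    printf("hard limit: %lu\n", (unsigned long)rl.rlim_max);
    return 0;
}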

At this point, I have to think that the problem Hendrik Wiese is reporting 
(their system is limited to only 350 simultaneous connections between two 
IPs) is due to a lack of memory, to the client already having lots of open 
file handles, or to a bug in the client code (which is impossible for me to 
determine without looking at that code).  Personally, no. 2 feels the most 
likely.
-- 
Lamont R. Peterson <lamont@xxxxxxxxxxxx>
Senior Instructor
Guru Labs, L.C. [ http://www.GuruLabs.com/ ]
GPG Key fingerprint: F98C E31A 5C4C 834A BCAB  8CB3 F980 6C97 DC0D D409

Attachment: pgpXXvaTvXH0P.pgp
Description: PGP signature

-- 
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list
