Re: running out of file descriptors

Bryan Christ wrote:

> >> I am writing a multi-threaded application which services hundreds of
> >> remote connections for data transfer.  Several instances of this
> >> program are run simultaneously.  The problem is that whenever the
> >> total number of active user connections (cumulative total of all open
> >> sockets tallied from all process instances) reaches about 700, the
> >> system appears to run out of file descriptors.  I have tried raising
> >> the open files limit via "ulimit -n" and by using the setrlimit()
> >> facility.  Neither of these seems to help.  I am currently having to
> >> limit the number of processes running on the system to 2 instances
> >> allowing no more than 256 connections each.
> >
> > Have you tried editing /etc/security/limits.conf  (or equivalent file
> > on your system) to increase the max number of open files?
> 
> It seems that would be the same as setting RLIMIT_NOFILE via
> setrlimit() or as using the userspace tool "ulimit -n".  Am I
> wrong?

Is your daemon running as root? If not, it cannot increase any hard
resource limit. Are you checking the return value (and errno) from
setrlimit()?
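
For reference, a minimal sketch of raising RLIMIT_NOFILE with both
calls actually checked (untested; raise_nofile() and the target value
are illustrative, not from Bryan's code):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/resource.h>

    static int raise_nofile(rlim_t want)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
            fprintf(stderr, "getrlimit: %s\n", strerror(errno));
            return -1;
        }

        rl.rlim_cur = want;            /* soft limit */
        if (want > rl.rlim_max)
            rl.rlim_max = want;        /* raising the hard limit needs root */

        if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
            /* EPERM: tried to raise the hard limit unprivileged;
               EINVAL: rlim_cur > rlim_max */
            fprintf(stderr, "setrlimit: %s\n", strerror(errno));
            return -1;
        }
        return 0;
    }

An unprivileged process can only raise its soft limit up to the hard
limit; /etc/security/limits.conf matters because pam_limits applies
those hard limits at login, before your shell or daemon ever runs.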

BTW, what do you mean by "appears to run out of file descriptors"?
Which system call fails, and with what error?
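
If it is socket() or accept() that fails, errno tells you which limit
you actually hit. A hypothetical check (listen_fd is illustrative):

    int fd = accept(listen_fd, NULL, NULL);
    if (fd == -1) {
        /* EMFILE: this process hit its RLIMIT_NOFILE;
           ENFILE: the system-wide table (fs.file-max) is full */
        fprintf(stderr, "accept: %s\n", strerror(errno));
    }

EMFILE means the per-process limit is the problem; ENFILE means the
system as a whole is out of file handles, which no amount of ulimit
will fix.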

-- 
Glynn Clements <glynn@xxxxxxxxxxxxxxxxxx>
