Re: running out of file descriptors

On Mon, Feb 16, 2009 at 12:18 AM, Joe Damato <ice799@xxxxxxxxx> wrote:
> On Sun, Feb 15, 2009 at 9:48 PM, Bryan Christ <bryan.christ@xxxxxxxxx> wrote:
>> I am writing a multi-threaded application which services hundreds of
>> remote connections for data transfer.  Several instances of this
>> program are run simultaneously.  The problem is that whenever the
>> total number of active user connections (the cumulative total of
>> open sockets across all process instances) reaches about 700, the
>> system appears to run out of file descriptors.  I have tried raising
>> the open-file limit via "ulimit -n" and by using the setrlimit()
>> facility.  Neither of these seems to help.  I am currently having to
>> limit the number of processes running on the system to 2 instances,
>> each allowing no more than 256 connections.
>
> Have you tried editing /etc/security/limits.conf (or the equivalent
> file on your system) to increase the maximum number of open files?
>
> Perhaps something like:
>
> *              -       nofile         524288
>
> is what you want?
>
> joe

It seems that would be the same as setting RLIMIT_NOFILE via
setrlimit(), or as using the userspace tool "ulimit -n".  Am I wrong?
Isn't this the same thing?

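They overlap but are not quite the same: "ulimit -n" and setrlimit()
both adjust RLIMIT_NOFILE, but an unprivileged process can only raise
its soft limit up to its current hard limit, whereas the limits.conf
entry above is applied by pam_limits at login and can raise the hard
limit itself.  A minimal sketch, assuming a Linux/POSIX system, of
bumping the soft limit to whatever the hard limit allows:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft (rlim_cur) and hard (rlim_max) limits
     * on open file descriptors for this process. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to the hard limit.  Raising rlim_max
     * itself requires privilege (CAP_SYS_RESOURCE on Linux), which
     * is why a bigger hard limit has to come from limits.conf or a
     * privileged parent process. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}

If the hard limit printed here is still the default (often 1024),
setrlimit() alone cannot get past it, which would explain why
"ulimit -n" appeared to have no effect.  Note also that the
kernel-wide ceiling in /proc/sys/fs/file-max is separate from this
per-process limit.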


-- 
Bryan
<><
