Re: Ongoing Running out of filedescriptors

On Tuesday, 9 February 2010 at 19:34:13, Amos Jeffries wrote:
> On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
> <luis.daniel.lucio@xxxxxxxxx> wrote:
> > On Tuesday, 9 February 2010 at 17:29:23, Landy Landy wrote:
> >> I don't know what to do with my current squid. I even upgraded to
> >> 3.0.STABLE21, but the problem persists every three days:
> >> 
> >> /usr/local/squid/sbin/squid -v
> >> Squid Cache: Version 3.0.STABLE21
> >> configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
> 
> >> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
> >> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
> >> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
> >> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
> >> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
> >> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
> >> 
> >> I built with the --with-maxfd=32768 option but, when squid is started, it
> >> says it is working with only 1024 filedescriptors.
> >> 
> >> I even added the following to the squid.conf:
> >> 
> >> max_open_disk_fds 0
> >> 
> >> But it hasn't resolved anything. I'm using squid on Debian Lenny. I don't
> >> know what to do. Here's part of cache.log:
> <snip logs>
> 
> > You've got a bug! That behaviour happens when a coredump occurs in squid.
> > Please file a ticket with gdb output, and raise debug to maximum if you can.
> 
> WTF are you talking about, Luis? None of the above problems have anything
> to do with crashing Squid.
> 
> They are in order:
> 
> "WARNING! Your cache is running out of filedescriptors"
>  * either the system limits being set too low during run-time operation.
>  * or the system limits were too small during the configure and build
> process.
>    -> Squid may drop new client connections to maintain lower than desired
> traffic levels.
> 
>   NP: patching the kernel headers to artificially trick squid into
> believing the kernel supports more by default than it does is not a good
> solution. The ulimit utility exists for that purpose instead.
> <snip kernel patch>
> 
> 
> "Unsupported method attempted by 172.16.100.83"
>  * The machine at 172.16.100.83 is pushing non-HTTP data into Squid.
>   -> Squid will drop these connections.
> 
> "clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (2) No such file
> or directory"
>  * NAT interception is failing to locate the NAT table entries for some
> client connection.
>  * usually due to configuring the same port with "transparent" option and
> regular traffic.
>  -> for now Squid will treat these connections as if the directly
> connecting box was the real client. This WILL change in some near future
> release.
> 
> 
> As you can see, in none of those handling operations does Squid crash or
> core dump.
> 
> 
> Amos


Amos, that is exactly the behaviour I had with a bug. Don't you remember
the DIGEST bug that made squid restart internally? HNO helped me, but the
fact is that this is a symptom of an internal restart after a coredump, because
he complains his squid is already compiled with more than 1024.

After restarting, I had only 1024 descriptors, no matter that I had compiled
with 64k FDs.
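
For anyone hitting the same filedescriptor warning: below is a minimal sketch
of checking and raising the run-time limit with ulimit, as Amos suggests
above. The paths are assumed from the configure line earlier in the thread
(--prefix=/usr/local/squid), and the exact cachemgr wording may vary between
versions:

  # What the shell (and anything it starts, including squid) is allowed:
  ulimit -n

  # Raise the hard and soft limits before starting squid, e.g. in the init
  # script, to match the --with-maxfd=32768 build option (needs root):
  ulimit -HSn 32768
  /usr/local/squid/sbin/squid

  # Confirm what the running squid actually got; the cachemgr "info" page
  # reports the maximum number of file descriptors:
  /usr/local/squid/bin/squidclient mgr:info | grep -i 'file descri'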


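And for the clientNatLookup errors: a sketch of keeping intercepted and
regular traffic on separate ports, per Amos's note about the "transparent"
option. Squid 3.0 syntax is assumed, and the port numbers and interface name
are only examples:

  # squid.conf: one port for browser-configured (regular) proxy traffic,
  # and a second port carrying only NAT-intercepted connections, so the
  # SO_ORIGINAL_DST lookup is attempted only where a NAT entry exists.
  http_port 3128
  http_port 3129 transparent

  # The iptables REDIRECT rule should then point at the intercept port:
  #   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  #     -j REDIRECT --to-ports 3129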