Hello Jenny,

Thanks for your answer. Sorry I haven't written back sooner, but my hashsize is already set to the same value as conntrack_max.

I do have some out-of-memory messages in dmesg:

Nov 17 15:43:13 02 kernel: Out of socket memory

And in cache.log I was not able to find any commBind errors. I am reading about the ephemeral port ranges now. I think my Squid is using too many sockets:

sockets: used 16662
TCP: inuse 28433 orphan 12185 tw 2191 alloc 28787 mem 18786
UDP: inuse 8 mem 0
RAW: inuse 1
FRAG: inuse 0 memory 0

And it has about 16k files open right now. I will try to find a way to make more ports available.

Thanks!

Best regards,
Nataniel Klug

--
From: Jenny Lee [mailto:bodycare_5@xxxxxxxx]
Sent: Thursday, November 17, 2011 14:30
To: listas.nata@xxxxxxxxxxxx; squid-users@xxxxxxxxxxxxxxx
Subject: RE: Squid box dropping connections
Priority: High

> I am running CentOS v5.1 with Squid-2.6 STABLE22 and Tproxy
> (cttproxy-2.6.18-2.0.6). My kernel is kernel-2.6.18-92. This is the most
> reliable setup I have ever made running Squid. My problem is that I am
> having serious connection trouble when running Squid over 155000 conntrack
> connections.
>
> From my clients I start losing packets to the router when the
> connections go over 155000. My kernel is prepared to run over 260k
> connections.
...
> $SYS net.ipv4.netfilter.ip_conntrack_max=262144

Just because you have conntrack_max at 260K does not mean that you can handle 260K connections. You will need to increase the hash table size as well:

echo 262144 > /sys/module/ip_conntrack/parameters/hashsize

I would check the kernel logs for "conntrack overflows" and cache.log for "commBind" errors. You might also need to increase the ephemeral port range to 64K (I don't know whether this applies to tproxy, though).

Jenny

PS: I am not responsible if this blows up your datacenter. It works for me when I am doing 500-600 reqs/sec with CONNECTs on a forward proxy.
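For reference, the checks discussed in this thread (conntrack count vs. max, hash table size, socket usage, ephemeral port range) can be gathered in one quick diagnostic sketch. The /proc and /sys paths below assume an older 2.6-series kernel with the ip_conntrack module loaded, as on the CentOS 5 box here; on newer kernels the module and sysctl names are nf_conntrack instead.

    #!/bin/sh
    # Diagnostic sketch for the conntrack/socket limits discussed above.
    # Paths assume a 2.6.18-era kernel with the ip_conntrack module.

    # Current vs. maximum tracked connections
    cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
    cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max

    # Hash table size (should be raised along with conntrack_max)
    cat /sys/module/ip_conntrack/parameters/hashsize

    # Socket summary (the numbers quoted in this mail came from ss -s)
    ss -s

    # Current ephemeral port range, then widen it to roughly 64K ports
    cat /proc/sys/net/ipv4/ip_local_port_range
    sysctl -w net.ipv4.ip_local_port_range="1025 65535"

Note the writes (sysctl -w, and the echo into hashsize quoted earlier) do not survive a reboot; to make them permanent they would go in /etc/sysctl.conf and the module options, respectively.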