Re: Excessive TCP memory usage

Hey,

Steps to reproduce are not the whole story, since squid works fine in many other scenarios.
I do not know this specific system, but if you are talking about 1-4k open connections, that should not be a big problem for most servers.
The issue at hand is a bit different.
Have you tried tuning the net.ipv4 TCP settings with sysctl to see if that affects anything?
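For example, the first knobs I would look at (the single write below is illustrative, not a recommendation):

  sysctl net.ipv4.tcp_mem                      # page thresholds: low / pressure / max
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem   # per-socket receive/send buffer sizes
  sysctl -w net.ipv4.tcp_max_orphans=65536     # example write: cap orphaned sockets
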
What I can offer is to build a tiny ICAP service that answers every request with a 204 and moves on, as sketched below.
If the same thing happens with the dummy service it is probably a deeper problem, and if not then we can try to work out
what is unique about your setup.
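
A minimal sketch of such a stub, assuming ncat from the nmap package is installed (the file name dummy-icap.sh is mine); run it with: ncat -lk 127.0.0.1 1344 --exec ./dummy-icap.sh

  #!/bin/sh
  # dummy-icap.sh - test stub: answer OPTIONS with a minimal 200 so Squid
  # keeps the service marked up, and 204 (no modification) to everything else.
  read -r request || exit 0                # ICAP request line
  while read -r line; do                   # consume the remaining ICAP headers
      [ "$line" = "$(printf '\r')" ] && break
  done
  case "$request" in
  OPTIONS*)
      printf 'ICAP/1.0 200 OK\r\nMethods: REQMOD RESPMOD\r\nISTag: "dummy-1"\r\nEncapsulated: null-body=0\r\nConnection: close\r\n\r\n' ;;
  *)
      printf 'ICAP/1.0 204 No Content\r\nISTag: "dummy-1"\r\nEncapsulated: null-body=0\r\nConnection: close\r\n\r\n' ;;
  esac

If CLOSE_WAIT sockets still pile up against this stub, the problem is on squid's side of the ICAP connection handling; if not, the real ICAP service's behaviour is the place to look.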

I have not seen this issue in my current testing setup, which is 3.5.19 plus an ICAP URL-filtering service.

Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: eliezer@xxxxxxxxxxxx


-----Original Message-----
From: squid-users [mailto:squid-users-bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of Deniz Eren
Sent: Tuesday, June 14, 2016 11:07 AM
To: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re:  Excessive TCP memory usage

Little bump :)

I have posted a bug report with steps to reproduce. The problem still exists, and I am curious whether anyone else is seeing it too.

http://bugs.squid-cache.org/show_bug.cgi?id=4526

On Wed, May 25, 2016 at 1:18 PM, Deniz Eren <denizlist@xxxxxxxxxxxxx> wrote:
> When I listened to the connections between squid and the ICAP service
> with tcpdump, I saw that after a while the ICAP side closes the
> connection but squid does not, so the connection stays in CLOSE_WAIT:
>
> [root@test ~]# tcpdump -i any -n port 34693
> tcpdump: WARNING: Promiscuous mode not supported on the "any" device
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes
> 13:07:31.802238 IP 127.0.0.1.icap > 127.0.0.1.34693: F 2207817997:2207817997(0) ack 710772005 win 395 <nop,nop,timestamp 104616992 104016968>
> 13:07:31.842186 IP 127.0.0.1.34693 > 127.0.0.1.icap: . ack 1 win 3186 <nop,nop,timestamp 104617032 104616992>
>
> [root@test ~]# netstat -tulnap|grep 34693
> tcp   215688      0 127.0.0.1:34693             127.0.0.1:1344              CLOSE_WAIT  19740/(squid-1)
>
> These CLOSE_WAIT connections never time out; they stay until the squid
> process is killed.
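>
> A quick way to count them while reproducing (assuming ss from iproute2 is available; 1344 is the c-icap port):
>
>   ss -tanp state close-wait '( dport = :1344 )' | grep -c squid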
>
> 2016-05-25 10:37 GMT+03:00 Deniz Eren <denizlist@xxxxxxxxxxxxx>:
>> 2016-05-24 21:47 GMT+03:00 Amos Jeffries <squid3@xxxxxxxxxxxxx>:
>>> On 25/05/2016 5:50 a.m., Deniz Eren wrote:
>>>> Hi,
>>>>
>>>> After upgrading to squid 3.5.16 I noticed that squid started using
>>>> much of the kernel's TCP memory.
>>>
>>> Upgrade from which version?
>>>
>> Upgrading from squid 3.1.14. I also started using c-icap and ssl-bump at that point.
>>
>>>>
>>>> After squid has been running for a long time, TCP memory usage looks like this:
>>>> test@test:~$ cat /proc/net/sockstat
>>>> sockets: used *
>>>> TCP: inuse * orphan * tw * alloc * mem 200000
>>>> UDP: inuse * mem *
>>>> UDPLITE: inuse *
>>>> RAW: inuse *
>>>> FRAG: inuse * memory *
>>>>
>>>> When I restart squid the memory usage drops dramatically:
>>>
>>> Of course it does. By restarting you just erased all of the 
>>> operational state for an unknown but large number of active network connections.
>>>
>> That's true, but my point is that squid's CLOSE_WAIT connections are
>> using too much memory and are not timing out.
>>
>>> Whether many of those should have been still active or not is a
>>> different question, the answer to which depends on how you have your
>>> Squid configured and what the traffic through it has been doing.
>>>
>>>
>>>> test@test:~$ cat /proc/net/sockstat
>>>> sockets: used *
>>>> TCP: inuse * orphan * tw * alloc * mem 10
>>>> UDP: inuse * mem *
>>>> UDPLITE: inuse *
>>>> RAW: inuse *
>>>> FRAG: inuse * memory *
>>>>
>>>
>>> The numbers you replaced with "*" are rather important for context.
>>>
>>>
>> Today I saw the problem again:
>>
>> test@test:~$ cat /proc/net/sockstat
>> sockets: used 1304
>> TCP: inuse 876 orphan 81 tw 17 alloc 906 mem 29726
>> UDP: inuse 17 mem 8
>> UDPLITE: inuse 0
>> RAW: inuse 1
>> FRAG: inuse 0 memory 0
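>>
>> (For scale: the sockstat "mem" figure is counted in pages, so with the usual 4 KiB page size those 29726 pages are roughly 116 MiB of TCP buffer memory.)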
>>
>>>> I'm using Squid 3.5.16.
>>>>
>>>
>>> Please upgrade to 3.5.19. Some important issues have been resolved. 
>>> Some of them may be related to your TCP memory problem.
>>>
>>>
>> I have upgraded now and the problem still exists.
>>
>>>> When I look with "netstat" and "ss" I see lots of CLOSE_WAIT
>>>> connections from squid to the ICAP service or to upstream servers.
>>>>
>>>> Do you have any idea about this problem?
>>>
>>> Memory use by the TCP system of your kernel has very little to do 
>>> with Squid. Number of sockets in CLOSE_WAIT does have some relation 
>>> to Squid or at least to how the traffic going through it is handled.
>>>
>>> If you have disabled persistent connections in squid.conf then lots of
>>> closed sockets and FDs are to be expected.
>>>
>>> If you have persistent connections enabled, then fewer closures should
>>> happen. But some still will, so what to expect depends on how high the
>>> traffic load is.
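>>>
>>> For reference, the squid.conf directives involved (directive names as in 3.5; the values shown are the defaults, purely illustrative):
>>>
>>>   client_persistent_connections on
>>>   server_persistent_connections on
>>>   icap_persistent_connections on
>>>   pconn_timeout 1 minute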
>>>
>> Persistent connection parameters are enabled in my conf; the problem
>> occurs especially on connections to the c-icap service.
>>
>> My netstat output is like this:
>> netstat -tulnap|grep squid|grep CLOSE
>>
>> tcp   211742      0 127.0.0.1:55751             127.0.0.1:1344              CLOSE_WAIT  17076/(squid-1)
>> tcp   215700      0 127.0.0.1:55679             127.0.0.1:1344              CLOSE_WAIT  17076/(squid-1)
>> tcp   215704      0 127.0.0.1:55683             127.0.0.1:1344              CLOSE_WAIT  17076/(squid-1)
>> ...(hundreds)
>> The ones above are connections to the c-icap service.
>>
>> netstat -tulnap|grep squid|grep CLOSE
>> Active Internet connections (servers and established)
>> Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
>> tcp        1      0 192.168.2.1:8443            192.168.6.180:45182         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.2.177:50020         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.2.172:60028         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.6.180:44049         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.6.180:55054         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.2.137:52177         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.6.180:43542         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.6.155:39489         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.0.147:38939         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.6.180:38754         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.0.164:39602         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.0.147:54114         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.6.180:57857         CLOSE_WAIT  15245/(squid-1)
>> tcp        1      0 192.168.2.1:8443            192.168.0.156:43482         CLOSE_WAIT  15245/(squid-1)
>> ...(about 50)
>> The ones above are client connections on the https_port.
>>
>> As you can see, the Recv-Q of the ICAP connections holds a lot of
>> buffered data (hundreds of kilobytes each), while the client
>> connections on the https_port hold only one byte each.
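>>
>> To put a number on the total, the Recv-Q column can be summed (a rough sketch, assuming iproute2's ss):
>>
>>   ss -tan state close-wait | awk 'NR>1 {sum+=$1} END {print sum+0, "bytes queued"}'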
>>
>> What can be done to close these unused connections?
>>
>> The problem in this thread seems similar:
>> http://www.squid-cache.org/mail-archive/squid-users/201301/0092.html
>>
>>> Amos
>>>

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



