
Re: Slow Internet with squid


 



On Wed, 20 Jul 2011 11:12:04 -0400, Wilson Hernandez wrote:
Hello.

I am puzzled by how my bandwidth is used when running squid. I have a total of 25M/3M of bandwidth. Lately I've noticed with iptraf that my external interface is almost maxed out at 24.8M while my internal (squid-facing) interface is only at 2.9M; as a result, most clients have been calling to say their internet is slow.

I'm wondering why there is such a big difference between the two interfaces' traffic.

This is what cachemgr shows:

Squid Object Cache: Version 3.1.14

Start Time:	Fri, 15 Jul 2011 08:01:48 GMT
Current Time:	Wed, 20 Jul 2011 14:39:02 GMT


Connection information for squid:
Number of clients accessing cache:	113
Number of HTTP requests received:	5198204
Number of ICP messages received:	0
Number of ICP messages sent:	0
Number of queued ICP replies:	0
Number of HTCP messages received:	0
Number of HTCP messages sent:	0
Request failure ratio:	 0.00
Average HTTP requests per minute since start:	684.2
Average ICP messages per minute since start:	0.0
Select loop called: 479758718 times, 0.950 ms avg
Cache information for squid:
Hits as % of all requests:	5min: 23.2%, 60min: 19.4%
Hits as % of bytes sent:	5min: -219.3%, 60min: -314.7%
Memory hits as % of hit requests:	5min: 13.2%, 60min: 9.5%
Disk hits as % of hit requests:	5min: 64.6%, 60min: 62.5%
Storage Swap size:	66028580 KB
Storage Swap capacity:	64.5% used, 35.5% free
Storage Mem size:	1042556 KB
Storage Mem capacity:	100.0% used,  0.0% free
Mean Object Size:	23.52 KB
Requests given to unlinkd:	0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.12106  0.02069
Cache Misses:          0.24524  0.30459
Cache Hits:            0.05046  0.02899
Near Hits:             0.17711  0.22004
Not-Modified Replies:  0.00307  0.00091
DNS Lookups:           0.31806  0.17048

DNS is very slow as well, probably because the remote queries are going over this saturated link.


Please help me understand why this is happening and if there is a
solution to make squid perform better.

Squid "optimizes web delivery", as the slogan goes. When the server side is behaving very inefficiently, the server-facing interface can consume far more bandwidth than the client-facing one. It could be any of these, or a few other things I'm not aware of:

1) client requests an object. Squid has it cached, but the server requires 'must-revalidate'. While revalidating, the server forces an entire new object back at Squid, along with a timestamp stating it has not changed. Squid only sends the small no-change reply to the client.

2a) client requests a small range of an object. Squid passes this on. The server replies, again forcing an entire new object back at Squid. Squid only sends the small range asked for to the client.

2b) client requests a small range of an object. Squid passes this on but requests the full object (range_offset_limit). The server replies with the whole object. Squid stores it and only sends the small range asked for to the client.

3) client requests info about an object (HEAD). Squid relays this request on. Server replies, forcing an entire new object back at squid. Squid only sends the small header asked for to the client.

4) client requests an object, then abandons it before receiving the reply. Squid continues to wait and receive it, in the hope that it can be stored. If it is not storable it may be discarded and the cycle repeats. Or it could be stored but never requested again. This behaviour is controlled by the quick_abort_* directives.
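If cases 2b or 4 are the culprits, the relevant knobs live in squid.conf. A minimal sketch; the values here are illustrative examples, not tuning recommendations:

```
# 2b) Don't upgrade small range requests into full-object fetches:
# 0 means always forward the client's Range header to the server as-is.
range_offset_limit 0

# 4) When a client disconnects mid-transfer:
quick_abort_min 16 KB   # finish the fetch anyway if fewer than 16 KB remain
quick_abort_max 64 KB   # abort the fetch if more than 64 KB remain
quick_abort_pct 90      # ...but finish if 90% or more was already received
```

After changing these, a `squid -k reconfigure` applies them without a restart.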


Or it could be that you have configured an open proxy. Configuration problems can allow external clients to relay traffic through your proxy to arbitrary sites. Once discovered, attackers will use this and consume all your external bandwidth. Usually it's caused by mistakenly removing or bypassing the controls on CONNECT tunnels, though it can also happen with other request types.
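To rule that out, compare your http_access rules against the stock access controls. A minimal sketch of the standard ACL block (the localnet range is an assumption; substitute your own):

```
acl localnet src 192.168.0.0/16       # assumption: your internal LAN range
acl SSL_ports port 443
acl Safe_ports port 80 443
acl CONNECT method CONNECT

http_access deny !Safe_ports          # refuse requests to unusual ports
http_access deny CONNECT !SSL_ports   # CONNECT tunnels only to HTTPS ports
http_access allow localnet            # your own clients
http_access deny all                  # everyone else is refused
```

The "deny CONNECT !SSL_ports" and final "deny all" lines are the ones most often lost; if either is missing or ordered after a broad allow, outside hosts can tunnel through you.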

Amos

