Preetish wrote:
Hi Everybody,
We have Squid 2.6.STABLE13 running on an
OpenBSD box along with packet filtering (earlier we ran it on
Fedora Core 4). The machine is a P4 3.4 GHz with 1 GB RAM and
a 30 GB cache. The external link speed is 4 Mbps. We use our
ISP's DNS server, and the Internet connection is pathetic. The
relevant output of squidclient is as follows.
Squid Object Cache: Version 2.6.STABLE13
Start Time: Tue, 07 Aug 2007 23:55:51 GMT
Current Time: Wed, 08 Aug 2007 10:19:24 GMT
Connection information for squid:
Number of clients accessing cache: 761
Number of HTTP requests received: 436323
Number of ICP messages received: 0
Number of ICP messages sent: 0
Number of queued ICP replies: 0
Request failure ratio: 0.00
Average HTTP requests per minute since start: 699.7
Average ICP messages per minute since start: 0.0
Select loop called: 578899 times, 64.628 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 28.4%, 60min: 23.5%
Byte Hit Ratios: 5min: 19.2%, 60min: 19.5%
Request Memory Hit Ratios: 5min: 12.1%, 60min: 13.9%
Request Disk Hit Ratios: 5min: 41.5%, 60min: 41.7%
Storage Swap size: 13067832 KB
Storage Mem size: 190880 KB
Mean Object Size: 19.80 KB
Requests given to unlinkd: 0
Hi Preetish,
Your caching seems fine.
But...
Median Service Times (seconds) 5 min 60 min:
HTTP Requests (All): 11.37373 8.22659
Cache Misses: 15.72468 12.00465
Cache Hits: 5.06039 4.07741
Near Hits: 14.89826 12.00465
Not-Modified Replies: 3.86308 3.28534
DNS Lookups: 6.80420 4.17707
ICP Queries: 0.00000 0.00000
Your 4 Mbps link seems really slow. Maybe, as you say, your ISP is
creating this problem for you in the first place. Do you get the link
through a satellite connection? I think your high median service time
of about 15 seconds is related to your DNS settings. Why don't you run
your own caching name server on the Squid box? It won't take many
resources.
By the way, what is your ping latency to www.yahoo.com?
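A caching-only resolver on the Squid box can be very small. The sketch
below is illustrative, assuming the BIND named that ships with OpenBSD;
the forwarder addresses are placeholders for your ISP's resolvers, not
real values:

```conf
# /var/named/etc/named.conf -- caching-only resolver (illustrative)
options {
        listen-on { 127.0.0.1; };
        forwarders { 192.0.2.1; 192.0.2.2; };  # placeholder ISP resolvers
        forward first;
};
```

Then point the box (and therefore Squid) at it by putting
"nameserver 127.0.0.1" first in /etc/resolv.conf. With 32 dnsserver
children all waiting on a slow upstream resolver, a local cache should
cut your 4-6 second median DNS lookup times considerably.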
Resource usage for squid:
UP Time: 37413.255 seconds
CPU Time: 34220.020 seconds
CPU Usage: 91.46%
CPU Usage, 5 minute avg: 95.51%
CPU Usage, 60 minute avg: 97.02%
Process Data Segment Size via sbrk(): 0 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 65
Since your Squid box averages only about 700 requests per minute, you
should investigate why your CPU usage is unusually high. Squid
2.6.STABLE13 is usually very CPU-friendly.
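(For the record, the 91.46% figure can be reproduced from the counters
in your own output; a quick sanity check, using values copied from the
mgr:info report above:)

```shell
# CPU usage since start = CPU Time / UP Time (values from mgr:info above)
awk 'BEGIN { printf "%.2f%%\n", 34220.020 / 37413.255 * 100 }'
# prints 91.46%
```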
Memory accounted for:
Total accounted: 275284 KB
memPoolAlloc calls: 52025731
memPoolFree calls: 49328868
File descriptor usage for squid:
Maximum number of file descriptors: 1024
Largest file desc currently in use: 855
Number of file desc currently in use: 675
Files queued for open: 7
Available number of file descriptors: 342
Reserved number of file descriptors: 100
Store Disk files open: 27
IO loop method: kqueue
I think you definitely have to increase the file descriptor limit in
your OS. Sooner or later you are going to face serious problems due to
this restriction, because the data above shows you are already
approaching 85% of the 1024 available.
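That 85% figure comes straight from your report; a quick check with the
numbers copied from above:

```shell
# largest file descriptor currently in use / maximum FDs, from the report
awk 'BEGIN { printf "%.1f%%\n", 855 / 1024 * 100 }'
# prints 83.5%
```

On OpenBSD you would typically raise the openfiles limits for Squid's
login class in /etc/login.conf (and kern.maxfiles via sysctl if
needed); check the man pages for your release.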
Internal Data Structures:
662573 StoreEntries
29686 StoreEntries with MemObjects
29517 Hot Object Cache Items
660137 on-disk objects
A few of my Squid configuration directives which I think you may
need are as follows:
cache_mem 400 MB
Try using a lower cache_mem value, say
cache_mem 32 MB
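The reason is that cache_mem is only one part of Squid's footprint. The
Squid FAQ's rule of thumb is roughly 10 MB of index RAM per GB of
cache_dir, so with your 32 GB cache_dir the back-of-the-envelope
estimate (rough, not an exact figure) looks like this:

```shell
# rough Squid memory estimate: ~10 MB index RAM per GB of cache_dir,
# plus the configured cache_mem (400 MB here)
awk 'BEGIN { idx_mb = 32 * 10; printf "%d MB\n", idx_mb + 400 }'
# prints 720 MB
```

That leaves very little of your 1 GB for the OS and everything else,
which may also contribute to the load; lowering cache_mem is the easy
knob to turn.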
maximum_object_size 20480 KB
maximum_object_size_in_memory 20 KB
fqdncache_size 4096
cache_dir aufs /var/squid/cache 32768 64 256
cache_dns_program /usr/local/libexec/dnsserver
dns_children 32
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 5223
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 5223 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager all
Probably you need to add the following:
acl mynetwork src 192.168.0.0/24
http_access allow mynetwork
http_access deny all
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
# one who can access services on "localhost" is a local user
http_access deny to_localhost
reply_body_max_size 20971520 allow all
append_domain .xxx.xx.xx
Is the number of requests on my server too high? Even my CPU
utilization is too high. Do we need to upgrade the machine? Please help.
For your hardware, the number of requests is not high. It should easily
handle up to 6000 requests per minute; your current load is only about
10% of that.
Your packet filtering setup might also be creating these problems for
you, but I don't have extensive knowledge of PF.
Can you post:
squidclient mgr:5min | grep client
I don't think you need to upgrade your hardware to resolve this
problem. Check your access.log and cache.log; I think you will find
important clues there.
Thanking you...
Regards
Preetish
--
With best regards and good wishes,
Yours sincerely,
Tek Bahadur Limbu
(TAG/TDG Group)
Jwl Systems Department
Worldlink Communications Pvt. Ltd.
Jawalakhel, Nepal
http://www.wlink.com.np