On 28/09/10 12:03, Rich Rauenzahn wrote:
Hi,

Our squid servers consistently go over their configured disk limits. I've rm'ed the cache directories and started over several times, yet they slowly grow past their set limit and fill up the filesystem. These are du -sk's of the squid directories:

squid2: 363520856  /squid/
squid3: 343399160  /squid/
squid4: 356593016  /squid/
Okay, so about 350 GB per disk is used....
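(363520856 KB / 1024 / 1024 is roughly 347 GB; the other two work out to roughly 327 GB and 340 GB.)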
These are clearly over the 300,000K limit -- and the swap stat files are less than 1MB.
Um, you mean the 300 GB limit you configured. cache_dir takes its size in MB, so that is 300000 MB * 1024 = 307,200,000 KB to be precise.
cachemgr.cgi says we are using way less:

Store Directory Statistics:
Store Entries           : 5698
Maximum Swap Size       : 307200000 KB
Current Store Swap Size : 275852960 KB
Current Capacity        : 90% used, 10% free

Store Directory Statistics:
Store Entries           : 5479
Maximum Swap Size       : 307200000 KB
Current Store Swap Size : 260064224 KB
Current Capacity        : 85% used, 15% free

Store Directory Statistics:
Store Entries           : 13510
Maximum Swap Size       : 307200000 KB
Current Store Swap Size : 276444512 KB
Current Capacity        : 90% used, 10% free
Which indicates that something other than the Squid data cache is going onto those disks, OR that the unlinkd eraser process is not working.
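To narrow that down, something along these lines should help (a rough sketch; it assumes the cache_dir is /squid/cache as in your config and that /squid is its own filesystem):

  # how much of the usage is under the cache_dir versus elsewhere on /squid
  du -sk /squid/* | sort -n

  # files under /squid not owned by the cache_effective_user
  find /squid -not -user squid -ls

  # space still held by files that were unlinked but are kept open by some process
  lsof +L1 /squid

If the space really is inside the cache_dir itself, unlinkd is the suspect; if lsof shows large deleted-but-open files, the space only comes back when the process holding them lets go.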
We're storing large files; some can be several GB in size. Here's the config:

http_port 80 defaultsite=foo-squid
http_port 8081 defaultsite=foo-squid
icp_port 3130
cache_peer foo-download parent 80 0 no-query originserver name=foodownload weight=1
cache_peer foo-download1 parent 80 0 no-query originserver name=foodownload1 round-robin weight=10 connect-fail-limit=1
cache_peer foo-download2 parent 80 0 no-query originserver name=foodownload2 round-robin weight=10 connect-fail-limit=1
cache_peer foo-download3 parent 80 0 no-query originserver name=foodownload3 round-robin weight=10 connect-fail-limit=1
cache_peer foo-maven-repo parent 8081 0 no-query originserver name=foomavenrepo login=PASS
cache_peer foo-squid1 sibling 80 3130 proxy-only name=foosquid1
cache_peer foo-squid2 sibling 80 3130 proxy-only name=foosquid2
cache_peer foo-squid3 sibling 80 3130 proxy-only name=foosquid3
cache_peer foo-squid4 sibling 80 3130 proxy-only name=foosquid4
http_access allow all
acl mavenpath urlpath_regex ^/artifactory
acl mavenpath urlpath_regex ^/nexus
acl mavenport myport 8081
cache_peer_access foodownload deny mavenpath
cache_peer_access foodownload deny mavenport
cache_peer_access foodownload1 deny mavenpath
cache_peer_access foodownload1 deny mavenport
cache_peer_access foodownload2 deny mavenpath
cache_peer_access foodownload2 deny mavenport
cache_peer_access foodownload3 deny mavenpath
cache_peer_access foodownload3 deny mavenport
cache_peer_access foomavenrepo allow mavenpath
cache_peer_access foomavenrepo allow mavenport
cache_peer_access foomavenrepo deny all
cache_replacement_policy heap LFUDA
cache_dir aufs /squid/cache 300000 64 256
cache_replacement_policy heap GDSF
cache_mem 2 GB
cache_effective_user squid
cache_effective_group squid
cache_mgr alert@foo
acl QUERY urlpath_regex /components/\?
cache deny QUERY
maximum_object_size 32 GB
maximum_object_size_in_memory 8 MB
quick_abort_min 0 KB
quick_abort_max 0 KB
cache_swap_low 90
cache_swap_high 95
redirect_rewrites_host_header off
buffered_logs on
log_icp_queries off
access_log /var/log/squid/access.log
cache_log none
You may want to locate and remove the "none" file(s) created by that "cache_log none" line. If they are under /squid/cache, that is likely your problem.
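Something like this should turn them up (the paths are a guess; also check whatever directory Squid is started from, since relative log paths end up there):

  find /squid /var/log/squid -name none -type f -ls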
cache_log contains the critical and important administrative messages when things go horribly wrong. There may be something logged there that will make your problem clearer.
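While you are debugging this it is worth pointing it at a real file, for example:

  cache_log /var/log/squid/cache.log

and watching it for store or unlinkd complaints after a restart.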
Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2