> On 25/11/2014 9:06 a.m., Doug Sampson wrote:
> > Recently, due to squid 2.7 being EOL'ed, we migrated our squid
> > server to version 3.4.9 on FreeBSD 10.0-RELEASE running on 64-bit
> > hardware. We started seeing the paging file being swapped out,
> > eventually running out of available memory. From the time squid
> > gets started it usually takes about two days before we see entries
> > like these in /var/log/messages:
> >
> > +swap_pager_getswapspace(16): failed
> > +swap_pager_getswapspace(16): failed
> > +swap_pager_getswapspace(16): failed
> > +swap_pager_getswapspace(12): failed
> > +swap_pager_getswapspace(16): failed
> > +swap_pager_getswapspace(12): failed
> > +swap_pager_getswapspace(6): failed
> > +swap_pager_getswapspace(16): failed
> >
> > Looking at the 'top' results, I see that the swap file has been
> > totally exhausted. Memory used by squid hovers around 2.3GB out of
> > the total 3GB of system memory.
> >
> > I am not sure what is causing these memory leaks. After rebooting,
> > squid-internal-mgr/info shows the following statistics:
> >
> > Squid Object Cache: Version 3.4.9
> > Build Info:
> > Start Time:     Mon, 24 Nov 2014 18:39:08 GMT
> > Current Time:   Mon, 24 Nov 2014 19:39:13 GMT
> > Connection information for squid:
> >         Number of clients accessing cache:      18
> >         Number of HTTP requests received:       10589
> >         Number of ICP messages received:        0
> >         Number of ICP messages sent:            0
> >         Number of queued ICP replies:           0
> >         Number of HTCP messages received:       0
> >         Number of HTCP messages sent:           0
> >         Request failure ratio:                  0.00
> >         Average HTTP requests per minute since start:   176.2
> >         Average ICP messages per minute since start:    0.0
> >         Select loop called: 763993 times, 4.719 ms avg
> > Cache information for squid:
> >         Hits as % of all requests:        5min: 3.2%, 60min: 17.0%
> >         Hits as % of bytes sent:          5min: 2.0%, 60min: 6.7%
> >         Memory hits as % of hit requests: 5min: 0.0%, 60min: 37.2%
> >         Disk hits as % of hit requests:   5min: 22.2%, 60min: 33.2%
> >         Storage Swap size:      7361088 KB
> >         Storage Swap capacity:  58.5% used, 41.5% free
> >         Storage Mem size:       54348 KB
> >         Storage Mem capacity:   3.9% used, 96.1% free
> >         Mean Object Size:       23.63 KB
> >         Requests given to unlinkd:      1
> > Median Service Times (seconds)  5 min    60 min:
> >         HTTP Requests (All):   0.10857  0.19742
> >         Cache Misses:          0.10857  0.32154
> >         Cache Hits:            0.08265  0.01387
> >         Near Hits:             0.15048  0.12106
> >         Not-Modified Replies:  0.00091  0.00091
> >         DNS Lookups:           0.05078  0.05078
> >         ICP Queries:           0.00000  0.00000
> > Resource usage for squid:
> >         UP Time:        3605.384 seconds
> >         CPU Time:       42.671 seconds
> >         CPU Usage:      1.18%
> >         CPU Usage, 5 minute avg:        0.72%
> >         CPU Usage, 60 minute avg:       1.17%
> >         Maximum Resident Size: 845040 KB
> >         Page faults with physical i/o: 20
> > Memory accounted for:
> >         Total accounted:       105900 KB
> >         memPoolAlloc calls: 2673353
> >         memPoolFree calls:  2676487
> > File descriptor usage for squid:
> >         Maximum number of file descriptors:   87516
> >         Largest file desc currently in use:     310
> >         Number of file desc currently in use:   198
> >         Files queued for open:                    0
> >         Available number of file descriptors: 87318
> >         Reserved number of file descriptors:    100
> >         Store Disk files open:                    0
> > Internal Data Structures:
> >         311543 StoreEntries
> >           4421 StoreEntries with MemObjects
> >           4416 Hot Object Cache Items
> >         311453 on-disk objects
> >
> > I will post another one tomorrow that will indicate growing
> > memory/swapfile consumption.
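For reference, one way to capture these manager snapshots over time is with squidclient, which ships with Squid. This is only a sketch: the host, port and log paths below are assumptions (proxy listening on 127.0.0.1:3128, writable directory under /data/squid/logs), not details taken from this thread.

    # Hypothetical snapshot loop: record mgr:info hourly and keep each
    # mgr:mem report as a separate TSV file so growth can be compared.
    while true; do
        date >> /data/squid/logs/mgr-info.log
        squidclient -h 127.0.0.1 -p 3128 mgr:info >> /data/squid/logs/mgr-info.log
        squidclient -h 127.0.0.1 -p 3128 mgr:mem > /data/squid/logs/mem-$(date +%Y%m%d%H%M).tsv
        sleep 3600
    done

Comparing two such snapshots taken a day apart should show which counters and pools are growing along with the resident size.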
> > Here is my squid.conf:
> >
> > # OPTIONS FOR AUTHENTICATION
> > # -----------------------------------------------------------------------------
> >
> > # 1st four lines for
> > auth_param basic children 5
> > auth_param basic realm Squid proxy-caching web server
> > auth_param basic credentialsttl 2 hours
> > auth_param basic casesensitive off
> > # next three lines for kerberos authentication (needed to use usernames)
> > # used in conjunction with "acl auth proxy_auth" line below
> > #auth_param negotiate program /usr/local/libexec/squid/negotiate_kerberos_auth -i
> > #auth_param negotiate children 50 startup=10 idle=5
> > #auth_param negotiate keep_alive on
> >
> >
> > # ACCESS CONTROLS
> > # -----------------------------------------------------------------------------
> >
> > # Example rule allowing access from your local networks.
> > # Adapt to list your (internal) IP networks from where browsing
> > # should be allowed
> > #acl manager proto cache_object
> > acl manager url_regex -i ^cache_object:// /squid-internal-mgr/
> > acl adminhost src 192.168.1.149
> > acl localnet src 192.168.1.0/24  # RFC1918 possible internal network
> > acl localnet src fc00::/7        # RFC 4193 local private network range
> > acl localnet src fe80::/10       # RFC 4291 link-local (directly plugged) machines
> > acl webserver src 198.168.1.35
> > acl some_big_clients src 192.168.1.149/32  #CI53
> >
> > # We want to limit downloads of these types of files
> > # Put this all in one line
> > acl magic_words url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .dmg .mp4 .img
> > # We don't block .html, .gif, .jpg and similar files, because they
> > # generally don't consume much bandwidth
>
> But you do, whenever the domain name or path contains any of the byte
> sequences in that regex above. The entire websites
> http://www.divx.com/ and http://isohunt.com/, for example.
>
> And what's wrong with adding more HITs, even if they are small enough
> not to use much cache space?
>
> <snip>
>
> > # OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
> > # -----------------------------------------------------------------------------
> >
> > hierarchy_stoplist cgi-bin ?
>
> ... but you don't have neighbours. This is also deprecated anyway.
>
> > # MEMORY CACHE OPTIONS
> > # -----------------------------------------------------------------------------
> >
> > cache_mem 1366 MB
> > #cache_mem 2134 MB
> > #maximum_object_size_in_memory 64 KB
> > maximum_object_size_in_memory 128 KB
> >
> > # DISK CACHE OPTIONS
> > # -----------------------------------------------------------------------------
> >
> > cache_replacement_policy heap LFUDA
> > cache_dir aufs /data/squid/aufs_cache 4096 16 256 min-size=131073
> > cache_dir diskd /data/squid/diskd_cache 8192 16 256 Q1=64 Q2=72 max-size=131072
>
> Why the segregation between diskd and aufs?
>
> The only difference between these cache types is the method of I/O
> performed when accessing the disk. AUFS is threaded SMP; diskd is
> multi-process SMP.
>
> NP: FreeBSD 10 seems to have resolved the issues Squid AUFS has with
> older BSD, and people are now noticing the speed issues with diskd.
>
> The official recommendation is currently to use AUFS with FreeBSD 10+
> and diskd with older FreeBSD.
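For illustration only (a sketch, not a line from Amos or from the config above): on FreeBSD 10 the two stores could be collapsed into a single AUFS cache_dir with no min-size/max-size split. The 12288 MB figure is simply an assumed sum of the two original stores.

    # Hypothetical consolidated disk cache (4096 MB + 8192 MB = 12288 MB)
    cache_replacement_policy heap LFUDA
    cache_dir aufs /data/squid/aufs_cache 12288 16 256

Doug's follow-up later in this message describes making essentially this change.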
> > #maximum_object_size 122880 KB
> > maximum_object_size 153600 KB
> > cache_swap_low 90
> > cache_swap_high 95
> >
> > # LOGFILE OPTIONS
> > # -----------------------------------------------------------------------------
> >
> > access_log daemon:/data/squid/logs/access.log
> > cache_store_log daemon:/data/squid/logs/store.log
> > cache_swap_log /var/spool/squid/%s
>
> What is this %s ??
>
> > logfile_rotate 28
> >
> > # OPTIONS FOR TROUBLESHOOTING
> > # -----------------------------------------------------------------------------
> >
> > cache_log /data/squid/logs/cache.log
> > # Leave coredumps in the first cache dir
> > coredump_dir /data/squid
> >
> > # OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
> > # -----------------------------------------------------------------------------
> >
> > diskd_program /usr/local/libexec/squid/diskd
>
> Unless you are replacing this helper with a custom-built one with a
> strange name, this should not be configured explicitly in Squid-3.
>
> > # OPTIONS FOR TUNING THE CACHE
> > # -----------------------------------------------------------------------------
> >
> > refresh_pattern http://.*\.windowsupdate\.microsoft\.com/ 0 80% 20160
> > refresh_pattern http://office\.microsoft\.com/ 0 80% 20160
> > refresh_pattern http://windowsupdate\.microsoft\.com/ 0 80% 20160
> > refresh_pattern http://w?xpsp[0-9]\.microsoft\.com/ 0 80% 20160
> > refresh_pattern http://w2ksp[0-9]\.microsoft\.com/ 0 80% 20160
> > refresh_pattern http://download\.microsoft\.com/ 0 80% 20160
> > refresh_pattern http://download\.macromedia\.com/ 0 80% 20160
> > refresh_pattern http://ftp\.software\.ibm\.com/ 0 80% 20160
> > refresh_pattern cgi-bin 1 20% 2
> > refresh_pattern \.asp$ 1 20% 2
> > refresh_pattern \.acgi$ 1 20% 2
> > refresh_pattern \.cgi$ 1 20% 2
> > refresh_pattern \.pl$ 1 20% 2
> > refresh_pattern \.shtml$ 1 20% 2
> > refresh_pattern \.php3$ 1 20% 2
> > refresh_pattern \? 1 20% 2
> > refresh_pattern \.gif$ 10080 90% 43200
> > refresh_pattern \.png$ 10080 90% 43200
> > refresh_pattern \.jpg$ 10080 90% 43200
> > refresh_pattern \.ico$ 10080 90% 43200
> > refresh_pattern \.bom\.gov\.au 30 20% 120
> > refresh_pattern \.html$ 480 50% 22160
> > refresh_pattern \.htm$ 480 50% 22160
> > refresh_pattern \.css$ 480 50% 22160
> > refresh_pattern \.js$ 480 50% 22160
> > refresh_pattern \.class$ 10080 90% 43200
> > refresh_pattern \.zip$ 10080 90% 43200
> > refresh_pattern \.jpeg$ 10080 90% 43200
> > refresh_pattern \.mid$ 10080 90% 43200
> > refresh_pattern \.shtml$ 480 50% 22160
> > refresh_pattern \.exe$ 10080 90% 43200
> > refresh_pattern \.thm$ 10080 90% 43200
> > refresh_pattern \.wav$ 10080 90% 43200
> > refresh_pattern \.mp4$ 10080 90% 43200
> > refresh_pattern \.txt$ 10080 90% 43200
> > refresh_pattern \.cab$ 10080 90% 43200
> > refresh_pattern \.au$ 10080 90% 43200
> > refresh_pattern \.mov$ 10080 90% 43200
> > refresh_pattern \.xbm$ 10080 90% 43200
> > refresh_pattern \.ram$ 10080 90% 43200
> > refresh_pattern \.iso$ 10080 90% 43200
> > refresh_pattern \.avi$ 10080 90% 43200
> > refresh_pattern \.chtml$ 480 50% 22160
> > refresh_pattern \.thb$ 10080 90% 43200
> > refresh_pattern \.dcr$ 10080 90% 43200
> > refresh_pattern \.bmp$ 10080 90% 43200
> > refresh_pattern \.phtml$ 480 50% 22160
> > refresh_pattern \.mpg$ 10080 90% 43200
> > refresh_pattern \.pdf$ 10080 90% 43200
> > refresh_pattern \.art$ 10080 90% 43200
> > refresh_pattern \.swf$ 10080 90% 43200
> > refresh_pattern \.flv$ 10080 90% 43200
> > refresh_pattern \.x-flv$ 10080 90% 43200
> > refresh_pattern \.mp3$ 10080 90% 43200
> > refresh_pattern \.ra$ 10080 90% 43200
> > refresh_pattern \.spl$ 10080 90% 43200
> > refresh_pattern \.viv$ 10080 90% 43200
> > refresh_pattern \.doc$ 10080 90% 43200
> > refresh_pattern \.gz$ 10080 90% 43200
> > refresh_pattern \.Z$ 10080 90% 43200
> > refresh_pattern \.tgz$ 10080 90% 43200
> > refresh_pattern \.tar$ 10080 90% 43200
> > refresh_pattern \.vrm$ 10080 90% 43200
> > refresh_pattern \.vrml$ 10080 90% 43200
> > refresh_pattern \.aif$ 10080 90% 43200
> > refresh_pattern \.aifc$ 10080 90% 43200
> > refresh_pattern \.aiff$ 10080 90% 43200
> > refresh_pattern \.arj$ 10080 90% 43200
> > refresh_pattern \.c$ 10080 90% 43200
> > refresh_pattern \.cpt$ 10080 90% 43200
> > refresh_pattern \.dir$ 10080 90% 43200
> > refresh_pattern \.dxr$ 10080 90% 43200
> > refresh_pattern \.hqx$ 10080 90% 43200
> > refresh_pattern \.jpe$ 10080 90% 43200
> > refresh_pattern \.lha$ 10080 90% 43200
> > refresh_pattern \.lzh$ 10080 90% 43200
> > refresh_pattern \.midi$ 10080 90% 43200
> > refresh_pattern \.movie$ 10080 90% 43200
> > refresh_pattern \.mp2$ 10080 90% 43200
> > refresh_pattern \.mpe$ 10080 90% 43200
> > refresh_pattern \.mpeg$ 10080 90% 43200
> > refresh_pattern \.mpga$ 10080 90% 43200
> > refresh_pattern \.pl$ 10080 90% 43200
> > refresh_pattern \.ppt$ 10080 90% 43200
> > refresh_pattern \.ps$ 10080 90% 43200
> > refresh_pattern \.qt$ 10080 90% 43200
> > refresh_pattern \.qtm$ 10080 90% 43200
> > refresh_pattern \.rar$ 10080 90% 43200
> > refresh_pattern \.ras$ 10080 90% 43200
> > refresh_pattern \.sea$ 10080 90% 43200
> > refresh_pattern \.sit$ 10080 90% 43200
> > refresh_pattern \.tif$ 10080 90% 43200
> > refresh_pattern \.tiff$ 10080 90% 43200
> > refresh_pattern \.snd$ 10080 90% 43200
> > refresh_pattern \.wrl$ 10080 90% 43200
> > refresh_pattern ^ftp: 1440 60% 22160
> > refresh_pattern ^gopher: 1440 20% 1440
> > refresh_pattern -i (cgi-bin|\?) 0 0% 0
> > refresh_pattern . 480 50% 22160
>
> That is a LOT of regex comparisons the proxy is having to do at least
> once per request.
>
> The special rules you have up the top for "cgi-bin" and "\?" are also
> violating HTTP safe behaviour. The default rule we provide is highly
> tuned to handle caching of those responses safely without breaking old
> legacy scripts.
>
> At least most of them end with a $ anchor point to prevent random URLs
> matching.
>
> > # ADMINISTRATIVE PARAMETERS
> > # -----------------------------------------------------------------------------
> >
> > cache_mgr admin@xxxxxxxxxxx
> > mail_from squid@xxxxxxxxxxx
> > cache_effective_user squid
> > cache_effective_group squid
> >
> > # DELAY POOL PARAMETERS
> > # -----------------------------------------------------------------------------
> >
> > delay_pools 2
> > delay_class 1 2
> > # When big_files are being downloaded, the first 5MB (625000 * 8 bits) are
> > # downloaded at max network speed. Once the file size limit of 5MB is reached,
> > # download speed drops to 438,000 bits or 3,504,000 MB per sec. Current
> > # contracted Internet connection speed w/ TP is at 7MB per sec.
> > delay_parameters 1 750000/750000 438000/625000
> > acl big_files url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .mpg .qt .ram .rm .iso .raw .wav .dmg .mp4 .img .flv .wmv .divx .mov .bz2 .deb
>
> Another long list of regex patterns. Notice how these are permitted to
> match anywhere in the entire URL, including domain names.
>
> FTP traffic in particular is not guaranteed to be "big files".
>
> <snip>
>
> > Initially, I set cache_mem=2134MB and after noticing these memory
> > leaks, I dropped it down to 1344MB. Memory leaks are still
> > occurring.
> >
> > Am I using anything that is known to cause memory leaks?
> >
> > If there is additional information that you need, please do not
> > hesitate to ask! Thanks.
>
> A copy of the manager "mem" report would be very useful to see what's
> using the memory.
> Note that it is in TSV format, so please save it as a .tsv file and
> attach it rather than cut-n-pasting it inline.

Thanks, Amos, for your pointers. I've commented out all the refresh_pattern lines above the last two. I have also dropped diskd in favor of using aufs exclusively, taking out the min-size parameter. I've commented out the diskd_program support option.

In the previous version of squid (2.7) I had split the cache_dir into two types, with great success, using coss and aufs. Previously I had only aufs and performance wasn't where I wanted it. Apparently coss is no longer supported in the 3.x version of squid atop FreeBSD.

The pathname for the cache swap logs has been fixed. Apparently this came from a squid.conf example that I copied in parts. Would this be the reason why we are seeing the error messages in /var/log/messages regarding swapping mentioned in my original post?

The hierarchy_stoplist line has been stripped out, as you say it is deprecated.

The mem .TSV file is attached herewith.

Currently I have the cache_dir located on the OS disk and all of the cache logging files on a second drive. Is this the optimal setup of cache_dir and logs?

Your comments are much appreciated!

~Doug
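One point from Amos's review that the reply above does not mention addressing is the unanchored url_regex lists (magic_words and big_files). As a purely hypothetical sketch, not configuration from this thread, the big_files ACL could be anchored to file extensions at the end of the URL path so that domain names such as isohunt.com no longer match:

    # Hypothetical replacement: match only on the extension at the end of
    # the URL path (urlpath_regex ignores the scheme and host name).
    # Use (\?.*)?$ instead of $ if query strings should still match.
    acl big_files urlpath_regex -i \.(exe|mp3|zip|rar|avi|mpg|mpeg|mpe|iso|wav|dmg|mp4|img|flv|wmv|mov|bz2|deb|gz|rpm)$
    # FTP as a whole could be matched with a separate ACL, e.g.:
    #   acl ftp_traffic proto FTP

The same approach would apply to the magic_words ACL quoted earlier in the thread.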
Current memory usage: Pool Obj Size Chunks Allocated In Use Idle Allocations Saved Rate (bytes) KB/ch obj/ch (#) used free part %Frag (#) (KB) high (KB) high (hrs) %Tot (#) (KB) high (KB) high (hrs) %alloc (#) (KB) high (KB) (#) %cnt %vol (#)/sec mem_node 4136 71340 288147 288147 0.03 74.594 71323 288079 288147 0.03 99.976 17 69 1721 309801 1.573 22.017 3.853 StoreEntry 104 326878 33199 33199 0.00 8.594 326878 33199 33199 0.00 100.000 0 0 13 36512 0.185 0.065 0.507 Short Strings 40 554511 21661 21661 0.00 5.607 554511 21661 21661 0.00 100.000 0 0 80 8184851 41.562 5.625 78.492 HttpHeaderEntry 56 257314 14072 14072 0.00 3.643 257314 14072 14072 0.00 100.000 0 0 48 2280215 11.579 2.194 22.882 HttpReply 280 25315 6923 6923 0.00 1.792 25315 6923 6923 0.00 100.000 0 0 18 152929 0.777 0.736 1.633 MemObject 240 25314 5933 5933 0.00 1.536 25314 5933 5933 0.00 100.000 0 0 8 36273 0.184 0.150 0.560 MD5 digest 16 326878 5108 5108 0.00 1.322 326878 5108 5108 0.00 100.000 0 0 2 61500 0.312 0.017 0.732 cbdata MemBuf (9) 64 25343 1584 1584 0.00 0.410 25342 1584 1584 0.00 99.996 1 1 5 274112 1.392 0.301 2.753 HttpHdrCc 96 15945 1495 1495 0.00 0.387 15944 1495 1495 0.00 99.994 1 1 5 128911 0.655 0.213 1.335 Medium Strings 128 10717 1340 1340 0.35 0.347 10646 1331 1340 0.35 99.338 71 9 22 522464 2.653 1.149 4.846 4K Buffer 4096 272 1088 1088 4.62 0.282 66 264 1088 4.62 24.265 206 824 928 47084 0.239 3.314 0.388 cbdata IdleConnList (30) 4160 190 772 772 3.07 0.200 2 9 772 3.07 1.053 188 764 772 29152 0.148 2.084 0.284 16K Buffer 16384 44 704 704 3.04 0.182 0 0 704 3.04 0.000 44 704 704 52933 0.269 14.902 0.492 cbdata clientReplyContext (18) 4320 145 612 612 23.67 0.158 36 152 612 23.67 24.828 109 460 612 78438 0.398 5.822 0.687 cbdata ClientSocketContext (17) 4256 145 603 603 23.67 0.156 36 150 603 23.67 24.828 109 454 603 78438 0.398 5.736 0.687 LRU policy node 24 25261 593 593 0.00 0.153 25261 593 593 0.00 100.000 0 0 1 2656 0.013 0.001 0.243 Long Strings 512 538 269 269 0.34 0.070 520 260 269 0.34 96.654 18 9 22 72155 0.366 0.635 0.615 8K Buffer 8192 33 264 264 3.05 0.068 1 8 264 3.05 3.030 32 256 256 8168 0.041 1.150 0.067 64K Buffer 65536 4 256 256 24.78 0.066 0 0 256 24.78 0.000 4 256 256 69 0.000 0.078 0.001 HttpRequest 1744 145 247 247 23.67 0.064 36 62 247 23.67 24.828 109 186 247 78515 0.399 2.353 0.687 Comm::Connection 192 1297 244 244 23.67 0.063 423 80 244 23.67 32.614 874 164 215 382259 1.941 1.261 3.232 16KB Strings 16384 14 224 224 3.08 0.058 0 0 224 3.08 0.000 14 224 224 3142 0.016 0.885 0.034 4KB Strings 4096 50 200 200 1.48 0.052 36 144 200 1.48 72.000 14 56 112 27853 0.141 1.960 0.250 32K Buffer 32768 4 128 128 3.07 0.033 0 0 128 3.07 0.000 4 128 128 115 0.001 0.065 0.001 ipcache_entry 128 997 125 125 1.11 0.032 920 115 125 1.11 92.277 77 10 10 8300 0.042 0.018 0.090 cbdata ConnStateData (15) 440 246 106 106 4.62 0.027 39 17 106 4.62 15.854 207 89 100 36245 0.184 0.274 0.297 2K Buffer 2048 33 66 66 22.55 0.017 5 10 66 22.55 15.152 28 56 58 348209 1.768 12.253 3.167 cbdata ClientHttpRequest (16) 312 145 45 45 23.67 0.011 36 11 45 23.67 24.828 109 34 45 78438 0.398 0.421 0.687 1KB Strings 1024 42 42 42 0.89 0.011 15 15 42 0.89 35.714 27 27 33 20005 0.102 0.352 0.159 cbdata clientStreamNode (19) 128 290 37 37 23.67 0.009 72 9 37 23.67 24.828 218 28 37 156876 0.797 0.345 1.373 cbdata TunnelStateData (27) 224 144 32 32 23.67 0.008 35 8 32 23.67 24.306 109 24 32 18048 0.092 0.069 0.135 squidaio_small_bufs 4096 5 20 20 20.41 0.005 0 0 20 20.41 0.000 5 20 20 9163 0.047 0.645 0.088 MimeEntry 96 
177 17 17 25.01 0.004 177 17 17 25.01 100.000 0 0 0 0 0.000 0.000 0.000 UFSStoreState::_queued_write 32 427 14 14 4.83 0.003 0 0 14 4.83 0.000 427 14 14 291989 1.483 0.161 2.971 ClientInfo 432 30 13 13 5.27 0.003 30 13 13 5.27 100.000 0 0 0 0 0.000 0.000 0.000 cbdata HttpStateData (28) 256 41 11 11 3.04 0.003 0 0 11 3.04 0.000 41 11 11 51685 0.262 0.227 0.479 cbdata store_client (22) 160 63 10 10 4.00 0.003 1 1 10 4.00 1.587 62 10 10 69898 0.355 0.192 0.634 cbdata UFSStoreState (31) 176 53 10 10 4.00 0.002 0 0 10 4.00 0.000 53 10 10 29112 0.148 0.088 0.255 cbdata FwdState (23) 168 53 9 9 3.08 0.002 0 0 9 3.08 0.000 53 9 9 51969 0.264 0.150 0.482 link_list 16 427 7 7 4.83 0.002 0 0 7 4.83 0.000 427 7 7 292152 1.484 0.080 2.972 cbdata ps_state (24) 248 27 7 7 1.81 0.002 0 0 7 1.81 0.000 27 7 7 70187 0.356 0.299 0.617 cbdata helper_server (8) 240 27 7 7 1.81 0.002 27 7 7 1.81 100.000 0 0 0 0 0.000 0.000 0.000 cbdata WriteRequest (33) 80 72 6 6 3.15 0.001 0 0 6 3.15 0.000 72 6 6 292344 1.484 0.402 2.971 cbdata DiskdFile (32) 104 52 6 6 4.00 0.001 0 0 6 4.00 0.000 52 6 6 28466 0.145 0.051 0.249 cbdata ConnOpener (26) 136 39 6 6 4.85 0.001 0 0 6 4.85 0.000 39 6 6 41173 0.209 0.096 0.349 Acl::AndNode 152 24 4 4 25.01 0.001 24 4 4 25.01 100.000 0 0 0 0 0.000 0.000 0.000 squidaio_micro_bufs 128 20 3 3 16.57 0.001 0 0 3 16.57 0.000 20 3 3 1291 0.007 0.003 0.013 cbdata CbDataList (29) 96 25 3 3 3.15 0.001 0 0 3 3.15 0.000 25 3 3 1157651 5.878 1.910 15.290 cbdata ReadRequest (36) 72 32 3 3 4.00 0.001 0 0 3 4.00 0.000 32 3 3 25308 0.129 0.031 0.224 aio_ctrl 112 20 3 3 16.57 0.001 0 0 3 16.57 0.000 20 3 3 10459 0.053 0.020 0.101 cbdata Logfile (11) 1120 2 3 3 25.01 0.001 2 3 3 25.01 100.000 0 0 0 0 0.000 0.000 0.000 aio_request 104 20 3 3 16.57 0.001 0 0 3 16.57 0.000 20 3 3 10459 0.053 0.019 0.101 cbdata Tree (4) 208 9 2 2 25.01 0.000 9 2 2 25.01 100.000 0 0 0 0 0.000 0.000 0.000 acl_ip_data 96 15 2 2 25.01 0.000 15 2 2 25.01 100.000 0 0 1 8 0.000 0.000 0.000 cbdata RebuildState (12) 688 2 2 2 25.01 0.000 0 0 2 25.01 0.000 2 2 2 0 0.000 0.000 0.000 cbdata DiskThreadsDiskFile (37) 88 15 2 2 4.31 0.000 0 0 2 4.31 0.000 15 2 2 581 0.003 0.001 0.006 aio_thread 40 32 2 2 24.98 0.000 32 2 2 24.98 100.000 0 0 0 0 0.000 0.000 0.000 ev_entry 48 26 2 2 1.79 0.000 13 1 2 1.79 50.000 13 1 1 396774 2.015 0.327 4.396 wordlist 16 73 2 2 20.98 0.000 14 1 2 20.98 19.178 59 1 1 184 0.001 0.000 0.002 helper_request 40 27 2 2 1.81 0.000 0 0 2 1.81 0.000 27 2 2 9263 0.047 0.006 0.090 cbdata generic_cbdata (25) 32 31 1 1 22.55 0.000 0 0 1 22.55 0.000 31 1 1 31839 0.162 0.018 0.292 ACLSourceIP 136 7 1 1 25.01 0.000 7 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLStrategised 152 6 1 1 25.01 0.000 6 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata BodyPipe (39) 152 6 1 1 21.10 0.000 0 0 1 21.10 0.000 6 1 1 2462 0.013 0.006 0.024 ACLStrategised 152 5 1 1 25.01 0.000 5 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLARP 144 5 1 1 25.01 0.000 5 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 FwdServer 24 27 1 1 1.81 0.000 0 0 1 1.81 0.000 27 1 1 70187 0.356 0.029 0.617 fqdncache_entry 160 4 1 1 25.01 0.000 3 1 1 25.01 75.000 1 1 1 0 0.000 0.000 0.000 cbdata CbDataList (2) 40 12 1 1 25.01 0.000 12 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata ACLFilledChecklist (21) 408 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 1 78573 0.399 0.551 0.687 cbdata CbDataList (1) 40 10 1 1 25.01 0.000 10 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata PortCfg (5) 344 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata 
RemovalPolicy (6) 104 3 1 1 25.01 0.000 3 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLStrategised 152 2 1 1 25.01 0.000 2 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLStrategised 152 2 1 1 25.01 0.000 2 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 Acl::NotNode 152 2 1 1 25.01 0.000 2 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLRegexData 16 18 1 1 25.01 0.000 18 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 RegexList 56 5 1 1 25.01 0.000 5 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLCertificateData 80 3 1 1 25.01 0.000 3 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata ErrorState (40) 240 1 1 1 24.98 0.000 0 0 1 24.98 0.000 1 1 1 297 0.002 0.001 0.003 cbdata StoreSearchHashIndex (34) 104 2 1 1 25.01 0.000 1 1 1 25.01 50.000 1 1 1 25 0.000 0.000 0.000 cbdata ClientRequestContext (20) 104 2 1 1 23.14 0.000 0 0 1 23.14 0.000 2 1 1 78572 0.399 0.140 0.687 cbdata IoResult (38) 40 5 1 1 20.41 0.000 0 0 1 20.41 0.000 5 1 1 9163 0.047 0.006 0.088 cbdata CbDataList (3) 64 3 1 1 25.01 0.000 3 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata helper (7) 168 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 UFSStoreState::_queued_read 40 4 1 1 20.90 0.000 0 0 1 20.90 0.000 4 1 1 159 0.001 0.000 0.001 ACLDestinationIP 136 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLSslErrorData 16 6 1 1 25.01 0.000 6 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata TcpAcceptor (14) 96 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 HttpHdrContRange 24 4 1 1 3.11 0.000 0 0 1 3.11 0.000 4 1 1 2682 0.014 0.001 0.035 ACLHTTPHeaderData 48 2 1 1 25.01 0.000 2 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLDomainData 16 5 1 1 25.01 0.000 5 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLStringData 16 5 1 1 25.01 0.000 5 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 StoreSwapLogData 72 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 334805 1.700 0.414 0.241 cbdata RemovalPurgeWalker (35) 72 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 179869 0.913 0.223 1.998 ACLUserData 24 3 1 1 25.01 0.000 3 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLMethodData 16 3 1 1 25.01 0.000 3 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 dwrite_q 48 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 545038 2.768 0.450 2.489 ACLNoteData 40 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 cbdata IoResult (41) 40 1 1 1 24.96 0.000 0 0 1 24.96 0.000 1 1 1 210230 1.068 0.144 2.248 StoreMetaSTDLFS 32 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 29036 0.147 0.016 0.254 ACLHierCodeData 32 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 StoreMetaMD5 32 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 29036 0.147 0.016 0.254 CacheDigest 32 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 ACLTimeData 32 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 StoreMetaURL 32 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 29036 0.147 0.016 0.254 HttpHdrRange 32 1 1 1 24.59 0.000 0 0 1 24.59 0.000 1 1 1 1348 0.007 0.001 0.018 StoreMetaVary 32 1 1 1 24.98 0.000 0 0 1 24.98 0.000 1 1 1 4125 0.021 0.002 0.033 ACLASN 16 2 1 1 25.01 0.000 2 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 StoreMetaObjSize 32 1 1 1 25.01 0.000 0 0 1 25.01 0.000 1 1 1 29036 0.147 0.016 0.254 dlink_node 24 1 1 1 24.99 0.000 0 0 1 24.99 0.000 1 1 1 25 0.000 0.000 0.000 HttpHdrRangeSpec 16 1 1 1 24.59 0.000 0 0 1 24.59 0.000 1 1 1 1348 0.007 0.000 0.018 ACLProtocolData 16 1 1 1 25.01 0.000 1 1 1 25.01 100.000 0 0 0 0 0.000 0.000 0.000 Total 1 1671646 386287 386287 0.00 100.000 1667471 381340 381499 0.05 98.719 
4175 4948 5917 18021673 91.512 93.203 260.304 Cumulative allocated volume: 5.820 GB Current overhead: 31652 bytes (0.008%) Total Pools created: 123 Pools ever used: 113 (shown above) Currently in use: 71 String Pool Impact (%strings) (%volume) Short Strings 98 93 Medium Strings 2 6 Long Strings 0 1 1KB Strings 0 0 4KB Strings 0 1 16KB Strings 0 0 Other Strings 0 0 Large buffers: 0 (0 KB)