Re: Why is my squid hit ratio so low?


On 5/07/2013 7:04 p.m., jinge wrote:
Dear Amos,

Thanks for your reply.


On 2013-7-5, at 2:39 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:

On 5/07/2013 6:26 p.m., jinge wrote:
Hi, all.

We have been using squid for a long time. Recently we upgraded our squid to 3.3.4 and began to use SMP and rock.

This is our related configuration.

include /usr/local/etc/squid/commonbk/bk.global-options.conf
include /usr/local/etc/squid/commonbk/bk.refresh-pattern.conf
include /usr/local/etc/squid/commonbk/bk.acl-define.conf
include /usr/local/etc/squid/commonbk/bk.acl-action.conf
#include /usr/local/etc/squid/squid.conf
cache_dir rock /cache1/rock 48000 max-size=31000 max-swap-rate=300 swap-timeout=300
cache_dir rock /cache2/rock 48000 max-size=31000 max-swap-rate=300 swap-timeout=300
workers 3
cpu_affinity_map process_numbers=1,2,3 cores=3,5,7
if ${process_number} = 1
include /usr/local/etc/squid/commonbk/backend5a.conf
endif
if ${process_number} = 2
include /usr/local/etc/squid/commonbk/backend5b.conf
endif
if ${process_number} = 3
include /usr/local/etc/squid/commonbk/backend5c.conf
endif

Lot of sub-config files there. What do they contain?

And the sub-config is something like this:
# NETWORK
# ---------------------------------------------------------------------------
http_port               192.168.2.1:3128 accel allow-direct ignore-cc
# DISK
# -----------------------------------------------------------------------------
cache_dir diskd /cache3/aufs/64k 24000 16 128 min-size=31001 max-size=65536


So a 64KB maximum cacheable object size? At least for that worker process.



access_log /dev/null

Do not waste resources formatting log lines only to send them to /dev/null.

"access_log none" does what you want in a far more efficient way.

Thank you, I will follow your advice.


cache_log /var/log/squid/cache.log


And this is our machine:

No LSB modules are available.
Distributor ID:Ubuntu
Description:Ubuntu Raring Ringtail (development branch)
Release:13.04
Codename:raring

             total       used       free     shared    buffers     cached
Mem:           15G        14G       1.5G         0B        27M       6.7G
-/+ buffers/cache:        7.5G       8.1G
Swap:          16G         0B        16G

Linux 3.8.0-16-generic #26-Ubuntu SMP Mon Apr 1 19:52:57 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

/dev/sdb1       137G   13G  125G   9% /cache1
/dev/sdc1        69G   13G   57G  18% /cache2
/dev/sdd1        69G   37G   33G  54% /cache3
/dev/sde1        69G   31G   38G  45% /cache4
/dev/sdf1        69G   31G   39G  45% /cache5

We found that rock won't fill the cache_dir up to the 48GB we configured. And our hit ratio is very low:

Hits as % of all requests:        5min: 4.3%, 60min: 4.3%
Hits as % of bytes sent:          5min: 3.3%, 60min: 3.3%
Memory hits as % of hit requests: 5min: 33.7%, 60min: 32.1%
Disk hits as % of hit requests:   5min: 8.5%, 60min: 9.0%

These numbers do not add up at all. For example, 32% + 9% is not 100% of HIT requests - it is only around half of that (2 workers?). Possibly your real numbers are twice those percentages. Still low though; they should be up around 80-90% for a reverse-proxy like the one you show in the config above.
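
(Making that arithmetic explicit with the 60min figures above: 32.1% + 9.0% ≈ 41%, well short of the 100% that the memory-hit and disk-hit shares of hit requests would normally sum to.)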




Can anyone tell me what's wrong with my squid?

Not from just those numbers. This is where an access.log comes in handy, for identifying where the MISSes are and whether any of them should have been HITs.



And to my surprise, even after a long time of running, my rock cache dir won't fill the whole storage as I configured it: cache_dir rock /cache1/rock 48000 max-size=31000 max-swap-rate=300 swap-timeout=300

Does the 48000 setting not work, or is there some other reason?

That depends on what your website(s) contain in the way of objects. Very likely you have not served up 48GB worth of unique objects under 31000 bytes (about 30 KB) in size.
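
(Rough arithmetic, for illustration only: 48000 MB divided by the 31000-byte max-size is on the order of 1.5 million unique objects even if every one of them were at the maximum size, and far more at typical object sizes, before that one rock dir fills.)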


If you are able to, I suggest a little experiment. Enable your access.log again and gather some statistics about the following (a rough tally script is sketched below, after item d):

a) What count of objects flowing through your Squid are under 31000 bytes, and are they HIT vs MISS?

This will tell you what your rock cache HIT rate is. IME we should expect something roughly similar for the over-31000-byte objects as well, although there may be fewer of them contributing towards the total HIT rates.

b) What count of objects flowing through your Squid are over 31000 bytes, and are they HIT vs MISS?

This will give you an idea of how much the UFS/diskd cache_dirs are warping the overall ratios. Not being SMP-aware, a response cached by one worker cannot be HIT on another, so your ratio may be decreased by up to ~60% with 3 workers if the traffic is mostly going via these caches.


c) What count of objects flowing through your Squid are over 64KB, or over any other cache_dir max-size you have configured in the per-worker configs?

This will give you an idea of whether those max-size limits are reasonable. For example, if the worker whose sub-config you showed (cache3, max-size 64KB) were mostly receiving 70KB objects, it would have a very poor HIT ratio because of that limit alone.


d) What are the request methods and response codes? (both the HTTP status code and the Squid log code)

This will give you a clue as to whether the nature of the traffic itself (revalidation and/or nasty server responses to revalidation, or non-cacheable methods being used by clients) is causing the low HIT ratios.
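
If it helps, here is a rough shell/awk sketch of the kind of tally I mean. It assumes the default native access.log format (field 4 is the Squid result code such as TCP_MEM_HIT/200, field 5 the reply size in bytes, field 6 the request method); adjust the field positions if you use a custom logformat, and note the logged size includes reply headers so it only approximates object size. The 31000 and 65536 cut-offs come from your cache_dir max-size settings.

awk '
{
  split($4, rc, "/")                      # "TCP_MEM_HIT/200" -> rc[1]=log code, rc[2]=HTTP status
  hit  = (rc[1] ~ /HIT/) ? "HIT" : "MISS"
  size = ($5 <= 31000) ? "<=31000B" : (($5 <= 65536) ? "31001B-64KB" : ">64KB")
  tally[size " " hit]++                   # (a), (b), (c): size bucket vs HIT/MISS
  methods[$6]++                           # (d): request methods
  codes[rc[1]]++                          # (d): Squid log codes
  status[rc[2]]++                         # (d): HTTP status codes
}
END {
  for (k in tally)   print "size/result", k, tally[k]
  for (m in methods) print "method     ", m, methods[m]
  for (c in codes)   print "log code   ", c, codes[c]
  for (s in status)  print "http status", s, status[s]
}' /var/log/squid/access.log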


Amos



