Re: How can I get Squid to use more memory?

On Sun, 30 Oct 2011 11:35:39 -0400, Ralph Lawrence wrote:
Hi,

How do I get Squid to use all my server memory as a cache and keep as
many objects as possible in there?

This is default behaviour.

cache_mem determines the size of the in-RAM object cache. As your config says, "Feel free to use as much as needed", limited only by the requirement that the box must not start swapping under peak load. Swapping will kill performance at the worst possible time.
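To illustrate the arithmetic on a 2GB box that runs only Squid (the headroom figures below are my own rough guesses, not measurements from your server):

# Illustrative headroom sizing for a 2GB box (assumed figures):
#    2048 MB  total RAM
#  -  ~300 MB  OS, kernel, and filesystem buffers
#  -  ~500 MB  Squid process overhead: cache index metadata,
#              in-transit objects, and network buffers
#  = ~1200 MB  safe ceiling for the in-RAM object cache
cache_mem 1200 MB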


Squid is a great piece of software.  I've been studying its
configuration for the last few days now and I'm really impressed with
what I've seen.

We are building a fairly complex reverse proxy configuration.  I am
now building the first (of eventually many) load-balanced Squid
servers.  Each Squid reverse proxy has 2GB of RAM and *only* runs
Squid.  Absolutely nothing else will be on the server.  The actual
site being cached is under 1GB in size on disk.  So conceptually Squid
should be able to cache the entire site in memory and we should only
see TCP_MEM_HIT in the logs, right?

In theory yes. But... dynamic or static site content?

The difference being that each small dynamic file on the web server's disk could expand out into a great many variant copies in the cache, just from small URL parameter differences. This could mean adding a few extra zeros to the cache size required.

Squid only caches one object per unique URL text value. So the total explosion is somewhat smaller than theoretically possible.
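To illustrate (the URLs here are made up for the example): these two requests run the same script on the origin's disk, but produce two separate cache objects, because the URL text differs. Note also that the stock squid.conf ships with a refresh_pattern that refuses heuristic freshness for query URLs, so they are only cached when the origin sends explicit Cache-Control or Expires headers:

# One script on disk, two distinct cache keys:
#   http://example.com/page.php?user=1
#   http://example.com/page.php?user=2
#
# Stock squid.conf default: no heuristic freshness for query URLs.
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0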


But right now, even under load, the server is reporting Squid is using
13MB of memory and we're seeing a lot of TCP_HIT as opposed to
TCP_MEM_HIT.  How can we change this?

This sounds to me like it could be one of a few things. You did not indicate any release numbers, so some of these may not apply:

* your Squid might be an older one which does not promote disk objects back into memory when they get hot/popular again. In those releases, once an object gets less popular it is sent to the disk cache and served as a HIT from there until something causes an update/refresh.

* the HIT objects might be larger than 2MB. The configuration below specifies that no object larger than 2MB may be stored in memory (see the sketch after this list for how that limit could be raised).

* the objects may be arriving with no size indication. Squid is forced to assume "infinite" size (i.e. larger than 2MB) and send them to disk immediately. Assuming promotion is available in your Squid, the object is not worth promoting from disk until the second request, which shows up as a HIT followed (maybe) by a MEM_HIT.

* the objects may simply not be that popular. An object is stored in memory until newer objects push it out, then some time later gets a HIT once it's on disk - which HIT may promote it, and the cycle repeats without the object ever being requested often enough to show up as a MEM_HIT.
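If the second or third cause applies, a change along these lines may help. This is only a sketch: the 10 MB figure is an arbitrary example, and memory_cache_mode only exists in newer releases, so check what your Squid supports before copying it:

# Raise the per-object ceiling so larger popular objects can
# live in cache_mem (pick a limit matching your biggest files):
maximum_object_size_in_memory 10 MB

# Newer releases can be told which objects to keep in memory;
# "always" keeps the most recently fetched objects in cache_mem:
memory_cache_mode always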


My config looks like...

# Feel free to use as much as needed
cache_mem     1200 MB

2GB of RAM on the box with 1.2 GB dedicated to object storage seems about right to me, although only you have access to the RAM usage stats at peak traffic times to tweak that further.


# Keep all the memory used
memory_pools  on

# Don't let a few files hog memory.  We don't have many large files
# anyway, but just in case
maximum_object_size_in_memory 2048 KB

# Replacement policy
cache_replacement_policy      heap LFUDA
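One extra point here: cache_replacement_policy only controls the disk cache. The in-memory cache has its own directive, and if the goal is packing as many objects as possible into cache_mem, something like the following is worth trying (GDSF favours keeping many small popular objects over a few large ones, while LFUDA optimizes the byte hit ratio on disk):

# cache_replacement_policy applies to the disk cache only;
# memory_replacement_policy controls eviction from cache_mem.
memory_replacement_policy heap GDSF
cache_replacement_policy  heap LFUDA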

Thanks,
Ralph

Amos


