Re: Squid 3.2.6 & hot object cache


On 21/01/2013 12:10 a.m., Ralf Hildebrandt wrote:
I have a general question regarding Squid 3.2.x and the hot object
cache.

According to what I found in the FAQ, the "hot object cache" is
located in RAM, taking an amount of RAM limited by cache_mem.

So, frequently requested objects are stored in this memory area. But
what is a hot item?

An item which was requested very recently (temporally close to the active traffic), with its level of "heat" being the HIT rate on it. As documented by the WikiMedia guys, you can get a 100% TCP_MEM_HIT rate when being "DoS'd" by genuine client requests for a popular object.


  How big can a hot item be?

Up to the configured maximum_object_size_in_memory (default of 512 KB).
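Both limits are set in squid.conf. A minimal sketch for reference (the cache_mem value shown is illustrative, not a recommendation):

```
# Size of the in-memory ("hot object") cache.
cache_mem 256 MB

# Largest object eligible to be held in cache_mem
# (512 KB is the default mentioned above).
maximum_object_size_in_memory 512 KB
```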

  Are objects transferred
from the cache_mem to the disk cache (and vice versa)?

Yes.

* New objects (which are small enough) start off in memory only and, when their HIT rate goes "cold" enough, get swapped out to a disk store. Objects too big for memory get swapped out to disk immediately, and only a short window of their size being actively used by a client is kept in memory.

* In UFS/AUFS/diskd storage, objects stay on disk. On a HIT they can get swapped back into memory as a duplicate (but not swapped back out again when they go cold, unless invalidated by something).

* In COSS or Rock storage, object swapping in/out happens the same way, but only to "pages" of the storage area which are in memory at the time. This is effectively a RAM-disk storage highly optimized for Squid access patterns and the average HTTP object size range.
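The storage types above are selected per cache_dir line in squid.conf. A hedged sketch (the paths and size in MB are illustrative):

```
# UFS-family store: objects stay on disk; on a HIT a copy
# can be pulled back into cache_mem.
cache_dir aufs /var/spool/squid 10000 16 256

# Rock store: a single slotted file, accessed through
# in-memory "pages" as described above.
cache_dir rock /var/spool/squid-rock 10000
```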


The reason for my question: Would a RAM disk help to speed up the proxy?

That is really complicated.

It would speed things up if you had a cache configuration whose access latency is worse than the RAM-cache overhead costs. For example, a RAM disk is clearly faster than SSD or HDD for the old, popular UFS/AUFS/diskd storage types.

Rock and COSS storage types, however, are far more optimized for speed, using both disk and RAM storage in their normal "disk" configuration. So a percentage of accesses from them will be *faster* than a RAM disk, and some will be slower on an HDD than UFS. Add an SSD behind Rock or COSS and the latency on those worst-case disk loads goes down to something comparable to a RAM disk, but you face SSD design limitations on write-ops and lifetime, which Squid consumes faster than normal usage (i.e. they die a bit faster than manufacturer specs would indicate, though still usable).
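If you do try the SSD route, the only configuration change is pointing the Rock cache_dir at the SSD mount. A sketch assuming the SSD is mounted at /ssd (path and size are illustrative):

```
# Rock store on an SSD mount: worst-case disk latency drops
# toward RAM-disk levels, at the cost of SSD write wear.
cache_dir rock /ssd/squid-rock 20000
```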

Also, HTTP header parsing is a significant overhead in the whole request process and a sizeable percentage of the traffic bytes. By the time you are getting close to RAM speeds on a large cache, you are hitting traffic rates where parsing speed bottlenecks more than disk I/O. With HTTP/1.1 optimizations generating a lot of 304 responses, the proportion spent on parsing goes up significantly.


So... YMMV; if you wish, give it a try. And if you find it actually *is* faster, some guys on the squid-dev mailing list would like to know by how much and why.

Amos

