Re: Re: Squid 3.2.6 & hot object cache

On 22/01/2013 2:00 a.m., babajaga wrote:
Rock and COSS storage types however are far more optimized for speed,
using both disk and RAM storage in their normal "disk" configuration.

Amos,

haven't you been a little bit too "generous" in your comments, especially
the one quoted above?

I don't think so. They *have* been optimized for speed and are measurably so. I made no comment about the bug-free state of any of the disk I/O modules. Just about speed versus a RAM disk.



I looked at the docs for both COSS and Rock, and the following excerpts made
me a bit skeptical:

1) COSS:
Changes in 3.3 cache_dir
     COSS storage type is lacking stability fixes from 2.6

When I read such a statement, I refuse to use this feature in a production
environment, even if it has a lot of speed advantages. One crash might
wipe out all of them.

As it was intended. Until somebody wants to do the porting work, it's unlikely to change either. We have debated both removing COSS entirely and expending the effort to debug it fully. Neither debate has come to a satisfactory conclusion yet. The developers do agree that Rock was designed to do the same things as COSS and does them a bit better, and that COSS is not worth our time fixing. If you or someone else has a different opinion, patches are still welcome (which is why we are required to leave the COSS code present in 3.2+).

Note also that it is referring to the squid-3 version of COSS. There were some bug fixes that went into squid-2.6, and COSS in 2.7 now has a proven track record for high performance. Rock was built on that 2.7 track record, with a few design fixes for lessons learned since COSS was created, plus SMP support.
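
For concreteness, a squid-2.7 COSS line looked roughly like the one below (the path and all numbers are illustrative values only, not a recommendation; block-size and membufs tuning is very workload dependent):

    # squid-2.7: a 4 GB COSS store; objects packed into 512-byte blocks,
    # with 15 in-memory stripe buffers
    cache_dir coss /var/spool/squid/coss 4096 max-size=131072 block-size=512 membufs=15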

2) Rock:
http://wiki.squid-cache.org/Features/RockStore#limitations
2a) Rock store is available since Squid version 3.2.0.13. It has received
some lab and limited deployment testing. It needs more work to perform well
in a variety of environments, but appears to be usable in some of them.
2b) Objects larger than 32,000 bytes cannot be cached when cache_dirs are
shared among workers.
2c) Current implementation uses OS buffers for simplicity.

When reading 2a) I start to be cautious again :-)

Good. It is a new feature; the small number of people using it so far gives us enough confidence to promote it, but not to say it's bug-free. Problems may occur in situations where nobody has tried using it. Also, we are aware that startup time is slower with Rock than we would like. That is all 2a means.

By all means be cautious. But please do not let that stop you testing or using it. The more people we have using it, the more confident we can be that it is bug-free.
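
If you do want to try it, a minimal Rock setup on 3.2+ looks roughly like the following (the path and sizes are illustrative only; max-size reflects the 32,000-byte limitation quoted above):

    # run SMP workers so the shared rock cache_dir is actually exercised
    workers 2
    # a 4 GB rock store; objects larger than 32,000 bytes bypass it
    cache_dir rock /var/spool/squid/rock 4096 max-size=32000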


2b) tells me that whether using Rock really has an advantage depends very
much upon the mean size and standard deviation of the cached objects. That
might change in the future with Rock-large, though.
2c) makes a theoretical evaluation of the performance advantages of Rock
almost impossible, because you always have to consider the filesystem used,
with its respective options, which has a huge impact on performance. So the
only serious approach right now to advocating possible performance
advantages would be quite some benchmarking, using real workloads. Which
are certainly very site-specific.
Because of the basic principle of Rock and Rock-large (which are like
filesystems themselves), using raw disk I/O is at least possible in the
future, which MIGHT THEN justify a general statement of "much more
optimized for speed".

The COSS model is a slice model, in the same way that a disk-backed RAM disk operates on its swap pages. In both designs large chunks of memory are swapped in and out to fetch items stored somewhere within that chunk.

Under the UFS-on-RAM-disk model, objects are allocated random disk locations by the generic disk manager, and each is swapped in individually, only after being requested by the client. Under Rock/COSS, requests within a certain time range of each other are assigned slots within one memory page/chunk, such that a client loading a page causes, with high probability, the related objects, images and scripts to be swapped in and ready to be served directly from the RAM slice before the client requests them.

Overall this means the latency of a first request is either the same as RAM or the same as disk I/O, PLUS the latency of followup related items is that of RAM *instead* of disk I/O, for a total net reduction in latency / gain in speed when loading a web page.
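To make the slice idea concrete, here is a minimal sketch in C++ (my own illustration, not Squid's actual code): objects stored close together in time are packed into the same fixed-size stripe, and the stripe is the unit of disk I/O, so reading any one object later pulls its temporal neighbours into RAM as well.

    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Stripe {
        // The whole stripe is swapped to/from disk as one unit.
        static constexpr std::size_t Size = 1 << 20; // 1 MB, illustrative
        std::size_t used = 0;                        // bytes allocated so far
        std::vector<char> data = std::vector<char>(Size);
    };

    class SliceStore {
        std::vector<Stripe> stripes;
    public:
        SliceStore() { stripes.emplace_back(); }

        // Allocate room for an object in the current stripe. Objects that
        // arrive within the same time window share a stripe, so one disk
        // read later prefetches the whole group of related objects.
        // Returns the swap location as (stripe index, byte offset).
        // Objects larger than a stripe are not handled here; real stores
        // cap object size, cf. the 32,000-byte rock limit quoted above.
        std::pair<std::size_t, std::size_t> allocate(std::size_t objSize) {
            if (stripes.back().used + objSize > Stripe::Size)
                stripes.emplace_back(); // stripe full: start a fresh one
            Stripe &s = stripes.back();
            const std::size_t offset = s.used;
            s.used += objSize;
            return {stripes.size() - 1, offset};
        }
    };

A real store would also keep a map from object key to (stripe, offset) and an LRU of in-memory stripes; this sketch only shows the temporal packing that produces the latency win described above.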

As you can see, this is also very page-centric. If you are using Squid as a gateway for a web app which does not have that type of page-centric temporal linkage between its requests, the storage types become much closer in latency.

Yes, it is *complicated*, with a great many factors which we have not measured, or cannot measure, with any accuracy.

Amos

