
Re: Re: Squid 3.2.6 & hot object cache



On 23/01/2013 5:04 a.m., babajaga wrote:
Amos,

Under Rock/COSS, requests within a certain time range of each other are
assigned slots within one memory page/chunk - such that a client loading a
page causes, with high probability, the related objects (images, scripts)
to be swapped in and ready to be served directly from the RAM area slice
before they are requested by the client.
Wow, an interesting approach; however, it immediately makes me think about
redundant caching of hot objects (hot with respect to the different domains
referencing them).
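The co-location idea quoted above can be sketched roughly as follows. This is only a toy illustration of time-windowed slot assignment, not Squid's actual Rock/COSS code; the names `PageAllocator`, `PAGE_SLOTS`, and `WINDOW` are all hypothetical, and the real store works on disk slots rather than Python lists.

```python
import time

PAGE_SLOTS = 16  # hypothetical: slots per memory page/chunk
WINDOW = 0.5     # hypothetical: seconds within which requests share a page

class PageAllocator:
    """Toy sketch: requests arriving within WINDOW seconds of the first
    request on the current page get slots on that same page, so a client
    fetching a page and its related objects tends to land on one
    contiguous region that can be swapped in as a unit."""

    def __init__(self):
        self.pages = []            # each page is a list of object keys
        self.page_opened_at = None

    def assign(self, key, now=None):
        now = time.monotonic() if now is None else now
        current = self.pages[-1] if self.pages else None
        # open a fresh page if none exists, the current one is full,
        # or the time window since it was opened has elapsed
        if (current is None or len(current) >= PAGE_SLOTS
                or now - self.page_opened_at > WINDOW):
            current = []
            self.pages.append(current)
            self.page_opened_at = now
        slot = len(current)
        current.append(key)
        return (len(self.pages) - 1, slot)  # (page index, slot index)

alloc = PageAllocator()
# objects requested close together land on the same page...
p1 = alloc.assign("index.html", now=0.0)[0]
p2 = alloc.assign("style.css", now=0.1)[0]
# ...while a request outside the window opens a new page
p3 = alloc.assign("late-object.js", now=2.0)[0]
```

In this sketch, `p1` and `p2` share a page while `p3` does not, which is the grouping effect being described.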

Anyway, although I have a few more similar remarks on your info, this is
not the right thread to discuss such questions; unfortunately, in the docs
I found,
http://wiki.squid-cache.org/Features/RockStore#limitations
  and the references within,
I did not find the type of info you just supplied. So some questions remain
open for me.

Could you give any further hints on where to find more of the design
principles of Rock(-large)?

The Measurement Factory would be the best place to ask that. As far as I am aware, at this stage it was going to be the same as existing Rock, just with a list of multiple pages per object. But that was discussed months ago, and the particulars are likely to have changed since then due to code requirements.
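A minimal sketch of that "list of multiple pages per object" idea: a large object spans a chain of fixed-size pages, one slot per page. The names `pages_for`, `slot_chain`, and `PAGE_SIZE` are hypothetical illustrations, not Squid's API, and the actual Rock-large design may differ as noted above.

```python
PAGE_SIZE = 32 * 1024  # hypothetical fixed page/slot size in bytes

def pages_for(object_size):
    """Number of fixed-size pages a stored object would occupy
    (ceiling division)."""
    return (object_size + PAGE_SIZE - 1) // PAGE_SIZE

def slot_chain(object_size, free_slots):
    """Assign the object a list of free slot numbers, one per page,
    popping them from the free list."""
    needed = pages_for(object_size)
    if needed > len(free_slots):
        raise MemoryError("not enough free slots")
    return [free_slots.pop() for _ in range(needed)]
```

The original Rock store held one page per object; representing an entry as such a chain is what lifts the object-size limit.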


I do not mean the ultimate doc, the source code :-)
Having quite some design and development experience in high-performance
file systems, including DB-like transactional ones, from the times when
this had to be done in Assembler because of execution-time and memory
constraints, I expect quite some similarities.

Very likely. It is a database of URI entities after all.

Amos

