Re: I want to verify why squid won't cache a specific object.

On 20.08.2012 12:22, Eliezer Croitoru wrote:
On 8/20/2012 2:37 AM, Eliezer Croitoru wrote:
On 8/20/2012 1:38 AM, Amos Jeffries wrote:

The FFFFFFFF is the file number/name where it is being stored. Since
this is an erase operation, that is always the magic F value.

It is not 1-to-1 related to the object being cacheable. It just means the object *currently* stored needs removing. For non-cacheable objects the
RELEASE immediately follows storage; for cacheable objects being
replaced, the erase of the old content immediately follows storage of
the new copy.
OK
<SNIP>
just a bit more interesting data.
there is a difference between intercepted requests (NAT and TPROXY)
and regular-proxy HTTP requests.

on a regular proxy everything works fine and the file is always cached.
(I use two squids, both with a url rewriter that produces a
"store_url_rewrite"-like effect on the cache.)
it always works for youtube on the same setup so I don't really know
what the cause can be.
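For context, a "store_url_rewrite"-like effect is usually done with a url_rewrite_program helper that maps many CDN-mirror URLs onto one stable cache key. The sketch below is hypothetical — the pseudo-host video-cache.internal, the /videoplayback path, and the id/itag parameter names are assumptions for illustration (the actual rewriter used in this setup is not shown in the thread), and it uses the simple one-URL-per-line helper protocol without concurrency channel-IDs:

```python
#!/usr/bin/env python3
"""Minimal Squid url_rewrite_program helper sketch (hypothetical)."""
import sys
from urllib.parse import urlsplit, parse_qs

def rewrite(url: str) -> str:
    """Map /videoplayback URLs from any mirror host onto one cache key."""
    parts = urlsplit(url)
    if parts.path != "/videoplayback":
        return url                      # leave everything else untouched
    qs = parse_qs(parts.query)
    vid = qs.get("id", [""])[0]         # video identifier (assumed name)
    itag = qs.get("itag", [""])[0]      # format identifier (assumed name)
    if not vid:
        return url
    # Collapse all CDN mirrors onto one stable pseudo-host so the same
    # video always produces the same store key.
    return "http://video-cache.internal/videoplayback?id=%s&itag=%s" % (vid, itag)

def main() -> None:
    # Squid feeds one request per line; the URL is the first field.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(rewrite(fields[0]) + "\n")
        sys.stdout.flush()              # helpers must not buffer replies

if __name__ == "__main__":
    main()
```

squid.conf would then point at it with something like url_rewrite_program /usr/local/bin/rewrite.py.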

it narrows the bug down to a very small area, which is:
3.2.1 TPROXY/INTERCEPT + cache_peer + specific requests

	vs

3.2.1 regular proxy + cache_peer + specific requests

Ah. The Host verification may be affecting things there. If that fails, the requested URL is marked non-cacheable (for the intercepting proxy). We don't yet have a good separation in the storage systems to make non-cacheables leave cached content alone, so this is likely to result in cached content being invalidated.

IIRC you were having trouble with verification preventing relay to peers, which is a strong indication that the no-store flag will have been set by that destination-trust check.
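For readers following along: the Host verification in question applies to traffic arriving on an interception port, and Squid 3.2 added the host_verify_strict directive to control the failure behaviour. A minimal squid.conf sketch (the port number is a placeholder; note that the directive only changes whether the request is rejected, it does not by itself restore cacheability):

```
# Traffic arriving on an interception port is subject to Host-header
# verification against the original destination IP (Squid 3.2+).
http_port 3129 tproxy

# Default is "off": on verify failure the request is still served from
# the original destination, but it is flagged untrusted, which blocks
# caching and relaying to cache_peer parents. "on" rejects it outright.
host_verify_strict off
```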



there is a difference between the requests made via the regular proxy
and the intercepted requests: the url has a "@" sign in it, but that
is not supposed to change anything.

I will file a bug later, but first I want to verify more about it.

If you wish. It is a minor regression for the use-cases where traffic is being fetched from sources other than ORIGINAL_DST; that content should still be cacheable as it was before. It is done this way for now so that swapping ORIGINAL_DST in for DIRECT at selection time is safe, and that selection point is far too late to create cache entries when the response turns out to be safely cacheable despite the verify failing.


However, I noted that your system is a two-layer proxy and both layers are MISS'ing. For the Host verification possibility only the gateway intercepting cache would be forced to MISS by those flags. The second layer is completely unaware of the intercept or Host error and treats it as normal forward-proxy traffic, caching and all. I would expect this situation to appear as MISS on your frontend proxy and HIT on your backend proxy before reaching the cloud proxy.
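For reference, the two-layer arrangement being described would be wired roughly like this (hostnames and ports are placeholders, not taken from the actual configs in this thread):

```
# Frontend (intercepting) layer:
http_port 3129 tproxy
cache_peer backend.example.lan parent 3128 0 no-query default

# Backend (forward-proxy) layer, unaware of the interception:
http_port 3128
```

Only the frontend sees the no-cache flags from a failed Host verify; the backend receives an ordinary forward-proxy request, so it is free to cache and HIT as normal.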

Amos


