
Re: unexplainable MISSes (squid 2.7stable9)

So, the problem boils down to:

why is a cached version of the page at URL X invalidated by a request to the same URL with a different value in one of the headers listed in the Vary header? The store log consistently shows:

Mon 08 Nov 2010 11:27:50 PM CET RELEASE 200 text/html GET http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
Mon 08 Nov 2010 11:27:50 PM CET SWAPOUT 200 text/html GET http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
Mon 08 Nov 2010 11:27:50 PM CET SWAPOUT 200 x-squid-internal/vary GET http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/
Mon 08 Nov 2010 11:27:50 PM CET RELEASE 200 x-squid-internal/vary GET http://127.0.0.1:3128/VirtualHostBase/http/www.somewebsite.com:80/www/SITE/VirtualHostRoot/

each time I make a request in which one of the headers listed in Vary has a different value.
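
For reference, this is the kind of quick client-side test that triggers it (a minimal sketch: it assumes squid answers on 127.0.0.1:3128 as in the store log above, and "alice"/"bob" are made-up X-Username test values):

# Minimal reproduction sketch. Assumptions: squid accepts requests on
# 127.0.0.1:3128 (as in the store log above) and the backend's Vary includes
# a client-suppliable X-Username header; "alice" and "bob" are test values.
import http.client

def fetch(username):
    conn = http.client.HTTPConnection("127.0.0.1", 3128)
    conn.request("GET", "/", headers={
        "Host": "www.somewebsite.com",   # the accelerated site
        "X-Username": username,          # one of the headers listed in Vary
    })
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection closes cleanly
    print(username, resp.status, resp.getheader("X-Cache"))
    conn.close()

# If variants were kept side by side, the second "alice" request would be a
# HIT; what I see instead is a RELEASE and a MISS after every header change.
for user in ("alice", "bob", "alice"):
    fetch(user)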


On 11/08/2010 06:33 PM, Adrian Dascalu wrote:
I've done some new tests and found out that caching a URL with a
different X-Username header will invalidate the other cached version of
that object.

Is this the intended behaviour? I mean, does the Vary header just signal
that there is a new version, so everything else is discarded? If so, is
there a method for caching multiple versions of the same URL?

Adrian

On 11/08/2010 02:33 PM, Adrian Dascalu wrote:
Hi,

I'm out of ideas trying to debug cache misses that I cannot explain. As a last resort I'm sending this problem to the list in the hope that you can come up with an explanation and/or a cure for it.

The setup is: squid 2.7stable9 on RHEL 5, configured as an accelerator, with 12 parents and 1 sibling (another squid). The parents are Apache in front of Zope.

For the root page I send requests from the same browser. The page is supposed to stay in cache for 1h. I've seen it behave correctly once (at squid startup); afterwards, if I keep requesting the page a few times, I get a MISS long before the 3600s have passed.

I have checked, and there is no PURGE for this URL in the meantime. There are some for other URLs deeper in the structure.

here's a request:

Host www.somewebsite.com

User-Agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.5.9-1.fc11 Firefox/3.5.9

Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

Accept-Language en-us,en;q=0.5

Accept-Encoding gzip,deflate

Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7

Keep-Alive 300

Connection keep-alive

Referer http://www.somewebsite.com/

Cookie __utma=173508663.4134765344646281700.1250060356.1271487209.1289208944.50; __utmb=173508663.59.10.1289208944; __utmc=173508663; __utmz=173508663.1289208944.50.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)

here's a HIT reply:

Date Mon, 08 Nov 2010 12:06:29 GMT

Server Zope/(Zope 2.9.10-final, python 2.4.3, linux2) ZServer/1.1 Plone/2.5.5

Content-Length 10131

Content-Language en

Content-Encoding gzip

Expires Fri, 10 Nov 2000 12:05:48 GMT

Vary Accept-Encoding,Accept,If-None-Match,X-Username

X-Caching-Rule-Id plone-containers

Cache-Control max-age=0, s-maxage=3600

Content-Type text/html;charset=utf-8

X-Header-Set-Id cache-in-proxy-1-hour

Age 40

X-Cache HIT from squid1.somewebsite.com

X-Cache-Lookup HIT from squid1.somewebsite.com:3128

Via 1.0 squid1.somewebsite.com:3128 (squid/2.7.STABLE9)

Keep-Alive timeout=8, max=100

Connection Keep-Alive

Long before the 3600s have passed, from the same browser, I would get a MISS. The request headers are IDENTICAL and there is no PURGE. What else might invalidate the cached object?
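
In case it helps, this is the kind of quick check I run against store.log to see which events touch the URL (a sketch; it assumes the stock log location /var/log/squid/store.log and that the action keyword appears on each line, as in the excerpt I posted in my follow-up):

# Quick sketch: scan store.log and print RELEASE/SWAPOUT/SWAPIN events for
# one URL. Assumption: stock log location /var/log/squid/store.log.
import sys

URL_PART = "www.somewebsite.com"  # substring of the cache key to watch
ACTIONS = ("RELEASE", "SWAPOUT", "SWAPIN")

with open("/var/log/squid/store.log") as log:
    for line in log:
        if URL_PART in line and any(action in line for action in ACTIONS):
            sys.stdout.write(line)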


Thank you,
Adrian





