
Re: NEWBIE: force squid to store/cache xml responses?

Right, as I said, this is a specialized case: I'm accelerating exactly ONE application server (actually a few, but they're load-balanced instances of the same "site"). If my config is correct, there is no way another Squid proxy can use mine as a peer, and no way a user can use mine as a proxy to any other website. So concerns about problems manifesting "downstream" aren't an issue here.

As for getting the app server to set the headers properly: I'm wondering why Tomcat isn't doing that in the first place? It's a solid, stable app server and usually serves content reliably. Could it be that Tomcat conforms to HTTP/1.1 rather than 1.0 and therefore sets the headers differently?
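For what it's worth, Tomcat (7 and later) ships an ExpiresFilter that can set Expires/Cache-Control headers declaratively, so the fix could live in web.xml rather than in Squid. A minimal sketch, assuming the files are served as text/xml and a one-day lifetime is acceptable (both are my assumptions, not tested here):

<!-- web.xml fragment: Tomcat's org.apache.catalina.filters.ExpiresFilter.
     Content type and lifetime below are illustrative assumptions. -->
<filter>
    <filter-name>ExpiresFilter</filter-name>
    <filter-class>org.apache.catalina.filters.ExpiresFilter</filter-class>
    <init-param>
        <!-- cache XML responses for one day after first access -->
        <param-name>ExpiresByType text/xml</param-name>
        <param-value>access plus 1 day</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>ExpiresFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>

The syntax of the param-value is modeled on Apache's mod_expires, per the Tomcat filter documentation.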

Thanks again,
-AJ

----- Original Message ----- From: "Amos Jeffries" <squid3@xxxxxxxxxxxxx>
To: <squid-users@xxxxxxxxxxxxxxx>
Sent: Wednesday, August 11, 2010 10:12 PM
Subject: Re:  NEWBIE: force squid to store/cache xml responses?


On Wed, 11 Aug 2010 15:32:21 -0400, "AJ Weber" <aweber@xxxxxxxxxxx> wrote:
I have a specialized case where there are some xml config files that are static -- not returning dynamic info like an rss feed or something.

Then there is a problem on the web server: it is sending the wrong Cache-Control: and/or Expires: headers.

The best fix is to correct the problem at the source. Any hack you configure in your Squid will only affect that one install of Squid, not the thousands of other proxies around the Internet, nor the client software which has to deal with the objects on arrival.
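For illustration, a response that caches cleanly everywhere, without any proxy-side overrides, carries explicit freshness headers along these lines (the dates and lifetime here are made-up examples):

HTTP/1.1 200 OK
Content-Type: text/xml
Last-Modified: Tue, 10 Aug 2010 00:00:00 GMT
Cache-Control: public, max-age=86400
Expires: Thu, 12 Aug 2010 00:00:00 GMT

With Cache-Control: max-age present, every cache on the path can compute freshness the same way, instead of each admin guessing with refresh_pattern.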


I tried forcing squid 2.7 to cache these (accelerator mode), but can't seem to get it to work.

Currently I'm trying:
refresh_pattern -i \.xml$ 2880 80% 14400 override-expire override-lastmod ignore-no-cache ignore-private ignore-reload

This is the first refresh pattern, so it should match. For example, these are the entries in the store.log...
1281553624.015 RELEASE -1 FFFFFFFF 2C3544E359642CCE8719931EEC59536B 304 1281553620 1225808485 -1 text/plain -1/0 GET http://test.test.com/crossdomain.xml
1281553664.484 RELEASE -1 FFFFFFFF 9511A5DBA70C624E5DD76FE143AFB909 304 1281553620 1225808485 -1 text/plain -1/0 GET http://test.test.com/crossdomain.xml
1281554349.390 RELEASE -1 FFFFFFFF 4DBBD82F76E56E6B938A65898ABC3EDD 304 1281553620 1225808485 -1 text/plain -1/0 GET http://test.test.com/crossdomain.xml
1281554413.359 RELEASE -1 FFFFFFFF 8B0D827E31D1D5725CD1EFB828EB9C67 304 1281553620 1225808485 -1 text/plain -1/0 GET http://test.test.com/crossdomain.xml

I changed the servername here "to protect the innocent".

If I understand the log correctly, it seems there is no parsable Expires header, but shouldn't that be ignored given my refresh_pattern options?

Look at what redbot.org says about the cache-control and expiry headers. There are likely other things not being overridden. It's annoyingly common for people to set these overrides on individual proxies, forcing web admins to set an ever-increasing number of alternative forced non-caching rules on important short-lived objects.

And note that the override will force-cache *every* .xml object stored by your Squid, regardless of source -- whether it's an RSS feed, a Facebook game state update, etc. -- only allowing Squid to fetch new versions once per ten days.
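If the override has to stay, you can at least anchor the regex to the one accelerated origin so unrelated .xml URLs are unaffected (the hostname below is the poster's placeholder). A sketch:

# squid.conf sketch: scope the override to the one accelerated site
# instead of every .xml URL Squid ever sees.
refresh_pattern -i ^http://test\.test\.com/.*\.xml$ 2880 80% 14400 override-expire override-lastmod ignore-no-cache ignore-private ignore-reload

# site-specific patterns must come before the catch-all default:
refresh_pattern . 0 20% 4320

Since refresh_pattern rules are matched against the full URL in order and the first match wins, the anchored pattern catches only that site's XML and everything else falls through to the default.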

Amos



