I'll describe the real scenario in more detail, but I can't disclose
all of it.
There are a few machines, let's name them M1 to M9, that are processing
data.
From time to time, those machines need to make HTTP requests to external
servers that belong to business partners. All of these HTTP requests are
in the same format and carry the following request headers:
* User-Agent: undisclosed_user_agent
* Accept-Encoding: gzip, deflate
* Host: the_hostname_of_the_external_server
* Expect: [nothing]
* Pragma: [nothing]
That's it, nothing more, nothing less.
On those servers, as we agreed, there should be an xml file in a
specific path. For instance:
http://foo.com/bar/daily-orders.xml
(I can't disclose the exact path here)
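Putting the headers and that example URL together, a single request on
the wire looks roughly like this (the hostname, path and User-Agent are
the same placeholders as above, and Expect/Pragma are sent with empty
values):

  GET /bar/daily-orders.xml HTTP/1.1
  User-Agent: undisclosed_user_agent
  Accept-Encoding: gzip, deflate
  Host: foo.com
  Expect:
  Pragma: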
These files are re-generated from time to time. How often? I can't tell,
and it's not up to me.
Now, since there are a few thousand business partners generating these
xml files for my business, I thought that caching them on a single
machine would be a good idea, as it should reduce external traffic.
Therefore, I installed Squid3 on a specific machine and updated the
HTTP clients on M1-M9 to use the proxy server instead of fetching the
xml files directly.
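Just to illustrate what that change means on the client side (the proxy
address 10.0.0.5:3128 is only a placeholder, and curl stands in for the
real client software), each fetch now goes through Squid along the lines
of:

  curl -x http://10.0.0.5:3128 http://foo.com/bar/daily-orders.xml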
For business reasons, when an xml file is cached, I don't need it to be
as fresh as possible. I want to reduce outgoing traffic as much as
possible.
My business partners don't care about it either. They also don't want to
change anything at all on their web servers. That's a fact I can't
change whatsoever.
All I want is a local copy of the xml file from every external server,
one that would be considered "fresh" from T0 to T0+60 minutes. For my
business needs, that's enough. And if some of the xml files are cached
somewhere else, which is a rare scenario in this case, then I can
ignore that (business-wise).
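To make it concrete, and going by the refresh_pattern options mentioned
earlier in this thread, the squid.conf line I have in mind looks roughly
like this (the regex is only a placeholder for the agreed paths):

  # fresh for 60 minutes, ignoring the origin's cache-control headers
  refresh_pattern -i \.xml$ 60 0% 60 override-expire override-lastmod ignore-private ignore-no-store

As I understand it, min=60 and max=60 (both in minutes) make a cached
copy count as fresh for 60 minutes, override-expire and override-lastmod
ignore the server's own Expires/Last-Modified, and ignore-private and
ignore-no-store ignore those Cache-Control restrictions - please correct
me if I've misread any of these.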
I initially thought that the favicons example would simplify things
(since a lot of web sites have favicons, and it's common knowledge),
but I wasn't aware of the special case of favicons. I apologize for the
time wasted on my simplified example.
I hope this sheds more light on the subject.
Thanks!
On 23-Sep-13 11:21, Amos Jeffries wrote:
On 23/09/2013 7:21 p.m., Ron Klein wrote:
My example of favicons was to simplify the question. The real case is
different.
Then please tell us the real details. In full if possible.
favicon is one of the special-case types of URL and, like Eliezer and I
already mentioned, there are some specific usages for them which
directly cause problems with your stated goals or even with using it as
a simplified test case. Perhaps your real case is also using similar
special-case URLs with other problems - but nobody can assist with
that if you hide details.
So please at least avoid "favicon" references for the remainder of
this discussion. You have indicated that they are irrelevant.
I want to cache all "favicons" (that is, other resources, internally
used) for 60 minutes.
For a given "favicon", I'd like to have the following caching policy:
Anywho, ignoring all the protocol and UA special-case behaviour
factoids because you said that was a fake example...
The period of 60 minutes should start when the first consumer
consumes the favicon. Let's mark the time for that first request as
T0 (T Zero).
Your policy assumes and requires that your proxy is the only one
between users and the origin server. If your upstream at any stage
has a proxy the object age will not meet your T0 criterion - this is
why Last-Modified and Age headers are used in HTTP: to indicate an
object's time since creation regardless of whether the object might
have been newly generated by the origin, altered by an intermediary,
or stored for some time by an intermediary or the origin itself
(server-side caching or static archive).
FWIW: I am working with a client at present who want to do this type
of caching for every URL in existence, but only for a few minutes.
They have a growing list of domain names where the policy has to be
disabled due to problems it causes to user traffic.
During T0 until T0+60minutes, this favicon should be considered as
"fresh", in terms of caching.
The single value of 60 in the refresh_pattern line "max" field along
with override-expire override-lastmod meets the above criteria.
However as I said earlier, freshness does not guarantee a HIT. There
are many other HTTP features which need to be considered on top of
that freshness to determine whether it HITs or MISSes.
After T0+60minutes, this favicon should be considered as "stale", in
terms of caching, and should be re-fetched by Squid, upon request.
There is no such thing as a refetch in HTTP caching.
There is only MISS or REFRESH. The revalidation may happen
transparently at any time and you never see it.
The favicon would be cached even if the original server explicitly
instructed not to cache nor store the favicon.
The refresh_pattern ignore-private and ignore-no-store meet that
criterion, in a way. The object resulting from the current transaction
will be left in the cache regardless of what might happen to it on any
future or past ones.
Yes, I know it might be considered a bad practice,
As stated your caching policy is not particularly bad. The use/need of
ignore-private and ignore-no-store is the only bad thing and the
strong sign that you are possibly violating some law...
and perhaps illegal to some readers,
... so consulting a lawyer is recommended.
We provide those controls in Squid for specific use-cases. Yours may
or may not be one of those; it is hard to tell from a fake example.
but I assure you that the other servers (the real web servers) that
provide the responses, are business partners and they gave me their
approval to override their caching policy. However, they don't want
to change their configuration and it's totally up to me to create my
caching layer.
They may not be willing to alter their public cache controls, but
Surrogate-Control features available in Squid offer an alternative
targeted caching policy to be emitted by their servers for your proxy.
This assumes they are willing to set up such an alternative policy and
you configure your proxy as a reverse-proxy for their traffic.
Your whole problem would be solved by the upstream simply sending:
Surrogate-Control: max-age=3600;your_proxy_fqdn
And another thing: the clients are not web browsers. The clients
consuming these resources ("favicons" for sake of simplicity) are
software components using HTTP as their transport protocol.
Thanks for any advice on the subject.
Well...
you have a set of URLs with undefined behaviour differences from the
notably special-case ones in your example ...
being fetched by clients with undefined but very big behaviour
differences from the UA which would be fetching your example URLs ...
... and you want us to help with specific details about why your
config is not working as expected?
As the old cliche goes "insufficient data".
Amos