botemout@xxxxxxxxx wrote:
Greetings
We have an unusual problem.
One of our clients uses a program to request map data from one of our
servers. We have a reverse proxy squid cache in front of it. Client
hits squid, which gets data from web server and serves it back to client.
Here's the problem. Client uses broken code that times out before the
map data is finished processing. It then resubmits the request. It'll
do this about 10 times before finally dying. Each new request, however,
also times out, so nothing is done and lots of time is wasted (a single
one of these requests might take 5 minutes to return).
So, I've proposed that we use url_rewrite_program to pass the request to
a program which makes the request to the web server (and DOESN'T
time out!). It then returns the same URL, but by this time the object is
in the cache and the original squid process returns the data from the
cache.
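Roughly, I'm imagining the helper looking something like this (an
untested sketch of the one-URL-per-line rewriter protocol; it assumes
the fetch is routed so the reply actually lands in squid's cache):

    #!/usr/bin/env python3
    # Hypothetical url_rewrite_program helper (sketch only).
    # Squid writes one request per line on stdin:
    #   URL client_ip/fqdn ident method ...
    # We fetch the URL ourselves with no timeout, then echo the
    # URL back unchanged, which squid treats as "no rewrite".
    import sys
    import urllib.request

    for line in sys.stdin:
        url = line.split()[0]
        try:
            # Block until the backend finishes, however long it takes.
            urllib.request.urlopen(url).read()
        except Exception:
            pass  # on error, still answer squid so it isn't left hanging
        sys.stdout.write(url + "\n")
        sys.stdout.flush()  # squid reads replies line by line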
Is this craziness? Anyone do anything like this before? Or is there
some better, easier way to handle this?
There are a couple of things worth trying:
* quick_abort_min/quick_abort_max. Make sure they are set to allow
squid to finish the first request even after the client is gone. This
will at least give the retries shorter generation times (see the config
sketch after this list).
* If you are using Squid-2, collapsed_forwarding is your very best
friend. Coupled with the above it will ensure only the first request gets
to the backend and later ones get served whatever data it came up with.
* Personally I'd also stick a 5-10 minute minimum cache time on all
results if the backend takes 5 minutes to generate.
Short caching times still let you update relatively quickly, but you
want objects to stick around just long enough to catch follow-ups, so
that even if all clients suddenly got this terrible software they
wouldn't kill your service with retries.
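Something along these lines in squid.conf (a sketch for Squid-2.7; the
/mapdata pattern and the exact times are placeholders to tune):

    # Never abort a fetch just because the client went away;
    # let the object finish downloading into the cache.
    quick_abort_min -1 KB

    # Squid-2 only: merge concurrent requests for the same URL
    # into a single backend fetch.
    collapsed_forwarding on

    # Treat map replies as fresh for at least 10 minutes when the
    # backend sends no freshness information; override-expire also
    # applies that minimum when it sends a short Expires header.
    refresh_pattern -i /mapdata 10 20% 60 override-expire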
Amos
--
Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1