Re: is this possible?

Thanks for the hint.

I'm using the collapsed_forwarding and quick_abort_* options and seem to be having some luck. There are, however, oddities: it seems that additional requests are sometimes still being forwarded to the underlying web server. Other than "squidclient mgr:active_requests", are there any other ways of monitoring the status of subsequent requests that should be deferred to wait for the original?
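
One rough log-side check (a sketch only, assuming the default native access.log format and location): tail the log and watch the result code for each request. If collapsing is working, repeated requests for a URL that is already being fetched should not each show up as a fresh TCP_MISS.

    tail -f /var/log/squid/access.log | awk '{print $4, $7}'

In the native format, field 4 is the result code (e.g. TCP_MISS/200 or TCP_HIT/200) and field 7 is the URL.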

thanks

JR

Amos Jeffries wrote:
botemout@xxxxxxxxx wrote:
Greetings

We have an unusual problem.

One of our clients uses a program to request map data from one of our servers. We have a reverse proxy squid cache in front of it. Client hits squid, which gets data from web server and serves it back to client.

Here's the problem. The client uses broken code that times out before the map data is finished processing. It then resubmits the request. It'll do this about 10 times before finally dying. Each new request, however, also times out, so nothing gets done and lots of time is wasted (a single one of these requests might take 5 minutes to return).

So, I've proposed that we use url_rewrite_program to pass the request to a program which makes the request to the webserver (and DOESN'T time out!); it then returns the same URL, and by this time the object is in the cache, so the original squid process returns the data from the cache.

Is this craziness? Anyone do anything like this before? Or is there some better, easier way to handle this?

There are a couple of things worth trying:

* quick_abort_min/quick_abort_max. Make sure they are set to allow squid to finish the first request even after the client is gone. This will at least give the retries shorter generation times.

* If you are using Squid-2, collapsed_forwarding is your very best friend. Coupled with the above, it will ensure only the first request gets to the backend and the later ones get served whatever data it came up with.


* Personally I'd also stick a 5-10 minute minimum cache time on all results if the backend takes 5 minutes to generate.

Short caching times still let you update relatively quickly, but you want the results to stick around just long enough to catch the follow-up retries; that way, if all your clients suddenly got this terrible software, they wouldn't kill your service with retries. A combined squid.conf sketch of these suggestions follows.
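
As a rough illustration only (the map URL pattern is a placeholder, and the exact defaults vary by Squid version):

    # Finish a fetch even if the requesting client disconnects;
    # -1 means never abort a transfer once it has begun.
    quick_abort_min -1 KB

    # Squid-2: merge concurrent requests for the same URI into a
    # single backend fetch.
    collapsed_forwarding on

    # Hypothetical pattern for the map endpoint: treat responses
    # without explicit expiry information as fresh for at least
    # 10 minutes (min/max are in minutes).
    refresh_pattern -i ^http://maps\.example\.com/ 10 20% 60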

Amos
