Re: read_timeout and "fwdServerClosed: re-forwarding"

Sorry for the late reply, I was seriously sick last week and basically
dead to the world...

: > The problem I'm running into is figuring out a way to get the analogous 
: > behavior when the origin server is "up" but taking "too long" to respond 
: > to the validation requests.   Ideally (in my mind) squid would have a 

: Hmm.. might be a good idea to try Squid-2.HEAD. This kind of thing
: behaves a little differently there than in 2.6..

Alas ... I don't think I could convince my boss to get on board with the idea
of using a devel release.  Then again, I'm not too clear on how
branch/release management is done in squid ... do merges happen from
2.HEAD to 2.6 (in which case, does 2.6.STABLE17 have the behavior you are
referring to?), or will 2.HEAD ultimately become 2.7 once it's more stable?


: > "read_timeout" was the only option I could find that seemed to relate to 
: > how long squid would wait for an origin server once connected -- but it 
: > has the retry problems previously discussed.  Even if it didn't retry, and 
: > returned the stale content as soon as the read_timeout was exceeded, 
: > I'm guessing it wouldn't wait for the "fresh" response from the origin 
: > server to cache it for future requests.
: 
: read_timeout in combination with forward_timeout should take care of the
: timeout part...

What do you mean by "in combination with forward_timeout"?
forward_timeout is just the 'connect' timeout for origin server requests,
right?  So I guess you mean that if I have a magic value of XX seconds
that I'm willing to wait for data to come back, I need to set
forward_timeout and read_timeout such that they add up to XX, right?  But as
you say, that just solves the timeout problem; it doesn't get me stale
content.

In my case, I'm not worried about the "connect" time for the origin server
-- if it doesn't connect right away, give up; no problem there.  It's
getting stale content returned when the total request time exceeds XX
seconds that I'm worried about (without getting a bunch of
automatic retries).


So it kind of seems like I'm out of luck, right?  My only option being to
try 2.HEAD, which *may* have the behavior I'm describing.


: > for a fresh response) -- but it doesn't seem to work as advertised (see 
: > bug#2126).
: 
: Haven't looked at that report yet.. but a guess is that the refresh
: failed due to read_timeout?

(Actually, that was totally orthogonal to the read_timeout issues ... with
refresh_stale_hit set to Y seconds, all requests are still considered cache
hits up to Y seconds after they expire -- with no attempt to validate.)
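
To illustrate the kind of setting I mean (the value is arbitrary, just for
illustration):

  # hypothetical value -- requests arriving within 30 seconds of an object's
  # expiry are still treated as cache hits, with no validation attempted
  refresh_stale_hit 30 seconds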


-Hoss
