Re: [squid-users] Reverse Proxy, sporadic TCP_MISS

Henrik Nordstrom-5 wrote:
> 
> Mon 2009-10-05 at 08:10 -0700, tookers wrote:
> 
>> Hi Henrik,
>> Thanks for your reply. I'm getting TCP_MISS/200 for these particular
>> requests so the file exists on the back-end,
> 
> Are you positively sure you got that on the first one? Not easy to tell
> unless there is absolutely no other status code reported in the logs for
> that URL. The access.log entry for the first may well be long after the
> crowd.
> 
>> Squid seems unable to store the
>> object in cache (quite possible due to a lack of free fd's), or possibly
>> due
>> to the high traffic volume.
> 
> Yes, both may cause Squid to not cache an object on disk. cache.log
> should give indications if fd's are the problem, and in most cases
> also when I/O load is the issue.
> 
> But neither lack of fds nor high I/O load prevents Squid from handling
> the object as cachable. That is not dependent on being able to store the
> object on disk. But maybe there are glitches in the logic there... how
> collapsed_forwarding behaves on cachable objects which Squid decides
> should not be cached for some reason has not been tested.
> 
>> Is there any way to control the 'storm' of requests? I.e. possibly
>> force the object to cache (regardless of pragma: no-cache etc.) or have
>> some sort of timer / sleeper function to allow only a small number of
>> requests for a particular URL to go to the backend?
> 
> It's a tricky balance trying to address the problem from that angle.
> 
> Forcing caching of otherwise uncachable objects is one thing. This you
> can do via refresh_pattern based overrides, but the best way is to
> convince the server to properly say that the object may be cached... but
> from your description that's not the problem here.
> 
> The problem with timer/sleeper is that you then may end up with a very
> long work queue of pending clients if the object never becomes cachable.
> 
> Regards
> Henrik


Hi Henrik,

We've had this issue occur on several occasions, and each time I've
checked the access.log I can see the page scrolling with TCP_MISS/200 for
the same URL. A check on the back-end docroot confirms the file exists, so
it didn't make sense why it wouldn't cache.
I probably should have described the nature of the requests in an earlier
post, so here we go...
Squid is serving up a Flash application that reads in XML; many XML files
are generated on the back end every minute or so. The XML files are
generated with an epoch timestamp in the file name, and the Flash
application fetches a 'master' XML file every 30 seconds telling it which
XML files it needs to request. The files requested will only have a
lifetime of 60 seconds, as the next set of files will be ready (with a new
timestamp) within 60 seconds. So on the Squids I have these requests
caching for 1 hour, as the likelihood of these files being updated again
is practically zero.
The back-end servers have been configured to set cache headers; in this
case these files have their Expires set to access time plus 1 hour.
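For reference, the kind of override Henrik mentions could be sketched in
squid.conf roughly like this (the XML pattern and the 60/3600-second
values here are assumptions matching the setup described above, not our
exact production config):

```
# Hypothetical sketch: cache the timestamped XML files for 1 hour,
# ignoring client Pragma/Cache-Control headers that would bypass the
# cache. The "\.xml$" pattern and TTLs are illustrative assumptions.
refresh_pattern -i \.xml$ 60 100% 3600 ignore-reload ignore-no-cache

# Merge concurrent requests for the same uncached URL into a single
# back-end fetch, which is what should damp the request storm.
collapsed_forwarding on
```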

I'm currently load testing a similar system to see if I can recreate the
problem seen in the 'live' environment.
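As a back-of-the-envelope illustration of what that test is trying to
reproduce (the 500 req/s rate is a made-up example figure, not a
measurement from the live system):

```python
# Rough model of the request storm: if a file is requested at `rate`
# req/s over its 60-second lifetime, every request becomes a back-end
# hit on a persistent miss, but only the first one does when the object
# caches. The 500 req/s figure below is an assumed example value.

def backend_hits(rate_per_s: int, lifetime_s: int, cached: bool) -> int:
    """Back-end requests generated over the file's lifetime."""
    total = rate_per_s * lifetime_s
    return 1 if cached else total

storm = backend_hits(rate_per_s=500, lifetime_s=60, cached=False)
calm = backend_hits(rate_per_s=500, lifetime_s=60, cached=True)
print(storm, calm)  # 30000 vs 1 back-end hits
```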

Will let you know the results of that testing.

Thanks,
Tookers
-- 
View this message in context: http://www.nabble.com/Reverse-Proxy%2C-sporadic-TCP_MISS-tp25659879p25783919.html
Sent from the Squid - Users mailing list archive at Nabble.com.


