tookers wrote:
>
> Henrik Nordstrom-5 wrote:
>>
>> Mon 2009-10-05 at 08:10 -0700, tookers wrote:
>>
>>> Hi Henrik,
>>> Thanks for your reply. I'm getting TCP_MISS/200 for these particular
>>> requests, so the file exists on the back end.
>>
>> Are you positively sure you got that on the first one? Not easy to tell
>> unless there is absolutely no other status code reported in the logs for
>> that URL. The access.log entry for the first may well be long after the
>> crowd.
>>
>>> Squid seems unable to store the object in cache, quite possibly due to
>>> a lack of free fd's, or possibly due to the high traffic volume.
>>
>> Yes, both may cause Squid not to cache an object on disk. cache.log
>> should give indications if fd's are the problem, and in most cases also
>> when I/O load is the issue.
>>
>> But neither lack of fd's nor high I/O load prevents Squid from handling
>> the object as cachable; that is not dependent on being able to store the
>> object on disk. But maybe there are glitches in the logic there... how
>> collapsed_forwarding behaves on cachable objects which Squid decides
>> should not be cached for some reason has not been tested.
>>
>>> Is there any way to control the 'storm' of requests? I.e. possibly
>>> force the object to cache (regardless of Pragma: no-cache etc.) or have
>>> some sort of timer/sleeper function to allow only a small number of
>>> requests, for a particular request, to go to the backend?
>>
>> It's a tricky balance trying to address the problem from that angle.
>>
>> Forcing caching of otherwise uncachable objects is one thing. This you
>> can do via refresh_pattern based overrides, but the best way is to
>> convince the server to properly say that the object may be cached...
>> though from your description that's not the problem here.
>>
>> The problem with a timer/sleeper is that you may then end up with a very
>> long work queue of pending clients if the object never becomes cachable.
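For reference, a minimal sketch of the refresh_pattern based override Henrik mentions, combined with collapsed forwarding to merge a request storm (assuming Squid 2.6/2.7 as implied by the thread; the URL pattern and TTL values are illustrative placeholders, not from the discussion):

```
# Hypothetical squid.conf fragment: treat matching objects as fresh for
# up to 60 seconds even if the origin marks them uncachable.
# refresh_pattern [-i] regex min percent max [options]
refresh_pattern -i \.xml$ 60 100% 60 override-expire ignore-reload ignore-no-cache ignore-private

# Merge concurrent cache misses for the same URL into a single
# back-end request while the object is being fetched.
collapsed_forwarding on
```

As Henrik notes, an override like this is a workaround; having the origin emit correct caching headers is the cleaner fix.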
>>
>> Regards
>> Henrik
>>
>
> Hi Henrik,
>
> We've had this issue occur on several occasions, and each time I've
> checked the access.log I can see the page scrolling with TCP_MISS/200 for
> the same URL. A check of the back-end docroot confirms the file exists,
> which didn't make sense as to why it wouldn't cache.
>
> I probably should have described the nature of the requests in an earlier
> post, so here we go...
>
> Squid is serving up a Flash application that reads in XML; many XML files
> are generated on the back end every minute or so. The XML files are
> generated with an epoch timestamp in the file name, and the Flash gets a
> 'master' XML file every 30 seconds telling it which XML files it needs to
> request. The files requested will only have a lifetime of 60 seconds, as
> the next set of files will be ready (with a new timestamp) within 60
> seconds. So on the Squids I have these requests caching for 1 hour, as
> the likelihood of these files being updated again is practically zero.
>
> The back-end servers have been configured to set cache headers; in this
> case these files have their expiry set to access plus 1 hour.
>
> I'm currently load testing a similar system to see if I can recreate the
> problem seen in the 'live' environment.
>
> Will let you know the results of that testing.
>
> Thanks,
> Tookers

Hi there,

Just an update. I completed load testing last week and did not see any
issues whatsoever. Even with a scarce number of file descriptors and a
high number of established connections, I could not recreate the problems
seen on our production Squids.

It looks as though the problems I encountered are possibly due to an
application server sitting in front of the Squids that is potentially
throttling traffic. During the period of high load the application server
reported severe loading and was running at pretty much full capacity,
around 2.3Gbps, before dropping packets.
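The "expire set to access plus 1 hour" wording matches Apache's mod_expires syntax, so a back-end configuration along these lines may be what is in use (an assumption; the thread does not name the back-end server, and the file-match pattern is a placeholder):

```
# Hypothetical Apache httpd.conf fragment using mod_expires.
# Sends "Expires: <access time + 1 hour>" and a matching
# Cache-Control max-age for the timestamped XML files.
ExpiresActive On
<FilesMatch "\.xml$">
    ExpiresDefault "access plus 1 hour"
</FilesMatch>
```

With headers like these, the timestamped files are explicitly cachable for the full hour, so no refresh_pattern override should be needed on the Squid side.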
We are due to update the application server kernel, which has been tuned to
allow an additional 900Mbps of capacity. I have also re-tuned the Squids to
give a bit more of a cushion when we get high amounts of traffic.

Many thanks for your advice and info Henrik, much appreciated.

Thanks,
tookers

--
View this message in context: http://www.nabble.com/Reverse-Proxy%2C-sporadic-TCP_MISS-tp25659879p25897814.html
Sent from the Squid - Users mailing list archive at Nabble.com.