On 10/24/2017 06:34 AM, Vieri wrote:
> Next time I get a 100% squid process that brings my proxy to a crawl
> HTTP-wise, what can I try in order to get more info, and possibly see
> the cause of this?

Here are two additional tricks to consider:

* If 100% CPU usage lasts for more than a few seconds, then attaching gdb to the running Squid worker and generating a few backtraces (a few seconds apart) may identify the code that got stuck. Make sure you give gdb a non-stripped Squid binary when doing that. The Squid wiki should have more instructions about the whole process.

  Please note that the above is less likely to help if your Squid continues to respond to new requests while you see ~100% CPU usage. You may increase your chances by blocking new traffic from reaching the problematic Squid (e.g., by redirecting all traffic to other Squids that you have in load-balancing or hot-standby mode, or by allowing all traffic to bypass Squid). If you cannot easily remove a problematic Squid from the user traffic path, then use this problem as an excuse to fix your deployment architecture so that you can :-).

* Another potentially useful trick is to log mgr:events every minute or so. Perhaps 100% CPU usage is associated with a particular regularly scheduled event? Please note that some modern Squids abuse the events API for frequently occurring notifications. If you see hundreds or more events in your log, this trick may be too expensive/intrusive for your environment.

HTH,

Alex.
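P.S. The gdb trick above could be scripted roughly as follows. This is a minimal sketch, not an official Squid procedure: it assumes gdb is installed, the Squid binary is not stripped, and you already know the busy worker's PID; the function name, output filenames, trace count, and interval are all illustrative.

```shell
# Sketch: capture several backtraces from a busy Squid worker, a few
# seconds apart, so stuck code shows up in repeated frames.
# Hypothetical helper; adjust paths/intervals for your system.
squid_backtraces() {
    local pid="$1" count="${2:-3}" interval="${3:-5}"
    local i
    for i in $(seq 1 "$count"); do
        # --batch makes gdb exit after running the given commands;
        # "thread apply all bt" dumps a backtrace for every thread
        # of the attached process.
        gdb --batch -p "$pid" -ex 'thread apply all bt' \
            > "squid-bt.$pid.$i.txt" 2>&1
        sleep "$interval"
    done
}

# Example: 3 traces of worker PID 12345, 5 seconds apart:
# squid_backtraces 12345 3 5
```

If the same function names appear near the top of most traces, that is the code to look at.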
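P.P.S. And the mgr:events logging could look roughly like this. Again a sketch under assumptions: squidclient can reach the proxy on 127.0.0.1:3128 (adjust host, port, and any cachemgr password for your setup), and the log path and function name are hypothetical.

```shell
# Sketch: append a timestamped copy of the cache manager "events"
# report to a log, so a later 100% CPU episode can be matched against
# regularly scheduled events. Hypothetical helper; adjust for your setup.
log_squid_events() {
    local logfile="${1:-/var/log/squid/events.log}"
    {
        # UTC timestamp header so entries can be correlated with CPU spikes
        echo "=== $(date -u +%Y-%m-%dT%H:%M:%SZ) ==="
        squidclient -h 127.0.0.1 -p 3128 mgr:events
    } >> "$logfile" 2>&1
}

# Run from cron once a minute, e.g.:
# * * * * * /usr/local/sbin/log-squid-events.sh
```

If the log grows by hundreds of entries per snapshot, your Squid is one of the versions that floods the events API, and this trick may be too noisy to be useful.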