On 18/08/2016 4:07 a.m., Alex Rousskov wrote:
> On 08/17/2016 09:02 AM, Amos Jeffries wrote:
>
>> Your Squid is not even getting far enough to apply security rules to
>> the garbage traffic. It is basically just doing: accept() connection,
>> unmangle the NAT/TPROXY details, read(2) some bytes, try to parse -
>> bam - generate and send error page, close the TCP connection and log
>> the event.
>
> *If* just a few clients doing the above can have a serious effect on
> overall performance of a Squid instance running on decent hardware,
> then we need to fix or optimize something. There is little Squid can
> do against a powerful DDoS, but a few broken clients rarely mimic that.
>

I'm not convinced that the few req/sec really is that small. It could be
5 proper HTTP req/sec plus some hundreds of attempts to connect with
non-HTTP transactions. The latter won't show up in the mgr:info report
or the SNMP req/sec stats (which count HTTP requests/sec); they will
only appear in the syscalls.sock.accepts counters of the mgr:utilization
report and (maybe) in the access.log.

Omar: can you clarify how you are identifying the req/sec rates?

>
>> About the only thing you could do to speed it up is locate the error
>> page templates and remove their contents.
>
> Also, *if* the clients do not open new connections until their old
> connections are closed, then you may be able to slow them down
> considerably by delaying those error responses. It may be possible to
> do that with an external ACL helper (that delays responses) and
> http_reply_access rules that target those specific error pages.
>
>
> Disclaimer: I am not implying that the two conditions marked with
> "*If*" above are true. I have not checked them.

I don't think the delayer approach will work, because these are parse
error/abort responses that do not go near any ACL processing.

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
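
For anyone who wants to experiment with the delayer idea Alex describes,
here is a minimal sketch, assuming a hypothetical helper script named
delay_helper.py hooked in via external_acl_type. The helper simply sleeps
before answering OK, which stalls whichever access check depends on it.
The directive names used (external_acl_type, http_reply_access, the
http_status ACL type) are real Squid configuration, but the exact wiring,
status codes and timings are illustrative only, and, as noted above,
parse-error pages generated before ACL processing never reach
http_reply_access, so they would not be slowed by this.

#!/usr/bin/env python3
# delay_helper.py -- hypothetical "delayer" external ACL helper (sketch only).
#
# Possible squid.conf wiring (illustrative values; adjust to taste):
#
#   external_acl_type delayer ttl=0 negative_ttl=0 children-max=20 %URI /usr/local/bin/delay_helper.py
#   acl delayed external delayer
#   acl error_reply http_status 400 403 409
#   http_reply_access allow error_reply delayed
#   http_reply_access allow all
#
# The helper always answers OK, but only after sleeping, so evaluating the
# "delayed" ACL stalls the matching reply. Started without concurrency=,
# each child handles one lookup at a time, so children-max bounds how many
# replies can be held up in parallel.

import sys
import time

DELAY_SECONDS = 10  # arbitrary value chosen for this sketch

for line in sys.stdin:
    if not line.strip():
        continue
    time.sleep(DELAY_SECONDS)      # the actual delay
    sys.stdout.write("OK\n")       # admit the lookup after the pause
    sys.stdout.flush()             # Squid reads helper replies line by line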