On 19/10/2012 7:50 a.m., Leonardo Rodrigues wrote:
Hi,
I have some squid servers running at some companies and, from time
to time, I face some client with some bad software that generates LOTS
of requests per second. And by a LOT I mean, sometimes, 90-100 RPS from
a single client. That usually happens on requests that are DENIED, so
they are processed quickly by squid.
Good. Squid can handle 10x-1000x that req/sec rate with easily
rejected requests.
I was initially thinking of some kind of control on these, but
hey, the requests are already being denied; there's nothing else I could
do based on deny/allow.
So I was thinking of some kind of delay_pool, but based on the number
of requests per second. The idea was that when a client (IP address)
reached N requests per second, squid would introduce some random (or
fixed) delay on the replies, thus making the client slow down its
request rate a little.
I'm pretty sure that this kind of configuration cannot be achieved
using only normal squid parameters, but maybe there's some script that
can be used with external_acl that can help me in these situations.
Have you ever faced a situation like this? If so, what did you do?
The problem is that there is no way to identify the difference between
one of these broken clients, an intentional DoS attack, and another
proxy relaying requests for more than one client itself. The three cases
need very different handling, and care must be taken with each not to
get it wrong. So we do not provide any single directive to limit these.
As you guessed, you could pass all requests via an external_acl_type
helper that makes the decision about when to respond quickly and when to
add extra delay - all it needs to do is hold its response to that request.
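
A minimal sketch of that approach, as squid.conf wiring plus a small
Python helper. The helper name, path, thresholds and delay value are all
invented for illustration; only the %SRC format code and the OK/ERR
helper protocol are squid's own:

    external_acl_type ratecheck ttl=0 negative_ttl=0 children-max=20 \
        %SRC /usr/local/bin/rate_delay.py
    acl overlimit external ratecheck
    http_access deny overlimit

    #!/usr/bin/env python3
    # rate_delay.py - hypothetical external_acl_type helper. Answers ERR
    # immediately while a client IP stays under MAX_RPS, and sleeps before
    # answering OK (match -> deny) once it goes over. Numbers are made up.
    import sys
    import time
    from collections import defaultdict, deque

    MAX_RPS = 50      # assumed per-IP threshold, requests per WINDOW
    WINDOW = 1.0      # sliding window length, in seconds
    PENALTY = 2.0     # fixed delay added to over-limit replies, in seconds

    recent = defaultdict(deque)  # client IP -> timestamps of its requests

    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        ip = parts[0]            # %SRC is the only token squid sends here
        now = time.time()
        stamps = recent[ip]
        stamps.append(now)
        while stamps and stamps[0] < now - WINDOW:
            stamps.popleft()     # drop requests older than the window
        if len(stamps) > MAX_RPS:
            time.sleep(PENALTY)  # hold the reply; squid waits on the lookup
            print("OK", flush=True)
        else:
            print("ERR", flush=True)

Note that a helper running without the concurrency option serves one
lookup at a time, so sleeping blocks that child; that is why children-max
is raised above. Each child also keeps its own counters, so with several
children the per-IP counts are only approximate. The ttl=0 and
negative_ttl=0 options stop squid caching the answers, so the helper
really is consulted on every request.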
One other way is client_delay_pools, which is available in Squid-3.2 and
later. It operates similarly to delay_pools, but on the client connections.
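
For completeness, a minimal sketch of that, with invented numbers. Note
that client_delay_pools limits the bytes per second read from each client
IP's connections rather than requests per second, so it slows an abusive
client down indirectly:

    acl localnet src 192.168.0.0/16
    client_delay_pools 1
    # buckets start half full
    client_delay_initial_bucket_level 50
    # pool 1: restore 8000 bytes/sec, bucket holds at most 16000 bytes
    client_delay_parameters 1 8000 16000
    client_delay_access 1 allow localnet
    client_delay_access 1 deny all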
Amos