
Re: Rate limiting bad clients?


 



On 9/08/2016 5:39 p.m., Dan Charlesworth wrote:
> Hi all,
> 
> This is more of a squid-adjacent query. Hopefully relevant enough for someone here to help…
> 
> I’m sick of all these web apps that take it upon themselves to hammer proxies when they don’t get the response they want — for example, when they’re asked to authenticate. On big networks behind a forward proxy, there are always a few computers with some software making dozens of identical, failing requests per second.
> 
> - What’s a good approach for rate limiting the client computers which are doing this?
> - Can anyone point to a good tutorial for this using, say, iptables, if that’s appropriate?
> 
> Any advice welcome.

HTTP being stateless, Squid does not track enough per-client history to do
anything like that on its own.

I've been suggesting that people add an external ACL helper which tracks
requests per client IP and tells Squid whether to accept or reject any
given request based on that client's history.
Recent Squid releases bundle ext_delayer_acl, a Perl script that can be
adapted for this type of thing.
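A minimal sketch of such a helper, in Python rather than Perl, might look like the following. It assumes Squid passes only %SRC (the client IP) per lookup, and the threshold and window values are illustrative placeholders, not recommendations:

```python
#!/usr/bin/env python3
# Sketch of a Squid external ACL helper that rate limits by client IP.
# Protocol: Squid writes one line per lookup (here just the client IP
# from %SRC), and the helper answers "OK" (allow) or "ERR" (deny).
import sys
import time
from collections import defaultdict, deque

WINDOW = 10.0      # seconds of history kept per client (illustrative)
MAX_REQUESTS = 50  # requests allowed per WINDOW before denying (illustrative)

history = defaultdict(deque)  # client IP -> timestamps of recent requests

def over_limit(ip, now):
    """Record this request and report whether the client exceeded the limit."""
    q = history[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()            # drop timestamps outside the window
    q.append(now)
    return len(q) > MAX_REQUESTS

def main():
    for line in sys.stdin:
        fields = line.strip().split()
        ip = fields[0] if fields else ""
        verdict = "ERR" if ip and over_limit(ip, time.time()) else "OK"
        sys.stdout.write(verdict + "\n")
        sys.stdout.flush()     # Squid expects one unbuffered reply per line

if __name__ == "__main__":
    main()
```

Wiring it up would be the usual external_acl_type / acl / http_access deny combination in squid.conf, with ttl=0 so verdicts aren't cached (helper path and names here are hypothetical).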

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



