On 07/01/2025 14:55, Alex Rousskov wrote:
On 2025-01-07 04:49, Tony Albers wrote:
Is it possible in Squid to ensure that a badly behaving backend
application doesn't eat up all of Squid's resources?
Yes, especially if you know about that application behavior in advance.
You can configure Squid to start denying requests for the problematic
application once the number of concurrent requests for that application
exceeds some threshold.
You will probably have to use external ACLs to track that concurrency
level. I do not have a blueprint ready, but it should be doable in
principle.
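As a rough sketch of that idea (no blueprint, as said): a helper never learns when a request *finishes*, so it cannot see true concurrency; a requests-per-window cap per destination is a workable proxy. The helper path, the `slow_app` domain, and the limits below are all placeholders you would adapt.

    # squid.conf sketch (hypothetical helper path and domain)
    external_acl_type app_guard ttl=0 negative_ttl=0 children-max=1 %DST /usr/local/bin/app_guard.py
    acl slow_app dstdomain app.example.com
    acl app_ok external app_guard
    http_access deny slow_app !app_ok

And a minimal helper speaking Squid's line-based OK/ERR protocol:

```python
#!/usr/bin/env python3
"""Sliding-window rate limiter as a Squid external ACL helper (sketch).

Squid sends one %DST value per line; we answer OK or ERR. WINDOW and
LIMIT are illustrative values, not recommendations.
"""
import sys
import time
from collections import defaultdict, deque

WINDOW = 10.0   # seconds
LIMIT = 50      # max requests per destination per window

def allow(history, key, now, window=WINDOW, limit=LIMIT):
    """Record one request for `key`; return True if it is within limits."""
    q = history[key]
    while q and now - q[0] > window:
        q.popleft()            # expire entries older than the window
    if len(q) >= limit:
        return False
    q.append(now)
    return True

def main():
    history = defaultdict(deque)
    for line in sys.stdin:
        dst = line.strip()
        verdict = "OK" if allow(history, dst, time.time()) else "ERR"
        sys.stdout.write(verdict + "\n")
        sys.stdout.flush()     # Squid reads one answer per request line

if __name__ == "__main__":
    main()
```

Note the helper keeps state in memory, so `children-max=1` keeps a single counter; with more helper children each would count independently.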
For example: at work we have an Apache reverse proxy in front of a number
of backend hosts. If one of the backend applications misbehaves, all of
Apache's worker processes can end up tied down by that application,
leaving Apache hung and all sites offline.
AFAIK there is no way to prevent this in Apache.
Squid worker processes are not dedicated to a single request or a single
application, so, as Matus UHLAR has already said, the above scenario is
not going to happen with Squid. However, an application can still exhaust
other resources such as socket descriptors or memory, so Squid can be
slowed down in a similar scenario unless you configure it specially.
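As an illustration of that "configure it specially" point, Squid's timeout directives can cap how long a stalled backend holds descriptors and memory. The values below are illustrative only, not recommendations:

    # squid.conf sketch: limit how long a slow backend can hold resources
    connect_timeout 10 seconds    # give up quickly on unresponsive backends
    read_timeout 1 minute         # abort transfers that stall mid-response
    client_lifetime 30 minutes    # hard ceiling on any single client connection

The right numbers depend on your slowest legitimate backend responses.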
HTH,
Alex.
But can Squid handle this scenario so that only the site with the
misbehaving application goes offline, without pulling the other sites
down with it?
I understand that Squid and Apache work differently, but that's not
really important to me. I just want to use the best tool for the job.
TIA,
/tony
Thanks Alex and Matus, much appreciated.
/tony
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
https://lists.squid-cache.org/listinfo/squid-users