
Re: Limit Download Accelerators using req_header and maxconn

molybtek wrote:
muhammad panji wrote:
molybtek wrote:
I'm trying to limit download accelerators from making multiple connections. The only reliable way I could think of to identify them is their use of the Range header. However, if I block all Range requests, that would also stop legitimate partial downloads, so I was thinking of using maxconn as well, so that a single use of the Range header is still allowed.

# match requests whose Range header contains "bytes"
acl Range req_header Range -i bytes
# match clients with more than 2 established connections
acl Max_Connections maxconn 2
http_access deny Range Max_Connections

The problem I'm having is that Max_Connections doesn't seem to be working. Could anyone suggest how to get it working, or other methods of limiting the use of download accelerators?
Thanks.

Why don't you use delay pools? Even if users make lots of connections, the bandwidth they can use will be limited.
Regards,
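
(For reference, a minimal class 2 delay pool along those lines might look like the sketch below; the 32000 bytes/s per-client rate and 64000-byte burst are only illustrative figures, not a recommendation.)

delay_pools 1
# class 2: one aggregate bucket plus one bucket per client IP
delay_class 1 2
delay_access 1 allow all
# aggregate unlimited (-1/-1); each client IP limited to 32000 bytes/s with a 64000-byte burst
delay_parameters 1 -1/-1 32000/64000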


We do, but delay pools only limit the bandwidth between the user and the Squid server. From the Squid server to the Internet, those multiple threads still download at maximum speed, and for some unknown reason they usually hog much of the bandwidth, leaving very little for everyone else sharing the connection.

This is getting a bit off the path of your original question, but...

Have you changed quick_abort_min, quick_abort_max or quick_abort_pct? How about read_ahead_gap?

Delay pools should not download from the 'net faster than the data is delivered to the client*.

Chris

* If the client cancels the request, but the quick_abort_(min|max|pct) directives cause Squid to finish downloading the object, the delay pool is no longer used.
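
For example (a sketch of those directives only, assuming the goal is to keep the server-side fetch tied to what the delay-pooled client actually receives):

# abort the server-side fetch as soon as the client stops reading,
# rather than finishing the object at full speed
quick_abort_min 0 KB
quick_abort_max 0 KB
# don't read much further ahead of what has been delivered to the client
# (16 KB is the default)
read_ahead_gap 16 KB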
