On 1/01/2014 12:15 a.m., mxx@xxxxxxxxxxx wrote:
> Hi,
>
> Maybe because most of the time squid is used differently I'm having
> trouble finding an answer to this question.
> It would be very nice if someone could help me out with this :)
>
> I only use it to filter ads and to redirect traffic to some domains
> through different uplinks. I don't really need the caching.
>
> Squid 3.4 does all of that perfectly (Linux 3.12) in intercept mode.
> But download managers using multiple connections concurrently to
> download 1 file are only able to use 1 connection/destination anymore.

Squid does not impose any such limitation, unless you have explicitly
configured a maxconn ACL to prohibit more than one connection per
client IP. The default behaviour of a non-caching Squid should be
exactly what you are asking for.

NOTE: download managers which open parallel connections are *degrading*
the TCP congestion controls and reducing available network resources
across the Internet. Reducing their parallel requests to a single fetch
is actually a good thing.

> What I've found so far are only options like range_offset_limit in
> regards to cache management.

If you have configured that range limit or the related quick_abort_*
settings then they may cause behaviour similar to what you describe.
It is not exactly a prohibition, but Squid downloading the entire
object from the start until the requested range is reached. Doing that
N times in parallel can slow the 2..N+1 transactions down until they
appear to be one-at-a-time occurrences.

> Is it possible in any way to let squid pass through and simply ignore
> all connection requests to destinations with certain Content-Types so a
> client could connect multiple times to the destination concurrently?

The Content-Type is not known until after the request has been made and
the reply received back.
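As an illustration of that ordering constraint: the only ACL type that
can see the Content-Type is rep_mime_type, and it can only be applied
to replies, i.e. after the response headers have already arrived
through the proxy. A hypothetical sketch (the ACL name is made up):

    # Sketch only: rep_mime_type matches on the *reply*, so at best it
    # can deny delivery of a response after the fact. It cannot decide
    # up front whether to intercept the connection at all.
    acl video_reply rep_mime_type ^video/
    http_reply_access deny video_reply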
What you ask is like deciding whether to make an investment now based
on next year's stock exchange prices (the URL can give hints of
likelihood, but is not very reliable).

Amos
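For reference, a minimal squid.conf sketch of the directives discussed
above (the ACL name is hypothetical, and the quick_abort_* values shown
are the usual defaults). The first pair is a configuration that would
*cause* the one-connection-per-client symptom described; the rest leaves
the range handling at values that forward Range requests untouched:

    # Hypothetical ACL name. A rule like this is the only way Squid
    # itself limits parallel connections from one client IP:
    acl too_many maxconn 1
    http_access deny too_many

    # Forward Range requests as-is instead of fetching each object
    # from the start (0 is the safe value for this use case):
    range_offset_limit 0

    # Abort-handling defaults; aggressive values here can also keep
    # Squid fetching data the client no longer wants:
    quick_abort_min 16 KB
    quick_abort_max 16 KB
    quick_abort_pct 95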