On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
> Thank you very much for your detailed explanations. We want to use Squid
> in order to accelerate our automated software setup processes via Puppet.
> Actually Squid will host only a very small number of large objects
> (10-20). Its purpose is not to cache web traffic or little objects.

Ah, Squid does not "host", it caches. The difference may seem trivial at
first glance, but it is the critical factor in whether a proxy or a local
web server is the best tool for the job.

From my own experience with Puppet, yes, Squid is the right tool. But only
because the Puppet server was using relatively slow Python code to generate
objects and was not doing server-side caching of its own. If that situation
has changed in recent years, then Squid's usefulness will also have changed.

> The hit ratio for all the hosted objects will be very high, because most
> of our VMs require the same software stack.
> I will update my config according to your comments! Thanks a lot!
> But actually I still have no idea why the download rates are so
> unsatisfying. We are still in the test phase. We have only one client
> that requests a large object from Squid, and the transfer rates are lower
> than 1 MB/sec during cache build-up, without any form of concurrency.
> Have you got an idea what could be the source of the problem here? What
> causes the Squid process's 100 % CPU usage?

I did not see any config in your case that would trigger the known 100% CPU
bugs (e.g. HTTPS going through delay pools guarantees 100% CPU). Which
leads me to think it is probably related to memory shuffling.
(<http://bugs.squid-cache.org/show_bug.cgi?id=3189> appears to be the same
issue, and it is still unidentified.)

As for speed: if the CPU is maxed out by one particular action, Squid won't
have time for much other work. So things go slow.

On the other hand, Squid is also optimized for relatively high traffic usage.
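For what it's worth, a minimal squid.conf sketch for caching a handful of
large objects might look something like the below. The sizes and the cache
path are illustrative guesses only; adjust them to your disk and traffic:

    # Illustrative values - tune for your installation.
    maximum_object_size 2 GB                       # let large packages be cached at all
    cache_dir ufs /var/spool/squid 51200 16 256    # ~50 GB of on-disk cache
    maximum_object_size_in_memory 512 KB           # keep the big objects on disk, not RAM
    cache_mem 256 MB                               # modest memory cache for hot small objects

The key point is that the default maximum_object_size is far too small for
software packages, so without raising it those objects are never cached.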
For very small client counts (such as under 10) it is effectively running
in idle mode 99% of the time. The I/O event loop starts pausing in 10 ms
blocks, waiting to see if some more useful amount of work can be done at
the end of the wait. That can lead to apparent network slowdown, as TCP can
pick up as much as 10 ms of delay per packet. For example, at one 1500-byte
packet per 10 ms, throughput tops out at roughly 150 KB/sec, which is in
the range you are seeing. But that should not be visible in the CPU numbers.

That said, one client can still max out Squid's CPU and/or NIC throughput
capacity on a single request if it is pushing/pulling packets fast enough.

If you can attach the strace tool to Squid while it is consuming the CPU,
there might be some better hints about where to look.

Cheers
Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users