On 18/02/2016 2:36 a.m., Jester Purtteman wrote:
>
>>> cache_dir rock /var/spool/squid/rock/1 64000 swap-timeout=600
>>> max-swap-rate=600 min-size=0 max-size=128KB
>>>
>>> cache_dir rock /var/spool/squid/rock/2 102400 swap-timeout=600
>>> max-swap-rate=600 min-size=128KB max-size=256KB
>>>
>>> cache_dir aufs /var/spool/squid/aufs/1 200000 16 128 min-size=256KB
>>> max-size=4096KB
>>>
>>> cache_dir aufs /var/spool/squid/aufs/2 1500000 16 128 min-size=4096KB
>>> max-size=8196000KB
>>
>> NP: don't forget to isolate the AUFS cache_dir within each worker.
>> Either with the squid.conf if-else-endif syntax, or ${process_id} macros.
>
> Right now I'm only running one worker, although I was hoping to gain a
> little speed by having several diskers. Is that going to require the
> additional squid.conf syntax?

It *should* not matter if you have "workers 1" (or workers is not
configured). Once you go beyond that, it will definitely be needed.

> I had understood that to only be relevant
> to multiple workers, and as I understand it, workers don't gracefully
> share resources yet.

"Graceful" is not the word for it. It is more of an all-or-nothing
situation on a per-component basis. Components like Rock that do SMP can
be quite graceful; AUFS, on the other hand, is still a non-SMP component
and does not even protect against a broken config.

> One of my future efforts will be to attempt to
> divvy up workers by traffic (if I can figure out a way to manage it with
> ACLs or something) so that windows updates and a few other
> (statistically big file) sites are given to one worker, and the rest go
> to the other worker.

You won't achieve that in any easy way, for the same reason that you have
never been able to configure in squid.conf which traffic should bypass
the proxy entirely: by the time a Squid worker has received the message,
it is too late to send it anywhere else.

With the Disker model, though, you don't have to bother.
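As a rough sketch of the per-worker isolation being recommended above
(assuming the squid.conf conditional syntax and the ${process_number}
macro from SMP-capable Squid 3.2+; paths and sizes here are illustrative
only, not a tuning recommendation):

```
# Rock dirs are SMP-aware and may be shared by all workers.
cache_dir rock /var/spool/squid/rock/1 64000 min-size=0 max-size=128KB

# AUFS is non-SMP: give each worker its own dir via conditionals...
if ${process_number} = 1
cache_dir aufs /var/spool/squid/aufs/1 200000 16 128 min-size=256KB
endif
if ${process_number} = 2
cache_dir aufs /var/spool/squid/aufs/2 200000 16 128 min-size=256KB
endif

# ...or more compactly, embed the macro in the path so each worker
# expands it to a distinct directory:
# cache_dir aufs /var/spool/squid/aufs/${process_number} 200000 16 128
```

Each directory referenced this way must already exist and be initialised
with "squid -z" before the workers start.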
All workers take their HTTP message and pass it explicitly to the one
Disker process that has that object cached. This is very similar in
behaviour to how the CARP caching algorithm behaves, with a
frontend/worker handling the HTTP messages and a backend/Disker doing
the caching. But it is a huge amount more efficient than a 2-layer CARP
setup.

> I'm hoping to be able to cache big files a little
> more efficiently, currently they can lead to little seizures while squid
> opens a 2-gb file into memory, seizures that update software doesn't
> notice but end users do. I'm thinking isolation may be the answer there.
> Just thinking out loud, I'll grep the docs tonight and think about that
> more, pointers welcome.

If I'm understanding you right, the "seizure" you speak of is the
(sometimes large) jitter Squid introduces into packet delivery and
event-processing delays while a large object is being transferred.

What we know about that is that the memory-handling algorithm is very
inefficient in how it looks up the next bit of a stored object to send.
Even though Squid-3 is much better than Squid-2, it still has issues in
this area. That affects Squid relaying any large object, and is not
particularly related to the caching of them.

If you are allowing Squid to move very large objects into memory storage
(via maximum_object_size_in_memory), that can cause unnecessary hiccups.
Just reduce that directive and it should even out.

<snip>
>
> It sounds like -march=native is probably unnecessary too, but I will
> give '-g -O2 -Wall' a try and let you know if I come up with a less
> explosive result.
>

Less explosive, definitely. But even those are probably unnecessary.
Have a look at any one of the compiler lines produced during the build
to see what Squid is automagically adding for you.
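For illustration, the kind of limit meant here looks like this in
squid.conf (the values below are hypothetical examples, not recommended
settings):

```
# Keep only small objects in the memory cache; anything larger is
# served from disk, avoiding long in-memory lookups on huge objects.
maximum_object_size_in_memory 512 KB

# The on-disk caches can still accept large objects.
maximum_object_size 8 GB
```

Note that maximum_object_size_in_memory only limits the memory cache;
it does not stop large objects being cached on disk.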
OR, the text output by ./configure displays a bunch of "BUILD " lines
stating the exact final set of flags, libraries, and extra objects each
compiler tool is going to use, just prior to the Makefiles being
created.

We have also tried to tune the default build (./configure && make) for
maximum generic usefulness and performance "out of the box".

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users