Thanks for your reply! I asked about the "one cache_dir per worker"
because I think a single shared cache_dir is not possible when using
workers, am I right? The server is overkill now for 1,000+ users, but
after these initial tests we will use it for our entire network of users
(10 times more), and the server will be used for network monitoring too.
I will test your advice about the heap configuration for the RAM cache.
Our RAID right now is just a single volume (RAID-0) with 5 SAS 15.5K RPM
disks; any specific advice? Thanks!

On Tue, Jul 5, 2011 at 10:25 AM, Chad Naugle <Chad.Naugle@xxxxxxxxxxx> wrote:
> I would first start by enabling a RAM cache to see how differently the
> machine behaves. In my opinion, the hardware sounds like overkill for
> 1000+ users. Also, if you want to enable cache_dir's, create them on
> separate partitions, depending on however you decide to slice them up.
> I don't believe the "cache_dir per worker" is a big improvement over
> the standard "single large cache_dir", or the like. The cache_dir's
> biggest bottlenecks are disk I/O and all of its overhead, and with
> 16GB of RAM available, Squid should be able to use a good portion of
> that for a _fast_ heap LRU cache, such as 4GB. The disk cache is
> better served for lesser-used objects, and I would personally
> recommend enabling heap GDSF for them. That is only my opinion.
> The cache_dir arrangement depends on your actual RAID configuration.
>
>>>> Agua Emagrece <aguaemagrece@xxxxxxxxx> 7/5/2011 8:35 AM >>>
> Greetings,
>
> I would like to ask for some advice on testing squid 3.2 beta in a
> semi-production scenario.
> We are already running it for 1,000+ users in our company, but without
> caching (just using a redirector for now).
> We want to start using the caching capabilities of squid, but the
> right way. We chose to test the 3.2 beta mainly because of the SMP
> features. We set it to use 14 workers and, so far, no critical
> problems at all.
> The server has 16 virtual processors with 16GB RAM and a fast SAS disk
> array of 4TB.
>
> O.S.: Slackware 64 13.37 with custom 2.6.39.1 kernel
> Proxy mode: Tproxy Bridge
>
> Squid compile info:
>
> Squid Cache: Version 3.2.0.7-20110509
> configure options: '--enable-linux-netfilter' '--disable-ipv6'
> '--enable-http-violations' '--enable-dlmalloc'
> '--enable-useragent-log' '--enable-cache-digests'
> '--enable-follow-x-forwarded-for' '--enable-storeio=aufs,ufs'
> '--enable-removal-policies=heap,lru' '--with-maxfd=16384'
> '--enable-poll' '--with-filedescriptors=16384' '--enable-async-io=128'
> '--disable-ident-lookups' '--enable-zph-qos' '--enable-truncate'
> '--with-pthreads' '--with-large-files' '--enable-ssl'
> '--with-openssl=/usr/include/openssl/' '--disable-htcp'
> '--enable-inline' '--enable-underscores' '--enable-icap-client'
> '--enable-carp' '--with-default-user=squid'
> '--enable-ltdl-convenience' '--enable-delay-pools' '--disable-wccp'
> '--disable-wccpv2' '--disable-auto-locale'
> 'PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig'
>
> My squid.conf is pretty simple and almost default for a bridge+tproxy
> configuration for now. No cache_dir, no RAM cache. Pure proxy with a
> redirector and workers set to 14, as I said before.
>
> The only problems I can see now are:
>
> 1) squidclient mgr:* is showing me duplicated or wrong information,
> like 36,000+ clients accessing the proxy when we only have 1,000+
> 2) Some delay displaying sites at certain times of the day, even when
> load and link utilization are low (we are not caching anything yet)
>
> As the Rock store is not ready yet (correct me if I'm wrong or
> misinformed about it, please), I need to use one cache_dir per worker,
> right? Should I configure 14 cache_dirs, or is there another way? Any
> advice about my compile options? Our main objective is to give our
> users optimized HTTP access and to save some link bandwidth. Am I
> going the wrong way?
>
> Thank you very much!
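
For reference, this is the RAM-cache part of the squid.conf I'm planning
to test, based on your heap LRU suggestion (the 4GB figure is just the
example you gave; I may tune it after watching actual memory usage):

  # in-memory cache for hot objects (size is a guess for this 16GB box)
  cache_mem 4096 MB
  memory_replacement_policy heap LRU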
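
And for the per-worker disk caches, something like this sketch is what I
had in mind, assuming the ${process_number} macro can be used in the
cache_dir path (the /cache/1 .. /cache/14 paths are just placeholders for
whatever partitions we end up creating on the array):

  workers 14

  # heap GDSF for the on-disk caches, as you recommended
  cache_replacement_policy heap GDSF

  # one aufs cache_dir per worker; ${process_number} expands to 1..14
  cache_dir aufs /cache/${process_number} 100000 16 256

Does that look like a reasonable starting point?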