
Re: Testing Squid 3.2 - Some advice

On 06/07/11 22:35, Agua Emagrece wrote:
Thanks for your reply!

I asked about the "one cache_dir per worker" setup because I think a
single cache_dir is not possible when using workers, am I right? The
server is overkill now for 1,000+ users, but after these initial tests
we will use it for our entire network of users (10 times more), and the
server will also be used for network monitoring.
I will test your advice about the Heap configuration for RAM.
Our RAID is currently just a single volume (RAID-0) with 5 SAS 15.5K
RPM disks. Any specific advice?

#1 tip for speed: Don't use RAID underneath Squid unless you can afford the cost of high-quality hardware.
  http://wiki.squid-cache.org/SquidFaq/RAID
(I tried to get some numbers to actually prove the performance difference for the nay-sayers. Unfortunately, with off-the-shelf hardware Squid just kept _burning out the HDDs_ when RAID was involved.)

The rule of thumb is one cache_dir per physical disk spindle, with dedicated pairings. You can't achieve that configuration with RAID-0 obscuring where each of your 5 disks is actually located.

When workers are added to the mix, each worker can have any number of cache_dir entries. But to share a cache_dir between workers the "rock" storage type is required.
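As a rough sketch only (the paths, sizes, and 2-worker layout are made up for illustration, not recommendations), dedicating one AUFS dir per physical disk to one worker, plus a rock dir shared by all workers if your build has the rock store, might look something like:

  workers 2

  # each worker gets its own cache_dir on its own physical disk
  if ${process_number} = 1
  cache_dir aufs /cache1/squid 100000 16 256
  endif
  if ${process_number} = 2
  cache_dir aufs /cache2/squid 100000 16 256
  endif

  # a single rock cache_dir can be shared by all workers
  # (rock only holds small objects in 3.2, hence the max-size)
  cache_dir rock /cache-rock/squid 10000 max-size=32768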


Thanks!

On Tue, Jul 5, 2011 at 10:25 AM, Chad Naugle wrote:
I would first start by enabling a RAM cache to see how differently the
machine behaves.  In my opinion, the hardware sounds like overkill for
1000+ users.  Also, if you want to enable cache_dirs, create them on
separate partitions, depending on how you decide to slice them up.
I don't believe a "cache_dir per worker" is a big improvement over the
standard single large cache_dir, or the like.  A cache_dir's biggest
bottleneck is disk I/O and all of its overhead, and with 16GB of RAM
available, Squid should be able to use a good portion of that, such as
4GB, for a _fast_ Heap LRU memory cache.  The disk cache is better
suited to lesser-used objects, and I would personally recommend
enabling Heap GDSF for them.  That is only my opinion.
The cache_dir arrangement depends on your actual RAID configuration.
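For what it's worth, a minimal squid.conf sketch of that kind of setup (the sizes below are only examples, not recommendations):

  # large in-memory cache for frequently-used objects, Heap LRU eviction
  cache_mem 4096 MB
  memory_replacement_policy heap LRU

  # disk cache for lesser-used objects, Heap GDSF eviction
  cache_replacement_policy heap GDSF
  cache_dir aufs /var/cache/squid 200000 16 256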

Agua Emagrece 7/5/2011 8:35 AM>>>
Greetings,

I would like to ask for some advice on testing the Squid 3.2 beta in a
semi-production scenario.
We are already running it for 1,000+ users in our company, but without
caching (just using a redirector for now).
We want to start using the caching capabilities of Squid, but in the
right way. We chose to test the 3.2 beta mainly because of the SMP
features. We set it to use 14 workers and, so far, no critical
problems at all.
The server has 16 virtual processors, 16GB of RAM, and a fast 4TB SAS
disk array.

O.S.: Slackware 64 13.37 with custom 2.6.39.1 kernel
Proxy mode: Tproxy Bridge

Squid compile info:

Squid Cache: Version 3.2.0.7-20110509
configure options:  '--enable-linux-netfilter' '--disable-ipv6'
'--enable-http-violations' '--enable-dlmalloc'
'--enable-useragent-log' '--enable-cache-digests'

The useragent and referer logs are squid.conf options now, not configure options.
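In 3.2 the same logs can be produced with logformat + access_log. A sketch, assuming the traditional formats (the file paths are just examples):

  logformat useragent %>a [%tl] "%{User-Agent}>h"
  logformat referrer %ts.%03tu %>a %{Referer}>h %ru
  access_log /var/log/squid/useragent.log useragent
  access_log /var/log/squid/referer.log referrer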

'--enable-follow-x-forwarded-for' '--enable-storeio=aufs,ufs'
'--enable-removal-policies=heap,lru' '--with-maxfd=16384'
'--enable-poll' '--with-filedescriptors=16384' '--enable-async-io=128'

--with-maxfd and --with-filedescriptors are aliases of each other. Pick one. Or just use the squid.conf directive instead.
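If memory serves, the squid.conf equivalent is simply:

  max_filedescriptors 16384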

'--disable-ident-lookups' '--enable-zph-qos' '--enable-truncate'
'--with-pthreads' '--with-large-files' '--enable-ssl'
'--with-openssl=/usr/include/openssl/' '--disable-htcp'
'--enable-inline' '--enable-underscores' '--enable-icap-client'
'--enable-carp' '--with-default-user=squid'
'--enable-ltdl-convenience' '--enable-delay-pools' '--disable-wccp'
'--disable-wccpv2' '--disable-auto-locale'
'PKG_CONFIG_PATH=/usr/local/lib64/pkgconfig:/usr/lib64/pkgconfig'

My squid.conf is pretty simple and almost the default bridge+tproxy
configuration for now. No cache_dir, no RAM cache. Pure proxy with a
redirector and workers set to 14, as I said before.

By "16 virtual proc" do you mean you actually have 16 CPU cores?
Each worker can consume a whole CPU core. Just like individual Squid instances before them. So doubling them up on a CPU is not useful and at least one left for the OS to work with is a good idea.
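If you really do have 16 cores, something along these lines keeps the workers off each other and leaves cores free for the OS (the core numbering is only an example):

  workers 14
  # pin workers 1-14 to cores 2-15, leaving cores 1 and 16 for the OS and kernel I/O
  cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9,10,11,12,13,14 cores=2,3,4,5,6,7,8,9,10,11,12,13,14,15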


The only problems I can see now are:

1) squidclient mgr:* is showing me duplicated or wrong information,
like 36,000+ clients accessing the proxy when we have only 1,000+

Can you be a bit more specific, please? We have gone to some trouble to aggregate the statistics correctly, but mistakes are possible and some reports are still per-worker blocks of stats. If something is clearly a bug, please report it. Remember 3.2 is still in beta; this type of thing is why.


2) Some delay displaying sites at certain times of the day, even when
load and link usage are low (we are not caching anything yet)

As the rock storage is not ready yet (correct me if I'm wrong or
misinformed about it, please), I need to use one cache_dir per worker,
right? Should I configure 14 cache_dirs, or is there another way? Any
advice about my compile options? Our main objective is to give our
users optimized HTTP access and some bandwidth savings. Am I going the
wrong way?

You can have as many cache_dir entries per worker as is useful. Workers without a cache_dir will use memory caching.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.9

