
Re: how to distribute Squid load across CPUs and memory using the SMP feature?


 



On 10/25/2013 06:37 AM, Ahmad wrote:

> I'm trying to use the SMP feature on the latest Squid version, 3.3.9 (compiled),
> and I have a Dell R210 server with 8 GB of RAM and a quad-core Intel Xeon 3450
> CPU,

> so,
> my loads in Squid are as below:
> 1- disk caching    "I have 4 hard disks, each with a 90 GB cache_dir"
> 2- memory caching  "I have 8 GB of memory installed"
> 3- web filtering   "using ACLs"
> 4- custom ACLs based on source IPs
> 
> 
> I read http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster

Do you need CARP? If you do not need it or are not sure, then I
recommend starting with a single SMP Squid instance (multiple workers,
one squid.conf) as discussed below.
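
In squid.conf terms, a single SMP instance just means one configuration file
with a "workers" directive; the value below is only an illustration (see the
fuller sketch near the end of this message for the cache_dir and CPU affinity
parts):

  # single SMP instance: one squid.conf, N worker processes
  workers 3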


> Q1- how many workers do I need? "from /proc/cpuinfo I have 8 cores"

Please note that you do not have 8 physical cores. You have 4. You have
8 virtual (a.k.a., hyperthreaded) cores that are not very useful for
busy Squid workers (they only waste cycles on resource contention if two
busy Squid workers share the same physical core).
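
If you want to verify which virtual cores share a physical core (on Linux),
the following commands should show the topology; the CPU number in the second
command is just an example:

  # virtual CPUs with the same CORE id share one physical core
  lscpu -e

  # or, per virtual CPU:
  cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list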


> Q2- which processes are better to run as the frontend, and which loads are
> better placed in the backend?


Assuming you want to optimize performance, you can try the following
mapping as a starting point (using hyperthreaded cores, and assuming
that your core #1 and #2 share the same physical core -- something you
need to verify because some CPUs have different virtual:real core mapping!):

  Core #1 - Left for OS, NICs, etc. Bind NIC to this core.
  Core #2 - Rock disker for cache_dir #1.

  Core #3 - Worker #1
  Core #4 - Rock disker for cache_dir #2.

  Core #5 - Worker #2
  Core #6 - Rock disker for cache_dir #3.

  Core #7 - Worker #3
  Core #8 - Rock disker for cache_dir #4.

If your tests (or live deployment) show that workers are overloaded,
then you can try adding Worker #4 on Core #2. If that is not enough, you
would need a beefier server.

If your tests (or live deployment) show that workers and diskers compete
for CPU cycles too much, then you would need to reduce disk caching or
get a beefier server.

Other adjustments may be necessary, of course -- the above is just a
starting point.
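
For illustration, here is a rough squid.conf sketch of the mapping above. It
assumes the usual SMP kid numbering (workers are kids 1-3, then one rock disker
per cache_dir in cache_dir order, kids 4-7) and that squid.conf core numbers
match the core numbers in the table; both assumptions are worth verifying in
cache.log on your build. Paths and sizes are placeholders:

  workers 3

  # one rock disker process per cache_dir (90 GB each, as in your setup)
  cache_dir rock /cache1 90000
  cache_dir rock /cache2 90000
  cache_dir rock /cache3 90000
  cache_dir rock /cache4 90000

  # kids 1-3 (workers) on cores 3, 5, 7; kids 4-7 (diskers) on cores 2, 4, 6, 8;
  # core 1 is left for the OS and the NIC
  cpu_affinity_map process_numbers=1,2,3,4,5,6,7 cores=3,5,7,2,4,6,8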


HTH,

Alex.




