Re: squid becomes very slow during peak hours

Thanks for the replies.

1. I have tried squid 3.0.STABLE14 for a few weeks, but the problems were still there and the performance issues were also severe. Since we previously had 2.5.STABLE10 running, I reverted to it temporarily. I still have 3.0.STABLE14 in place (2.5 is installed in a separate directory), so I can run 3.0 again at any time. I would also welcome advice on which version of squid is currently considered the most stable (see the build sketch below this list).

2. Secondly, we are using RAID 5 and the current machine is much more powerful than the previous one, yet the previous, less powerful system handled the same amount of traffic without problems.

3. Thirdly, I have a gigabit network card, although the uplink is a 100 Mb Ethernet channel; as noted in point 2, the same link worked fine with the previous setup.

4. I could not quite follow Chris Robertson's question regarding the processors. I have two dual-core Xeon processors (3.2 GHz), and I captured the stats at peak hours, while performance was degraded (see the monitoring sketch below this list).
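
Regarding the upgrade advice: when I rebuild 2.7 (or 3.0 again) I want to be sure kqueue support is actually compiled in, since that was recommended for FreeBSD. Roughly what I have in mind (only a sketch; the configure option names are from memory and the install prefix is just an example, so please correct me if they are wrong):

  # show the configure options the currently installed binary was built with
  squid -v

  # rebuild with kqueue and diskd enabled, into its own directory
  # (verify the exact option names with ./configure --help; use gmake if plain make complains)
  ./configure --prefix=/usr/local/squid27 --enable-kqueue --enable-storeio=diskd,ufs
  make && make install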
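
On the CPU figures: if I understand Chris's point below correctly, squid 2.5 runs as a single single-threaded process, so one saturated core on this four-core box shows up as roughly 75% idle overall. Next peak I will capture top, iostat and vmstat in the same window and per CPU, something like this (a sketch, assuming the stock FreeBSD 7 tools; the flags may need adjusting on this box):

  # per-CPU usage in top, so a single pegged core stands out (if this top supports -P)
  top -P

  # disk and CPU snapshots every 5 seconds while it is slow
  iostat -w 5
  vmstat -w 5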
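
Regarding the ACL question further down: I will pull the acl and http_access lines out of the config and post them in a follow-up, roughly like this (a sketch; the squid.conf path is only illustrative of my layout):

  # list the acl / http_access lines and count the acls (path assumed)
  grep -E '^(acl|http_access)' /usr/local/etc/squid25/squid.conf
  grep -c '^acl ' /usr/local/etc/squid25/squid.conf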


So what should I do?

Regards,

--- On Wed, 7/1/09, Chris Robertson <crobertson@xxxxxxx> wrote:

> From: Chris Robertson <crobertson@xxxxxxx>
> Subject: Re:  squid becomes very slow during peak hours
> To: squid-users@xxxxxxxxxxxxxxx
> Date: Wednesday, July 1, 2009, 2:25 AM
> goody goody wrote:
> > Hi there,
> >
> > I am running squid 2.5 on FreeBSD 7,
> 
> As Adrian said, upgrade.  2.6 (and 2.7) support kqueue under FreeBSD.
> 
> > and my squid box responds very slowly during peak hours. My squid machine has two dual-core processors, 4 GB RAM, and the following HDDs:
> >
> > Filesystem     Size    Used   Avail Capacity  Mounted on
> > /dev/da0s1a    9.7G    241M    8.7G     3%    /
> > devfs          1.0K    1.0K      0B   100%    /dev
> > /dev/da0s1f     73G     35G     32G    52%    /cache1
> > /dev/da0s1g     73G    2.0G     65G     3%    /cache2
> > /dev/da0s1e     39G    2.5G     33G     7%    /usr
> > /dev/da0s1d     58G    6.4G     47G    12%    /var
> >
> >
> > Below are the stats and the settings I have in place; I need further guidance to improve the box.
> >
> > last pid: 50046;  load averages:  1.02,  1.07,  1.02    up 7+20:35:29  15:21:42
> > 26 processes:  2 running, 24 sleeping
> > CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
> > Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
> > Swap: 4096M Total, 20K Used, 4096M Free
> >
> >   PID USERNAME   THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
> > 49819 sbt          1 105    0   360M   351M CPU3   3  92:43 98.14% squid
> >   487 root         1  96    0  4372K  2052K select 0  57:00  3.47% natd
> >   646 root         1  96    0 16032K 12192K select 3  54:28  0.00% snmpd
> > SNIP
> > pxy# iostat
> >        tty             da0              pass0             cpu
> >  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
> >    0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95
> >
> > pxy# vmstat
> >  procs      memory      page                    disks     faults      cpu
> >  r b w     avm    fre   flt  re  pi  po   fr  sr da0 pa0   in   sy   cs us sy id
> >  1 3 0  458044 103268    12   0   0   0   30   5   0   0  273 1721 2553  4  1 95
> >   
> 
> Those statistics show wildly different utilization.  The first (top,
> I assume) shows 75% idle (or a whole CPU in use).  The next two show
> 95% idle (in effect, one CPU 20% used).  How close (in time) were the
> statistics gathered?
> 
> >
> > some lines from squid.conf
> > cache_mem 256 MB
> > cache_replacement_policy heap LFUDA
> > memory_replacement_policy heap GDSF
> >
> > cache_swap_low 80
> > cache_swap_high 90
> >
> > cache_dir diskd /cache2 60000 16 256 Q1=72 Q2=64
> > cache_dir diskd /cache1 60000 16 256 Q1=72 Q2=64
> >
> > cache_log /var/log/squid25/cache.log
> > cache_access_log /var/log/squid25/access.log
> > cache_store_log none
> >
> > half_closed_clients off
> > maximum_object_size 1024 KB 
> >   
> > If any other info is required, I shall provide it.
> >   
> 
> The types (and number) of ACLs in use would be of interest as well.
> 
> > Regards,
> > .Goody.
> >   
> 
> Chris
> 
> 


      

