
Re: squid becomes very slow during peak hours

Upgrade to a later Squid version!
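
If it helps, here is a rough sketch of the usual ports-based upgrade path on
FreeBSD (the port name, rc script path, and exact steps are assumptions for
your setup, and a 2.5 config may need adjusting for a newer release):

  # check what is currently installed and how it was built
  squid -v

  # refresh the ports tree and build a newer Squid
  # (www/squid30 is only an example; use whichever newer port your tree carries)
  portsnap fetch update
  cd /usr/ports/www/squid30
  make install clean

  # verify the existing config parses with the new version before restarting
  squid -k parse

  # stop the old instance, recreate the cache_dir structure if needed, start again
  squid -k shutdown
  squid -z
  /usr/local/etc/rc.d/squid start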



adrian

2009/6/30 goody goody <thinkodd@xxxxxxxxx>:
>
> Hi there,
>
> I am running Squid 2.5 on FreeBSD 7, and my Squid box responds very slowly during peak hours. My machine has two dual-core processors, 4 GB of RAM, and the following disks.
>
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/da0s1a    9.7G    241M    8.7G     3%    /
> devfs          1.0K    1.0K      0B   100%    /dev
> /dev/da0s1f     73G     35G     32G    52%    /cache1
> /dev/da0s1g     73G    2.0G     65G     3%    /cache2
> /dev/da0s1e     39G    2.5G     33G     7%    /usr
> /dev/da0s1d     58G    6.4G     47G    12%    /var
>
>
> Below are the current status and the settings I have applied. I need further guidance to improve the box.
>
> last pid: 50046;  load averages:  1.02,  1.07,  1.02    up 7+20:35:29  15:21:42
> 26 processes:  2 running, 24 sleeping
> CPU states: 25.4% user,  0.0% nice,  1.3% system,  0.8% interrupt, 72.5% idle
> Mem: 378M Active, 1327M Inact, 192M Wired, 98M Cache, 112M Buf, 3708K Free
> Swap: 4096M Total, 20K Used, 4096M Free
>
>  PID USERNAME      THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
> 49819 sbt    1 105    0   360M   351M CPU3   3  92:43 98.14% squid
>  487 root            1  96    0  4372K  2052K select 0  57:00  3.47% natd
>  646 root            1  96    0 16032K 12192K select 3  54:28  0.00% snmpd
> 49821 sbt    1  -4    0  3652K  1048K msgrcv 0   0:13  0.00% diskd
> 49822 sbt    1  -4    0  3652K  1048K msgrcv 0   0:10  0.00% diskd
> 49864 root            1  96    0  3488K  1536K CPU2   1   0:04  0.00% top
>  562 root            1  96    0  3156K  1008K select 0   0:04  0.00% syslogd
>  717 root            1   8    0  3184K  1048K nanslp 0   0:02  0.00% cron
> 49631 x-man           1  96    0  8384K  2792K select 0   0:01  0.00% sshd
> 49635 root            1  20    0  5476K  2360K pause  0   0:00  0.00% csh
> 49628 root            1   4    0  8384K  2776K sbwait 1   0:00  0.00% sshd
>  710 root            1  96    0  5616K  2172K select 1   0:00  0.00% sshd
> 49634 x-man           1   8    0  3592K  1300K wait   1   0:00  0.00% su
> 49820 sbt    1  -8    0  1352K   496K piperd 3   0:00  0.00% unlinkd
> 49633 x-man           1   8    0  3456K  1280K wait   3   0:00  0.00% sh
>  765 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
>  766 root            1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
>  767 root            1   5    0  3156K   872K ttyin  2   0:00  0.00% getty
>  769 root            1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
>  771 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
>  770 root            1   5    0  3156K   872K ttyin  0   0:00  0.00% getty
>  768 root            1   5    0  3156K   872K ttyin  3   0:00  0.00% getty
>  772 root            1   5    0  3156K   872K ttyin  1   0:00  0.00% getty
> 47303 root            1   8    0  8080K  3560K wait   1   0:00  0.00% squid
>  426 root            1  96    0  1888K   420K select 0   0:00  0.00% devd
>  146 root            1  20    0  1356K   668K pause  0   0:00  0.00% adjkerntz
>
>
> pxy# iostat
>      tty             da0            pass0             cpu
>  tin tout  KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
>   0  126 12.79   5  0.06   0.00   0  0.00   4  0  1  0 95
>
> pxy# vmstat
>  procs      memory      page                    disks     faults      cpu
>  r b w     avm    fre   flt  re  pi  po    fr  sr da0 pa0   in   sy   cs us sy id
>  1 3 0  458044 103268    12   0   0   0    30   5   0   0  273 1721 2553  4  1 95
>
> pxy# netstat -am
> 1376/1414/2790 mbufs in use (current/cache/total)
> 1214/1372/2586/25600 mbuf clusters in use (current/cache/total/max)
> 1214/577 mbuf+clusters out of packet secondary zone in use (current/cache)
> 147/715/862/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
> 3360K/5957K/9317K bytes allocated to network (current/cache/total)
> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0/7/6656 sfbufs in use (current/peak/max)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 0 calls to protocol drain routines
>
>
> "netstat -an | grep "TIME_WAIT" | more " command 17 scroll pages of crt.
>
> some lines from squid.conf
> cache_mem 256 MB
> cache_replacement_policy heap LFUDA
> memory_replacement_policy heap GDSF
>
> cache_swap_low 80
> cache_swap_high 90
>
> cache_dir diskd /cache2 60000 16 256 Q1=72 Q2=64
> cache_dir diskd /cache1 60000 16 256 Q1=72 Q2=64
>
> cache_log /var/log/squid25/cache.log
> cache_access_log /var/log/squid25/access.log
> cache_store_log none
>
> half_closed_clients off
> maximum_object_size 1024 KB
>
> pxy# sysctl -a | grep maxproc
> kern.maxproc: 6164
> kern.maxprocperuid: 5547
> kern.ipc.somaxconn: 1024
> kern.maxfiles: 12328
> kern.maxfilesperproc: 11095
> net.inet.ip.portrange.randomtime: 45
> net.inet.ip.portrange.randomcps: 10
> net.inet.ip.portrange.randomized: 1
> net.inet.ip.portrange.reservedlow: 0
> net.inet.ip.portrange.reservedhigh: 1023
> net.inet.ip.portrange.hilast: 65535
> net.inet.ip.portrange.hifirst: 49152
> net.inet.ip.portrange.last: 65534
> net.inet.ip.portrange.first: 1025
> net.inet.ip.portrange.lowlast: 600
> net.inet.ip.portrange.lowfirst: 1023
>
>
> If any other info is required, I shall provide it.
>
> Regards,
> .Goody.
>

