
RE: squid as http accelerator : now it's become slow again

On 12-Mar-08 My Secret NSA Wiretap Overheard kk CHN Saying  :
> People,
> 
>  I installed Squid 2.6 STABLE on my web server (FreeBSD 6.2, 1 GB RAM),
> where I run a Plone 2.5 site on Zope 2.9, along with other applications
> such as postfix, mailman, etc.
> 
> Squid --> Apache 2.2 --> Zope
> (Previously it was Apache --> Zope, but that was too slow, so I put Squid
> in front as an HTTP accelerator.)
> 
>  This setup worked fine for a couple of weeks,
> but yesterday my site became very slow again. I restarted the Squid,
> Apache, and Zope servers; for a short time (nearly 20 minutes) it was
> fast, but after that it became slow again.
> 
> What may be the issue? How can I improve the speed?
> 
> This is my top output:
> 
> Any hints are most welcome.
> 
> Thanks in advance
> KK


 Interesting that you have two devfs file systems; I have never seen one
dedicated to named before.

 Which slice are you using for the Squid cache? If it is on /var, that could be
the problem: your df shows /var at 88% capacity, and disks/slices generally
dislike being run over 70-75% full. Even though you generally don't want that
much variable data on a home slice, I would suggest either splitting your
Squid cache across /home and /var (it's always nice to split them anyway,
AFAIK) or making the cache on /var smaller.
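
 If you do move it, the change is just the cache_dir line in squid.conf. A
rough sketch, assuming your cache currently lives somewhere under /var (the
paths and the 4000 MB size below are guesses; keep whatever values you already
have and only change the path):

    # squid.conf
    #cache_dir ufs /var/squid/cache 4000 16 256    # old location, if it looks like this
    cache_dir ufs /home/squid/cache 4000 16 256    # new location on the roomier /home slice

    # then, as root:
    mkdir -p /home/squid/cache
    chown -R squid:squid /home/squid/cache    # or whatever user your squid runs as
    squid -z                                  # create the new swap directories
    # ...then restart squid however you normally do

 The cache_dir arguments are: storage type, path, size in MB, then the number
of first- and second-level subdirectories.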


 Hope this helps


  Nicole



> 
> last pid:  1792;  load averages:  0.73,  0.50,  0.37
>                               up 0+01:08:08  12:59:26
> 145 processes: 1 running, 142 sleeping, 2 stopped
> CPU states:  5.1% user,  0.0% nice, 10.3% system,  0.0% interrupt, 84.6% idle
> Mem: 690M Active, 106M Inact, 133M Wired, 45M Cache, 110M Buf, 14M Free
> Swap: 2048M Total, 2048M Free
> 
>   PID USERNAME      THR PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
>   731 mailman         1   8    0 99732K 97904K nanslp   0:43  3.56% python
>   522 www             7  20    0   298M   295M kserel  14:37  0.78% python2.4
>  1764 root            1  96    0  2668K  1936K RUN      0:00  0.28% top
>   586 root            3  20    0 17272K  2588K kserel   1:06  0.00% gkrellmd
>   516 www             1   4    0 19308K 17568K select   0:33  0.00% python2.4
>   505 www             3  20    0  3212K  2024K kserel   0:32  0.00% pound
>   526 www             3  20    0 89988K 87408K kserel   0:17  0.00% python2.4
>   765 root            1   4    0 39340K 35628K select   0:12  0.00% perl5.8.8
>   727 mailman         1   8    0  9744K  7764K nanslp   0:09  0.00% python
>   728 mailman         1   8    0  8868K  7032K nanslp   0:08  0.00% python
>   600 squid           1   4    0 13072K 11800K kqread   0:08  0.00% squid
>   716 mysql           5  20    0 65220K 26708K kserel   0:06  0.00% mysqld
>   434 bind            1  96    0  7528K  6292K select   0:06  0.00% named
>   736 mailman         1   8    0  9468K  7632K nanslp   0:05  0.00% python
>   739 mailman         1   8    0  9380K  7448K nanslp   0:04  0.00% python
>   560 root            1   8    0  1236K   764K nanslp   0:04  0.00% powerd
>   726 mailman         1   8    0  9496K  7540K nanslp   0:04  0.00% python
>   605 root            1  96    0 25720K 24420K select   0:04  0.00% perl5.8.8
>  1392 tesac          1  96    0  2684K  1820K STOP     0:04  0.00% top
>   733 mailman         1   8    0  8388K  6512K nanslp   0:03  0.00% python
>   660 root            1  96    0  2812K  1564K select   0:02  0.00% master
>   502 www             6  20    0 33184K 23076K kserel   0:02  0.00% httpd
>   670 postfix         1  96    0  4316K  3052K select   0:02  0.00% qmgr
>   495 root            1   8    0 16160K  9588K nanslp   0:01  0.00% httpd
>   839 www             4  20    0 27420K 16040K kserel   0:01  0.00% httpd
>   501 www             4  20    0 23224K 13504K kserel   0:01  0.00% httpd
>   503 www             5  20    0 25468K 14676K kserel   0:01  0.00% httpd
>   365 root            1  96    0  1300K   848K select   0:01  0.00% syslogd
>   553 root            1  96    0  2920K  1480K select   0:01  0.00% ntpd
>  1390 tesac            1  96    0  6080K  2524K select   0:00  0.00% sshd
>  1142 postfix         1  96    0  2876K  1592K select   0:00  0.00% trivial-rewrite
> 
> Suspended
> 
> 
> This is my df -h output:
> 
>  df -h
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/ad4s1a    496M     91M    365M    20%    /
> devfs          1.0K    1.0K      0B   100%    /dev
> /dev/ad4s1f     19G    2.3G     15G    13%    /home
> /dev/ad4s1d    3.9G     71M    3.5G     2%    /tmp
> /dev/ad4s1e    9.7G    5.6G    3.3G    63%    /usr
> /dev/ad4s1g     39G     31G    4.5G    88%    /var
> devfs          1.0K    1.0K      0B   100%    /var/named/dev


--
                     |\ __ /|   (`\            
                     | o_o  |__  ) )           
                    //      \\                 
  -  nmh@xxxxxxxxxxxxxx  -  Powered by FreeBSD  -
------------------------------------------------------
 "The term "daemons" is a Judeo-Christian pejorative.
 Such processes will now be known as "spiritual guides"
  - Politically Correct UNIX Page



