Fw: squid slow response time

hello dear Amos and list,
finally I found the cause of slow Squid on Linux.
The problem was not related to Squid itself; it was related to the file descriptor limit in the Linux operating system.

As you know, each process in Linux has a maximum number of file descriptors, which limits how many files and sockets that process can hold open at once.

You can see the OS file descriptor limit with the following command:

ulimit -n


In my boxes (5 VPSes with CentOS 5.8) the maximum file descriptor count was 1024, so Squid could not open any more files or sockets once it reached that limit.
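On reasonably recent kernels you can also confirm the limit that actually applies to a running process; a small sketch (here inspecting the current shell via `$$` — substitute the Squid PID, e.g. from `pidof squid`):

```shell
# Per-process limit as the kernel enforces it (kernels >= 2.6.24;
# replace $$ with the squid PID to inspect squid itself)
grep 'Max open files' /proc/$$/limits

# Number of descriptors the process currently holds open
ls /proc/$$/fd | wc -l
```

This is more reliable than `ulimit -n` alone, because a daemon started at boot may run with different limits than your login shell.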


I increased this value to 16384 as shown:


echo "fs.file-max = 64000" >> /etc/sysctl.conf
echo "* soft nofile 16384" >> /etc/security/limits.conf
echo "* hard nofile 16384" >> /etc/security/limits.conf
echo "ulimit -n 16384" >> /etc/profile
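A sketch for verifying that the new limits actually took effect (the limits.conf values apply only to new login sessions, which is why a re-login or reboot is needed):

```shell
# Kernel-wide ceiling on open files (readable without root; as root,
# `sysctl -w fs.file-max=64000` applies the sysctl.conf value immediately)
cat /proc/sys/fs/file-max

# Per-session limits from limits.conf, checked from a fresh shell
ulimit -n    # soft limit
ulimit -Hn   # hard limit
```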


and added this line to the Squid configuration file:

max_filedesc 16384


I also widened the ephemeral port range as shown:

echo "1024 65000" > /proc/sys/net/ipv4/ip_local_port_range
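To check the range that is in effect, and how many outgoing ports it yields (a sketch; the arithmetic assumes the 1024-65000 range set above):

```shell
# Current ephemeral port range: two numbers, low and high
cat /proc/sys/net/ipv4/ip_local_port_range

# Usable source ports per destination with the 1024-65000 range
echo $(( 65000 - 1024 + 1 ))   # 63977
```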

After restarting the machine, the slow response time problem was solved and Squid is working fast.

Before changing the above parameters Squid could open about 570 connections; now it can open more than 1800.
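For anyone wanting to reproduce the count: a sketch that tallies established TCP connections on the proxy port (9090 in this thread; `ss` is the modern replacement for the `netstat` command used here):

```shell
# Count established TCP connections whose local port is 9090;
# tail drops the header line that ss prints
ss -tn state established '( sport = :9090 )' | tail -n +2 | wc -l
```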

I suggest mentioning the above point in the Squid installation help, to avoid slow Squid on servers with heavy traffic.


thanks all for helping.



________________________________
From: Amos Jeffries <squid3@xxxxxxxxxxxxx>
To: squid-users@xxxxxxxxxxxxxxx 
Sent: Sunday, 27 May 2012, 15:45
Subject: Re:  squid slow response time

On 24/05/2012 10:05 a.m., Ali Esf wrote:
> hello list and hello dear Amos
> thanks for your help.
> some of my problems with squid are solved but some of them not.
> 
> i compared squid on Linux Centos 5.8 with cc proxy on Microsoft windows server 2003
> 
> and understood that ccproxy is faster than Squid on a machine with the same specification, and supports more users.
> 
> i captured the screen of the cc proxy and squid.
> 
> http://up98.org/upload/server1/02/j/bpufq054uyf1qeamraj.jpg
> 
> the above picture shows ccproxy on Windows. As you see, it supports 64 users and 1264 connections, and even more.
> 
> http://up98.org/upload/server1/02/j/kqlr5fcr2fvk1jafqva4.jpg
> 
> the above picture shows port 9090, which is configured for HTTP proxying by Squid, as listed by the netstat command.
> it shows there are 574 connections through port 9090 and Squid.
> 
> http://up98.org/upload/server1/02/j/hprnte4gldvsylb19xf.jpg
>
> the above picture shows the number of users on port 9090, which is 37 users.

Ah, I see. You are confusing "users" with "TCP connections". There is no relationship in HTTP between number of users supported and number of connections supported.

The number of TCP connections as measured by netstat has only one limit: 65535 TCP connections per receiving IP:port on the box. This will be true for both proxies, I'm sure. What will be different is the HTTP keep-alive support, which determines how and when connections are closed, and how many requests happen inside each before closure. Pipelining of requests also determines whether any requests are aborted and have to be retried.


What you are looking at depends entirely on what those 1264/64 numbers mean to ccproxy. Is that 1264 authenticated users over 64 concurrent TCP connections? Or 1264 TCP connections with 64 currently alive? Or is it 1264 requests received over 64 TCP connections?

Squid has a similar confusion looking solely at netstat numbers. One user can open 1 or more TCP connections, and any or none of them can be kept alive.


The amount of slowdown you can expect from either proxy depends entirely on the number of requests sent over each TCP connection, which is where the questions above become very important:
* Authenticating requires minimum 1-2 requests per user.
* HTTP keep-alive feature permits one single TCP connection (netstat == 1) to handle many thousands or millions of requests. This will be different for each of the proxies, and depend on the type of requests being sent by the clients.


Say, for example, you have 1500 users. 1480 connect at once and both proxies handle them fast. Those clients disconnect. *Only* 1 of them connects later, but this one has a virus. The infected user can turn on the PC, not even open the browser, and the virus opens a TCP connection and fills it with 10,000,000 small HTTP requests.
  How long is it going to take to process and reject 10 million requests? netstat shows 1 connection total. For that period all other users will see degraded service to some degree.


HTTP software (of any type) is measured in requests-per-second as a simple consistent measure that avoids all these fuzzy boundary and calculation issues.

> 
> when the number of users increases, the response time of Squid becomes so slow that it sometimes takes 11-15 seconds to load the Google web page.
> but I tested that the speed of downloading files through Squid is great; the problem is loading pages once the users get to around 40.

Hmm, 40 (clients) * 2 (FD per client) * 65536 (buffer bytes per connection) == 5242880 (bytes of buffer) + size of objects requested == ? How many MB/GB of RAM do you have free?
DO NOT count swap and virtual memory; if Squid swaps, the first thing to start thrashing I/O speed is the VMem pages used for the memory cache and its index.
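The buffer arithmetic above can be checked directly in the shell:

```shell
# 40 clients * 2 FDs each * 65536 buffer bytes per connection
echo $(( 40 * 2 * 65536 ))            # 5242880 bytes
echo $(( 40 * 2 * 65536 / 1048576 ))  # 5 MiB, before object data
```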



> 
> and also in ccproxy, even with 64 users and more, the speed of loading pages is great. It is as if there were no proxy at all.
> 
> 
> the machines specification is the same and are :
> ram = 1 GB
> port = 1 Gbps
> cpu = Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 2 cores

The current stable releases of Squid are single-core software; ccproxy has multi-core support.
Which versions you test is *important* when comparing these things.

> os = CentOS Linux 5.8
> hard disk space = 30 GB
> --------------------------------------------------------------------------------
> we use squid just as a proxy and not for caching, and need authentication just by user name and password through a MySQL database.
> here is the configuration::
> 
> 
> 
> cache deny all
<snip>
...

> cache_mem 800 MB

Um, you have caused Squid to allocate itself 800 MB of the 1024 MB on the box, just for the memory cache ... while caching there is disabled ("deny all").

Either remove the huge cache_mem allocation (non-caching proxy), or re-enable caching (caching proxy), to see what Squid can actually do when sufficient RAM is available.

Amos


