On 24/05/2012 10:05 a.m., Ali Esf wrote:
Hello list, and hello dear Amos,
Thanks for your help.
Some of my problems with Squid are solved, but some are not.
I compared Squid on Linux CentOS 5.8 with CCProxy on Microsoft Windows Server 2003
and found that CCProxy is faster than Squid on a machine with the same specification, and supports more users.
I captured screenshots of CCProxy and of Squid.
http://up98.org/upload/server1/02/j/bpufq054uyf1qeamraj.jpg
The above picture shows CCProxy on Windows. As you can see, it supports 64 users and 1264 connections, and even more.
http://up98.org/upload/server1/02/j/kqlr5fcr2fvk1jafqva4.jpg
The above picture shows the output of the netstat command for port 9090, which is configured as the HTTP proxy port for Squid.
It shows there are 574 connections through port 9090 to Squid.
http://up98.org/upload/server1/02/j/hprnte4gldvsylb19xf.jpg
The above picture shows the number of users connected to port 9090, which is 37 users.
Ah, I see. You are confusing "users" with "TCP connections". There is no
relationship in HTTP between number of users supported and number of
connections supported.
The number of TCP connections as measured by netstat has only one limit:
65535 TCP connections per receiving IP:port on the box. This will be
true for both proxies, I'm sure. What will be different is the HTTP
keep-alive support, which determines how and when connections are
closed and how many requests happen inside each before closure.
Pipelining of requests also determines whether any requests are aborted
and have to be retried.
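To make the keep-alive point concrete, here is a minimal sketch (Python used
purely for illustration; the loopback address and target URL are placeholders,
with 9090 being the Squid port mentioned below) that pushes many requests
through one persistent connection. netstat would still count this as a single
connection no matter how many requests travel over it:

  import http.client

  # One TCP connection to the proxy (placeholder address 127.0.0.1:9090).
  conn = http.client.HTTPConnection("127.0.0.1", 9090)
  for i in range(100):
      # Each request uses the absolute-URI form a proxy expects; with
      # HTTP/1.1 keep-alive all of them can re-use the same socket.
      conn.request("GET", "http://example.com/")
      resp = conn.getresponse()
      resp.read()   # drain the body so the connection can be re-used
  conn.close()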
What you are looking at depends entirely on what those 1264/64 mean to
ccproxy. Is that 1264 authenticated users using 64 concurrent TCP
connections? Or 1264 TCP connections with 64 currently alive? Or is it
1264 requests received over 64 TCP connections?
The same confusion exists for Squid when looking solely at netstat
numbers. One user can open 1 or more TCP connections, and any or none of
them can be kept alive.
The amount of slowdown you can expect from either proxy depends
entirely on the number of requests sent over each TCP connection, which
is where the questions above become very important:
* Authenticating requires a minimum of 1-2 requests per user (see the
sketch after this list).
* The HTTP keep-alive feature permits a single TCP connection (netstat
== 1) to handle many thousands or millions of requests. This will be
different for each of the proxies, and depends on the type of requests
being sent by the clients.
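As a sketch of the authentication cost: with Basic proxy authentication the
very first request on a new connection is normally challenged with a 407 and
then resent, so each new user costs at least one extra request. The proxy
address, URL, and credentials below are placeholders, and Basic is only one
possible scheme:

  import base64, http.client

  conn = http.client.HTTPConnection("127.0.0.1", 9090)   # placeholder proxy address
  conn.request("GET", "http://example.com/")             # request 1: no credentials yet
  resp = conn.getresponse()
  resp.read()
  if resp.status == 407:                                  # proxy demands authentication
      token = base64.b64encode(b"user:password").decode() # placeholder credentials
      conn.request("GET", "http://example.com/",          # request 2: the retry
                   headers={"Proxy-Authorization": "Basic " + token})
      resp = conn.getresponse()
      resp.read()
  conn.close()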
Say, for example, you have 1500 users. 1480 connect at once and both
proxies handle them fast. Those clients disconnect. *Only* 1 of them
connects later, but this one has a virus. The infected user can turn on
the PC without even opening the browser, and the virus opens a TCP
connection and fills it with 10,000,000 small HTTP requests.
How long will the proxy take to process and reject 10 million requests?
netstat shows 1 connection in total. For that period all other users will
see degraded service to some degree.
HTTP software (of any type) is measured in requests per second, as a
simple consistent measure that avoids all these fuzzy boundaries and
calculation issues.
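As a rough back-of-the-envelope sketch (the requests-per-second figures below
are invented for illustration, not measurements of either proxy):

  # Hypothetical time to reject the 10,000,000 virus requests above
  # at a few invented throughput rates.
  requests = 10000000
  for rps in (1000, 5000, 20000):
      minutes = requests / float(rps) / 60
      print(rps, "req/sec ->", round(minutes, 1), "minutes to clear the backlog")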
When the number of users increases, the response time of Squid becomes so slow that it sometimes takes 11-15 seconds to load the Google web page.
But I have tested it, and the download speed for files through Squid is great; the problem is with loading pages when the number of users reaches around 40.
Hmm, 40 (clients) * 2 (FDs per client) * 65536 (buffer bytes per
connection) == 5242880 bytes of buffer, plus the size of the objects
requested. How many MB/GB of RAM do you have free?
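In code form, a minimal sketch of that arithmetic, using the same per-connection
buffer figure as above (it ignores Squid's own index and helper overheads):

  clients = 40
  fds_per_client = 2        # one client-side and one server-side socket
  buffer_bytes = 65536      # 64 KiB of buffering per connection, as above
  total = clients * fds_per_client * buffer_bytes
  print(total, "bytes ~=", total / 1024.0 / 1024.0, "MB just for socket buffers")
  # ... plus the size of the objects in transit, plus cache_mem, all of
  # which must fit inside the RAM actually free on the box.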
DO NOT count swap and virtual memory; if Squid swaps, the first thing to
start thrashing the I/O speed is the VMem pages used for the memory cache
and its index.
Also, with CCProxy, even with 64 users and more, the speed of loading pages is great. It is as if there were no proxy at all.
The machines' specifications are the same:
ram = 1 GB
port = 1 Gbps
cpu = Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 2 cores
The current stable releases of Squid are single-core software. ccproxy
has multi-core support.
Which versions you are testing is *important* when comparing these things.
os = CentOS Linux 5.8
hard disk space = 30 GB
--------------------------------------------------------------------------------
We use Squid just as a proxy and not for caching, and we need authentication only by user name and password through a MySQL database.
Here is the configuration:
cache deny all
<snip>
...
cache_mem 800 MB
Um, you have caused Squid to allocate itself 800 MB of the 1024 MB on
the box, just for the memory cache ... while caching there is disabled
("deny all").
Either remove the huge cache_mem allocation (non-caching proxy), or
re-enable caching (caching proxy), to see what Squid can actually do when
sufficient RAM is available.
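For example, one of these two directions (the sizes are only illustrative,
not tuned values):

  # Option 1: keep it a non-caching proxy
  cache deny all
  cache_mem 8 MB

  # Option 2: let it cache
  # (remove "cache deny all")
  cache_mem 256 MB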
Amos