
Re: Scalability in serving a large amount of concurrent requests


 



I have Squid serving:

Proxy-Karam-Main ~ # netstat -s|grep -i estab
    41583 connections established

client_http.requests = 1364.257457/sec
client_http.kbytes_out = 16067.017328/sec
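
For anyone wanting to reproduce these figures, a rough sketch of how they are
usually pulled from Squid's cache manager (assuming squidclient is installed,
cachemgr access is allowed from localhost, and Squid listens on port 3128):

  # Established TCP connections on the box
  netstat -s | grep -i estab

  # 5-minute average rates, same format as the numbers above
  squidclient -p 3128 mgr:5min | grep -E 'client_http.requests|client_http.kbytes_out'

  # Raw cumulative counters, if you prefer to compute the rates yourself
  squidclient -p 3128 mgr:counters | grep client_http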

At peak time it reaches 60-70K connections and sometimes even 208 Mbps. But it
is serving real customers, not running as an accelerator, and many tune-ups
have been done.
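
For context, this is the kind of OS-level tuning usually involved at that
connection count; the values below are illustrative, not the actual settings
used here:

  # Raise the per-process file descriptor limit before starting Squid
  # (roughly one descriptor per concurrent connection is needed)
  ulimit -n 65536

  # Widen the ephemeral port range and deepen the accept backlog
  sysctl -w net.ipv4.ip_local_port_range="1024 65535"
  sysctl -w net.core.somaxconn=8192

  # Let sockets in FIN-WAIT-2 time out faster on a busy proxy
  sysctl -w net.ipv4.tcp_fin_timeout=15

  # The matching squid.conf directive for the descriptor limit:
  #   max_filedescriptors 65536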

For offloading I recommend trying nginx. It works well for static content and
even works very well as a FastCGI frontend.
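
A minimal nginx server block along those lines, serving static files directly
and handing dynamic requests to a FastCGI backend (the hostname, paths and
backend address are placeholders):

  server {
      listen 80;
      server_name static.example.com;

      # Static assets served straight from disk, with client-side caching
      root /var/www/static;
      expires 7d;

      # Dynamic requests handed to a FastCGI backend (e.g. PHP-FPM)
      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:9000;
      }
  }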

On Saturday 02 May 2009 12:39:04 Roy M. wrote:
> Hey,
>
> On Sat, May 2, 2009 at 4:24 PM, Jeff Pang <pangj@xxxxxxxx> wrote:
> > We use Squid as a reverse proxy for the popular webmail service here,
> > serving static resources like images/CSS/JS etc. In total there are 24
> > Squid boxes, each handling more than 20,000 concurrent connections. For
> > small static objects, Squid has much higher performance than Apache.
> >
> > But as I once mentioned in a message on the list, Squid can't push high
> > traffic through. I have never seen a Squid box reach a traffic flow of
> > 200 Mbit/s, while in some cases lighttpd (epoll + multiple processes)
> > can reach much higher traffic than Squid.
>
> So did you try Apache/Lighty + mod_proxy?


