
Re: calculating hardware for 900 users for SQUID cache server


 



Hi Amos.
You mean it's possible not to use a disk cache at all? My OpenBSD system is
already 64-bit.
What's your recommendation?
-- 
Regards,
Loïc BLOT, UNIX systems, security and network expert
http://www.unix-experience.fr 

On Sunday 13 January 2013 at 18:58 +1300, Amos Jeffries wrote:
> On 13/01/2013 5:44 p.m., John Joseph wrote:
> > Hi Loic
> > Your feedback was quite useful; I was able to come up with some values after checking your configuration.
> >
> > If in 3 years my user base increases to 4000, will it be overkill if I go with 32 GB of RAM?
> 
> 
> 4000 "users" (I assume that means, 1 client software == 1 user) all 
> simultaneously downloading (+1 client I/O buffer) large objects (>256KB) 
> which are *all* TCP_MISS (+1 server I/O buffer) and being REQMOD *and* 
> RESPMOD ICAP filtered (+2 network I/O buffers) will consume ~5GB of RAM 
> in Squid.
>   + 1x client I/O buffer, 64KB
>   + 1x server I/O buffer, 64KB
>   + 2x ICAP I/O buffers, 128KB
> 
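> (Back-of-envelope, reading the figures above: those four buffers come to 
> 256KB per concurrent transaction, so 4000 x 256KB is roughly 1GB; the 
> remainder of the ~5GB is presumably the in-transit object data itself, 
> on the order of 1MB per client for objects of that size.)
> 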
> The likelihood of that happening is relatively low if you are an ISP. 
> Far more likely is that you will have a mix of HIT/MISS, and not be ICAP 
> filtering some or most requests. Squid can operate happily with a few 
> hundred MB of RAM.
> 
> The big RAM consumption comes from cache_mem, which is Squid's in-memory 
> filesystem for cache storage of high-demand objects. That can consume as 
> much or as little RAM as you can throw at it, up to a limit which is 
> higher than most can afford to purchase yet.
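> 
> (For illustration only, the directive in question looks like the lines 
> below in squid.conf; the 4 GB value is an arbitrary example, not a 
> recommendation for this setup:)
> 
>    # RAM reserved for Squid's in-memory object cache,
>    # on top of the process's other memory use
>    cache_mem 4096 MB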
> 
> 
> As has been suggested earlier, *drop* the idea of "users" when 
> calculating HTTP requirements. Users are irrelevant. One single user can 
> completely max out a Gbps ethernet connection, and several thousand 
> users can happily co-share a 56Kbps uplink. The traffic request rate and 
> size are the key details for capacity planning, followed by the amount 
> of processing you are going to be performing on that traffic.
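> 
> (Purely as an illustration of that arithmetic: a 100Mbps peak of HTTP 
> traffic with a mean object size of around 20KB works out to roughly 
> 100,000,000 / 8 / 20,000, i.e. about 600 requests per second, and that 
> is the number to size the box against, however many "users" happen to 
> generate it.)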
> 
> IMO, unless you are expecting to face some particularly unusual 
> situation like hundreds of thousands of users or very high traffic rates, 
> commonly available hardware can handle the traffic easily. Look for 
> good I/O speeds with low latency on disk hardware, high write speeds for 
> any SSD hardware planned, and the rest can be governed by your available 
> budget.
> 
> > ----- Original Message -----
> > From: Loïc Blot
> >
> > Hello Joseph,
> > I use a Dell R320 (two of them, for failover), running OpenBSD 5.2 with 16GB RAM
> > and two Intel PRO/1000 PT (82571EB) NICs (the Broadcom 5720 isn't supported).
> > I have 500-600 users/smartphones and 1Gbps of WAN bandwidth.
> > To improve performance (which is why there is so much RAM), I moved
> > /var/squid/cache to mfs (a memory file system), and I use a 4G "disk"
> > cache and a 3.5G memory cache. Performance is very good.
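> >
> > (A rough sketch of that layout in squid.conf terms; the ufs cache_dir 
> > type and the L1/L2 directory values are assumptions here, only the 
> > sizes come from the description above:)
> >
> >    # /var/squid/cache is mounted as a memory file system (mfs)
> >    cache_dir ufs /var/squid/cache 4096 16 256
> >    cache_mem 3584 MB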
> 
> I'll bet. You realize they could possibly be even better by using a 
> 64-bit system and eliminating the "disk" cache? Squid swaps objects from 
> memory cache to "disk" cache and back again during regular operation. If 
> you have enough RAM to eliminate that, why bother forming a 
> configuration that keeps the overheads?
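> 
> (In squid.conf terms the suggestion amounts to giving cache_mem most of 
> the RAM you can spare and defining no cache_dir at all, so objects only 
> ever live in memory. A minimal sketch, with sizes picked purely as an 
> example:)
> 
>    # no cache_dir line: memory-only caching via cache_mem
>    cache_mem 8192 MB
>    maximum_object_size_in_memory 4 MB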
> 
> Amos


