
Re: Performances: Global rule to size a server

On Tuesday 06 September 2011 at 16:15 +1200, Amos Jeffries wrote:
> On Mon, 05 Sep 2011 20:23:02 +0200, David Touzeau wrote:
> > Dear
> >
> > I would like to create a kind of rule-of-thumb calculation in order
> > to quickly estimate the server capacity needed to run Squid...
> >
> > I know there are a lot of parameters that could make this 
> > calculation
> > more complex, but this is just to be generic.
> >
> 
>  "generic" is not possible. It boils down to what _your_ users are doing 
>  is different to _my_ users.
> 
> 
> > For example :
> >
> > I have 150 users :
> > -----------------------------------
> >
> > Memory : 750 KB per user = 150 x 750 KB = 112 MB of memory + 300 MB
> > for the system =
> > 512 MB minimum.
> > Hard disk cache : 50 MB/user = 150 x 50 MB = 7.5 GB minimum disk 
> > size
> > for the cache.
> >
> > Does that make sense?
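
As a minimal Python sketch of that rule of thumb (every figure in it is
just an assumption from my example above, not a measured value):

    # Naive per-user sizing; all inputs are assumed example values.
    users = 150
    mem_per_user_kb = 750       # assumed memory per user
    system_mem_mb = 300         # assumed OS + Squid baseline
    disk_per_user_mb = 50       # assumed disk cache share per user

    ram_mb = users * mem_per_user_kb / 1000.0 + system_mem_mb
    disk_gb = users * disk_per_user_mb / 1000.0
    print("RAM: %.0f MB, disk cache: %.1f GB" % (ram_mb, disk_gb))
    # -> RAM: ~412 MB (round up to 512), disk cache: 7.5 GB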
> 
>  Your aim makes sense in a way. The metric of "per user" does not relate 
>  to resource usage though.
> 
>  This is solely because one user could be making no requests at all or 
>  several thousand per second. And they can switch between these 
>  behaviours and random values in between without notice. I have seen a 
>  network happily serving 15K users with one Squid on a 50MBit uplink, and 
>  also a GigE network brought to its knees by just a few users. "Normal" 
>  web traffic in both cases.
> 
> 
>  The Squid-relevant metrics are more closely tied to requests per 
>  second, or to the internal Mbps line speed that the HTTP traffic requires.
> 
> 
>  Minimum stored disk size is always zero. Maximum is best kept at <80% 
>  of the size of the disk you plug in, moderated by a limit of at most 
>  2^24 objects per cache_dir. That is a fixed limit, so performance there 
>  is relative to your average cacheable object size. Disks are best sized 
>  along the lines of:  2^24 * average object size. For the overall disk 
>  count, multiply that out by requests-per-second over the time you want 
>  things to stay cached, or until you run out of $$ to buy disks.
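
If I read that correctly, a rough Python sketch of the disk sizing (the
average object size, request rate, and retention period are assumed
figures for illustration only):

    import math

    MAX_OBJECTS = 2 ** 24         # fixed per-cache_dir object limit
    avg_object_kb = 32            # assumed average cacheable object size

    # Useful size of one cache_dir before the object limit bites:
    dir_gb = MAX_OBJECTS * avg_object_kb / (1024.0 * 1024)   # ~512 GB
    disk_gb = dir_gb / 0.8        # keep the cache_dir under ~80% of disk

    req_per_sec = 100             # assumed request rate
    retention_days = 7            # assumed time objects should stay cached
    # Ignoring the cacheable fraction, i.e. as if every request stored
    # one object:
    objects = req_per_sec * 86400 * retention_days
    dirs_needed = math.ceil(objects / float(MAX_OBJECTS))    # -> 4 here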
> 
> 
>  Memory consumption is dominated by the disk index (10-15 MB of index 
>  per 1 GB of storage) and by cache_mem storage, which will suck up every 
>  spare byte you can give it until it starts to suffer those 2^24 object 
>  limits itself. Per-user/connection requirements are measured in KB, so 
>  these days they are not really worth worrying about.
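
And the matching RAM estimate from that index rule (the disk cache size
and the cache_mem value here are assumptions):

    disk_cache_gb = 512           # assumed total on-disk cache
    index_mb_per_gb = 15          # top end of the 10-15 MB per GB rule
    index_mb = disk_cache_gb * index_mb_per_gb      # 7680 MB of index
    cache_mem_mb = 2048           # assumed cache_mem; give it spare RAM
    ram_needed_mb = index_mb + cache_mem_mb         # plus OS headroom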
> 
> 
>  Amos
> 

Many thanks Amos

That is clear...
A simple rule of thumb would still be useful when an IT team needs to
know quickly which server to order according to the number of users,
but I agree with your arguments...
