Re: Squid sizing for url filtering and lots of users

>
> It was from reading the FAQ that I supposed that in the worst case I'd need 400
> gigabytes of cache storage

 I stated: roughly one week of traffic generated by this
 particular community.

> , and (about) 10 gigabytes of phys. mem. On the
> physical memory, I'm not so sure, because in the FAQ I read about 10 MB for
> every GB of disk cache storage, plus what is needed for cache_mem, plus the
> RAM used by the OS to cache disk IO. From other sources I read 32 MB per GB
> of disk storage.

  Trust the FAQ (only).
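
  To make that concrete (a rough sketch using the FAQ's 10 MB per GB
  rule; the cache_mem figure here is just an assumed example):

      400 GB disk cache x 10 MB/GB  = ~4 GB of index metadata
      + cache_mem (say, 1-2 GB)     = ~5-6 GB for Squid itself
      + OS disk IO caching / headroom

  which is how a worst-case figure of around 10 GB of physical RAM
  comes about.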

> I don't want to underestimate the need for physical RAM, so
> I'm taking the "worst" case. I just don't know, and consequently I wonder
> whether Linux+Squid scales well to this amount of RAM and disk.
>
> >  - On average usage, 12,000 users could lead to the 300 req/sec range,
> >    which is rather high-end.
> >    I would advise a low-end server with the highest CPU GHz available.
> >    In that case I would probably use two, with load balancing.
>
> Do you think that LVS would be a good choice for load balancing?
> And should the servers (which could also be more than two, if advisable)
> form a cache array? This should give two benefits: if a client requests an
> object that is in the other server's cache, it is retrieved from there and
> not from the Internet, and the amount of cache storage needed should be
> reduced by roughly a factor of two.
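
 (As a sanity check on my earlier estimate: 300 req/sec across 12,000
 users works out to 300/12000 = 0.025 req/sec, i.e. roughly 90 requests
 per hour per user, which is plausible for ordinary web browsing.)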

 I have never used load balancing so I can't advise; however, here is
 another interesting link I found w.r.t. load-balancing software:

          http://www.inlab.de/balanceng/
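
 If you do set up two Squids as siblings (a minimal sketch, untested by
 me, with placeholder hostnames and the default 3128/3130 ports), the
 squid.conf on each box would look something like:

          # on cache1 (hostnames here are placeholders)
          cache_peer cache2.example.com sibling 3128 3130 proxy-only
          # on cache2
          cache_peer cache1.example.com sibling 3128 3130 proxy-only

 The proxy-only option keeps each box from storing local copies of
 objects fetched from its sibling, which is what would give you the
 storage reduction you mention.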

 M.

