Hello all,
I am planning to use Squid as an HTTP accelerator to benefit from its
great caching capabilities. It will be set up on a Windows 2003 server.
Ideally, I would like to dedicate a huge amount of disk space to the
cache, something in the hundreds of gigabytes (the reason is that I need
to cache a very large amount of data that takes a long time to generate
and almost never changes).
Since it is currently hard to test at these sizes, I am wondering if
anyone has experience or tips with such a setup. My questions are:
- what is a realistic maximum size to give to a single cache_dir directive?
- is it better to have ten cache_dir entries of 100GB each or a single
cache_dir of 1TB, for instance? (A sketch of what I have in mind follows
this list.)
- the expected average size of an entry in the cache will be around
100KB, which means a 100GB cache will hold around 1 million entries...
are there any issues (memory?) at these numbers? I've tested 100,000
entries with success (and very good speed), but what about 1 million or
more? (A rough memory estimate follows below.)
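
For reference, here is the kind of configuration I have in mind for
splitting the cache across several cache_dir lines. This is only a
sketch, untested at this scale: the drive paths are made up, and the ufs
store type and the L1/L2 directory counts (64 and 256 here) are my own
guesses; sizes are given in MB per the cache_dir syntax:

    # one 100GB (102400 MB) cache_dir per disk
    cache_dir ufs d:/squid/cache1 102400 64 256
    cache_dir ufs e:/squid/cache2 102400 64 256
    # ...and so on, up to the total cache size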
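
And here is the back-of-the-envelope estimate behind my memory question,
assuming roughly 100 bytes of in-memory index overhead per on-disk
object (a figure I have seen quoted for Squid's StoreEntry index; please
correct me if it is off):

    1 000 000 objects (100GB at 100KB each) x ~100 bytes = ~100 MB of RAM
    10 000 000 objects (1TB at 100KB each)  x ~100 bytes = ~1 GB of RAM
    (on top of cache_mem and per-request overhead)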
By the way, there won't be many concurrent users, so the hit rate won't
be much of a factor. Also, considering the size of the files served and
the time needed to create them (3 seconds on average), I don't think the
request rate will be a real issue.
So I am just wondering whether a huge cache_dir will slow things down
and what the optimal configuration would be.
Thanks to all,
Martin Sévigny
AJLSM