cache_dir ufs /usr/local/cache/1 3500 128 256
cache_dir ufs /usr/local/cache/2 2500 128 256
I'd strongly suggest using "aufs" instead of "ufs".
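For reference, switching those existing lines to aufs only changes the scheme keyword (a sketch keeping the same sizes and L1/L2 values):

cache_dir aufs /usr/local/cache/1 3500 128 256
cache_dir aufs /usr/local/cache/2 2500 128 256

aufs performs disk I/O in a pool of background threads, so the main Squid process does not block on reads and writes the way it does with plain ufs.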
If I have 1 Mbps for a week, my cache size should be about 76 GB, but right now I only have 6 GB.
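As a rough check of those numbers (assuming 1 Mbit/s sustained around the clock for a week):

1 Mbit/s = 0.125 MB/s
0.125 MB/s x 604,800 s (one week) = 75,600 MB, roughly 76 GB

while the current cache_dir entries only add up to 3500 + 2500 = 6000 MB, roughly 6 GB.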
Will my cache lookups get slower if my disk cache is that big?!
You can, but why would you want to? The suggestion is one cache_dir per
spindle to spread the IO load. Putting multiple partitions on one
spindle makes about the same sense as multiple cache_dirs in the same
partition. Access to all of them will be contending for the limited IO
resources available.
What do you mean by the word "spindle"? What does it refer to?
This is entirely dependent on the filesystem you are using and the
number of objects you cache. The goal is to keep the number of files per
directory reasonable, because most filesystems are not optimized for a
"large" ratio (10s of thousands of files per directory).
I'm using Debian 5 with ext3 fs.
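For a sense of scale with the default 128 x 256 layout (a rough estimate, assuming Squid's often-quoted ~13 KB average object size):

128 x 256 = 32,768 second-level directories
6000 MB of cache / ~13 KB per object = roughly 470,000 objects
470,000 / 32,768 = roughly 15 files per directory

so on ext3 the default structure is nowhere near the "10s of thousands of files per directory" problem zone; it only becomes a concern with much larger caches.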
----- Original Message -----
From: "Chris Robertson" <crobertson@xxxxxxx>
To: <squid-users@xxxxxxxxxxxxxxx>
Sent: Tuesday, June 23, 2009 11:07 PM
Subject: Re: cache size and structure
Riccardo Castellani wrote:
I'm preparing a new Squid machine and I'm defining the cache size.
The old Squid had 2 entries for the "cache_dir" directive:
cache_dir ufs /usr/local/cache/1 3500 128 256
cache_dir ufs /usr/local/cache/2 2500 128 256
I'd strongly suggest using "aufs" instead of "ufs".
My cache traffic volume (I/O) averages about 2 Mbps over a week, with peaks of 3 Mbps.
This Squid cache is the parent of 2 other Squid machines and it serves about 1000 users.
1- I read that you suggest one cache_dir per partition; why can't I use 2 folders in the same partition?!
You can, but why would you want to? The suggestion is one cache_dir per
spindle to spread the IO load. Putting multiple partitions on one spindle
makes about the same sense as multiple cache_dirs in the same partition.
Access to all of them will be contending for the limited IO resources
available.
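In other words, if the box has two separate physical disks, the layout that matches that advice would look something like this (hypothetical mount points, one per drive):

cache_dir aufs /cache/disk1 3500 128 256
cache_dir aufs /cache/disk2 2500 128 256

Each cache_dir then has a whole disk's worth of I/O to itself instead of both competing for the same heads.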
2- What do you think of my cache sizes? 3500 and 2500?
Depends on your memory load. A larger cache leads to storing more
objects, which requires more memory to track. The suggestion I recall is
"a week's worth of traffic". If you are seeing an average of 2Mbit/s 24
hours a day, seven days a week, that would lead to a cache of around
150GB.
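Spelled out, that estimate (assuming a constant 2 Mbit/s around the clock) is:

2 Mbit/s = 0.25 MB/s
0.25 MB/s x 604,800 s (one week) = 151,200 MB, roughly 150 GB

which, split evenly over the two existing directories, would mean cache_dir entries of about 75,000 MB each.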
And what about its directory structure (128, 256)?!
This is entirely dependent on the filesystem you are using and the number
of objects you cache. The goal is to keep the number of files per
directory reasonable, because most filesystems are not optimized for a
"large" ratio (10s of thousands of files per directory).
Chris