Riccardo Castellani wrote:
cache_dir ufs /usr/local/cache/1 3500 128 256
cache_dir ufs /usr/local/cache/2 2500 128 256
I'd strongly suggest using "aufs" instead of "ufs".
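Assuming your Squid was built with aufs support compiled in (the
--enable-storeio configure option controls which store types are
available), your two lines would simply become:

cache_dir aufs /usr/local/cache/1 3500 128 256
cache_dir aufs /usr/local/cache/2 2500 128 256

aufs does its disk IO in separate threads, so the main Squid process
doesn't block waiting on the disk the way it does with plain ufs.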
If I cache a week's worth of traffic on a 1 Mbps link, my cache size
should be about 76 GB, but right now I only have 6 GB.
Will my cache lookups slow down if my disk cache is that big?
As long as fetching from disk is faster than fetching from the network,
your service time will improve. A week's worth of traffic is a
reasonable guesstimate to start from when sizing the cache. For best
performance, you are going to have to monitor the effect of any changes
you make.
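For the record, the arithmetic behind the 76 GB figure above:

1 Mbit/s / 8 bits per byte * 604,800 seconds/week ~= 75.6 GB

assuming the link runs flat out around the clock; real links rarely
do, so the true requirement is usually somewhat lower.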
You can, but why would you want to? The suggestion is one cache_dir
per spindle to spread the IO load. Putting multiple partitions on
one spindle makes about the same sense as multiple cache_dirs in the
same partition. Access to all of them will be contending for the
limited IO resources available.
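For example, with two physical disks mounted at /cache1 and /cache2
(hypothetical mount points), you would split your cache like this
rather than carving both cache_dirs out of one disk:

cache_dir aufs /cache1 3500 128 256
cache_dir aufs /cache2 2500 128 256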
What do you mean by the word "spindle"? What does it mean?
A physical disk.
This is entirely dependent on the filesystem you are using and the
number of objects you cache. The goal is to keep the number of files
per directory reasonable, because most filesystems are not optimized
for very large directories (tens of thousands of files per directory).
I'm using Debian 5 with ext3 fs.
Which is a fine (stable, well understood) choice. You might want to
look into the "noatime" and "nodiratime" mount options, which
eliminate the write needed to update access timestamps every time a
file or directory is read.
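A hypothetical /etc/fstab entry for a dedicated cache partition
(adjust the device and mount point to match your system):

/dev/sdb1  /usr/local/cache  ext3  noatime,nodiratime  0  2

You can also apply the options to an already-mounted filesystem with:

mount -o remount,noatime,nodiratime /usr/local/cache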
http://wiki.novell.com/index.php/File_System_Primer#File_System_Comparison
states that without the relatively recent "htree" feature (also known
as dir_index) enabled, you should not exceed 5,000 files per directory.
"tune2fs -l /dev/sdXY" will show whether you have dir_index enabled (it
will be listed
in "Filesystem features"). According to
http://www.nabble.com/Re%3A-Recommended-max.-limit-of-number-of-files-per-directory--p22722654.html,
if you have dir_index enabled you can successfully put 1,000,000 files
in a directory before "ext3 start to suck".
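To illustrate (the device name here is hypothetical and the exact
feature list will vary), the check looks like:

tune2fs -l /dev/sda3 | grep 'Filesystem features'
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype sparse_super

If dir_index is missing, it can be enabled on an existing ext3
filesystem with "tune2fs -O dir_index /dev/sdXY" and then "e2fsck -fD
/dev/sdXY" run on the unmounted filesystem to rebuild the existing
directories as htrees.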
Chris