I was wondering how useful a very large cache storage can be. For example, would having about 2 Terabytes of cache storage be helpful, or would I run into some kind of problem (with file descriptors, for example)? Are there any studies on this matter?

Going on: assuming such a big storage size, is it better to create many cache directories? I imagine that Squid, whenever it needs to fetch something from the cache, could then run many parallel lookups over smaller indexes, or am I wrong?

Looking at squid.conf, it seems that I can only specify a max object size for each cache_dir, right? Am I missing something? The idea behind this question is to create two different cache_dirs, one for normal web documents and the other for big multimedia files, so that I could even use a different replacement policy for each cache_dir (say, GDS for web documents and LRU for multimedia files). But it seems that I cannot specify different replacement policies for different cache_dirs, right?

So here comes the final question: I was thinking of running two Squids on the same machine, one as a frontend to the other. The frontend Squid would cache web documents using GDS, and the other Squid would cache multimedia files using LRU, with the two Squids communicating via ICP. But whenever the frontend Squid asks the other Squid via ICP for a multimedia file, the frontend Squid will cache that file itself too, right? That would be a waste of space, and it would make running two different Squids for different kinds of content pointless!

Any suggestions/answers? TIA!

Marco Crucianelli
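For what it's worth, here is a minimal squid.conf sketch of the split-by-size idea. The paths and sizes are placeholders I made up, and whether the min-size/max-size cache_dir options and the per-directory effect of cache_replacement_policy are available depends on the Squid version, so treat this as a sketch to check against your release's squid.conf.documented:

```
# Hypothetical example -- directory paths and size limits are placeholders.
# Small objects (ordinary web documents) go to one cache_dir.
cache_replacement_policy heap GDSF
cache_dir aufs /cache/web 512000 64 256 max-size=1048576

# Big multimedia objects go to another. In some Squid versions a
# cache_replacement_policy line applies to the cache_dir lines that
# follow it, which would give a different policy per directory.
cache_replacement_policy lru
cache_dir aufs /cache/media 1536000 64 256 min-size=1048576
```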
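And here is a sketch of the two-Squid setup described above, with hypothetical ports and made-up ACL names. If I read the cache_peer documentation right, its proxy-only option tells the frontend not to store objects fetched from that peer, which is exactly the double-caching concern:

```
# Frontend Squid (ports are hypothetical) -- caches web documents itself
# and forwards multimedia requests to the backend Squid on the same box.
http_port 3128
icp_port 3130
cache_peer 127.0.0.1 parent 3129 3131 proxy-only

# Send only multimedia URLs to the backend (ACL name and extensions
# are illustrative, not a recommendation).
acl multimedia urlpath_regex -i \.(avi|mpg|mp3|mov)$
cache_peer_access 127.0.0.1 allow multimedia
cache_peer_access 127.0.0.1 deny all
# The proxy-only flag above should keep the frontend from storing
# objects it gets from the multimedia peer.
```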