Hey Liam,
If you can run a small test on the access.log, it would give us a bit more
information without intruding down to the URL level:
cat access.log | awk '{print $4}' | sort | uniq -c
The result will be a small set of statistics about the "character" of your usage.
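If it helps, here is a variant of the same idea that keeps only the cache
result code and drops the HTTP status (this assumes the default native log
format, where field 4 looks like TCP_MISS/200):
awk '{split($4, a, "/"); print a[1]}' access.log | sort | uniq -c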
Since browsers tend to cache content themselves, sometimes that cache is
serving you something you cannot see just by looking for a "HIT" in one
form or another.
I remember that most squid analysis tools only test for "HIT"
objects, ignoring all other aspects of the cache.
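If you want a quick count of requests served from the cache in any form
(assuming the usual result codes such as TCP_HIT, TCP_MEM_HIT, TCP_IMS_HIT
and TCP_REFRESH_HIT), something like this should do:
grep -cE 'TCP_(MEM_|IMS_|REFRESH_|NEGATIVE_)?HIT' access.log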
If you have tried djmaza as I suggested and have not seen a single
HIT while surfing it, then something is indeed strange.
I myself use squid 3.4.5 and I do see that the rate of HITs is not high,
but I understand why that can happen, and sometimes even why it does
happen.
Once you have the results of the access.log parsing, we will be smarter.
Eliezer
On 06/30/2014 05:05 AM, liam@xxxxxx wrote:
I have tried deleting the cache and setting its size to 10GB. I ran squid -z
again and it created the directories before it froze. The maximum
object size is set to 5GB, and I have checked some sites using redbot.org to
see if they can be cached or not. It says that they can, and I have had the
squid proxy running for about 48 hrs now with about 50 clients connected. I
have scanned the access.log and there is not a single hit, even if the same
page is requested many times.
Are there some settings that I am missing in squid.conf that are stopping the
cache from working? Do you know where I can obtain an already compiled x86
package for Debian 7 with --enable-ssl and --enable-ssl-crtd?
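For reference, the kind of cache settings I mean would look something like
this in squid.conf (the path and the ufs cache_dir type are only placeholders):
maximum_object_size 5 GB
cache_dir ufs /var/spool/squid 10240 16 256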
Thanks for your help so far.