On 25/08/2013 11:20 a.m., HillTopsGM wrote:
> In connection with my last post, I also had this question:
>
> Let's say that with my 4GB of RAM I decided to create a total cache
> storage area of 650GB; obviously the index would be much larger than
> could be stored in RAM.
>
> If my primary purpose was to 'archive' my Windows updates, I'd expect
> that it would take the system only a couple of seconds to review the
> index that would spill over to the drive, and then we'd be back in
> business for the updates - no?
Sort of. This "couple of seconds delay" would happen on *every* HTTP
request to the proxy.
> I simply want the proxy to help serve updates of all programs -
> Windows, browser updates like Firefox, Thunderbird, Adobe Reader,
> Skype, nVidia driver updates (100's of MB at a crack), etc, etc.
>
> I was thinking of creating a rule (maybe someone could help me write
> it so it makes sense) that all sites would be accessed directly and
> told NOT TO BE cached.
You seem to have the common misunderstanding about what DIRECT means.
HTTP permits an arbitrarily long chain of proxies:
client->A->B->C->D->E->F->..... -> origin server
always_direct causes Squid to ignore any cache_peer you have configured
and use a DNS lookup to fetch the object DIRECT-ly from the origin
server, giving an error if the DNS lookup produces no results or is not
working.
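For example (an untested sketch; the ACL name and domain here are only
placeholders for whatever you actually want to match):

  # fetch anything matching this ACL straight from the origin
  # server, bypassing every configured cache_peer
  acl updates dstdomain .windowsupdate.com
  always_direct allow updates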
never_direct does the opposite: it forces Squid to ignore DNS for the
domain being requested and send the request to a cache_peer instead,
giving an error if the cache_peers are unavailable.
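Again as a sketch (the peer hostname and port are just examples):

  # relay every request through the parent proxy; never go DIRECT
  cache_peer proxy.example.com parent 3128 0 no-query default
  never_direct allow all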
So:
* Squid always services the request received.
* "cache deny xxx" only prevents the response matching xxx from being
stored; nothing more.
* "refresh_pattern" operates on already-stored content, determining
whether it can be served as a HIT or needs REFRESH-ing (see the sketch
after this list).
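To illustrate those last two points (untested; the domain, file types
and timings are only examples, not a recommendation):

  # never store responses from these sites
  acl nostore dstdomain .example.com
  cache deny nostore

  # let already-stored installer files remain fresh (HIT) for up to
  # 30 days: min 0 minutes, 80% of object age, max 43200 minutes
  refresh_pattern -i \.(cab|exe|msi|msu)$ 0 80% 43200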
Amos