
Re: Shared cache directory.

On 24/06/17 04:05, Eduardo Carneiro wrote:
Hi everyone.

Squid version 3.5.19.

I need to set up load balancing: something like 3 servers sharing the client traffic between them. I already have that part working.

But when the cache directories are split across the servers, my hit rates decrease.

Correct.



I'd like to know if there is any way to have more than one Squid server share the same cache directory.

Not in the way you seem to be thinking of.



I have already tried this using cache_peer together with follow_x_forwarded_for. But because of ssl_bump that solution did not work for me: on HTTPS requests the client IP was not visible.
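
(For context, the kind of setup being described is roughly the sketch below; the hostnames, addresses and port numbers are illustrative only, not taken from this thread.)

  # Frontend: relay everything to the backends and pass the client
  # address along in X-Forwarded-For.
  cache_peer backend1.example.net parent 3128 0 round-robin no-query no-digest
  cache_peer backend2.example.net parent 3128 0 round-robin no-query no-digest
  cache_peer backend3.example.net parent 3128 0 round-robin no-query no-digest
  never_direct allow all
  forwarded_for on

  # Backends: trust X-Forwarded-For only from the frontend, so ACLs and
  # logs see the real client address. This is the part reported above as
  # breaking for bumped HTTPS traffic.
  acl frontend src 192.0.2.10
  follow_x_forwarded_for allow frontend
  follow_x_forwarded_for deny all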

Doing SSL-Bump effectively requires that the proxy terminating the TLS be the one doing the caching. Passing the traffic on to a peer has major problems with certificate mimicking.


If you are intercepting port 443, you should be able to load-balance by destination IP to maximize the hit ratio.

That implies that a traditional CARP cache_peer installation, which is the solution to this problem for plain HTTP, should work almost as well for HTTPS. Just do the CARP hashing on the destination IP of the fake CONNECT requests the 'intercept' https_port generates, and put SSL-Bump in the backends, which do the caching.
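
As a rough sketch of what that might look like on 3.5 (hostnames, ports, certificate paths and cache_dir settings are illustrative only, and the NAT/TPROXY plumbing for interception is omitted):

  # Frontend: intercepts port 443, never decrypts, never caches.
  # The fake CONNECT generated for each intercepted connection uses the
  # original destination IP as its URI, so CARP hashing on the request
  # URI is effectively destination-IP hashing.
  https_port 3129 intercept ssl-bump cert=/etc/squid/frontend-ca.pem
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice all
  cache_peer backend1.example.net parent 3128 0 carp no-query no-digest
  cache_peer backend2.example.net parent 3128 0 carp no-query no-digest
  cache_peer backend3.example.net parent 3128 0 carp no-query no-digest
  never_direct allow all
  cache deny all

  # Backends: receive the CONNECTs from the frontend, bump and cache.
  http_port 3128 ssl-bump cert=/etc/squid/backend-ca.pem
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump bump all
  cache_dir aufs /var/spool/squid 20000 16 256

Because each destination IP then hashes consistently to the same backend, an origin's objects end up cached in exactly one place instead of being duplicated across all three servers, which is what preserves the hit ratio.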


Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



