Re: Question regarding failover

Markus Meyer wrote:
Hi all,

I searched for answers to my questions but wasn't able to find anything useful. If there is documentation around please give me a hint.

I'm currently testing Squid3 as reverse proxy and we're using two backend webservers from which Squid should pull the files. Squid is configured like this:

http_port 80 accel vport defaultsite=origname

'origname' is a little weird for a public domain name. I hope it's an obfuscated name and not what's actually there.

# define the backend servers and swap between them
cache_peer server1 parent 80 0 no-query originserver round-robin
cache_peer server2 parent 80 0 no-query originserver round-robin

Hopefully that's correct; in any case it seems to work ;)
What I'd like to know is if the above still works when one of the servers goes down and how it works.

When one of the peers goes down, Squid detects the failure and sends all requests to the other peer until the dead one comes back, whereupon both get used again.
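To make that concrete, here is a sketch of how the failover behaves with the config above (the connect_timeout value is just an illustration, not a recommendation; the "roughly 10 failures" figure is the commonly documented default, so treat it as an assumption to verify against your Squid version):

# Round-robin between two origin servers. If one stops answering,
# Squid marks it DEAD (after roughly 10 consecutive connection
# failures by default) and routes all traffic to the survivor,
# probing the dead peer periodically until it revives.
cache_peer server1 parent 80 0 no-query originserver round-robin
cache_peer server2 parent 80 0 no-query originserver round-robin

# Optional: shorten how long Squid waits on an unresponsive backend
# before giving up on each connection attempt.
connect_timeout 5 seconds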


Also I want to break up the current RAID-5 and use the disks instead as single disks so that I have four or five "cache_dir" entries. What happens if one disk breaks? Does Squid still work and how?

No. Unfortunately, the effect on Squid of one small cache_dir breaking is currently the same as one large cache_dir breaking.

What it does give you is the ability to quickly update squid.conf (comment out the broken cache_dir) and restart with the remaining cache_dir entries. There is also a very small gain in CPU and other resources which RAID was taking away from Squid.
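A minimal sketch of what that squid.conf change might look like; the mount points and sizes here are made up for illustration:

# Four separate disks, one cache_dir each, instead of one RAID-5 volume.
# Format: cache_dir ufs <directory> <size-in-MB> <L1-dirs> <L2-dirs>
cache_dir ufs /cache1 50000 16 256
cache_dir ufs /cache2 50000 16 256
cache_dir ufs /cache3 50000 16 256
cache_dir ufs /cache4 50000 16 256

# If the disk behind /cache3 dies, comment out its line and restart:
# cache_dir ufs /cache3 50000 16 256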


Amos
--
Please use Squid 2.7.STABLE3 or 3.0.STABLE7
