Serge Fonville wrote:
Hi, I am in the process of setting up a two-node cluster, with Squid running on the private side, and I am looking into how Squid stores its data. Basically, I am trying to determine what data can be shared between instances of Squid. Say a host opens a connection to a remote location and collects all kinds of state information, and then that Squid goes down: will that information be kept? Can a download that is broken halfway (due to the Squid going down) still continue, happily using the other node (which has the same IP)?
The download-in-progress will be interrupted. If a range request is made, asking for the "rest" of the object, the remaining Squid will honor that request, but the object fragment will not be cached (depending on your range_offset_limit).
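That behaviour hinges on range_offset_limit. A minimal sketch of the two extremes (the values here are illustrative, not a recommendation):

    # Forward range requests as-is; never fetch more than the client
    # asked for, so the fragment is not cached (0 is the default).
    range_offset_limit 0 KB

    # Always fetch the whole object so it becomes cacheable, even when
    # the client only asked for a range (can waste bandwidth on large
    # objects that are never requested again).
    range_offset_limit -1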
Are there any specific things I need to look into? I intend to set up a cluster consisting of the following:

GlassFish
PostgreSQL
Nagios
Postfix
Squid
Heartbeat
Subversion
Named
DHCPd
TFTPd
Apache HTTPd
DRBD (dual primary with either GFS2 or OCFS2)
ldirectord or keepalived (all traffic is balanced between the two real servers, and both nodes should be active)

It will probably run on either Gentoo or CentOS x64. What are the important things in regard to Squid that I need to pay special attention to? Things I can imagine:

IP address sharing
Works fine.
Storage (cache) sharing
Not supported.
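The usual substitute for shared cache storage is to make the two Squids siblings of each other over ICP, so each node can serve hits out of the other's cache. A minimal sketch, assuming hypothetical node names node1 and node2 and the stock ports:

    # on node1: treat node2 as a sibling; proxy-only means "fetch hits
    # from the peer but don't store a second local copy"
    cache_peer node2 sibling 3128 3130 proxy-only
    icp_port 3130

    # node2 gets the mirror image, pointing back at node1:
    # cache_peer node1 sibling 3128 3130 proxy-only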
Synchronization of configuration
To a point. "cache_peer" and either "visible_hostname" or "unique_hostname" will be different. But you can use an "include" to achieve that.
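A minimal sketch of that split (the paths and hostnames are illustrative):

    # squid.conf on node1: only the per-node directives live here
    visible_hostname proxy1.example.com
    cache_peer node2 sibling 3128 3130 proxy-only

    # everything both nodes share is pulled in from a single file, which
    # can be kept in sync between the nodes (e.g. on the DRBD volume)
    include /etc/squid/common.conf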
Sharing of logon data to squid (I intend to use form based authentication for squid)
This will have to be accomplished by your external_acl_type helper.
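The wiring might look like the sketch below; the helper /usr/local/bin/check_session is hypothetical, and for two nodes it would need to consult a session store both Squids can reach (the PostgreSQL instance in your list is a natural place), so that a login made through one node is also visible to the other. The helper receives one line per lookup on stdin (here the client IP, from %SRC) and answers OK or ERR:

    # ask an external helper whether this client has a valid session;
    # ttl=60 caches each answer for a minute to keep helper load down
    external_acl_type session ttl=60 %SRC /usr/local/bin/check_session
    acl logged_in external session

    # send clients without a session to the login form instead of a
    # plain error page
    deny_info http://login.example.com/ logged_in
    http_access deny !logged_in
    http_access allow logged_in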
I am aware of the fact that the majority of this setup is in no way relevant to Squid, but it may impact it (I cannot yet determine that). I am especially interested in anything relevant to availability, performance, and load balancing. Any help is greatly appreciated!

Regards,
Serge Fonville
Chris