On Wed, 8 Jun 2011 11:06:33 -0600, Matthew Scalf wrote:
> I am running squid 3.2. I started with this so I could use the random
> acl to "load balance" requests across 20 outgoing IP addresses. I
> definitely got that working, but what I found was that the randomness
> across the 20 IPs wasn't very evenly distributed. So I opted to set up
ACLRandom uses your OS random() function. It is unlikely, but possible,
that that function is producing non-random numbers.

It fetches a new random value with every test against the ACL, so if
you call it multiple times in a row you need to be aware of the
mathematical effects of multiplying probabilities together.

This is why example configuration #3 uses (1:3 vs 1:2 vs everything)
to split evenly into thirds. (NP: I've just corrected a typo in example
#2, which should have shown the same.)
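As a sketch, that cascading pattern looks something like this (the
192.0.2.x addresses are just placeholders for your outgoing IPs):

  # split traffic into thirds with cascading random ACLs; each ACL
  # is re-rolled on every test, so each ratio applies only to the
  # traffic left over from the previous test
  acl third_1 random 1/3    # 1/3 of everything
  acl third_2 random 1/2    # 1/2 of the remaining 2/3 = 1/3 overall
  tcp_outgoing_address 192.0.2.1 third_1
  tcp_outgoing_address 192.0.2.2 third_2
  tcp_outgoing_address 192.0.2.3    # whatever is left = final 1/3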
> the more complex cache_peer configuration using round-robin so it
> would be equally distributed. I now have that working, and when I
> test it I get some interesting results. I basically have a PHP
> script that connects and requests another PHP script that checks
> the IP address and feeds the result back to the main script. If I
> set this to run 100 times, the IPs are equally distributed. This is
> the same all the way up to about 900 runs. Once you hit that sweet
> spot, between 800 and 900, the distribution becomes slightly uneven.
> By a very small percentage, but uneven still. What is causing this?
> I am scratching my head, because it's not like one of the peers
> stops accepting connections completely, otherwise the results would
> be more off.
"plain" round-robin or weighted-round-robin ? or with weight=N bias?
> I have it set up so that there is a main squid instance listening on
> a public IP and port. Then I have a second squid instance listening
> on an internal port for the 20 IPs. I originally tried to set up
> squid so that the same instance provided both the parent and child
> peers, but the ACLs for that became too complex to manage
> efficiently. Any help would be greatly appreciated.
>
> Matt
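For reference, a minimal sketch of the two-instance arrangement as I
understand it (ports, names and addresses are only illustrative, and
only 2 of the 20 outgoing IPs are shown):

  # frontend instance: public listener, relays everything to the
  # backend ports in round-robin rotation
  http_port 203.0.113.1:3128
  cache_peer 127.0.0.1 parent 4001 0 no-query round-robin name=out1
  cache_peer 127.0.0.1 parent 4002 0 no-query round-robin name=out2
  # (repeat one cache_peer line per outgoing IP)
  never_direct allow all

  # backend instance: one named listening port per outgoing address,
  # mapped to its tcp_outgoing_address via a myportname ACL
  http_port 127.0.0.1:4001 name=out1
  http_port 127.0.0.1:4002 name=out2
  acl via_out1 myportname out1
  acl via_out2 myportname out2
  tcp_outgoing_address 198.51.100.1 via_out1
  tcp_outgoing_address 198.51.100.2 via_out2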