2 node squid cluster

Hi,

I'm struggling with the following:

What I would like to set up is a redundant pair of Squid servers acting as a front-end accelerator for my web servers. I would also like squidserver1 to check with squidserver2 whether it has a 'fresh' copy of a page before consulting the backend web server. Of course, to prevent loops, if squidserver2 doesn't have the page it shouldn't try to resolve it itself; squidserver1 should then fetch the page from the backend directly.

Now I've come a long way. In fact, I think I'm close (but yet so far...).

I tried to set this up with siblings. Here is the part that matters from my squid configuration (the same on both squids):

http_port                       squid-test:80 vhost
icp_port                        3130
udp_incoming_address            squid-test
cache_peer sibling-test sibling 80 3130 login=PASS
cache_peer backend-test parent 80 0 originserver no-query login=PASS

In the hosts file of squidserver1, sibling-test resolves to squidserver2 and vice versa; squid-test resolves to the local IP address, and backend-test resolves to the web server. This works exactly as I want for pages that have never been in the cache. However, if a page is in the cache but stale, the sibling is never consulted. So here goes: first three working scenarios to show how far I've got, and finally the scenario I'd like to solve:
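To make the name mapping concrete, the relevant /etc/hosts entries on squidserver1 would look something like this (the IP addresses are placeholders, not my real ones; on squidserver2 the sibling-test entry points back at squidserver1):

```
192.0.2.1   squid-test      # local address of squidserver1
192.0.2.2   sibling-test    # squidserver2
192.0.2.3   backend-test    # the backend web server
```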

Scenario 1 (expected):
squidserver1: page is fresh
squidserver2: page is fresh
- Client requests the page from squidserver1
- Squidserver1 detects the page is fresh and delivers it to the client

Scenario 2 (expected):
squidserver1: page is not present in the cache
squidserver2: page is fresh
- Client requests the page from squidserver1
- Squidserver1 detects the page is not present in the cache
- Squidserver1 sends an ICP request to squidserver2 to check if it has a fresh copy of the page
- Squidserver2 replies with ICP_HIT
- Squidserver1 requests the page from squidserver2
- Squidserver1 receives the page from squidserver2, stores it locally and delivers it to the client

Scenario 3 (expected):
squidserver1: page is not present in the cache
squidserver2: page is expired
- Client requests the page from squidserver1
- Squidserver1 detects the page is not present in the cache
- Squidserver1 sends an ICP request to squidserver2 to check if it has a fresh copy of the page
- Squidserver2 replies with ICP_MISS
- Squidserver1 consults the backend webserver for the page
- Squidserver1 receives the page from the backend, stores it locally and delivers it to the client

Scenario 4 (strange):
squidserver1: page is stale
squidserver2: page is fresh
- Client requests the page from squidserver1
- Squidserver1 detects the page is expired
- Squidserver1 *DOES NOT* send an ICP request to squidserver2 to check whether it has a fresh copy of the page (it sends nothing at all to squidserver2; I verified this with tcpdump)
- Squidserver1 consults the backend webserver for the page
- Squidserver1 receives the page from the backend, stores it locally and delivers it to the client

Logging of squidserver1 during Scenario 4 shows that it does not see any neighbors:
2009/02/09 13:06:21| neighborsCount: 0
2009/02/09 13:06:21| peerSelectIcpPing: counted 0 neighbors
I can assure you, however, that the neighbor is there; I've tested all four scenarios multiple times.
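For completeness, the ICP exchange from scenario 2 can also be probed by hand, independent of Squid's own peer selection. This is only a sketch I put together from the ICP v2 packet layout in RFC 2186 (the opcodes are hard-coded; hostnames and the icp_port 3130 match my config above):

```python
import socket
import struct

ICP_OP_QUERY = 1
ICP_OPCODES = {2: "ICP_HIT", 3: "ICP_MISS", 4: "ICP_ERR",
               21: "ICP_MISS_NOFETCH", 22: "ICP_DENIED"}

def build_icp_query(url, reqnum=1):
    """Build an ICP v2 query packet (RFC 2186).

    Layout: 20-byte header (opcode, version, length, reqnum, options,
    option data, sender address), then a 4-byte requester address and
    the null-terminated URL as payload.
    """
    payload = struct.pack('!I', 0) + url.encode() + b'\x00'
    length = 20 + len(payload)
    header = struct.pack('!BBHIII4s', ICP_OP_QUERY, 2, length,
                         reqnum, 0, 0, b'\x00' * 4)
    return header + payload

def query_peer(host, url, port=3130, timeout=2.0):
    """Send the query to a peer's icp_port and name the reply opcode."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.sendto(build_icp_query(url), (host, port))
    data, _ = s.recvfrom(4096)
    return ICP_OPCODES.get(data[0], "opcode %d" % data[0])
```

Running, say, `query_peer("sibling-test", "http://backend-test/")` from squidserver1 should show whether the sibling answers with ICP_HIT or ICP_MISS at all, which separates a network/ICP problem from a peer-selection problem.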

So, I'd like to solve Scenario 4. What did I miss?

If you'd like more information let me know.

Thanks!

Marc
