
Re: How to use tcp_outgoing_address with cache_peer

On 26/04/2013 10:57 p.m., Alex Domoradov wrote:
On Fri, Apr 26, 2013 at 12:31 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
On 26/04/2013 8:37 p.m., Alex Domoradov wrote:
First of all - thanks for your help.

Problem #1: Please upgrade your Squid.

   Squid-2.6 has had no security update for 3 years, and it is nearly 5
years since your particular version was superseded.
ok, I will update to the latest version

On 24/04/2013 12:15 a.m., Alex Domoradov wrote:
Hello all, I encountered the problem with configuration 2 squids. I
have the following scheme -


http://i.piccy.info/i7/0ecd5cb8276b78975a791c0e5f55ae60/4-57-1543/57409208/squids_schema.jpg

Problem #2: Please read the section on how RAID0 interacts with Squid ...
http://wiki.squid-cache.org/SquidFaq/RAID

Also, since you are using SSD, see #1. Older Squid like 2.6 push
*everything* through disk, which reduces your SSD lifetime a lot. Please
upgrade to a current release (3.2 or 3.3 today); these try to avoid disk a
lot more in general and offer cache types like rock for even better I/O
savings on small responses.
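
(As an aside, not from the original mail: a minimal squid.conf sketch of
what the rock suggestion could look like alongside aufs on 3.2/3.3. The
directory name and sizes are illustrative assumptions, not recommendations.)

# Hypothetical sketch: small objects go to a rock store, larger ones
# to aufs. In Squid 3.2/3.3 a rock slot holds objects up to 32 KB.
cache_dir rock /var/spool/squid/rock 8192 max-size=32768
cache_dir aufs /var/spool/squid/ssd1 200000 16 256 min-size=32769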
ok. The main reason why I chose raid0 was to get the necessary disk space,
~400 GB.

It does not work the way you seem to think. 2x 200GB cache_dir entries have
just as much space as 1x 400GB. Using two cache_dir allows Squid to balance
the I/O loading on the disks while simultaneously removing all processing
overheads from RAID.
If I understood you correctly, in my environment it would be preferable
to use something like

SSD1 /dev/sda
SSD2 /dev/sdb

# mount /dev/sda /var/spool/squid/ssd1
# mount /dev/sdb /var/spool/squid/ssd2

and point squid at 2 separate cache directories
cache_dir aufs /var/spool/squid/ssd1 200000 16 256
cache_dir aufs /var/spool/squid/ssd2 200000 16 256

Yes, that is the idea.
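
(A hedged refinement, not in the original exchange: sizing each cache_dir
below the partition capacity leaves room for filesystem metadata, and the
standard cache_swap_* directives control when replacement starts. 160000
here is an illustrative ~80% of each 200 GB disk.)

# Sketch: leave headroom on each SSD; the cache_dir size counts object
# data only, not filesystem metadata and directory overhead.
cache_dir aufs /var/spool/squid/ssd1 160000 16 256
cache_dir aufs /var/spool/squid/ssd2 160000 16 256
# Begin replacing objects at 93% of the configured size, more
# aggressively at 98% (the defaults are 90/95).
cache_swap_low 93
cache_swap_high 98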

<snip>
and that's a problem. I see the following packets on my external interface

# tcpdump -nnpi bond1.2000 port 3128
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond1.3013, link-type EN10MB (Ethernet), capture size 65535 bytes
13:19:43.808422 IP yyy.yyy.yyy.239.36541 > 192.168.220.2.3128: Flags [S], seq 794807972, win 14600, options [mss 1460,sackOK,TS val 3376000672 ecr 0,nop,wscale 7], length 0
13:19:44.807904 IP yyy.yyy.yyy.239.36541 > 192.168.220.2.3128: Flags [S], seq 794807972, win 14600, options [mss 1460,sackOK,TS val 3376001672 ecr 0,nop,wscale 7], length 0
13:19:46.807904 IP yyy.yyy.yyy.239.36541 > 192.168.220.2.3128: Flags [S], seq 794807972, win 14600, options [mss 1460,sackOK,TS val 3376003672 ecr 0,nop,wscale 7], length 0

So, as I understand it, the connection to my parent goes through table ISP2
(because tcp_outgoing_address sets the src ip of the packets to
yyy.yyy.yyy.239) and the external interface bond1.2000, when I expected
it to be established via the internal interface bond0.

The golden question then is whether you see those packets arriving on the
parent machine. And what happens to them there?

Amos
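
(A sketch of how to answer that, assuming tcpdump is available on the parent
and guessing at its interface name; the yyy address is the anonymised source
seen in the trace above.)

# On the parent (192.168.220.2), watch whether the SYNs ever arrive:
tcpdump -nni eth0 'tcp port 3128 and host yyy.yyy.yyy.239'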
Those packets don't reach the final destination - 192.168.220.2.
According to the rules in the table ISP2, all packets with src
yyy.yyy.yyy.239 go to the ISP's default gateway, in my situation
yyy.yyy.yyy.254, and I think they are just dropped there.
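
(Not from the original thread, but a common workaround for exactly this
situation, sketched with the table name, subnet and interface mentioned
earlier: give traffic for the parent's subnet a more specific path before
the src-based ISP2 rule applies.)

# Option 1: consult the main table first for the parent's subnet, with
# a priority number lower than the src-based ISP2 rule:
ip rule add to 192.168.220.0/24 lookup main pref 50
# Option 2: add a matching route inside table ISP2 itself:
ip route add 192.168.220.0/24 dev bond0 table ISP2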

So you see the packets "exiting" machine "A" and never arriving on machine "B". With only a wire between them, that is unlikely. As I said before, the firewall has to be getting in the way somewhere. There are several layers of components involved with the firewall these days, and each end of the link has its own version of the same layers.

I'm afraid it is going to be up to you to track down exactly which one is getting in the way. The best I can do is point you at a few things which commonly cause trouble for other Squid users with this type of setup (see the sketch after this list):
 * rules on the Squid box capturing Squid's outbound packets and sending them back to Squid
 * rules on the sending box preventing delivery (DROP/REJECT on outbound)
 * rules on the receiving box preventing arrivals (DROP/REJECT on inbound).
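
(A sketch of how one might audit for these three, assuming iptables/netfilter
as mentioned at the end of this message; the addresses are the ones from this
thread.)

# Packet/byte counters reveal which rules the traffic actually hits:
iptables -t nat -L -n -v
iptables -L -n -v
# Temporarily log traffic headed for the parent before any later rule
# can drop or redirect it (remove the rule again afterwards):
iptables -t raw -I OUTPUT -p tcp -d 192.168.220.2 --dport 3128 \
    -j LOG --log-prefix "to-parent: "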

You focused earlier on the routing rules as evidence that the looping back to Squid was not happening, but both the routing rules and any NAT rules involved can do it. The NAT ones have the nasty property of changing the packets, which can make a NATed packet be missed by monitoring tools (tcpdump etc). So take extra care there. These are most often the problem, as the slight mistake of omitting or adding a NIC interface or IP range in the NAT rules can have major effects on what they capture.
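
(One way to see past that tcpdump blind spot, assuming conntrack-tools is
installed: the kernel's connection-tracking table records each connection's
tuples both before and after NAT rewriting.)

# List tracked TCP connections touching the proxy port; the reply
# direction shows what NAT actually did to the packets:
conntrack -L -p tcp | grep 3128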

The DROP rules at both sides could be hidden anywhere in the firewall complexity, so a full audit is sometimes required to find things hidden away. Alternatively, there are things like rp_filter or SELinux: automatic components that do the same thing as a DROP rule for certain traffic - often without reporting what they are dropping. If something like that is happening it can be *very* difficult to identify. Good luck if it's one of these.
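
(A sketch for the rp_filter case on Linux; log_martians is the standard
companion knob that makes these otherwise silent drops visible in the kernel
log. The per-interface key assumes the bond0 interface from this thread.)

# Strict reverse-path filtering (value 1) silently drops packets that
# arrive on an "unexpected" interface for their source address:
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.bond0.rp_filter
# Make such drops visible in dmesg/syslog while debugging:
sysctl -w net.ipv4.conf.all.log_martians=1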

The one thing you have demonstrated is that the problem is something in the operating system *outside* of Squid, so this is not really an appropriate place for detailed tracking any more. If you still need further help, the best place would be the help groups for the developers of the networking stack on your machine(s); from the tools you've mentioned already, I guess that would be netfilter.

Amos



