Re: How to use tcp_outgoing_address with cache_peer

On Mon, Apr 29, 2013 at 4:57 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
> On 26/04/2013 10:57 p.m., Alex Domoradov wrote:
>>
>> On Fri, Apr 26, 2013 at 12:31 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx>
>> wrote:
>>>
>>> On 26/04/2013 8:37 p.m., Alex Domoradov wrote:
>>>>
>>>> First of all - thanks for your help.
>>>>
>>>>> Problem #1: Please upgrade your Squid.
>>>>>
>>>>> It has been 3 years since the last security update for Squid-2.6, and
>>>>> nearly 5 years since your particular version was superseded.
>>>>
>>>> ok, I will update to the latest version
>>>>
>>>>> On 24/04/2013 12:15 a.m., Alex Domoradov wrote:
>>>>>>
>>>>>> Hello all, I encountered a problem configuring 2 Squids. I have the
>>>>>> following scheme -
>>>>>>
>>>>>>
>>>>>>
>>>>>> http://i.piccy.info/i7/0ecd5cb8276b78975a791c0e5f55ae60/4-57-1543/57409208/squids_schema.jpg
>>>>
>>>>
>>>>> Problem #2: Please read the section on how RAID0 interacts with Squid
>>>>> ...
>>>>> http://wiki.squid-cache.org/SquidFaq/RAID
>>>>>
>>>>> Also, since you are using SSDs, see #1. Older Squid releases like 2.6
>>>>> push *everything* through disk, which reduces your SSD lifetime a lot.
>>>>> Please upgrade to a current release (3.2 or 3.3 today); these try to
>>>>> avoid disk a lot more in general and offer cache types like rock for
>>>>> even better I/O savings on small responses.
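
For reference, a minimal rock cache_dir line for Squid 3.2+; the path and
size here are illustrative guesses, not from this thread (rock in 3.2 only
holds small objects, hence the max-size cap):

cache_dir rock /var/spool/squid/rock 20000 max-size=32768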
>>>>
>>>> ok. The main reason why I chose RAID0 was to get the necessary disk
>>>> space, ~400 GB.
>>>
>>>
>>> It does not work the way you seem to think. 2x 200GB cache_dir entries
>>> have just as much space as 1x 400GB. Using two cache_dir entries allows
>>> Squid to balance the I/O load across the disks while simultaneously
>>> removing all RAID processing overhead.
>>
>> If I understood you correctly, in my environment it would be preferable
>> to use something like
>>
>> SSD1 /dev/sda
>> SSD2 /dev/sdb
>>
>> # mount /dev/sda /var/spool/squid/ssd1
>> # mount /dev/sdb /var/spool/squid/ssd2
>>
>> and point Squid at the 2 separate cache directories
>> cache_dir aufs /var/spool/squid/ssd1 200000 16 256
>> cache_dir aufs /var/spool/squid/ssd2 200000 16 256
>
>
> Yes, that is the idea.
I see. And which disk would files be placed on in that case? Would it
be something like round robin?
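
For what it's worth, the selection policy is tunable in squid.conf; a
minimal sketch, assuming the default least-load policy otherwise:

store_dir_select_algorithm round-robin

The default, least-load, sends new objects to the least-busy cache_dir,
while round-robin simply alternates between them.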

> So you see the packets "exiting" machine A and never arriving on machine
> B. With only a wire between them, that is unlikely.
> As I said before, the firewall has to be getting in the way somewhere.
> There are several layers of components involved with the firewall these
> days, and each end of the link has its own version of the same layers.
>
> I'm afraid it is going to be up to you to track down exactly which one is
> getting in the way. The best I can do is point you at a few things which
> commonly cause trouble for other Squid users with this type of setup:
>  * rules on the Squid box capturing the Squid outbound packets and
> sending them back to Squid;
>  * rules on the sending box preventing delivery (DROP/REJECT on outbound);
>  * rules on the receiving box preventing arrivals (DROP/REJECT on inbound).
>
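A quick way to check the DROP/REJECT possibilities above is to watch the
per-rule packet counters on both boxes while reproducing the problem, for
example:

# iptables -L -n -v --line-numbers
# iptables -t nat -L -n -v --line-numbers

A DROP or REJECT rule whose counters climb during the test is the one
eating the traffic.
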
> You focused earlier on the routing rules as evidence that the looping back
> to Squid was not happening, but both routing rules and NAT rules are
> involved, and either can do it. The NAT ones have the nasty property of
> changing the packets, which can make the NATed packet be missed by
> monitoring tools (tcpdump etc.), so take extra care there. These are most
> often the problem, as the slight mistake of omitting or adding a NIC
> interface or IP range in the NAT rules can have major effects on what
> they capture.
>
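Rather than guessing from tcpdump filters that may no longer match the
rewritten packets, the connection-tracking table shows what NAT actually
did to each flow; a sketch, assuming the conntrack tool from
conntrack-tools is installed:

# conntrack -L -p tcp | grep 3128

Each entry lists both the original and the reply tuple, so any SNAT/DNAT
rewriting of these connections is visible directly.
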
> The DROP rules on either side could be hidden anywhere in the firewall
> complexity, so a full audit is sometimes required to find things tucked
> away. Alternatively, there are things like rp_filter or SELinux: automatic
> components that do the same thing as a DROP rule for certain traffic,
> often without reporting what they are dropping. If something like that is
> happening it can be *very* difficult to identify. Good luck if it's one
> of these.
>
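The rp_filter suspect, at least, can be ruled in or out quickly; a sketch
for a Linux box, using the bond0 interface named later in this thread:

# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.bond0.rp_filter
# sysctl -w net.ipv4.conf.all.log_martians=1
# getenforce

With log_martians set, rp_filter drops appear in the kernel log instead
of vanishing silently; strict mode (value 1) on a multi-homed box is a
common cause of exactly this kind of silent loss.
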
> The one thing you have demonstrated is that the problem is something in
> the operating system *outside* of Squid, so this is not really an
> appropriate place for detailed tracking any more. The best place, if you
> still need further help, would be the help groups for the developers of
> the networking stack on your machine(s). From the tools you've mentioned
> already, I guess that would be netfilter.
>
> Amos
I see. All I wanted to know is whether there exists any setting in Squid
to avoid such problems without modifying the routing tables. Something
like parent_tcp_outgoing_address ;)

As a solution, I have added routes to the parent via the external routing
tables

# ip ro add 192.168.220.2 dev bond0 table ISP1
# ip ro add 192.168.220.2 dev bond0 table ISP2
# ip ro add 192.168.220.2 dev bond0 table ISP3

and applied SNAT rules to the outgoing packets

# iptables -t nat -I POSTROUTING -p tcp -s xxx.xxx.xxx.62 -d 192.168.220.2 --dport 3128 -j SNAT --to-source 192.168.220.1

# iptables -t nat -I POSTROUTING -p tcp -s yyy.yyy.yyy.239 -d 192.168.220.2 --dport 3128 -j SNAT --to-source 192.168.220.1

# iptables -t nat -I POSTROUTING -p tcp -s zzz.zzz.zzz.10 -d 192.168.220.2 --dport 3128 -j SNAT --to-source 192.168.220.1
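
To confirm the routes and SNAT take effect, one can inspect the tables
and watch the rewritten traffic; a sketch reusing the names above:

# ip route show table ISP1
# iptables -t nat -L POSTROUTING -n -v
# tcpdump -ni bond0 host 192.168.220.2 and tcp port 3128

The tcpdump output should show 192.168.220.1 as the source address on
connections to the parent, and the SNAT rule counters should increase
with each new connection.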



