Re: gretap tunnel redirecting 2 different networks on destination host to nics

Linux Advanced Routing and Traffic Control

On 04/04/2018 08:00 PM, Marc Roos wrote:
yes

Thank you for confirming that I correctly understand your end goal.

No

Okay. So there may have been a loop, which could have triggered STP to block something.

Don't know exactly how these are created; those are not under my control. I do know that if I put them both in an STP bridge, the whole network goes down.

Hum.

I would not expect that the network would go down, particularly with STP enabled, with two interfaces from the same broadcast domain bridged together.

STP was always on.

bridge name     bridge id               STP enabled     interfaces
br0             8000.0050568776db       yes             eth2
                                                        tun1

Okay.

I think that should be possible.

Good.

I feel like 802.1Q VLANs would be cleaner than other solutions. As a benefit, 802.1Q VLANs would minimize the need to change anything when adding additional hosts (which you mention below).

I actually use one range to NAT and port forward, and to block some brute forcing on the ports. But I didn't mention it, to keep this issue simple.

Okay. So there is more than just the DROP policies. I'll trust that you have that handled.

At the moment this seems to be working fine when using just one range, and having the eth and tun in a bridge. (Every VM has its own macvtap on the tun1 interface.) The idea behind this is that I can just deploy whatever VM I have without having to change too much in its configuration (e.g. switching to a different tun interface). If I have another host, I will create a new tunnel and add that as tun2 to the existing bridge on server B.
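For reference, a minimal sketch of that kind of setup on server B (untested; the 192.168.1.1 / 192.168.1.2 endpoint addresses are assumptions based on the transport network in this thread):

# Create a GRETAP tunnel to server A and add it to the existing bridge.
ip link add tun1 type gretap local 192.168.1.1 remote 192.168.1.2
ip link set tun1 up
brctl addif br0 tun1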

Okay.

I would have thought that picking the proper back end network to connect the VM to would be part of creating the VM. I'm guessing that there's something I'm not aware of in how you deploy VMs that complicates this.

Meanwhile I have been reading about the pseudo bridge.

If I add these lines
echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp

And a host route on server B to the VM's IP on dev br0, I can get a ping out. The problem is that it stalls and then pings again.
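For example, assuming the VM's address is 172.16.1.25 (a hypothetical address in the 172.16.1.0/24 range), that host route would look something like:

# On server B: point the VM's address at the bridge.
ip route add 172.16.1.25/32 dev br0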

Please elaborate on "stalls and then again pings". Are you talking about something like every other ping? Or do a given number work, then about that many fail, and then they start working again, cyclically?

I'm not a fan of Proxy ARP. IMHO Proxy ARP /is/ routing. It's just a trick to enable routing when sending systems think that the destination is subnet local and don't know that they need to use a router.

I think I'd rather use bridging and EBTables over Proxy ARP. I feel like Proxy ARP is pretending to be a layer 2 connection, when it is not. Comparatively, bridging is a true layer 2 connection, and EBTables adds some intelligence to what is and is not bridged.

That's just my personal preference.  Your mileage may vary.

If I add IP ranges of either network to server B's interfaces, the ping to that IP performs OK. So it looks like the setup on server A is sort of OK. It looks like the networks on the ethX of server B 'forget' that there is a VM located somewhere behind the tunnel.

Ya.... I don't know if that's a complication of Proxy ARP, or the tunnel, or macvtap, or something else. - I'd need to see packet captures demonstrating the problem at various places along the way between the source client and the destination VM.

This would be the ideal situation. I cannot remember working with ebtables. Should I try something like this brouter?

I wasn't originally thinking about a Bridging Router (BROUTER), but I think that could be made to work.
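A minimal sketch of a BROUTER rule (untested): in the broute table, a DROP target has the special meaning of diverting the frame up to the IP stack for routing instead of bridging it.

# On server B: route, rather than bridge, IPv4 frames destined to 192.168.1.0/24.
ebtables -t broute -A BROUTING -p IPv4 --ip-dst 192.168.1.0/24 -j DROP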

I was originally thinking about putting eth1, eth2, and tun1 into a bridge and then leveraging EBTables rules to block bridged traffic between eth1 and eth2 to prevent potential loops. You may also want to add EBTables rules to only allow traffic between eth1 & tun1 for 172.16.1.0/24 traffic and between eth2 & tun1 for 10.11.12.0/24 traffic. Filtering like that should minimize the possibility that traffic originating from server A could end up being flooded to both eth1 and eth2.
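A rough sketch of those rules, assuming the interface names and subnets above (untested; ARP and other non-IPv4 traffic would need similar treatment):

# Block bridged traffic directly between eth1 and eth2 to prevent loops.
ebtables -A FORWARD -i eth1 -o eth2 -j DROP
ebtables -A FORWARD -i eth2 -o eth1 -j DROP
# Only allow each network's own subnet between its eth and the tunnel.
ebtables -A FORWARD -i eth1 -o tun1 -p IPv4 --ip-src ! 172.16.1.0/24 -j DROP
ebtables -A FORWARD -i tun1 -o eth1 -p IPv4 --ip-dst ! 172.16.1.0/24 -j DROP
ebtables -A FORWARD -i eth2 -o tun1 -p IPv4 --ip-src ! 10.11.12.0/24 -j DROP
ebtables -A FORWARD -i tun1 -o eth2 -p IPv4 --ip-dst ! 10.11.12.0/24 -j DROP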

Remember that the bridge on server B will forward unknown destinations out all interfaces (other than the incoming interface). Thus if you aren't careful, you could end up with traffic from a VM on server A destined to 172.16.1.25 being sent out both eth1 and eth2. (Or vice versa.)

Aside: Adding a second VM host could complicate the EBTables rules, in that you would likely want to extend the rules to allow VMs on the 172.16.1.0/24 network on one host to communicate with VMs on the same network on the other host(s), in addition to the main networks (eth1 & eth2).

This type of configuration change would need to be tweaked for each and every additional VM server. I expect that the number of sets of rules will multiply like links in a full mesh: N x (N-1) / 2, e.g. four VM hosts would need 4 x 3 / 2 = 6 sets. Conversely, extending the independent networks between server B and the VM hosts with something like VLANs (or VXLAN) would scale linearly.

I suppose that you could combine the V(X)LAN and bridging solution to maintain the single device on the VM hosts. I.e. extend the networks (eth1 & eth2) to each VM host and move the bridging + EBTables rules to each VM host. That would mean that each VM host would have the same simple rules.

The decision to do BRouting in addition to bridging will depend on whether the 172.16.1.0/24 or 10.11.12.0/24 networks need to access server A's IP address on the 192.168.1.0/24 network. In other words, do you need to route from 172.16.1.0/24 or 10.11.12.0/24 to 192.168.1.0/24?

Question: Is 192.168.1.0/24 the underlying transport between servers A & B that the GRETAP rides over top of? Or are those IPs inside the GRETAP tunnel? - My understanding of GRETAP is that the 192.168.1.0/24 IPs would need to be bound to the bridge interfaces if the IPs are inside the tunnel.

That would indeed be a solution if it is not possible to work with 'one' connection between server A and B. I'd rather stay with one tunnel, because then I can change IP addresses in the VMs without having to change their configuration files on the host.

Remember that 802.1Q VLANs don't care about IP addresses. VLANs look like virtual patch cords. So, you plug (bridge) vPatch 1 into eth1 and vPatch 2 into eth2. You don't need to mess with tunnels or IP addresses on the network between servers A & B (et al). VLANs + bridges really do extend the separate eth1 & eth2 to server A.
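For illustration, a minimal sketch of that on server B (untested; eth0 as the link toward server A and VLAN IDs 101/102 are assumptions):

# Create the two virtual patch cords on the inter-server link.
ip link add link eth0 name eth0.101 type vlan id 101
ip link add link eth0 name eth0.102 type vlan id 102
ip link set eth0.101 up
ip link set eth0.102 up
# Plug vPatch 1 into eth1 and vPatch 2 into eth2.
brctl addbr br1
brctl addif br1 eth1
brctl addif br1 eth0.101
brctl addbr br2
brctl addif br2 eth2
brctl addif br2 eth0.102
ip link set br1 up
ip link set br2 up

Server A (and any later VM hosts) would create the matching eth0.101 / eth0.102 subinterfaces and attach the VMs' macvtap devices to those instead of to tun1.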



--
Grant. . . .
unix || die
