RE: gretap tunnel redirecting 2 different networks on destination host to nics

Linux Advanced Routing and Traffic Control


> > How can I get the 10.11.12.x traffic received on tun1 at host B to eth2,
> > and traffic 172.16.1.x to eth1?
> 
> Based on your last paragraph and macvtap comment it sounds like you are 
> wanting to extend the physically separate 172.16.1.0/24 and 
> 10.11.12.0/24 networks from server B to VMs on server A.  Is that correct?

yes

> 
> > When I put the tun1 interface of server B in a bridge with eth1 I am 
> > able to ping several 172.16.1.x IPs from server A. And communication 
> > on this network seems to be ok.
> 
> Okay.
> 
> > When I add eth2 to the bridge, the whole network goes down. (Because of a 
> > 'loop'?)
> 
> Did you remove eth1 from the bridge before starting your test?

No

> 
> Are eth1 and eth2 physically connected to the same network?  Thus 
> forming a loop when eth1 and eth2 are bridged together?

Don’t know exactly how these are created; they are not under my control.
I do know that if I put them both in an STP bridge, the whole network goes down.

> 
> Is (Rapid) Spanning Tree Protocol enabled anywhere?  -  I'm guessing 
> not since things apparently go down because of a loop.

STP was always on.

bridge name     bridge id               STP enabled     interfaces
br0             8000.0050568776db       yes             eth2
                                                        tun1


> 
> > I thought of creating a 2nd gretap tunnel and using each tunnel for a 
> > network, but I think there is probably a better solution.
> 
> I'm guessing the 2nd gretap tunnel between the same endpoints (IPs on 
> server A & B) won't work as desired.
> 
> How will servers A & B differentiate the gretap tunnels?  -  My 
> understanding is that GRE(TAP) uses the source & destination IP 
> addresses for the tunnel endpoints to differentiate.  Seeing as how 
> these would be the same....
> 
> I agree that there are other solutions that are likely better.  VLANs 
> and VXLAN immediately come to mind.
> 
> Please confirm if server A and server B have a layer 2 connection (that 
> will support 802.1Q VLANs) between them.

I think that should be possible

> > I also don’t think iptables should be necessary, because I don’t 
> > want to do any natting (However I have default policy DROP on INPUT, 
> > OUTPUT, FORWARD)
> 
> IPTables (and EBTables) are used for more than just NATing.  You can 
> also use it to steer traffic based on source IP.
> 
> Why do you have the IPTables policies set to DROP?

I actually use one range for NAT and port forwarding, and to block some
brute forcing on the ports, but I didn’t mention it to keep this issue simple.

> 
> > I have a server A that sends 172.16.1.x and 10.11.12.x traffic via a 
> > gretap tunnel 192.168.1.x to server B. (Putting VMs with a macvtap on 
> > tun1 on host A)
> 
> I don't think that's going to work as desired.

At the moment this seems to be working fine when using just one range
and having the eth and tun in a bridge. (Every VM has its own
macvtap on the tun1 interface.)
The idea behind this is that I can just deploy whatever VM I have,
without having to change too much in its configuration (e.g. switching to
a different tun interface).
If I have another host, I will create a new tunnel and add that as tun2
to the existing bridge on server B.
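(For reference, the gretap tunnel itself is created roughly along these lines
on each host; the 192.168.1.x endpoint addresses below are only placeholders,
not the real ones:

ip link add tun1 type gretap local 192.168.1.10 remote 192.168.1.20
ip link set tun1 up
)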

Meanwhile I have been reading about the pseudo-bridge.
If I add these lines
echo 1 > /proc/sys/net/ipv4/conf/br0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth2/proxy_arp
and a host route on server B to the VM's IP on dev br0,
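the host route being something along the lines of this, where 172.16.1.50 is
just a stand-in for the VM's real address (and forwarding on server B has to
be enabled for the pseudo-bridge to route at all):

ip route add 172.16.1.50/32 dev br0
echo 1 > /proc/sys/net/ipv4/ip_forward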

then I can get a ping out; the problem is that it stalls and then pings again.

If I add IP addresses from either network to server B's interfaces, the
ping to that IP performs OK.
So it looks like the setup on server A is sort of OK.
It looks like the networks on the ethX of server B 'forget' that there is
a VM located somewhere behind the tunnel.

> 
> Consider for a moment, if you will, using a single macvtap for the VMs 
> that need to access multiple network segments if the VMs were living on 
> server B.  Either all of the VMs accessing 172.16.1.0/24 would work and 
> all of the VMs accessing 10.11.12.0/24 would fail -or- all of the VMs 
> accessing 172.16.1.0/24 would fail and all of the VMs accessing 
> 10.11.12.0/24 would work.  Having a singular macvtap and putting 
> multiple VMs on it that need to access separate physical networks is 
> very likely not going to work.  (Or I have the wrong understanding of 
> macvtap.)
> 
> >              +-------------+                             +------------+
> >   172.16.1.x |      B      |                             |      A     |
> >       -------|eth1         |         192.168.1.x GRETAP  |            |
> >              |         tun1|-----------------------------|tun1        |
> >   10.11.12.x |             |                             |            |
> >       -------|eth2         |                             |            |
> >              +-------------+                             +------------+
> 
> It would be possible to use a single gretap tunnel, bridged with eth1 & 
> eth2, /and/ EBTables to prevent the loop between eth1 & eth2 while still 
> allowing each of them to communicate with tun1.

This would be the ideal situation. I cannot remember working with ebtables.
Should I try something like this brouter?
http://ebtables.netfilter.org/examples/basic.html#all
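Or would it be enough to just block forwarding between the two ethernet ports
in the bridge, so eth1 and eth2 can each still talk to tun1 but not to each
other? Something like this (untested):

ebtables -A FORWARD -i eth1 -o eth2 -j DROP
ebtables -A FORWARD -i eth2 -o eth1 -j DROP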


> /If/ servers A & B share a layer 2 network that will pass 802.1Q VLAN 
> tagged frames, I'd think seriously about using multiple VLANs.  One VLAN 
> for 172.16.1.0/24 (eth1) and another for 10.11.12.0/24 (eth2).  Then 
> have two macvtap adapters on server A, one connected to each VLAN 
> interface.  -  You would need to bridge eth1 to one VLAN interface and 
> eth2 to the other VLAN interface on server B.

That would indeed be a solution if it is not possible to work with 'one'
connection between server A and B. I would rather stay with one tunnel, because
then I can change IP addresses in the VMs without having to change their
configuration file on the host.
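If it does come to the VLAN setup, I guess the server B side would look
roughly like this (assuming the A-B link sits on eth0, and with VLAN IDs 10
and 20 picked purely as examples):

# 802.1Q sub-interfaces on the (assumed) trunk port towards server A
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
# bridge each physical network to its own VLAN
brctl addbr br10
brctl addif br10 eth1
brctl addif br10 eth0.10
brctl addbr br20
brctl addif br20 eth2
brctl addif br20 eth0.20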

> If servers A & B /don't/ share a layer 2 network, I would consider 
> VXLAN.  VXLANs will work very similarly to VLANs.  Instead of using VLAN 
> interfaces, you would use VXLAN Tunnel Endpoint (a.k.a. VTEP) 
> interfaces.  You would still need to bridge eth1 to vtep1 and eth2 to 
> vtep2 on server B.
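(For what it's worth, I guess the VTEPs on server B would then be created
something like this; the VNIs 101/102 and the 192.168.1.x endpoint addresses
are only placeholders:

ip link add vtep1 type vxlan id 101 local 192.168.1.10 remote 192.168.1.20 dstport 4789
ip link add vtep2 type vxlan id 102 local 192.168.1.10 remote 192.168.1.20 dstport 4789

and then bridged with eth1 and eth2 as you describe.)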
> 
> Both the VLAN and VXLAN solutions would very likely require multiple 
> macvtap configurations on server A, as each would be a logical extension 
> of the 172.16.1.0/24 (eth1) and 10.11.12.0/24 (eth2) networks.  (See 
> previous comments.)  -  In short, you need to separate the network 
> traffic somewhere (server A or B) and somehow (EBTables or multiple 
> macvtap interfaces).
> 
> 
> 

