Does "iif bond0" in a route rule identify packets received on the specific interface identified?

Linux Advanced Routing and Traffic Control

I am somewhat new to networking in general and iproute2 in particular,
but I'm hoping this audience can help with the questions I'm left with
after some trial and error. (Apologies in advance for the lengthy
email; hopefully the detail will allow for quicker in-line responses.)

I'm using iproute-2.6.18-13 on Linux 2.6.32-400.26.3 and have been
trying to use "ip rule" and "ip route" to set up appropriate routing on
a system with multiple bonded Ethernet interfaces. What I am trying to
achieve is a rule + route combination that routes traffic received via
a specific interface back out through the interface it came in on.

My test server, "routeTest", has three bonded interfaces, "bond0"
through "bond2" (equivalent "ip addr" commands are sketched below):
- bond0 is on network 10.141.132.0/23 with IP address 10.141.133.124
- bond1 is on network 172.16.0.0/16 with IP address 172.16.0.10
- bond2 is on network 10.141.174.0/23 with IP address 10.141.174.4
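
(In "ip" terms, the addressing is equivalent to something like the
following; this is purely illustrative, since the bonds and their
addresses were already configured before I started experimenting:)

> # ip addr add 10.141.133.124/23 dev bond0
> # ip addr add 172.16.0.10/16 dev bond1
> # ip addr add 10.141.174.4/23 dev bond2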

I have set the default gateway to 172.16.0.1, reached via bond1:

> # route -n
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 10.141.174.0    0.0.0.0         255.255.254.0   U     0      0        0 bond2
> 10.141.132.0    0.0.0.0         255.255.254.0   U     0      0        0 bond0
> 169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 bond1
> 172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 bond1
> 0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 bond1
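
(For reference, in "ip" terms that default route should be equivalent
to something like the following, though I did not necessarily create it
this way:)

> # ip route add default via 172.16.0.1 dev bond1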

I am trying to reach the bond0 and bond2 addresses from a third
machine whose IP address, 10.141.167.77, is on the 10.141.160.0/21
network.

In the above configuration, where the default gateway is reached via
bond1, I cannot successfully ping either the bond0 or the bond2 address
from the third machine (which is as expected).

The following additional configuration, intended to allow access from
10.141.167.77, works as expected:

> # ip rule add from 10.141.132.0/23 table 219
>
> # ip rule list
> 0:      from all lookup 255
> 32765:  from 10.141.132.0/23 lookup 219
> 32766:  from all lookup main
> 32767:  from all lookup default
>
> # ip route add default via 10.141.132.1 dev bond0 table 219
>
> # ip route list
> 10.141.174.0/23 dev bond2  proto kernel  scope link  src 10.141.174.4
> 10.141.132.0/23 dev bond0  proto kernel  scope link  src 10.141.133.124
> 169.254.0.0/16 dev bond1  scope link
> 172.16.0.0/16 dev bond1  proto kernel  scope link  src 172.16.0.10
> default via 172.16.0.1 dev bond1
>
> # ip route list table 219
> default via 10.141.132.1 dev bond0

If I start a ping (ping -n 2000 -w 2000 10.141.133.124) from the third
system beforehand, then as soon as I run the "ip route add" command in
the above sequence, the ping springs to life with responses. (Following
a similar sequence with appropriate values for the bond2 network does
the same for pings to the 10.141.174.4 address.)
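
(I have not included a capture here, but watching the server side with
something like the following should confirm that both the echo requests
and the replies are using bond0:)

> # tcpdump -ni bond0 icmp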

So far, so good.

However, if possible, I would like to use an 'ip rule' and 'ip route'
configuration that references only the interfaces, not IP or network
addresses, and this is where I start running into difficulties...

I reset the routing environment (deleted the route and rule entries
and called "ip route flush cache") and then issued the commands shown
below.

> # ip rule add dev bond0 table 219
>
> # ip rule list
> 0:      from all lookup 255
> 32765:  from all iif bond0 lookup 219
> 32766:  from all lookup main
> 32767:  from all lookup default
>
> # ip route list
> 10.141.174.0/23 dev bond2  proto kernel  scope link  src 10.141.174.4
> 10.141.132.0/23 dev bond0  proto kernel  scope link  src 10.141.133.124
> 169.254.0.0/16 dev bond1  scope link
> 172.16.0.0/16 dev bond1  proto kernel  scope link  src 172.16.0.10
> default via 172.16.0.1 dev bond1
>
> # ip route add default via 10.141.132.1 dev bond0 table 219
>
> # ip route list table 219
> default via 10.141.132.1 dev bond0

The only difference from the working scenario is the ip rule entry,
which changes:

> From: 32765:  from 10.141.132.0/23 lookup 219
>   To: 32765:  from all iif bond0 lookup 219

In interpreting the new rule, I am hoping that "iif bond0" refers to
the arrival interface and that, overall, the new rule means:

"a packet from any network address arriving via bond0 will use routing
table 219".

Is that a correct interpretation?

I guess not, since the new rule does not appear to match the packets
arriving from the ping on the third machine: although I have kept
routing table 219 the same as in the working configuration, the ping
from 10.141.167.77 continues to time out even after the same "ip route"
command is executed in the new sequence.

As a variation (to make the new command look as similar as possible to
the one that worked) I tried:

> # ip rule add from 0.0.0.0/0 dev bond0 table 219

but the result looked the same in "ip rule list" (from all iif bond0
lookup 219).

Can somebody help me understand what I have misinterpreted about "ip
rule add dev bond0 table 219" applying to all traffic arriving on
bond0?

Ultimately, I was hoping to be able to use the following rule/route combination:

> # ip rule add dev bond0 table 219
> # ip route add default dev bond0 table 219

My hope for the above is to set up routing so that the response to any
packet received via bond0 is also routed back out through bond0. If
this cannot be achieved via the rule/route configuration I hoped for,
is there another way to achieve it?
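
(The only alternative I have seen mentioned is marking connections with
netfilter and routing replies on the firewall mark, along the lines of
the sketch below, but I do not know whether that is the recommended
approach or overkill for this case:)

> # iptables -t mangle -A PREROUTING -i bond0 -j CONNMARK --set-mark 1
> # iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
> # ip rule add fwmark 1 table 219
> # ip route add default dev bond0 table 219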

Thanks in advance!