[LARTC] Routing/forwarding/shaping problems in v2.2.x (Long - sorry)

Linux Advanced Routing and Traffic Control

Greetings from a newbie!
(Well, to this list anyway)

I'm having a problem and I hope someone here might be able to help...

I am strongly expecting an answer along the lines of "upgrade to v2.4.x", but 
I would REALLY prefer to avoid that for now...

The setup:

"Home brewed" v2.2.24 (will patch to v2.2.25 later today) with the DS8 patch 
applied. Currently downloading the DS9/rbtree/htb3 patches to be applied 
later (obviously, unpatching the old DS8 first), and see if at least some of 
my problems go away.

Multiple cable/DSL lines with multiple default routes and equal cost 
multipath.
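
For reference, the multipath default route is set up along these lines (the 
gateway addresses here are placeholders):

ip route add default scope global      \
        nexthop via 10.0.0.1 dev eth1  \
        nexthop via 10.1.0.1 dev eth2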

The problems:

1) Ingress shaping/policing doesn't seem to work at all.

I haven't tried outgoing shaping/policing, but prioritizing traffic definitely 
works: since activating it, my bandwidth usage has quite visibly climbed much 
closer to my available limits (judging by the bandwidth usage graphs) on the 
cable/DSL connections. ssh and ping latency has also dropped through the floor 
after increasing the priority for those protocols (using a variant of the 
wondershaper). So that part clearly works.
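
The prioritizing filters are along these lines (the class numbers follow the 
wondershaper layout, and eth1 is just an example):

# ssh traffic into the interactive class
tc filter add dev eth1 parent 1: protocol ip prio 10 \
        u32 match ip dport 22 0xffff flowid 1:10
# ICMP (ping) into the same class
tc filter add dev eth1 parent 1: protocol ip prio 10 \
        u32 match ip protocol 1 0xff flowid 1:10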

When applying ingress shaping (a policing filter), everything executes fine 
without reporting any errors, but

tc -s -d qdisc show dev eth1
and
tc -s -d filter show dev eth1

both say that no traffic has been caught by the rules (which just cannot be 
right, because I am using a u32 filter with src 0.0.0.0/0, and the same filter 
works for outgoing traffic with dst 0.0.0.0/0).

Here is a script snippet:

tc qdisc add    dev $DEV ingress        \
                handle ffff:

tc filter add   dev $DEV                        \
                parent ffff:                    \
                protocol ip                     \
                prio 50                         \
                u32 match ip                    \
                src 0.0.0.0/0                   \
                police rate ${DOWNLINK}Kbit     \
                burst $[8*DOWNLINK]Kbit         \
                drop flowid :1


Can anyone hazard a guess as to why this is not doing what it should be? Is 
this a known bug in DS8 that DS9 will fix? I will try it anyway, just to 
make sure, but some encouraging news would be nice. :-)

1.2) If the ingress traffic shaping is unfixable in v2.2.x, would it be 
possible to instead set up a dummy network device, set up an egress shaper 
on the physical interface, forward everything to the dummy interface, and 
then use the dummy interface as the default gateway? Effectively this would 
do the same thing as setting up two routers back-to-back and using only 
egress shaping on both routers to achieve ingress shaping. Kind of like 
having a logical, rather than a physical, second router?
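
As a rough sketch of the effect I am after, shaping egress on the LAN-facing 
interface would presumably throttle downstream traffic in much the same way; 
something like this (eth0, the rates and the CBQ parameters are placeholders, 
untested):

# CBQ root on the internal interface
tc qdisc add dev eth0 root handle 1: cbq \
        bandwidth 10Mbit avpkt 1000

# one bounded class at the downlink rate
tc class add dev eth0 parent 1: classid 1:1 cbq \
        bandwidth 10Mbit rate ${DOWNLINK}Kbit    \
        allot 1514 avpkt 1000 bounded

# send all IP traffic through it
tc filter add dev eth0 parent 1: protocol ip prio 1 \
        u32 match ip dst 0.0.0.0/0 flowid 1:1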

2) ipmasqadm portfw unstable/unreliable

I have tried using this approach to forward ports from the firewall to an 
internal server. It works OK initially, but within minutes things start going 
wrong: some connections get through on one interface but not on another, and 
later, connections from the same host will work on a different interface, but 
not on the one they worked on initially.
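
The forwarding rules themselves are along the lines of (the addresses and 
ports are just examples):

ipmasqadm portfw -a -P tcp -L $EXT_IP 80 -R 192.168.0.10 80

where $EXT_IP would be the address of one of the external interfaces.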

This can temporarily be made to go away by running
# ip route flush cache
a few times, but the problem always returns. After about a month of uptime 
it gets to the point where most connections fail on most interfaces, and the 
only cure I have found is a reboot.
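
The flushing could presumably be automated with an /etc/crontab entry along 
these lines (the interval is arbitrary), but that would only mask the problem:

*/5 * * * * root /sbin/ip route flush cache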

Connections that are actually proxied in user space, using NetCat (where the 
source address does not need to be preserved) or redir with --transproxy 
(where the source address does need to be preserved), work fine and don't 
experience this problem at all.
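
For completeness, the user-space setups are roughly as follows (the ports and 
addresses are examples):

# /etc/inetd.conf line using NetCat (source address not preserved):
www stream tcp nowait nobody /usr/bin/nc nc 192.168.0.10 80

# redir invocation preserving the source address:
redir --lport=80 --cport=80 --caddr=192.168.0.10 --transproxy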

Obviously, I would prefer to use the "ipmasqadm portfw" solution, because it 
is massively more memory/CPU efficient than a generic user-space TCP proxy 
daemon running from inetd.

3) teql vs. multiple default routes

This is not really a problem, more a matter of exploring options; I haven't 
found any particularly detailed information on the subject.

Can a virtual load balancing device be used to aggregate multiple ethernet 
interfaces? Presumably, load balancing of incoming traffic is still done in 
the traditional way (DNS and the like), and a teql interface would only be 
used to load balance connections initiated from the network itself?

Presumably, the advantage of using a teql interface over multiple default 
routes is that load balancing would be done per session, rather than per 
route? Are there any routing issues, from an external point of view, with 
using teql, or is it completely transparent to the outside world? What 
concerns me is that the source IP address might end up being a private IP 
address of the teql interface. Are there any potential issues with this, or 
can standard ipchains/masquerading rules be used to overcome them?
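
For reference, the basic teql setup I have seen documented is along these 
lines (the interface names and the address are placeholders):

insmod sch_teql
tc qdisc add dev eth1 root teql0
tc qdisc add dev eth2 root teql0
ip link set dev teql0 up
ip addr add 10.0.0.1/24 dev teql0

# reverse path filtering reportedly needs to be disabled on the slaves:
echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/eth2/rp_filter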

TIA.

Gordan

