shaping fails when using p2p apps?

Linux Advanced Routing and Traffic Control

Hi there,

We're running a small ISP and all the users are shaped to 384/512/768 kbit
both ways (whichever package they choose).
The router is a Linux box (Debian sarge), currently running kernel 2.4.25.
All users get 10.1.1.* IP addresses (on eth1), and eth0 connects
to the upstream ISP over Ethernet (via a media converter; it's fiber from
there). The users are NATed with iptables MASQUERADE.
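
For reference, the NAT part is basically a single rule like the following (simplified; the 10.1.1.0/24 source match is just my shorthand here for the user range):

iptables -t nat -A POSTROUTING -s 10.1.1.0/24 -o eth0 -j MASQUERADE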

I've tried using both HTB and CBQ (on both interfaces, so shaping works in
both directions), and there is _SOME_ kind of traffic that is not shaped.
As far as I can tell it's DC++ (a P2P app) sending outgoing traffic to many
different IP addresses at the same time, for example:

# tcpdump -n -i eth1 ether src or dst 00:0D:88:8B:41:3C
09:40:01.758736 IP 82.141.185.62.4000 > 10.1.1.61.2458: . ack 789 win 63452
09:40:01.780666 IP 24.90.104.98.14623 > 10.1.1.61.2844: . ack 16385 win 65535
09:40:01.789314 IP 10.1.1.61.2844 > 24.90.104.98.14623: P 23317:24577(1260) ack 0 win 24791
09:40:01.790533 IP 10.1.1.61.2844 > 24.90.104.98.14623: P 24577:25365(788) ack 0 win 24791
09:40:01.796050 IP 10.1.1.61.2562 > 213.112.223.88.7778: . ack 297 win 24466
...
For some reason this is the only case where I've seen such huge
TCP window sizes, but admittedly I haven't been looking
for very long ;)

The Perl script I wrote basically fetches the shaping data from
the database and spits out a shell script that is piped to sh.
The one that sets up HTB shaping looks like this (I've removed
burst/cburst for now, but the behaviour is the same nonetheless):

tc qdisc add dev eth0 root handle 100: htb
tc qdisc add dev eth1 root handle 200: htb

tc class add dev eth0 parent 100: classid 100:$htbindex htb rate $row->{shaper}kbit prio 0
tc filter add dev eth0 protocol ip parent 100: prio 0 u32 match ip src $row->{ip} flowid 100:$htbindex
tc class add dev eth1 parent 200: classid 200:$htbindex htb rate $row->{shaper}kbit prio 0
tc filter add dev eth1 protocol ip parent 200: prio 0 u32 match ip dst $row->{ip} flowid 200:$htbindex

($htbindex is just incremented with ++; $row is a hashref fetched from the DB)

The same thing with CBQ looks like this:

tc qdisc add dev eth0 root handle 100: cbq avpkt 100000 bandwidth 100mbit
tc qdisc add dev eth1 root handle 200: cbq avpkt 100000 bandwidth 100mbit

tc class add dev eth0 parent 100: classid 100:$cbqindex cbq rate $row->{shaper}kbit allot 1500 prio 5 bounded isolated
tc filter add dev eth0 parent 100: protocol ip prio 16 u32 match ip src $row->{ip} flowid 100:$cbqindex
tc class add dev eth1 parent 200: classid 200:$cbqindex cbq rate $row->{shaper}kbit allot 1500 prio 5 bounded isolated
tc filter add dev eth1 parent 200: protocol ip prio 16 u32 match ip dst $row->{ip} flowid 200:$cbqindex
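
Side note: to see whether the DC++ flows are being classified at all, the per-class and per-filter counters can be checked with the standard tc statistics output, e.g.:

tc -s class show dev eth0
tc -s filter show dev eth0 parent 100:

(and the same for eth1 with parent 200:).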

Now everything works fine, except that people using P2P applications (mostly
DC) are able to upload way over the limit. Any clues why this could
be? I will try to mark the packets with iptables and shape based on
that, but I still don't understand why this is happening.
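
What I have in mind for the marking approach is roughly this (an untested sketch; using $htbindex as a per-user mark value is just a placeholder):

iptables -t mangle -A PREROUTING -i eth1 -s $row->{ip} -j MARK --set-mark $htbindex
tc filter add dev eth0 protocol ip parent 100: prio 1 handle $htbindex fw flowid 100:$htbindex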

I've also tried brute-force ingress policing (drop anything over the
limit), but the whole thing became rather erratic: 384 kbit uploads
maxed out at 20-30 kB/s with huge latencies.

It looked like this:

tc qdisc add dev eth0 handle ffff: ingress
tc qdisc add dev eth1 handle ffff: ingress

tc filter add dev eth0 parent ffff: protocol ip prio 16 u32 match ip dst $row->{ip} police rate $row->{shaper}kbit burst 10k drop flowid :ffff
tc filter add dev eth1 parent ffff: protocol ip prio 16 u32 match ip src $row->{ip} police rate $row->{shaper}kbit burst 10k drop flowid :ffff

Any help would be really appreciated.

BR,
-
diab


