HTB ATM MPU OVERHEAD (without any patching)

I know there is that handy patch available to make very efficient use of ATM bandwidth, but I was wondering what the best values would be with an unpatched iproute2. Anyone here care to check my logic in coming up with these numbers and perhaps suggest better values?

My transmit speed is 768kbps per ADSL line (I have two). This is the HTB shaping I do on the interface (the logic I used follows the commands):

# create HTB root qdisc
tc qdisc add dev eth0 root handle 1: htb default 22
# create parent class
tc class add dev eth0 parent 1: classid 1:1 htb \
 rate 695kbit
# create leaf classes
# ACKs, ICMP, VOIP
tc class add dev eth0 parent 1:1 classid 1:20 htb \
 rate 50kbit \
 ceil 695kbit \
 prio 0 \
 mpu 96 \
 overhead 24
# GAMING
tc class add dev eth0 parent 1:1 classid 1:21 htb \
 rate 600kbit \
 ceil 695kbit \
 prio 1 \
 mpu 96 \
 overhead 24
# NORMAL
tc class add dev eth0 parent 1:1 classid 1:22 htb \
 rate 15kbit \
 ceil 695kbit \
 prio 2 \
 mpu 96 \
 overhead 24
# LOW PRIORITY (WWW SERVER, FTP SERVER)
tc class add dev eth0 parent 1:1 classid 1:23 htb \
 rate 15kbit \
 ceil 695kbit \
 prio 3 \
 mpu 96 \
 overhead 24
# P2P
tc class add dev eth0 parent 1:1 classid 1:24 htb \
 rate 15kbit \
 ceil 695kbit \
 prio 4 \
 mpu 96 \
 overhead 24
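
Once those are in place, I keep an eye on the per-class counters with plain unpatched tc, just to confirm traffic is landing in the classes I expect:

# show per-class "Sent X bytes Y pkts" counters
tc -s class show dev eth0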

Here's the logic I used to come up with these numbers:

The maximum real bandwidth, assuming no wasted payload, is 768 * 48 / 53 = ~695 kbit. This accounts for the fact that ATM cells are 53 bytes, with 5 of those bytes being header overhead (leaving 48 bytes of payload). So that's the overall rate that I'm working with.
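
As a quick sanity check of that arithmetic (just a throwaway awk one-liner, truncating the same way I did above):

awk 'BEGIN { printf "%d kbit\n", 768 * 48 / 53 }'   # prints "695 kbit"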

I then set the MPU to 96 (2 * 48) since the minimum Ethernet packet (64 bytes) uses two ATM cells (each carrying 48 bytes of data). I use 48 instead of 53 here because the ATM header overhead was already accountedted for in the previous calculation.

For the overhead I use 24, since nearly every Ethernet packet is going to end up only partially filling its last ATM cell. Going by the law of averages (rather than a real-world statistical analysis), I assume each Ethernet packet wastes, on average, half of a cell. Again using 48 as the cell payload size (since the ATM header overhead is already accounted for), 48 / 2 = 24.
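
To see what this approximation does across a few packet sizes, here's a rough back-of-the-envelope awk sketch (my own, nothing to do with the patch; sizes picked arbitrarily). "approx" is what HTB gets charged under mpu 96 / overhead 24; "exact" is the payload rounded up to whole 48-byte cells (this still ignores any AAL5 trailer bytes, which I've been ignoring throughout):

awk 'BEGIN {
  n = split("64 100 576 1500", sizes)
  for (i = 1; i <= n; i++) {
    len = sizes[i]
    approx = (len < 96 ? 96 : len) + 24    # max(len, mpu) + overhead
    exact  = int((len + 47) / 48) * 48     # round up to whole 48-byte cells
    printf "%5d bytes -> approx %4d, exact %4d\n", len, approx, exact
  }
}'

Depending on where a packet ends relative to a cell boundary, the approximation charges a little too much or a little too little, which is exactly the law-of-averages idea.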

A theoretical comparison of sending 10,000 bytes via various packet sizes would therefore look like this:

1000 packets of 10 bytes each = 1000 (packets) * 96 (mpu) + 1000 * 24 (overhead) = 120,000 bytes
100 packets of 100 bytes each = 100 (packets) * 100 (bytes) + 100 * 24 (overhead) = 12,400 bytes
10 packets of 1000 bytes each = 10 (packets) * 1000 (bytes) + 10 * 24 (overhead) = 10,240 bytes


Which of course indicates just how much bandwidth small packets waste...
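
Those three results come straight out of the same max(len, mpu) + overhead model; here's a throwaway awk script that reproduces them:

awk 'BEGIN {
  split("10 100 1000", sizes)
  for (i = 1; i <= 3; i++) {
    len = sizes[i]
    pkts = 10000 / len    # packets needed to move 10,000 bytes
    printf "%4d packets of %4d bytes = %6d bytes\n", pkts, len, pkts * ((len < 96 ? 96 : len) + 24)
  }
}'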

So, is this logic crap or what? Should this at least be close to optimal, or did I forget something important or make an erroneous assumption?

I run some game servers, so I was able to do a real-world test. With the game servers maxed out, I started an FTP upload... the latency for the players went up slightly, but not too much... maybe just 30ms or so. So from that perspective this seems to be working well.
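
If anyone wants to repeat that kind of test, it was essentially just this, run once while idle and once during the upload, then comparing the average RTTs (hostname is a placeholder):

ping -c 50 -q some.game.host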

One thing I wish I could do would be to query the DSL modem to see exactly how much bandwidth usage it is reporting, but unfortunately my ISP now uses these crappy new ADSL modems that don't support SNMP :( :( My old SDSL router did, and I miss that feature a lot, but not enough to buy new ADSL modems myself.

Chris
