Re: HTB ATM MPU OVERHEAD (without any patching)

Linux Advanced Routing and Traffic Control


 



Chris Bennett wrote:
I know there is that handy patch available to very efficiently use ATM bandwidth, but I was wondering what the best values to use with a non-patched iproute2 would be. Anyone here care to check my logic in coming up with these numbers and perhaps suggest better values?

My transmit speed is 768kbps per ADSL line (I have two).

Is that your sync speed, or the speed advertised by your ISP? They may differ, and if you patch/allow for overhead properly you can approach the sync speed. Most modems will tell you this.


Also, "bps" means bytes/sec to tc - I assume you mean kbit. And depending on the tc version: in old tc both "k" and "K" = 1024; in newer tc "k" = 1000 and, I think, "K" is still 1024.

# create leaf classes
# ACKs, ICMP, VOIP
tc class add dev eth0 parent 1:1 classid 1:20 htb \
 rate 50kbit \

OK, as long as you can be sure voip + ack + icmp never exceeds 50kbit - otherwise that traffic will queue whenever the other classes are full.



Here's the logic I used to come up with these numbers:

The maximum real bandwidth, assuming no waste of data, is 768 * 48 / 53 = 695. This accounts for the fact that ATM cells are 53 bytes, with 5 bytes of that being header overhead. So that's the overall rate that I'm working with.
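The 48/53 arithmetic above can be checked with plain shell arithmetic (just a sketch of the calculation - the resulting kbit value is what you would hand to tc):

```shell
# Effective IP-level rate of a 768 kbit ATM link: each 53-byte
# cell carries only 48 bytes of payload, the other 5 are header.
rate_kbit=$(( 768 * 48 / 53 ))
echo "$rate_kbit"    # 695 (integer division truncates 695.5)
```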

Depending on sync speed mentioned above that may be OK for ATM data rate ..


I then set the MPU to 96 (2 * 48), since the minimum ethernet packet (64 bytes) needs two ATM cells (each carrying 48 bytes of data). I use 48 instead of 53 here because we already accounted for the ATM header overhead in the previous calculation.


For the overhead I use 24, since nearly each ethernet packet

HTB uses the IP packet length, to which you then need to add your fixed per-packet overhead - for pppoe that may include the ethernet header and more. Have a look at Jesper's table in his thesis:


http://www.adsl-optimizer.dk/

On top of that you need to account for the fact that, depending on the original packet length, up to 47 bytes of padding get added to fill out the last cell.

To be safe you need an overhead value a lot bigger than 24.
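The cell-padding effect can be sketched like this (assumptions: any fixed per-packet overhead is already folded into the length you pass in, and the AAL5 8-byte trailer is ignored for simplicity):

```shell
# Number of 48-byte cell payloads needed for a given wire length,
# and how many padding bytes the last cell wastes.
cells() {
    echo $(( ($1 + 47) / 48 ))        # ceil(len / 48)
}
padding() {
    echo $(( $(cells "$1") * 48 - $1 ))
}
cells 64      # 2 -> a minimum ethernet frame costs two cells (the MPU of 96 above)
padding 49    # 47 -> worst case: one byte spills into a nearly empty extra cell
```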

I run some game servers, so I was able to do a real-world test. With the game servers maxed out, I started an FTP upload... the latency for the players went up slightly, but not too much - maybe just 30ms or so. So from that perspective this seems to be working well.

You could tweak htb and your rules to improve on this.

In net/sched/sch_htb.c there is a define, HTB_HYSTERESIS, which defaults to 1; this makes htb dequeue packets in pairs even if you specify quantum = MTU on the leaves. Setting it to 0 fixes this - and specifying quantum = MTU on leaf classes plus burst/cburst 10b on the bulk classes lets me never delay an interactive packet longer than the transmit time of one bulk-sized packet.
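A sketch of what that tuning could look like (untested; the device, rates, and the bulk classid 1:30 are assumptions, only 1:20 comes from the example above, and the quantum values assume a 1500-byte MTU):

```shell
# Interactive class: one MTU per dequeue round.
tc class change dev eth0 parent 1:1 classid 1:20 htb \
    rate 50kbit quantum 1500

# Bulk class: tiny burst/cburst so it cannot jump ahead of
# interactive traffic by more than one packet.
tc class change dev eth0 parent 1:1 classid 1:30 htb \
    rate 600kbit quantum 1500 burst 10b cburst 10b
```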

Andy.
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
