Re: HTB ATM MPU OVERHEAD (without any patching)

Linux Advanced Routing and Traffic Control

I was able to do some further testing today with a full crew of players on my game servers. I cleaned up my script a bit to make it easier to modify the MPU and OVERHEAD, and also added both settings to the root class for completeness' sake (not sure that matters at all). I'll include the final script at the end for posterity.

The final settings that worked best in my case:

I kept the rate at 695 kbit (my provisioned upload rate of 768 kbit * 48 / 53, to account for the ATM cell header overhead; the arithmetic is sketched just below this list).
I left the MPU at 96, since the minimum ethernet frame still takes two ATM cells (2 * 48).
For the overhead I went with 50, which is higher than the calculated "optimum" of 24 (half of an ATM cell).
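
If you prefer to derive those numbers instead of hard-coding them, the same arithmetic can go straight into the script. This is only a sketch of the calculation above; the variable names are mine and the 768 kbit sync rate is of course specific to my line:

UPLINK=768                       # provisioned upstream sync rate in kbit
CEIL=$(( UPLINK * 48 / 53 ))     # only 48 of every 53 bytes in an ATM cell carry payload -> 695
MPU=96                           # a minimum-size ethernet frame still occupies two 48-byte cell payloads
OVERHEAD=50                      # empirically tuned, see below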


I thought about why an overhead value of 50 works best, and I really can't say for sure. For a while I considered the possibility that I had the calculation wrong and that an overhead of 48 might make more sense (since a full cell is still needed to carry any leftover data that doesn't fill a whole cell), but charging 48 still means we account for x more bytes than necessary (with x somewhere between 0 and 47), so over many packets it should still average out to an overhead of about 24. So I just don't know; apparently there is simply more overhead in practice than the calculation suggests.
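
To make that averaging argument a bit more concrete, here is a rough back-of-the-envelope sketch (purely illustrative, using the usual AAL5 numbers of an 8-byte trailer plus 48 payload bytes per cell) showing how the per-packet padding bounces around between 0 and 47 bytes:

LEN=576                              # example packet size in bytes
CELLS=$(( (LEN + 8 + 47) / 48 ))     # AAL5 trailer added, then rounded up to whole cells
PADDING=$(( CELLS * 48 - LEN - 8 ))  # unused bytes in the last cell
echo "$LEN byte packet -> $CELLS cells, $PADDING bytes of padding"

Averaged over a random mix of packet sizes that padding does come out near 24 bytes, which is exactly why the measured sweet spot of 50 surprises me.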

This is working beautifully in any case. A typical glance at my players shows pings from 70 up to 300 ms or so (people with higher pings tend to leave since the game becomes unplayable). For this series of tests I latched onto a few players with particularly low pings and then started an FTP transfer to my shell account at my ISP. With these settings there was a slight initial bump of just a few milliseconds, but after a minute the pings actually returned to where they had been at the start. So yes, I am saying that there was ZERO latency increase for the players. I'm amazed.

While the overhead of 50 is higher than expected, it does appear to be very close to optimal. Even dropping it to 40 causes noticeable differences if I let the FTP transfer run for a long time. In essence the overhead value serves here as the tuning knob between large and small packets, with higher overhead values giving more weight to the smaller packets. I don't want to give the small packets too little weight, because then not enough bandwidth is reserved for them; too much weight reserves more bandwidth than they need and forces the big packets to slow down needlessly. In my case 50 works just about perfectly, though other people may well find different values appropriate.

I'm sure that an IPROUTE2 patched with the Wildgoose ATM patch works even better, but for an unpatched IPROUTE2 I think this is working pretty darn well.

This is the final, cleaned-up partial script I ended up with:

CEIL=695        # 768 kbit * 48 / 53, the ATM-adjusted upload rate
MPU=96          # two ATM cells (2 * 48)
OVERHEAD=50     # tuned by experiment, see above

# create HTB root qdisc
tc qdisc add dev eth0 root handle 1: htb default 22
# create classes
tc class add dev eth0 parent 1: classid 1:1 htb \
    rate ${CEIL}kbit \
    mpu ${MPU} \
    overhead ${OVERHEAD}
# create leaf classes
# ACKs, ICMP, VOIP
tc class add dev eth0 parent 1:1 classid 1:20 htb \
    rate 50kbit \
    ceil ${CEIL}kbit \
    prio 0 \
    mpu ${MPU} \
    overhead ${OVERHEAD}
# GAMING
tc class add dev eth0 parent 1:1 classid 1:21 htb \
    rate 600kbit \
    ceil ${CEIL}kbit \
    prio 1 \
    mpu ${MPU} \
    overhead ${OVERHEAD}
# NORMAL
tc class add dev eth0 parent 1:1 classid 1:22 htb \
    rate 15kbit \
    ceil ${CEIL}kbit \
    prio 2 \
    mpu ${MPU} \
    overhead ${OVERHEAD}
# LOW PRIORITY (WWW SERVER, FTP SERVER)
tc class add dev eth0 parent 1:1 classid 1:23 htb \
    rate 15kbit \
    ceil ${CEIL}kbit \
    prio 3 \
    mpu ${MPU} \
    overhead ${OVERHEAD}
# P2P (BITTORRENT, FREENET)
tc class add dev eth0 parent 1:1 classid 1:24 htb \
    rate 15kbit \
    ceil ${CEIL}kbit \
    prio 4 \
    mpu ${MPU} \
    overhead ${OVERHEAD}
# attach qdiscs to leaf classes - using SFQ for fairness
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:21 handle 21: sfq perturb 10
tc qdisc add dev eth0 parent 1:22 handle 22: sfq perturb 10
tc qdisc add dev eth0 parent 1:23 handle 23: sfq perturb 10
tc qdisc add dev eth0 parent 1:24 handle 24: sfq perturb 10
# create filters to determine to which queue each packet goes
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 22 fw flowid 1:22
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 23 fw flowid 1:23
tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 24 fw flowid 1:24
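
Note that the fw filters above only match on firewall marks, so the script assumes something else is already marking the outgoing packets. Rules along these lines would do it (purely illustrative; the ports are placeholders for my services, and traffic routed through the box would need matching rules in the FORWARD or POSTROUTING chain of the mangle table):

# mark packets so the fw filters can steer them into the right class
iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 20                   # ICMP
iptables -t mangle -A OUTPUT -p udp --sport 27015 -j MARK --set-mark 21      # game server traffic (example port)
iptables -t mangle -A OUTPUT -p tcp --sport 80 -j MARK --set-mark 23         # local WWW server
iptables -t mangle -A OUTPUT -p tcp --sport 6881:6889 -j MARK --set-mark 24  # bittorrent (example port range)

Anything left unmarked simply falls through to the default class 1:22.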


----- Original Message -----
From: "Jason Boxman" <jasonb@xxxxxxxxxx>
To: <lartc@xxxxxxxxxxxxxxx>
Sent: Monday, April 11, 2005 7:14 PM
Subject: Re: HTB ATM MPU OVERHEAD (without any patching)


Those are the thoughts I always had. I never successfully played around with overhead or MPU, though. Did you compare results with and without using the overhead and mpu settings?

