Re: HTB MPU

Linux Advanced Routing and Traffic Control

Jason Boxman wrote:
On Friday 14 May 2004 03:05, Ed Wildgoose wrote:
<snip>

It appears that you could change the patch in tc/core, in the function tc_calc_rtable(), from:

 + if (overhead)
 +     sz += overhead;

to something like:

 + if (overhead)
 +     sz += (((sz-1)/mpu)+1) * overhead;
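(((sz-1)/mpu)+1) is integer ceil(sz/mpu), so the change charges overhead once per mpu-sized chunk of the packet instead of once per packet. A minimal standalone sketch of that arithmetic (the names here are illustrative, not the actual iproute2 code):

 /* Hedged sketch of the size adjustment above; adjust_size is an
  * illustrative name, not a function from iproute2. */
 #include <stdio.h>

 static unsigned adjust_size(unsigned sz, unsigned mpu, unsigned overhead)
 {
     /* ceil(sz/mpu) chunks, each charged `overhead' extra bytes */
     if (overhead && mpu)
         sz += (((sz - 1) / mpu) + 1) * overhead;
     return sz;
 }

 int main(void)
 {
     /* e.g. a 100-byte packet with mpu 53, overhead 5: 2 chunks -> 110 */
     printf("%u\n", adjust_size(100, 53, 5));
     return 0;
 }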


I did that and recompiled iproute2. I kicked my rate up to my actual connection speed, 256kbit, and I was nailed as usual. No measurable change using the above with an mpu of 54 on each class. Nothing changed at my handicapped rate of 160kbit either.

tc qdisc add dev eth0 root handle 1: htb default 90
tc class add dev eth0 parent 1: classid 1:1 htb rate 160kbit ceil 160kbit \
  mpu 54
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit ceil 64kbit \
  mpu 54 prio 0
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 80kbit ceil 160kbit \
  mpu 54 prio 1
tc class add dev eth0 parent 1:1 classid 1:50 htb rate 8kbit ceil 160kbit \
  mpu 54 prio 1
tc class add dev eth0 parent 1:1 classid 1:90 htb rate 8kbit ceil 160kbit \
  mpu 54 prio 1

<snip>

Can someone with a working setup try this out and see if it helps?


No joy. I had more success modifying the HTB_HYSTERESIS compile-time option.

What would be nice is something that could calculate the actual PPPo(E|A) overhead on the fly and schedule accordingly.

After all, this whole [your rate] * 0.8/0.75/0.65 business (I'm stuck with the last value) is kind of a hack. If a scheduler existed that understood that the packets were ATM'd and the overhead imposed therein, you could simply specify your rate as what it really is. By using a fraction of your actual egress bandwidth you're configuring for the worst-case scenario. In reality, depending on your traffic, I think you can approach your actual rate more closely.

(The classic example is sending an unloaded TCP ACK, which costs you two ATM cells and essentially wastes an entire cell. But in some situations your traffic might be mostly large IP packets, and then the wasted overhead is greatly reduced...)
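To make the waste concrete, here is a hedged sketch of the on-wire cost of an IP packet over AAL5/ATM; the encap_overhead parameter (PPP/PPPoE headers plus the AAL5 trailer) is an assumed value that varies by DSL flavour, as noted further down:

 /* Sketch: wire bytes consumed by one IP packet over AAL5/ATM.
  * encap_overhead is an assumed per-packet encapsulation cost. */
 #include <stdio.h>

 #define ATM_PAYLOAD 48  /* data bytes per cell */
 #define ATM_CELL    53  /* cell size including the 5-byte header */

 static unsigned atm_wire_bytes(unsigned ip_len, unsigned encap_overhead)
 {
     unsigned total = ip_len + encap_overhead;
     unsigned cells = (total + ATM_PAYLOAD - 1) / ATM_PAYLOAD; /* ceil */
     return cells * ATM_CELL;
 }

 int main(void)
 {
     /* 40-byte TCP ACK + 10 bytes encap -> 2 cells = 106 wire bytes */
     printf("ACK:  %u\n", atm_wire_bytes(40, 10));
     /* 1500-byte packet + 10 bytes encap -> 32 cells = 1696 wire bytes */
     printf("1500: %u\n", atm_wire_bytes(1500, 10));
     return 0;
 }

With those assumed numbers the ACK spends about 38% of its wire bytes on IP payload while the 1500-byte packet spends about 88%, which is why a fixed rate fraction amounts to configuring for the worst case.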

Anyway, is there any known work on such a scheduler? I'd be interested in beta testing anything under development.

Reading your other post I see your small traffic is ~100 bytes - this would use three cells, so as a temporary kludge you could set your mpu to 159 and see how it goes.
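(For the arithmetic behind 159: a ~100-byte packet spans ceil(100/48) = 3 cells even before any encapsulation overhead, and 3 * 53 = 159 bytes on the wire.)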


AFAIK the author of the HTB patch is looking into modifying it to do the sums properly for DSL. There isn't one answer, though - Ed's formula is fine for the cells bit, but before that you need to add a PPP overhead to the IP packet length, and this varies with PPPoA + VC mux, PPPoE, bridged PPPoE, and probably other varieties of DSL implementation.
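To illustrate the variation, some commonly cited per-packet overheads to add to the IP length before the cell calculation (assumed figures, not from this thread - check them against your own link):

 /* Illustrative per-packet encapsulation overheads, in bytes.
  * Assumed, commonly cited values; verify for your own line. */
 enum dsl_encap_overhead {
     PPPOA_VCMUX = 10, /* PPP (2) + AAL5 trailer (8) */
     PPPOA_LLC   = 14, /* adds LLC/NLPID (4) */
     PPPOE_LLC   = 40, /* Ethernet + PPPoE + LLC-SNAP + AAL5 trailer */
 };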

Andy.

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
