OK, here it is: near-perfect bandwidth calculation for ADSL users. Patch iproute2 with the HTB stuff, then apply this:
It's still a hack (as far as I can tell): because we are patching the rate tables, I think it is only loosely coupled with the actual calculation of bytes in each bucket.
However, it works very nicely for me! I have only been testing lightly with downloads (hence relying on packet dropping to slow the rates), and I can set the max download rate to within a few kbyte/sec of the maximum and still keep near-minimum ping times. I assume the remaining sliver of bandwidth is taken up by packets which arrive in a slight cluster, and by packets which I later need to drop (since I'm testing on an incoming interface, and dropped packets don't count in the bandwidth-used calculations). Even so, I seem to be able to get *extremely* close to the max with this patch.
Obviously all the numbers are hard coded, but they should be suitable for all ATM users. PPPoE users will need to do something different (if someone can supply the details, I will see what we can do to make a more generic patch and use module parameters).
Note that this code will probably affect the policer and CBQ modules in the same way as HTB; however, I don't have such a setup, so I can't test its effectiveness (or detriment...). Feedback appreciated.
Note also that rates in your scripts will now be expressed in terms of the ATM bandwidth, i.e. you put in something like the bandwidth you paid for, but (of course) you only get roughly bw * 48/53 of it passing through (this is normal; it's the overhead of running ATM).
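As a rough worked example of that ratio (a quick sketch of mine; the 512 kbit/s sync rate is just illustrative, not from this thread):

/* Each 53-byte ATM cell carries only 48 bytes of payload, so the
 * usable rate is roughly sync * 48/53. */
#include <stdio.h>

int main(void)
{
	double sync_bps = 512000.0;			/* illustrative 512 kbit/s sync */
	double payload_bps = sync_bps * 48.0 / 53.0;	/* ~463700 bit/s usable */
	printf("%.0f bit/s of payload\n", payload_bps);
	return 0;
}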
--- iproute2-2.4.7.20020116/tc/tc_core.c	2000-04-16 18:42:55.000000000 +0100
+++ iproute2/tc/tc_core.c	2004-06-18 12:20:39.912974518 +0100
@@ -59,10 +59,19 @@
 		while ((mtu>>cell_log) > 255)
 			cell_log++;
 	}
+
+	// HACK - UK ATM Params
+	int encaps_cell_sz = 53;
+	int encaps_cell_overhead = 5;
+	int encaps_data_sz = encaps_cell_sz - encaps_cell_overhead;
+	int proto_overhead = 10;	// PPP Overhead
+
 	for (i=0; i<256; i++) {
-		unsigned sz = (i<<cell_log);
-		if (sz < mpu)
-			sz = mpu;
+		unsigned sz = ((i+1)<<cell_log)-1;
+		sz = sz + proto_overhead;
+		sz = ( (int)((sz-1)/encaps_data_sz) + 1) * encaps_cell_sz;
+//		if (sz < mpu)
+//			sz = mpu;
 		rtab[i] = tc_core_usec2tick(1000000*((double)sz/bps));
 	}
 	return cell_log;
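For anyone who just wants the arithmetic, here is the same adjustment as a standalone sketch (my own illustrative function with the same UK PPPoA numbers hard coded - it is not part of iproute2):

#include <stdio.h>

/* Sketch of the size adjustment the patch applies: add the protocol
 * overhead, then round up to a whole number of 53-byte ATM cells,
 * each of which carries only 48 bytes of payload. */
static unsigned atm_wire_size(unsigned pkt_len)
{
	const unsigned cell_sz = 53;		/* bytes per cell on the wire */
	const unsigned cell_overhead = 5;	/* per-cell header */
	const unsigned data_sz = cell_sz - cell_overhead;	/* 48 payload bytes */
	const unsigned proto_overhead = 10;	/* PPP framing, as in the patch */

	unsigned sz = pkt_len + proto_overhead;
	unsigned cells = (sz + data_sz - 1) / data_sz;	/* round up */
	return cells * cell_sz;
}

int main(void)
{
	/* A 1500-byte packet needs 32 cells, i.e. 1696 bytes of link time. */
	printf("%u\n", atm_wire_size(1500));
	return 0;
}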
Nice one Ed :-)
After a bit of messing about:
The patch wouldn't apply and I couldn't see why, so I did it by hand; I also had to move the variable declarations to the top of the function to get it to compile (C89 compilers reject declarations after statements).
I set my up-rate to 280kbit in tc, which is 286720 bit/s. I am synced at 288000 - as you probably are, in the UK, on what BT call 250/500 and ISPs call 256/512. I left a bit of slack just to let the buffer empty if the odd extra packet slips through. FWIW, maxing the downlink (576000 for me) will probably mess things up - you need to shape slower than the link, or you never get to build up queues and will often be using your ISP's buffer instead.
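(For anyone checking those figures, a quick sketch - it assumes this tc parses "kbit" as 1024 bit/s, which is what the 286720 figure implies:)

#include <stdio.h>

int main(void)
{
	unsigned tc_rate   = 280 * 1024;	/* "280kbit" as tc parses it: 286720 bit/s */
	unsigned sync_rate = 288000;		/* upstream sync on BT 250/500 */

	printf("shaped rate: %u bit/s\n", tc_rate);
	printf("slack left:  %u bit/s\n", sync_rate - tc_rate);	/* 1280 bit/s */
	return 0;
}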
I've been maxing the uplink with BT for the last couple of hours and it's working fine:
100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max/stddev = 15.586/44.518/67.984/13.367 ms
It's just as it should be for my MTU.
When I get some time later I'll start hitting it with lots of small packets as well.
Andy.