I still think that you need to throttle back on your downstream rates - when the link is really full you may find that new/bursty connections mean you lose control of the queue. Of course, having twice as much bandwidth always helps.
Agree that it might be best in practice, but barring the initial surge you can now really crank everything up to the limit. Actually, just starting one or two connections (wget), I didn't notice more than a slight blip in ping times at startup - for single users you could probably leave it quite close to the max (as long as you can tolerate very brief increases in latency).
Latency is what I care about most - a 300ms blip is really annoying for me. I suppose what I would see as 300 you would see as 150, which I could live with.
If I ran close to the limit on ingress it would hurt too much - two long-lived connections would be OK. It's new connections and many bursty connections that are the problem.
If you let things get really bad and the ISP starts dropping packets, then you lose fairness/control as well.
Can you test how big your ISP/BT buffer is? Mine is 600ms - I wonder if yours is half that or the same.
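A rough way to measure it (host and rate figures below are illustrative, not from this thread): saturate the uplink, ping the first hop, and take the difference between loaded and idle RTT. That queueing delay times the link rate gives the buffer size in bytes.

```shell
# 1. Saturate the uplink in one terminal (any sustained upload will do).
# 2. In another terminal:  ping -c 100 <first-hop-ip>
# If idle RTT is ~20 ms and loaded RTT plateaus at ~620 ms, the buffer
# adds ~600 ms of queueing.  At an assumed 256 kbit/s uplink that is:
delay_ms=600
rate_bps=256000
echo $(( delay_ms * rate_bps / 8 / 1000 ))   # bytes of buffering
```

A 600ms plateau at that rate would mean roughly 19 KB of buffering in the ISP/modem queue.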
I am only testing uploads. Looking at some more pings, they are not quite as random as they were - but apart from the odd double dequeue (which I think you can expect with HTB using quanta/DRR rather than per-byte accounting), the maximum is right. I suspect this has nothing to do with the ISP/telco end. I could actually do slightly better on latency, but I am running at 100Hz - and I can tell from the pings: they slowly rise, then snap down by 10ms. This is nothing to do with tc; I normally run 500 but forgot to set it for this kernel.
Aha - I switched off the HTB_HYSTERESIS option and now my pings are nailed to the floor. I'm also on a 2.6 kernel, which I *think* means 1000Hz scheduling by default?
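That 10ms sawtooth is consistent with the timer tick: the scheduler can only act once per tick, so worst-case added delay is one tick period, 1000/HZ milliseconds. A quick sanity check (the HZ values below are just the common build options, not read from any particular system):

```shell
# Tick period in milliseconds for common HZ settings.
for hz in 100 250 500 1000; do
  echo "HZ=$hz -> tick $((1000 / hz)) ms"
done
# On many 2.6 kernels you can confirm the build setting with something like:
#   grep CONFIG_HZ /boot/config-$(uname -r)
```

At 100Hz the tick is 10ms, matching the rise-and-snap you see; at 1000Hz it drops to 1ms.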
My downlink is clear. I am using a 1478 MTU (so I don't waste half an ATM cell per packet). Just did another hundred to my first hop -
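As an aside on where 1478 comes from: on PPPoA with VC-mux encapsulation the per-packet overhead is commonly 10 bytes (2-byte PPP header + 8-byte AAL5 trailer), and ATM cells carry 48 payload bytes each, so an MTU is cell-efficient when MTU + 10 divides evenly by 48. A sketch, assuming that overhead (PPPoE or LLC encapsulation adds different amounts):

```shell
overhead=10   # assumed PPPoA/VC-mux overhead; other encapsulations differ
cell=48       # ATM cell payload bytes
for mtu in 1478 1500; do
  echo "MTU $mtu: $(( (mtu + overhead + cell - 1) / cell )) cells," \
       "$(( (cell - (mtu + overhead) % cell) % cell )) bytes wasted in last cell"
done
```

Under that assumption 1478 fills exactly 31 cells with nothing wasted, while a 1500-byte packet spills into a 32nd cell that is mostly padding.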
Hmm, quick question. I'm using Nildram as my ISP, and I can't ping with packet sizes over 1430 when the don't-fragment bit is set.
Hmm, ISTR reading of an ISP - maybe Nildram - that enforced MTU tweaks; I thought they chose 1478, but a lot of people like 1430.
However, according to iptraf on the bridge (behind the ADSL router), I am receiving 1492-byte packets quite regularly... Since the ADSL router is doing NAT, does this likely imply that it is doing packet reassembly before the bridge sees each packet?
I know that for me, if I set ppp0 to, say, 576 and eth0 and the LAN machines to 1500, then Linux will fragment and reassemble ICMP - but let larger packets through.
I just changed the MTU on the router, but I don't see any difference in the size of the packets. Shouldn't the router's MTU be enough to trigger the initial connection to drop to a smaller MTU?
No - well, it won't for me. You can use MSS clamping to make TCP comply, but I just change the MTU on the LAN machines instead.
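The usual clamp is a single iptables mangle rule on the router (interface name is illustrative); it rewrites the MSS option on SYN packets so TCP peers never send segments bigger than the path can carry, without touching the LAN MTUs:

```shell
iptables -t mangle -A FORWARD -o ppp0 -p tcp \
    --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# Or pin an explicit value, e.g. for a 1430 path MTU,
# MSS = 1430 - 40 (IP + TCP headers) = 1390:
#   iptables -t mangle -A FORWARD -o ppp0 -p tcp \
#       --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1390
```

Note this only helps TCP; ICMP and UDP still rely on fragmentation or path-MTU discovery.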
I did find that in the worst case I could do better than RED (not by much, though), and I now do per-IP classes for bulk - so it's harder to get the right settings with more than one RED queue, each of which may have a different bandwidth at any given time. I also reduced the number of classes so each could have a higher rate.
Can you describe a little more about how you are improving on RED?
I would quite like to tweak the incoming side to have three buckets: one for ACKs etc., a second for normal incoming, and a last for bulk incoming. The idea would be to drop the bulk in preference to the normal incoming. I appreciate this won't work all that well, but it would be nice to say that my web browsing takes slight preference over a bulk FTP download - or that incoming connections to a server machine take big preference over downloads to a local client machine (think office environment).
I would be interested in your ideas on how to achieve that.
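A minimal sketch of the three-bucket idea above, untested and with everything illustrative (device name, rates for a ~512 kbit/s downlink, classifiers) - shaping on the LAN-facing interface so it applies to traffic arriving from the ISP:

```shell
#!/bin/sh
# Hypothetical three-bucket HTB on the LAN-facing interface (eth0).
DEV=eth0
tc qdisc add dev $DEV root handle 1: htb default 30
tc class add dev $DEV parent 1:  classid 1:1  htb rate 480kbit ceil 480kbit
tc class add dev $DEV parent 1:1 classid 1:10 htb rate 120kbit ceil 480kbit prio 0
tc class add dev $DEV parent 1:1 classid 1:20 htb rate 240kbit ceil 480kbit prio 1
tc class add dev $DEV parent 1:1 classid 1:30 htb rate 120kbit ceil 480kbit prio 2
# 1:10 = ACKs/small packets, 1:20 = normal incoming, 1:30 = bulk (default).
# Small TCP packets (IP total length < 64, the classic ACK match) to 1:10:
tc filter add dev $DEV parent 1: protocol ip prio 1 u32 \
    match ip protocol 6 0xff match u16 0x0000 0xffc0 at 2 flowid 1:10
# Web downloads (source port 80) to 1:20; everything else falls to bulk:
tc filter add dev $DEV parent 1: protocol ip prio 2 u32 \
    match ip sport 80 0xffff flowid 1:20
```

When the parent is saturated, the prio 2 bulk class is the first to be held back, which approximates "drop bulk in preference to normal" - though, as noted, you can only influence what the ISP has already sent you, not prevent it arriving.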
I'll answer this later - it's hard to get on a PC in my house on a Sunday - "Not my turn" apparently :-)
Andy.
_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/