Re: patch: HTB update for ADSL users

Linux Advanced Routing and Traffic Control

After a bit of messing about -
Patch wouldn't apply and I couldn't see why. Then I did it by hand and had to move the vars to the top of the function to get it to compile.


Hmm, perhaps it got corrupted by the change in line endings when I pasted it in on a Windows machine? It's a piece of cake to apply manually, at least. If I can get some PPPoE settings then I will make a more generic patch and stick it on a website.

Can you paste the compile errors and tell me your gcc version, please? I can't see any problems with that code, though. (Those vars will be params passed in later anyway.)

I set my uprate to 280kbit in tc (= 286720 bit/s); I am synced at 288000, as you probably are in the UK, on what BT calls 250/500 and ISPs call 256/512. I left a bit of slack just to let the buffer empty if the odd extra packet slips through. FWIW, maxing the downlink (576000 for me) will probably mess things up - you need to be slower than that, or you don't get to build up queues locally and will often be using your ISP's buffer instead.
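In tc terms that just means something like the following - a minimal sketch only, assuming a ppp0 device and a plain HTB root (your device name and exact rates will obviously differ):

  # Cap the upstream a little below the 288000 bit/s sync rate
  # (280kbit in tc works out as 280 * 1024 = 286720 bit/s here)
  tc qdisc add dev ppp0 root handle 1: htb default 10
  tc class add dev ppp0 parent 1: classid 1:1 htb rate 280kbit ceil 280kbit
  tc class add dev ppp0 parent 1:1 classid 1:10 htb rate 280kbit ceil 280kbit
  tc qdisc add dev ppp0 parent 1:10 handle 10: sfq perturb 10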

I've been maxing uplink with bt for the last couple of hours and it's working fine -


Yes, excellent isn't it! I tested download rates (bearing in mind the difficulty of controlling those) and could get within a sliver of full bandwidth before the latency rises!

I see a two-stage rise in ping times: first it stays at 30ms, then it rises to 60-90ms, then it queues like crazy. It's an interesting kind of three-step ramp-up. I have a hunch that packets don't arrive smoothly and queuing occurs at the ISP end (once we get near the limit), even though the average rate is below the max rate...? (i.e. from time to time you start to see two packets ahead of you instead of just one.)

100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max/stddev = 15.586/44.518/67.984/13.367 ms

It's just as it should be for my MTU.


Hmm, what's your MTU? Those numbers look extremely low for 1500-byte packets (at least if you have a little downlink congestion as well).
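Just to put a number on it (assuming a 256 kbit/s upstream and 1500-byte MTU - adjust for your own line), each full-size packet queued ahead of a ping adds roughly one serialisation time:

  # rough serialisation delay of one 1500-byte packet at 256 kbit/s
  echo "1500 * 8 / 256000" | bc -l
  # => ~0.047, i.e. about 47ms of extra ping per full-size packet ahead of you

So a max of ~68ms over a ~16ms baseline looks like barely one large packet ever gets in the way.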

When I get some time later I'll start hitting it with lots of small packets as well.


I have a 1 Mbit downstream with 256 kbit upstream, and I turned on BitTorrent to try and flog the connection a bit. Upstream was maxed out, but downstream was only half full. However, ping times are 20-110ms. I think they ought to be only 20-80ms ish, and I'm trying to work out why there is some excess queuing (1500-ish MTU). My QoS is based on the (excellent) script from:
http://digriz.org.uk/jdg-qos-script/


Basically, HTB in both directions: RED on the incoming stream (works nicely), and outgoing traffic classified into 10 buckets. ACKs + pings are definitely going out OK in the top-prio bucket, and the rest is going out in the prio 10 bucket... but still these high pings... Hmm.
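For what it's worth, the outgoing half boils down to roughly this - a heavily simplified sketch, not the actual jdg-qos-script, with the device name, rates and the fwmark for ACKs/pings all assumed:

  DEV=ppp0
  tc qdisc add dev $DEV root handle 1: htb default 20
  tc class add dev $DEV parent 1: classid 1:1 htb rate 280kbit
  # top-prio bucket for ACKs/pings, bulk bucket for everything else
  tc class add dev $DEV parent 1:1 classid 1:10 htb rate 64kbit ceil 280kbit prio 1
  tc class add dev $DEV parent 1:1 classid 1:20 htb rate 216kbit ceil 280kbit prio 7
  # iptables (not shown) is assumed to mark ACKs and ICMP with fwmark 1
  tc filter add dev $DEV parent 1: protocol ip prio 1 handle 1 fw flowid 1:10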


I would be interested to hear from anyone with a CBQ-based setup who can tell me whether that patch works for them - or even whether it works properly on the incoming policer.
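By "the incoming policer" I mean the usual ingress policer, something like this (the rate here is just an example, not a recommendation):

  tc qdisc add dev ppp0 handle ffff: ingress
  tc filter add dev ppp0 parent ffff: protocol ip prio 50 u32 match ip src 0.0.0.0/0 \
      police rate 500kbit burst 10k drop flowid :1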


It looks as though this is an adequate way to tackle the problem. The alternative would be to hook into the enqueue side of the qdisc, calculate a new size value there, and fix the code to refer to this value from then on. That would be quite invasive, though, because it modifies kernel headers. I would need someone who understands the scheduler in more detail to guide me as to whether it is necessary.
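Assuming the point of the size correction is the usual AAL5/ATM cell padding on ADSL, the effect being compensated for looks roughly like this (the per-packet overhead figure is an assumption and depends on PPPoA vs PPPoE):

  # each IP packet is carried in 53-byte ATM cells with 48-byte payloads,
  # so the on-the-wire size jumps in whole-cell steps
  OVERHEAD=10      # assumed per-packet link-layer overhead, varies with encapsulation
  LEN=1500
  CELLS=$(( (LEN + OVERHEAD + 47) / 48 ))
  echo "$LEN byte packet -> $CELLS cells -> $((CELLS * 53)) bytes on the wire"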

Ed W

_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
