Re: HTB and PRIO qdiscs introducing extra latency when output interface is saturated

Linux Advanced Routing and Traffic Control


 



Jonathan Lynch wrote:
I did the same tests that I outlined earlier, but this time by setting
hysteresis to 0. The config for the core router is included at the
bottom. The graphs for the delay of the voip stream and the traffic
going through the core router can be found at the following addresses.

http://140.203.56.30/~jlynch/htb/core_router_hysteresis.png
http://140.203.56.30/~jlynch/htb/voip_stream_24761_hysteresis.png


The max delay of the stream has dropped to 1.8ms. Again the jitter seems
to be around 1ms. There seems to be a pattern whereby the delay
reaches about 1.6ms, then drops back to 0.4ms, jumps back to 1.6ms and
then back to 0.4ms repeatedly, and then it rises gradually from 0.5ms and
repeats this behaviour. Is there any explanation for this pattern?

Would it have anything to do with burst being 1ms?

Yes, I suppose if you could sample truly randomly you would get a proper distribution. I guess the pattern arises because your timers are synchronised for the test.


When the ceil is specified as being 90mbit, is this at IP level? What does this correspond to if a Mbit = 1,000,000 bits? I'm a bit
confused by the way tc interprets this rate.

Yes, HTB uses the IP-level packet length (though you can specify an overhead and a minimum size). The rate calculations use a lookup table which is likely to have a granularity of 8 bytes (you can see this with tc -s -d class ls .. and look for /8 after the burst/cburst).
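For reference, the inspection Andy describes looks like this (eth0 is a placeholder device; the exact burst figure shown in the comment is illustrative, not from this thread):

```shell
# Show detailed, per-class HTB statistics. In the output, a figure
# like "burst 1600b/8" means the rate lookup table uses 8-byte cells,
# i.e. packet lengths are rounded to a granularity of 8 bytes.
tc -s -d class ls dev eth0
```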

There is a choice in 2.6 kernel configs between CPU, jiffies and gettimeofday timers. I use CPU, and now that I've got a ping that can do < 1 sec intervals I get the same results as you.
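For reference, the 2.6 choice Andy mentions is the packet scheduler clock source; a sketch of the relevant .config lines (option names from 2.6-era kernels, exactly one of them set):

```shell
# 2.6 kernel: Networking options -> QoS and/or fair queueing
#   -> Packet scheduler clock source. Pick exactly one:
# CONFIG_NET_SCH_CLK_JIFFIES=y        # cheap but coarse (HZ resolution)
# CONFIG_NET_SCH_CLK_GETTIMEOFDAY=y   # microsecond resolution, more costly
CONFIG_NET_SCH_CLK_CPU=y              # CPU timestamp counter
```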


If the ceil is based at IP level then the max ceil is going to be a
value between 54 Mbit and 97 Mbit (not the tc values) for a 100 Mbit
interface depending on the size of the packets passing through, right ?

Minimum Ethernet frame (46-byte payload):
148,809 * (46 * 8) = 148,809 * 368 = 54,761,712 bps (~54.8 Mbit)

Maximum Ethernet frame (1500-byte payload):
8,127 * (1500 * 8) = 8,127 * 12,000 = 97,524,000 bps (~97.5 Mbit)
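The arithmetic above can be sketched as follows, assuming the standard Ethernet per-frame overhead of 38 bytes on the wire (14 header + 4 FCS + 8 preamble + 12 inter-frame gap); the payload sizes are the ones used in this thread:

```shell
# IP-level rate attainable on a saturated 100 Mbit link for a given
# Ethernet payload size (integer arithmetic, matching the figures above).
line=100000000                              # line rate, bits/s
for payload in 46 1500; do                  # min and max Ethernet payload
    wire=$(( (payload + 38) * 8 ))          # bits on the wire per frame
    pps=$(( line / wire ))                  # frames per second
    ip=$(( pps * payload * 8 ))             # IP-level bits per second
    echo "payload=${payload}B pps=${pps} ip_rate=${ip}bps"
done
```

This reproduces the 148,809 and 8,127 pps figures and the 54,761,712 / 97,524,000 bps rates above.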

If you use the overhead option I think you will be able to overcome this limitation and push the rates closer to 100mbit.
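A hedged sketch of what using the overhead option might look like, depending on your iproute2 version (eth0, the classid and the rates are placeholders, not taken from the thread's config):

```shell
# Tell HTB's rate table about Ethernet framing, so the configured ceil
# tracks wire-level throughput rather than raw IP bytes:
#   overhead 38 = header + FCS + preamble + inter-frame gap per frame
#   mpu 84     = smallest possible on-wire frame
tc class add dev eth0 parent 1: classid 1:10 htb \
    rate 90mbit ceil 98mbit overhead 38 mpu 84
```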


About the RED settings: I don't properly understand how to configure
them. I was using the configuration that came with the examples.

I don't use RED; it was just something I noticed. Maybe making it longer would help, or maybe my test wasn't representative.

FWIW I had a play around with HFSC (not that I know what I am doing really) and at 92mbit managed to get -

rtt min/avg/max/mdev = 0.330/0.414/0.493/0.051 ms loaded
from
rtt min/avg/max/mdev = 0.114/0.133/0.187/0.028 ms idle

and that was through a really cheap switch.

Andy.

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
